Abstract: In deep learning-based dehazing methods, attention mechanisms are widely used to refine feature representations and improve overall performance. However, conventional contextual attention ...
The scaling of inference-time compute has become a primary driver for Large Language Model (LLM) performance, shifting architectural focus toward inference efficiency alongside model quality. While ...
Abstract: This paper is concerned with the resilient distributed state estimation problem for smart grids under a probabilistic encoding-decoding scheme and randomly occurring deception attacks. Due to ...