We tackle the problem of anomaly detection in a given set of binary processes through a learning-based controlled sensing approach. This problem is particularly pertinent to Internet of Things (IoT) applications that monitor multiple related processes. Each process is characterized by a binary random variable indicating its anomalous status. To pinpoint anomalies, a decision-making agent observes a subset of the processes at each time instant, with each observation incurring a cost. We design a sequential selection policy that dynamically determines which processes to observe at each instant, aiming to minimize both the decision delay and the sensing cost. Conventional model-based active hypothesis testing algorithms overlook the joint statistics of the processes and do not account for unequal sensing costs or observation errors. To overcome these limitations, we pose the problem, for the first time, as a sequential hypothesis test within the framework of Markov decision processes (MDPs), leveraging both a Bayesian log-likelihood ratio-based reward and an entropy-based reward. We address the resulting problem via two approaches: 1) a deep reinforcement learning-based method, for which we design both deep Q-network (DQN) and policy gradient actor-critic (AC) algorithms, and 2) a deep active inference (AI)-based approach. Our model-based posterior updates, which handle the uncertainty in the observations, combined with data-driven neural networks, which capture the underlying statistical dependence between the processes, strike a balance between model-based and data-driven approaches. Our numerical experiments demonstrate the effectiveness of our algorithms and their ability to adapt to any unknown statistical dependence among the processes.
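To make the Bayesian posterior update and the two reward signals concrete, the following is a minimal sketch. The symmetric flip-probability observation model, the per-process independence used in the update, and all function and variable names are illustrative assumptions, not taken from the paper (which, notably, handles statistically dependent processes via neural networks).

```python
import numpy as np

def posterior_update(belief, obs, flip_prob):
    """Update P(process is anomalous) after one noisy binary observation.

    belief    : prior probability that the process is anomalous
    obs       : observed bit (1 = 'looks anomalous')
    flip_prob : assumed probability the sensor flips the true state
    """
    # Likelihood of the observation under each hypothesis.
    lik_anom = (1 - flip_prob) if obs == 1 else flip_prob
    lik_norm = flip_prob if obs == 1 else (1 - flip_prob)
    num = lik_anom * belief
    return num / (num + lik_norm * (1 - belief))

def entropy_reward(beliefs):
    """Negative total binary entropy: larger when beliefs are more certain."""
    p = np.clip(np.asarray(beliefs), 1e-12, 1 - 1e-12)
    return np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))

def bayesian_llr_reward(beliefs):
    """Sum of absolute Bayesian log-likelihood ratios, log(p / (1 - p))."""
    p = np.clip(np.asarray(beliefs), 1e-12, 1 - 1e-12)
    return np.sum(np.abs(np.log(p / (1 - p))))

# Example: three processes; the agent senses process 0 and observes a '1'.
beliefs = [0.5, 0.5, 0.5]
beliefs[0] = posterior_update(beliefs[0], obs=1, flip_prob=0.2)
print(beliefs[0])                    # belief rises toward 'anomalous' (0.8)
print(entropy_reward(beliefs))       # increases as uncertainty drops
print(bayesian_llr_reward(beliefs))  # grows with confidence in the beliefs
```

In the learning-based policies described above, rewards of this kind would score each sensing action by how much it sharpens the posterior beliefs, so the agent learns to trade decision delay against sensing cost.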