A Reinforcement Learning-Based Control Strategy for Industrial Dust Collectors in a Simulation-Based Digital Twin Environment
Byun, Seungyoon, Systems Engineering - School of Engineering and Applied Science, University of Virginia
Chang, Qing, Mechanical & Aerospace Engineering Department, University of Virginia
This study proposes a control strategy to simultaneously enhance the safety and efficiency of dust collector systems used at industrial sites. Specifically, it addresses pollutant removal efficiency, filter-clogging prevention, energy consumption reduction, and sustained filtration performance through a reinforcement learning-based approach that resolves these issues via real-time operational control in a digital twin environment.
To this end, a dust collector operating at Samsung SDI's secondary battery manufacturing plant was selected as the industrial field site, and a digital twin model was constructed from empirical data to reflect the actual on-site operating conditions. The model includes the core components of the dust collector system (Fan, Fume Inlet, Reactor, Filter, and Outlet), each represented by parameters such as pressure, flow rate, concentration, and operating conditions, and implemented as a physics-based simulation.
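As a rough illustration of how such a component-level twin can be framed for control, the sketch below casts the dust collector as a Gymnasium environment with a state vector of temperature, flow rate, pressure drop, inlet pollutant concentration, and filter efficiency, and a two-dimensional fan/valve action. The class name, dynamics coefficients, bounds, and reward weights are illustrative assumptions, not the thesis's calibrated plant model.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class DustCollectorEnv(gym.Env):
    """Simplified sketch of a dust collector digital twin (placeholder physics)."""

    def __init__(self):
        # Observation: [temperature K, flow rate m^3/s, pressure drop Pa,
        #               inlet pollutant conc. mg/m^3, filter efficiency 0-1]
        self.observation_space = spaces.Box(
            low=np.array([273.0, 0.0, 0.0, 0.0, 0.0], dtype=np.float32),
            high=np.array([400.0, 10.0, 5000.0, 500.0, 1.0], dtype=np.float32),
            dtype=np.float32,
        )
        # Action: [fan speed fraction 0-1, valve opening fraction 0-1]
        self.action_space = spaces.Box(low=0.0, high=1.0, shape=(2,), dtype=np.float32)
        self.state = None

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.state = np.array([300.0, 2.0, 800.0, 100.0, 0.9], dtype=np.float32)
        return self.state.copy(), {}

    def step(self, action):
        fan, valve = float(action[0]), float(action[1])
        temp, flow, dp, conc, filt = self.state
        # Placeholder dynamics: the fan drives flow, flow carries heat away,
        # and dust load gradually clogs the filter, raising the pressure drop.
        flow = 0.9 * flow + 1.0 * fan * valve
        temp += 0.5 * conc * 0.01 - 0.3 * flow          # heating vs. convective cooling
        filt = max(0.0, filt - 1e-5 * conc * flow)      # gradual clogging
        dp = 500.0 + 3000.0 * (1.0 - filt) * flow       # clogged filter resists flow
        conc_out = conc * (1.0 - filt)                  # pollutant escaping the filter
        # Variable inflow: inlet concentration drifts stochastically each step.
        conc = float(np.clip(conc + self.np_random.normal(0.0, 2.0), 0.0, 500.0))
        self.state = np.array([temp, flow, dp, conc, filt], dtype=np.float32)
        # Multi-objective reward mirroring the stated goals (weights assumed):
        reward = (
            -1.0 * max(0.0, temp - 323.15)   # keep internal temperature below 50 °C
            - 0.01 * conc_out                # penalize escaped pollutant
            - 1e-4 * dp                      # penalize clogging / pressure drop
            - 0.1 * fan                      # penalize energy consumption
        )
        terminated = bool(temp > 373.15)     # fail-safe cutoff
        return self.state.copy(), reward, terminated, False, {}
```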
On this model, the Proximal Policy Optimization (PPO) reinforcement learning algorithm was applied to autonomously learn an optimal control policy for fan speed and valve operation. The learning goal was to keep the system's internal temperature below 50°C (323.15 K) while ensuring pollutant removal efficiency without causing filter clogging. In addition, the agent was designed to handle variable inflow conditions in real time, jointly optimizing temperature, flow rate, pressure, and pollutant concentration through a multi-objective reward structure.
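A minimal training sketch using the stable-baselines3 PPO implementation on the environment class above; the hyperparameters, timestep budget, and rollout horizon are common defaults assumed for illustration, not the values used in the study.

```python
from stable_baselines3 import PPO

# Hypothetical training run on the DustCollectorEnv sketch above;
# all hyperparameters are assumptions, not the thesis's settings.
env = DustCollectorEnv()
model = PPO("MlpPolicy", env, learning_rate=3e-4, n_steps=2048,
            batch_size=64, gamma=0.99, verbose=1)
model.learn(total_timesteps=500_000)

# Roll out the learned fan-speed/valve policy for evaluation.
obs, _ = env.reset(seed=0)
for _ in range(1_000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```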
The experimental results demonstrated that the PPO policy responded approximately 3.15 times faster than the baseline and adapted to environmental changes. Furthermore, the trained agent maintained consistent filtration performance and system stability across repeated policy learning and evaluation phases. As a result, it kept the internal temperature below 315.15 K, achieved a pollutant removal efficiency of 99.9%, and maintained a filter efficiency of 0.4. This implies that the proposed strategy can reduce the time required for system stabilization and enhance predictive performance for real-time control cycles.
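For concreteness, the helpers below show one plausible way such metrics could be computed from rollout logs. Both the metric definitions (removal efficiency as 1 − C_out/C_in, settling time against the 50°C target) and the sample trajectories are assumptions for illustration, not the thesis's measurements.

```python
import numpy as np

def removal_efficiency(inlet_conc, outlet_conc):
    # Captured fraction of pollutant: 1 - C_out / C_in (definition assumed).
    return 1.0 - np.asarray(outlet_conc) / np.asarray(inlet_conc)

def settling_time(temps, target_k=323.15):
    # Steps until the temperature first drops below the 50 degC target and
    # stays there; returns the horizon length if it never settles.
    temps = np.asarray(temps)
    below = temps < target_k
    for t in range(len(temps)):
        if below[t:].all():
            return t
    return len(temps)

# Illustrative values only:
print(removal_efficiency(100.0, 0.1))                      # 0.999 -> 99.9% removal
ppo_settle = settling_time([330.0, 327.0, 322.0, 318.0, 314.0])
base_settle = settling_time([330.0, 329.0, 326.0, 324.0, 322.5])
print(base_settle / max(ppo_settle, 1))                    # relative response speed
```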
Moreover, the proposed strategy is applicable not only to dust collectors but also to other manufacturing facilities, such as semiconductor production lines, by leveraging historical data, sensor data, and equipment status. It enables a smart production environment based on digital twins and predictive maintenance, facilitating the simultaneous optimization of equipment-level performance, overall productivity, and system efficiency.
In conclusion, this study contributes a practical approach that combines state-of-the-art AI techniques with existing facilities to improve the safety and efficiency of dust collector systems. It can also serve as a foundation for developing autonomous, optimized control strategies in simulation-based digital twin environments.
MS (Master of Science)
English
2025/04/17