Leveraging Neural Trojan Side-Channels for Output Exfiltration
Blog Article
Neural networks have become pivotal in advancing applications across various domains, including healthcare, finance, surveillance, and autonomous systems. To achieve low latency and high efficiency, field-programmable gate arrays (FPGAs) are increasingly employed as accelerators for neural network inference in cloud and edge devices. However, the rising cost and complexity of neural network training have led to widespread outsourcing of training and reliance on pre-trained models and machine learning services, raising significant concerns about security and trust.
Specifically, malicious actors may embed neural Trojans within NNs and exploit them to leak sensitive data through side-channel analysis. This paper builds upon our prior work, where we demonstrated the feasibility of embedding Trojan side-channels in neural network weights, enabling the extraction of classification results via remote power side-channel attacks. In this expanded study, we introduce a broader range of experiments to evaluate the robustness and effectiveness of this attack vector.
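To make the recovery side of the attack concrete, the sketch below shows one way classification results could be read back from power measurements under a template-style profiling assumption: the attacker first records an averaged power trace per output class (with known outputs), then labels a fresh trace by maximum correlation against those templates. The function name and the profiling setup are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def recover_class(trace: np.ndarray, templates: np.ndarray) -> int:
    """Return the class whose profiling template best correlates with `trace`.

    `templates` has shape (num_classes, trace_len): one averaged power
    trace per class, collected during a profiling phase with known outputs.
    Recovery is a simple maximum-correlation (nearest-template) decision.
    """
    corrs = [np.corrcoef(trace, t)[0, 1] for t in templates]
    return int(np.argmax(corrs))
```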
We detail a novel training methodology that enhances the correlation between power consumption and network output, achieving up to a 33% improvement in reconstruction accuracy over benign models. Our approach eliminates the need for additional hardware, making it stealthier and more resistant to conventional hardware Trojan detection methods. We provide comprehensive analyses of attack scenarios in both controlled and variable environmental conditions, demonstrating the scalability and adaptability of our technique across diverse neural network architectures, such as MLPs and CNNs.
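As a rough illustration of how such a training objective might look (the paper's actual objective is not reproduced here), the PyTorch sketch below augments ordinary cross-entropy with a penalty that ties a switching-activity proxy, the mean magnitude of a chosen hidden layer's activations, to the class label. All names (`trojan_loss`, `hidden_acts`, `lam`) are hypothetical.

```python
import torch
import torch.nn.functional as F

def trojan_loss(logits: torch.Tensor, labels: torch.Tensor,
                hidden_acts: torch.Tensor, lam: float = 0.1) -> torch.Tensor:
    """Task loss plus a side-channel leakage term (illustrative sketch).

    The extra term pushes the mean activity of a chosen hidden layer,
    a crude stand-in for dynamic power on the accelerator, to scale
    linearly with the true class index, so the classification result
    becomes recoverable from a power trace.
    """
    task = F.cross_entropy(logits, labels)
    power_proxy = hidden_acts.abs().mean(dim=1)        # per-sample activity proxy
    target = labels.float() / (logits.shape[1] - 1)    # class index scaled to [0, 1]
    leak = F.mse_loss(power_proxy, target)
    return task + lam * leak
```

The activity proxy stands in for the Hamming-weight power models common in side-channel analysis: higher switching activity in the datapath draws more dynamic power, which is what the remote measurement ultimately observes.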
Additionally, we explore countermeasures and discuss their implications for the design of secure neural network accelerators. To the best of our knowledge, this work is the first to present a passive output recovery attack on neural network accelerators that requires no explicit trigger mechanism. The findings emphasize the urgent need to integrate hardware-aware security protocols into the development and deployment of neural network accelerators.