2021
Hu, Yinghua; Zhang, Yuke; Yang, Kaixin; Chen, Dake; Beerel, Peter A.; Nuzzo, Pierluigi
Fun-SAT: Functional Corruptibility-Guided SAT-Based Attack on Sequential Logic Encryption Proceedings Article
In: 2021 IEEE International Symposium on Hardware Oriented Security and Trust (HOST), IEEE, 2021.
Abstract | Links | BibTeX | Tags: Extract Design Secrets, IP Piracy, Overproduction, Reverse Engineering Attacks, Vulnerability Detection
@inproceedings{Hu2021,
title = {Fun-SAT: Functional Corruptibility-Guided SAT-Based Attack on Sequential Logic Encryption},
author = {Yinghua Hu and Yuke Zhang and Kaixin Yang and Dake Chen and Peter A. Beerel and Pierluigi Nuzzo},
url = {https://doi.org/10.1109/host49136.2021.9702267},
doi = {10.1109/host49136.2021.9702267},
year = {2021},
date = {2021-12-01},
booktitle = {2021 IEEE International Symposium on Hardware Oriented Security and Trust (HOST)},
publisher = {IEEE},
abstract = {The SAT attack has been shown to be efficient against most combinational logic encryption methods. It can be extended to attack sequential logic encryption techniques by leveraging circuit unrolling and model checking methods. However, with no guidance on the number of times that a circuit needs to be unrolled to find the correct key, the attack tends to solve many time-consuming Boolean satisfiability (SAT) and model checking problems, which can significantly hamper its efficiency. In this paper, we introduce Fun-SAT, a functional corruptibility-guided SAT-based attack that can significantly decrease the SAT solving and model checking time of a SAT-based attack on sequential encryption by efficiently estimating the minimum required number of circuit unrollings. Fun-SAT relies on a notion of functional corruptibility for encrypted sequential circuits and its relationship with the required number of circuit unrollings in a SAT-based attack. Numerical results show that Fun-SAT can be, on average, 90× faster than previous attacks against state-of-the-art encryption methods, when both attacks successfully complete before a one-day time-out. Moreover, Fun-SAT completes before the time-out on many more circuits.},
keywords = {Extract Design Secrets, IP Piracy, Overproduction, Reverse Engineering Attacks, Vulnerability Detection},
pubstate = {published},
tppubtype = {inproceedings}
}
The SAT attack has been shown to be efficient against most combinational logic encryption methods. It can be extended to attack sequential logic encryption techniques by leveraging circuit unrolling and model checking methods. However, with no guidance on the number of times that a circuit needs to be unrolled to find the correct key, the attack tends to solve many time-consuming Boolean satisfiability (SAT) and model checking problems, which can significantly hamper its efficiency. In this paper, we introduce Fun-SAT, a functional corruptibility-guided SAT-based attack that can significantly decrease the SAT solving and model checking time of a SAT-based attack on sequential encryption by efficiently estimating the minimum required number of circuit unrollings. Fun-SAT relies on a notion of functional corruptibility for encrypted sequential circuits and its relationship with the required number of circuit unrollings in a SAT-based attack. Numerical results show that Fun-SAT can be, on average, 90× faster than previous attacks against state-of-the-art encryption methods, when both attacks successfully complete before a one-day time-out. Moreover, Fun-SAT completes before the time-out on many more circuits.
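As a rough illustration of the guidance idea behind the attack (not the authors' implementation), the sketch below estimates the functional corruptibility of a toy encrypted sequential circuit at a given unrolling depth by random simulation and uses it to pick a starting unrolling bound for a SAT-based attack; the circuit model, correct key, threshold, and helper names are all hypothetical placeholders.

```python
# Minimal sketch (not Fun-SAT itself): Monte Carlo estimate of functional
# corruptibility (FC) at a given unrolling depth for a toy encrypted FSM,
# used to choose a starting bound for circuit unrolling.
import random

KEY_BITS = 4
CORRECT_KEY = 0b1010  # hypothetical correct key of the toy circuit


def enc_step(state, x, key):
    """One clock cycle of a toy encrypted FSM: returns (next_state, output)."""
    out = (state ^ x ^ (key & 0b1)) & 0b1
    nxt = ((state << 1) | (x ^ ((key >> 1) & 0b1))) & 0b111
    return nxt, out


def oracle_step(state, x):
    """The unlocked circuit, i.e. enc_step evaluated with the correct key."""
    return enc_step(state, x, CORRECT_KEY)


def functional_corruptibility(depth, trials=2000):
    """Fraction of (wrong key, input sequence) pairs whose length-`depth`
    output trace differs from the oracle's trace."""
    corrupted = 0
    for _ in range(trials):
        key = random.randrange(1 << KEY_BITS)
        if key == CORRECT_KEY:
            continue
        s_enc = s_orc = 0
        differs = False
        for _ in range(depth):
            x = random.getrandbits(1)
            s_enc, y_enc = enc_step(s_enc, x, key)
            s_orc, y_orc = oracle_step(s_orc, x)
            if y_enc != y_orc:
                differs = True
        corrupted += differs
    return corrupted / trials


def initial_unrolling_bound(fc_threshold=0.5, max_depth=32):
    """Smallest unrolling depth whose estimated FC reaches the threshold;
    a SAT-based attack would start unrolling the circuit from this bound."""
    for depth in range(1, max_depth + 1):
        if functional_corruptibility(depth) >= fc_threshold:
            return depth
    return max_depth


if __name__ == "__main__":
    print("suggested starting bound:", initial_unrolling_bound())
```

In the paper, functional corruptibility is tied analytically to the number of unrollings needed before the SAT and model-checking loop can terminate; the fixed threshold above is only an illustrative stand-in for that relationship.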
2020
Xu, Nuo; Liu, Qi; Liu, Tao; Liu, Zihao; Guo, Xiaochen; Wen, Wujie
Stealing your data from compressed machine learning models Proceedings Article
In: 2020 57th ACM/IEEE Design Automation Conference (DAC), pp. 1–6, IEEE 2020.
Abstract | Links | BibTeX | Tags: Extract Design Secrets
@inproceedings{xu2020stealing,
title = {Stealing your data from compressed machine learning models},
author = {Nuo Xu and Qi Liu and Tao Liu and Zihao Liu and Xiaochen Guo and Wujie Wen},
doi = {10.1109/DAC18072.2020.9218633},
year = {2020},
date = {2020-01-01},
urldate = {2020-01-01},
booktitle = {2020 57th ACM/IEEE Design Automation Conference (DAC)},
pages = {1--6},
organization = {IEEE},
abstract = {Machine learning models have been widely deployed in many real-world tasks. When a non-expert data holder wants to use a third-party machine learning service for model training, it is critical to preserve the confidentiality of the training data. In this paper, we explore, for the first time, the potential privacy leakage in a scenario where a malicious ML provider offers the data holder customized training code that includes model compression, which is essential in practical deployment. The provider cannot access the training process hosted by the secured third party, but can query the models once they are released in public. As a result, the adversary can extract sensitive training data with high quality even from deeply compressed models that are tailored for resource-limited devices. Our investigation shows that existing compression techniques, such as quantization, can serve as a defense against such an attack by degrading the model accuracy and the quality of the memorized data simultaneously. To overcome this defense, we take an initial attempt at designing a simple but stealthy quantized correlation encoding attack flow from an adversary's perspective. Three integrated components, namely data pre-processing, layer-wise data-weight correlation regularization, and data-aware quantization, are developed accordingly. Extensive experimental results show that our framework preserves both the evasiveness and the effectiveness of stealing data from compressed models.},
keywords = {Extract Design Secrets},
pubstate = {published},
tppubtype = {inproceedings}
}
Machine learning models have been widely deployed in many real-world tasks. When a non-expert data holder wants to use a third-party machine learning service for model training, it is critical to preserve the confidentiality of the training data. In this paper, we explore, for the first time, the potential privacy leakage in a scenario where a malicious ML provider offers the data holder customized training code that includes model compression, which is essential in practical deployment. The provider cannot access the training process hosted by the secured third party, but can query the models once they are released in public. As a result, the adversary can extract sensitive training data with high quality even from deeply compressed models that are tailored for resource-limited devices. Our investigation shows that existing compression techniques, such as quantization, can serve as a defense against such an attack by degrading the model accuracy and the quality of the memorized data simultaneously. To overcome this defense, we take an initial attempt at designing a simple but stealthy quantized correlation encoding attack flow from an adversary's perspective. Three integrated components, namely data pre-processing, layer-wise data-weight correlation regularization, and data-aware quantization, are developed accordingly. Extensive experimental results show that our framework preserves both the evasiveness and the effectiveness of stealing data from compressed models.
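To make the correlation-encoding idea concrete, the sketch below is a hypothetical toy, not the paper's attack flow: it adds a layer-wise data-weight correlation penalty to an ordinary PyTorch training loop so that pre-processed training-data bits are imprinted into the first layer's weights and survive a simulated sign quantization of the released model. The model, regularization weight, encoding, and reconstruction step are all assumptions made for illustration.

```python
# Minimal sketch (not the paper's framework): correlation-encoding-style
# regularization that imprints secret data bits into weights, followed by
# a simulated 1-bit quantization and the adversary's read-back step.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy "sensitive" data: a few 8x8 grayscale images flattened to [0, 1].
secret = torch.rand(16, 64)
# Pre-processing: map pixels to +/-1 signs that the weights should correlate with.
target_signs = torch.sign(secret - 0.5)

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))

# Ordinary training data for the cover task (random here, just to run the loop).
x_train = torch.randn(256, 64)
y_train = torch.randint(0, 10, (256,))

opt = torch.optim.SGD(model.parameters(), lr=0.05)
lam = 0.1  # strength of the correlation-encoding penalty (assumed value)

for _ in range(200):
    opt.zero_grad()
    task_loss = F.cross_entropy(model(x_train), y_train)
    # Layer-wise correlation penalty: push the first layer's rows to align
    # in sign with the secret bits (one secret row per weight row).
    w = model[0].weight  # shape (64, 64)
    corr = F.cosine_similarity(w[: secret.size(0)], target_signs, dim=1).mean()
    loss = task_loss + lam * (1.0 - corr)
    loss.backward()
    opt.step()

# "Released" model: simulate aggressive 1-bit quantization of the weights.
released = torch.sign(model[0].weight.detach())

# Adversary's reconstruction: read the encoded sign pattern back out.
recovered_signs = released[: secret.size(0)]
bit_accuracy = (recovered_signs == target_signs).float().mean()
print(f"recovered secret bits with accuracy {bit_accuracy:.2%}")
```

The paper's data-aware quantization goes further by shaping the encoding so that it is robust to the specific compression applied; the sign quantization above is only a simple stand-in for that step.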