Stage: Gate-level (ASIC)
Summary
Over 2,800 Trojan-inserted variants across 16 gate-level designs, generated using TRIT (see the reference below). Benchmarks are provided in the Synopsys LEDA 250nm or Skywater 130nm standard cell libraries.
Trojan benchmarks are available at the following links:
- LEDA 250nm Standard Cell Library
- Skywater 130nm Standard Cell Library
Frequently Asked Questions
Q: Which designs have Trojan-inserted variants?
A: Currently, we have Trojan-inserted variants for ISCAS-85 and ISCAS-89 designs.
With the LEDA 250nm standard cell library, we use the following:
- c2670, c3540, c5315, c6288, s1423, s13207, s15850, s35932
- s953, s1196, s1238, s1423, s1488, s5378, s9234, s38417, s38584
Q: Which standard cell libraries are used?
A: We currently use the LEDA 250nm standard cell library and a modified version of Skywater 130nm (https://github.com/google/skywater-pdk). The modified Skywater 130nm library is provided with all Trojan-inserted Skywater benchmarks. LEDA 250nm is an older academic library from Synopsys; please reach out to Synopsys for further inquiries on obtaining it.
Q: What types of Trojans are inserted?
A: We currently have two main types of Trojan effects: Denial of Service (DoS) and always-on leakage. DoS Trojans flip an internal bit upon activation, whereas leakage Trojans leak internal signal information through a side channel. We also have combinational and sequential variants of both DoS and leakage Trojans.
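The structure of these Trojans can be pictured with a small example. The following is a hand-written sketch, not code from the benchmark suite, and all net names are hypothetical: a combinational DoS Trojan is typically built from a trigger that ANDs a few rarely-asserted internal nets together and a payload XOR that flips the victim net only while the trigger is active.

```systemverilog
// Illustrative sketch of a combinational DoS Trojan (hypothetical net names,
// not taken from any specific benchmark).
module dos_trojan_sketch (
    input  wire rare_net_a,  // rarely-asserted internal nets chosen as trigger inputs
    input  wire rare_net_b,
    input  wire rare_net_c,
    input  wire victim_in,   // original value of the targeted internal net
    output wire victim_out   // value driven back into the rest of the design
);
    wire trigger;

    // Trigger: asserted only when all selected rare nets take their rare values,
    // so the Trojan stays dormant during normal operation and most testing.
    assign trigger = rare_net_a & rare_net_b & rare_net_c;

    // Payload: the XOR flips the victim bit while the trigger is active;
    // otherwise the netlist behaves exactly like the Trojan-free design.
    assign victim_out = victim_in ^ trigger;
endmodule
```

A sequential variant would typically insert a small state machine or counter between the trigger logic and the payload, and a leakage variant would route internal state toward an observable or side-channel-visible point rather than corrupting it.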
Q: How do I know which Trojan was inserted in a given benchmark?
A: Each Trojan-inserted benchmark comes with a log file that describes which signals are used for the trigger condition and their activation values, which payload signals are leaked or perturbed, and the structure of the Trojan.
Q: How are the Trojan triggers verified?
A: All Trojan triggers are verified with Synopsys TetraMAX under a full-scan assumption or with Cadence JasperGold under a non-scan assumption. Trojans in sequential designs verified with JasperGold are guaranteed to be triggerable from the primary inputs. More information can be found in the paper referenced below.
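For readers who want to reproduce a similar reachability check on their own designs, one generic approach is to write a cover property on the trigger net and ask a formal tool such as JasperGold for a witness trace. The sketch below assumes a synchronous design with an active-low reset; the module and signal names are hypothetical, and this is not the exact flow used to verify the benchmarks.

```systemverilog
// Generic trigger-reachability sketch (hypothetical names): ask the formal
// tool for an input trace from reset that drives the Trojan trigger net high.
module trigger_cover_sketch (
    input wire clk,
    input wire rst_n,
    input wire trigger_net   // Trojan trigger signal, e.g. identified from the log file
);
    // A covered (reachable) property yields a concrete activation sequence;
    // an uncoverable result means the trigger cannot fire from the primary inputs
    // under the modeled reset and input constraints.
    cover_trigger: cover property (@(posedge clk) disable iff (!rst_n) trigger_net);
endmodule
```

In practice this module would be bound or instantiated alongside the Trojan-inserted netlist, with trigger_net connected to the trigger signal named in the benchmark's log file.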
Q: Can the Trojan insertion tool be released?
A: Unfortunately, due to various IP-related issues we cannot release the tool. Despite these hurdles, we plan to continually update this page with more Trojan-inserted benchmarks across diverse designs.
References
Cruz, Jonathan; Huang, Yuanwen; Mishra, Prabhat; Bhunia, Swarup:
An automated configurable Trojan insertion framework for dynamic trust benchmarks.
In: 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE), pp. 1598-1603, 2018, ISSN: 1558-1101.
@inproceedings{Cruz2018automated,
title = {An automated configurable Trojan insertion framework for dynamic trust benchmarks},
author = {Jonathan Cruz and Yuanwen Huang and Prabhat Mishra and Swarup Bhunia},
doi = {10.23919/DATE.2018.8342270},
issn = {1558-1101},
year = {2018},
date = {2018-03-01},
booktitle = {2018 Design, Automation Test in Europe Conference Exhibition (DATE)},
pages = {1598-1603},
abstract = {Malicious hardware modification, also known as hardware Trojan attack, has emerged as a serious security concern for electronic systems. Such attacks compromise the basic premise of hardware root of trust. Over the past decade, significant research efforts have been directed to carefully analyze the trust issues arising from hardware Trojans and to protect against them. This vast body of work often needs to rely on well-defined set of trust benchmarks that can reliably evaluate the effectiveness of the protection methods. In recent past, efforts have been made to develop a benchmark suite to analyze the effectiveness of pre-silicon Trojan detection and prevention methodologies. However, there are only a limited number of Trojan inserted benchmarks available. Moreover, there is an inherent bias as the researcher is aware of Trojan properties such as location and trigger condition since the current benchmarks are static. In order to create an unbiased and robust benchmark suite to evaluate the effectiveness of any protection technique, we have developed a comprehensive framework of automatic hardware Trojan insertion. Given a netlist, the framework will automatically generate a design with single or multiple Trojan instances based user-specified Trojan properties. It allows a wide variety of configurations, such as the type of Trojan, Trojan activation probability, number of triggers, and choice of payload. The tool ensures that the inserted Trojan is a valid one and allow for provisions to optimize the Trojan footprint (area and switching). Experiments demonstrate that a state-of-the-art Trojan detection technique provides poor efficacy when using benchmarks generated by our tool. This tool is available for download from http://www.trust-hub.org/.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}