Adversarial Robustness Toolbox (ART)

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference

[GitHub star history chart]

Adversarial Robustness Toolbox (ART)

Adversarial Robustness Toolbox (ART) is a Python library for Machine Learning Security. ART provides tools that enable developers and researchers to evaluate, defend, certify and verify Machine Learning models and applications against the adversarial threats of Evasion, Poisoning, Extraction, and Inference. ART supports all popular Machine Learning frameworks (TensorFlow, Keras, PyTorch, MXNet, scikit-learn, XGBoost, LightGBM, CatBoost, GPy, etc.), all data types (images, tables, audio, video, etc.) and Machine Learning tasks (classification, object detection, generation, certification, etc.).

Learn More

Get Started | Documentation | Contributing
- Installation
- Examples
- Notebooks
- Attacks
- Defences
- Estimators
- Metrics
- Technical Documentation
- Slack, Invitation
- Contributing
- Roadmap
- Citing

(Source:https://github.com/Trusted-AI/adversarial-robustne...


Key Metrics

Overview
- Name and Owner: Trusted-AI/adversarial-robustness-toolbox
- Primary Programming Language: Python
- Programming Languages: Python (4 languages in total)
- Platforms: Linux, Mac, Windows, Docker
- License: MIT License

Owner Activity
- Created: 2018-03-15 14:40:43
- Last Pushed: 2025-04-22 07:19:41
- Last Commit:
- Releases: 64
- Latest Release: 1.19.1
- First Release: 0.1 (released 2018-04-25 22:13:06)

User Engagement
- Stars: 5.2k
- Watchers: 99
- Forks: 1.2k
- Commits: 12.6k
- Issues Enabled?:
- Issues: 900
- Open Issues: 17
- Pull Requests: 1166
- Open Pull Requests: 16
- Closed Pull Requests: 273

Project Settings
- Wiki Enabled?:
- Archived?:
- Is a Fork?:
- Locked?:
- Is a Mirror?:
- Is Private?:

Adversarial Robustness 360 Toolbox (ART) v1.1


For the README in Chinese, please click here.

Adversarial Robustness 360 Toolbox (ART) is a Python library supporting developers and researchers in defending Machine
Learning models (Deep Neural Networks, Gradient Boosted Decision Trees, Support Vector Machines, Random Forests,
Logistic Regression, Gaussian Processes, Decision Trees, Scikit-learn Pipelines, etc.) against adversarial threats
(including evasion, extraction and poisoning) and helps make AI systems more secure and trustworthy. Machine Learning
models are vulnerable to adversarial examples, which are inputs (images, texts, tabular data, etc.) deliberately crafted
to elicit a desired response from the Machine Learning model. ART provides the tools to build and deploy defences and
test them with adversarial attacks.
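
A minimal sketch of this attack-and-evaluate workflow is shown below. It assumes a recent ART release (1.3 or later), where evasion attacks live in art.attacks.evasion and framework wrappers in art.estimators.classification; earlier 1.x releases used art.attacks and art.classifiers instead. The data set, model, and eps value are illustrative only and not part of this README.

```python
# Illustrative only: wrap a scikit-learn model with ART, craft adversarial
# examples with the Fast Gradient Method, and compare clean vs. adversarial
# accuracy. Module paths assume ART >= 1.3.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier

# Load a small image-like data set and scale pixel values to [0, 1].
x, y = load_digits(return_X_y=True)
x = x / 16.0
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0)

# Train an ordinary scikit-learn classifier behind an ART wrapper.
classifier = SklearnClassifier(model=SVC(C=1.0, kernel="rbf"), clip_values=(0.0, 1.0))
classifier.fit(x_train, np.eye(10)[y_train])  # one-hot labels, as in ART's own examples

# Craft adversarial test examples and measure the accuracy drop.
attack = FastGradientMethod(classifier, eps=0.2)
x_test_adv = attack.generate(x=x_test)

acc_clean = np.mean(np.argmax(classifier.predict(x_test), axis=1) == y_test)
acc_adv = np.mean(np.argmax(classifier.predict(x_test_adv), axis=1) == y_test)
print(f"Accuracy on clean test data:       {acc_clean:.2f}")
print(f"Accuracy on adversarial test data: {acc_adv:.2f}")
```

The same pattern applies to the other supported frameworks: wrap the native model in the matching ART estimator and pass it to any attack or defence.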

Defending Machine Learning models involves certifying and verifying model robustness and model hardening with
approaches such as pre-processing inputs, augmenting training data with adversarial examples, and leveraging runtime
detection methods to flag any inputs that might have been modified by an adversary. ART includes attacks for testing
defences with state-of-the-art threat models.
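
As one example of hardening a model by augmenting training data with adversarial examples, the sketch below uses ART's AdversarialTrainer with a small PyTorch network. It assumes ART 1.3 or later (art.defences.trainer, art.estimators.classification); the network, random placeholder data, and hyper-parameters are illustrative assumptions, not part of this README.

```python
# Illustrative only: adversarial training mixes adversarial examples into each
# training batch. Module paths assume ART >= 1.3; earlier releases exposed
# AdversarialTrainer directly under art.defences.
import numpy as np
import torch
import torch.nn as nn

from art.attacks.evasion import FastGradientMethod
from art.defences.trainer import AdversarialTrainer
from art.estimators.classification import PyTorchClassifier

# Toy network for 28x28 single-channel images (e.g. MNIST-sized inputs).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))

classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# The attack used to craft the adversarial examples injected during training.
attack = FastGradientMethod(classifier, eps=0.1)

# ratio=0.5: roughly half of every training batch is replaced by adversarial examples.
trainer = AdversarialTrainer(classifier, attacks=attack, ratio=0.5)

# Placeholder data; substitute real training data with the same shapes.
x_train = np.random.rand(64, 1, 28, 28).astype(np.float32)
y_train = np.eye(10)[np.random.randint(0, 10, size=64)]
trainer.fit(x_train, y_train, nb_epochs=1, batch_size=32)
```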

Documentation of ART: https://adversarial-robustness-toolbox.readthedocs.io

Get started with examples and tutorials

The library is under continuous development. Feedback, bug reports and contributions are very welcome.
Get in touch with us on Slack (invite here)!

Supported Machine Learning Libraries and Applications

Implemented Attacks, Defences, Detections, Metrics, Certifications and Verifications

Evasion Attacks:

Extraction Attacks:

Poisoning Attacks:

Defences:

Extraction Defences:

Robustness Metrics, Certifications and Verifications:

Detection of Adversarial Examples:

  • Basic detector based on inputs (a minimal sketch of this idea follows this list)
  • Detector trained on the activations of a specific layer
  • Detector based on Fast Generalized Subset Scan (Speakman et al., 2018)
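
The sketch below illustrates the idea behind the basic input detector: a separate two-class model learns to tell clean inputs from adversarially perturbed ones. ART ships dedicated detector classes for this, but their module paths vary across releases, so plain scikit-learn is used here for illustration; the data set and models are placeholder assumptions.

```python
# Illustrative only: the "basic detector" idea - train a second, binary
# classifier on clean vs. adversarial inputs and use it to flag suspicious
# samples at run time. ART module paths assume ART >= 1.3.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier

# Victim model and adversarial examples (same setup as the evasion sketch above).
x, y = load_digits(return_X_y=True)
x = x / 16.0
victim = SklearnClassifier(model=SVC(C=1.0, kernel="rbf"), clip_values=(0.0, 1.0))
victim.fit(x, np.eye(10)[y])
x_adv = FastGradientMethod(victim, eps=0.2).generate(x=x)

# Detector data set: label 0 = clean input, label 1 = adversarial input.
x_det = np.concatenate([x, x_adv])
y_det = np.concatenate([np.zeros(len(x)), np.ones(len(x_adv))])
x_tr, x_te, y_tr, y_te = train_test_split(x_det, y_det, test_size=0.2, random_state=0)

detector = RandomForestClassifier(n_estimators=100, random_state=0)
detector.fit(x_tr, y_tr)
print(f"Detector accuracy on held-out clean/adversarial inputs: {detector.score(x_te, y_te):.2f}")
```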

Detection of Poisoning Attacks:

Setup

Installation with pip

The toolbox is designed and tested to run with Python 3.
ART can be installed from the PyPI repository using pip:

pip install adversarial-robustness-toolbox

Manual installation

The most recent version of ART can be downloaded or cloned from this repository:

git clone https://github.com/IBM/adversarial-robustness-toolbox

Install ART with the following command from the project folder adversarial-robustness-toolbox:

pip install .

ART provides unit tests that can be run with the following command:

bash run_tests.sh

Get Started with ART

Examples of using ART can be found in examples and examples/README.md provides an overview and
additional information. It contains a minimal example for each machine learning framework. All examples can be run with
the following command:

python examples/<example_name>.py

More detailed examples and tutorials are located in notebooks and notebooks/README.md provides
an overview and more information.

Contributing

Adding new features, improving documentation, fixing bugs, or writing tutorials are all examples of helpful
contributions. Furthermore, if you are publishing a new attack or defense, we strongly encourage you to add it to the
Adversarial Robustness 360 Toolbox so that others may evaluate it fairly in their own work.

Bug fixes can be initiated through GitHub pull requests. When making code contributions to the Adversarial Robustness
360 Toolbox, we ask that you follow the PEP 8 coding standard and that you provide unit tests for the new features.

This project uses the Developer Certificate of Origin (DCO). Be sure to sign off your commits using the -s flag or
by adding Signed-off-by: Name <Email> to the commit message.

Example

git commit -s -m 'Add new feature'

Citing ART

If you use ART for research, please consider citing the following reference paper:

@article{art2018,
    title = {Adversarial Robustness Toolbox v1.1.1},
    author = {Nicolae, Maria-Irina and Sinn, Mathieu and Tran, Minh~Ngoc and Buesser, Beat and Rawat, Ambrish and Wistuba, Martin and Zantedeschi, Valentina and Baracaldo, Nathalie and Chen, Bryant and Ludwig, Heiko and Molloy, Ian and Edwards, Ben},
    journal = {CoRR},
    volume = {1807.01069},
    year = {2018},
    url = {https://arxiv.org/pdf/1807.01069}
}