MLSecOps Tools List
NUM | Tool | Description | Link | Data type | Free or Paid | Tags | Class |
---|---|---|---|---|---|---|---|
1 | LintML | lintML is a CLI application that provides quick insight into potential risks in machine learning projects. It checks for credentials in code and uses static analysis to identify vulnerabilities: plaintext credentials, unsafe deserialization, serialization to unsafe formats, and use of untrustworthy assets (WIP). | https://github.com/JosephTLucas/lintML | Code | FREE | lint, sast | defense |
2 | Giskard | A Python library that automatically detects vulnerabilities in AI models, from tabular models to LLMs, including performance bugs, data leakage, spurious correlations, hallucinations and toxicity. | https://github.com/Giskard-AI/giskard | Code | FREE | library, llm | defense |
3 | TensorFlow Model Analysis | Model analysis tools for TensorFlow | https://github.com/tensorflow/model-analysis | Code | FREE | library | defense |
4 | Fickling | A tool for decompiling, analyzing and backdooring Pickle files. | https://github.com/trailofbits/fickling | Code | FREE | backdooring | attack |
5 | CleverHans | Adversarial example library for attack construction, defense, and benchmarking. | https://github.com/cleverhans-lab/cleverhans | Image | FREE | adversarial,framework | defense |
6 | Foolbox | A Python library that makes it easy to run adversarial attacks against machine learning models such as deep neural networks. It is based on EagerPy and works with models in PyTorch, TensorFlow and JAX (see the usage sketch after the table). | https://github.com/bethgelab/foolbox | Image | FREE | adversarial, dl | attack |
7 | SecML | An open-source Python library for evaluating the security of machine learning algorithms. It implements evasion and poisoning attacks, and can use models and attacks from various other frameworks. | https://secml.readthedocs.io/en/v0.15/ | Image | FREE | evasion, adversarial | attack |
8 | Safetensors | A safe serialization format for tensors and a safer alternative to Pickle; existing Pickle-based checkpoints can be converted to it (see the conversion sketch after the table). | https://github.com/huggingface/safetensors | serialization data | FREE | pickle translator | defense |
9 | Citadel Lens | Quality testing of models according to industry standards | https://www.citadel.co.jp/en/products/lens/ | quality,blackbox | PAID | quality | defense |
10 | Garak | Black-box vulnerability scanner for LLMs. | https://github.com/leondz/garak | Model | FREE | llm, dast | defense |
11 | Vigil | LLM Vulnerability Scanner. | https://github.com/deadbits/vigil-llm | Model | FREE | llm, dast | defense |
12 | PyRIT | A library for evaluating LLM endpoints for prompt injection, hallucinations and other LLM risks. | https://github.com/Azure/PyRIT | Model | FREE | llm, library | attack |
13 | TextAttack | A framework for implementing adversarial attacks on NLP models. | https://github.com/QData/TextAttack | text | FREE | nlp, framework | attack |
14 | Adversarial Robustness Toolbox | A Python library for implementing different attack vectors: evasion, poisoning, extraction and inference (see the usage sketch after the table). | https://github.com/IBM/adversarial-robustness-toolbox | Image | FREE | adversarial | attack |
15 | Copycat CNN | PoC for copying (stealing) a black-box CNN by querying it with unlabeled images and training a copycat network on its outputs. | https://github.com/jeiks/Stealing_DL_Models | Image | FREE | model stealing | attack |
16 | Counterfit | A command-line framework for assessing the security of machine learning models by orchestrating adversarial attacks. | https://github.com/Azure/counterfit | image, text | FREE | framework, cli | attack |
17 | Lakera AI red-teaming | A tool for automated assessment of risks associated with Generative AI | https://www.lakera.ai/ai-red-teaming | model | PAID | dast, llm | attack |
18 | ModelScan | A tool to detect vulnerabilities and flaws related to serialization in machine learning models. | https://github.com/protectai/modelscan | Model | FREE | sast, protectai | defense |
19 | Model-Inversion-Attack-ToolBox | A framework for implementing Model Inversion attacks | https://github.com/ffhibnese/Model-Inversion-Attack-ToolBox | Image | FREE | framework, cli, blackbox | attack |
20 | Rebuff | Prompt Injection Detector. | https://github.com/protectai/rebuff | text | FREE | llm, detector | defense |
21 | NeMo Guardrails | NeMo Guardrails allows developers of LLM-based applications to easily add programmable guardrails between the application code and the LLM. | https://github.com/NVIDIA/NeMo-Guardrails | text | FREE | llm, firewall | defense |
22 | Advertorch | AdverTorch contains modules for generating adversarial examples and defending against them. | https://github.com/BorealisAI/advertorch | Image | FREE | adversarial,python | attack |
23 | AdvBox | A tool from Baidu for generating and defending against adversarial attacks. | https://github.com/advboxes/AdvBox | Image | FREE | adversarial, baidu | attack |
24 | MLsploit | A cloud-based tool for experimenting with adversarial attacks on ML models. | https://mlsploit.github.io/ | Image | FREE | adversarial, cloud | attack |
25 | AugLy | A data augmentation library from Facebook (audio, image, text, video) that can be used to generate perturbed inputs for adversarial robustness testing. | https://github.com/facebookresearch/AugLy | Image | FREE | adversarial, facebook | attack |
26 | Knockoff Nets | PoC implementation of black-box model stealing (knockoff) attacks. | https://github.com/tribhuvanesh/knockoffnets | Image | FREE | blackbox, model stealing | attack |
27 | Robust Intelligence Continuous Validation | A tool for continuous model validation for compliance with standards. | https://www.robustintelligence.com/platform/continuous-validation | Model | PAID | validation, robust intelligence | defense |
28 | NB Defense | A scanner for Jupyter Notebooks that detects secrets, CVEs in imported libraries and misconfigurations in ipynb files. | https://protectai.com/nbdefense | code | FREE | sast, notebooks | defense |
29 | AI Exploits | Exploits for MLOps infrastructure. | https://github.com/protectai/ai-exploits | infrastructure | FREE | exploits, mlops | attack |
30 | Machine Learning oooops ... Attack Tool | A tool for carrying out attacks on MLOps infrastructure (still under development). | https://github.com/wearetyomsmnv/mlat | infrastructure | FREE | framework, cli, exploits | attack |
31 | Guardian | Model protection in CI/CD | https://protectai.com/guardian | code | PAID | ci/cd, mlops | defense |
32 | VGER | An attack framework for Jupyter environments. | https://github.com/JosephTLucas/vger | notebook | FREE | framework, cli, hash | attack |
33 | AIShield Watchtower | An open-source tool from AIShield for inspecting AI models and scanning them for vulnerabilities. | https://github.com/bosch-aisecurity-aishield/watchtower | code | FREE | secrets, modelscan | defense |
34 | Databricks Platform, Azure Databricks | A platform for managing and working with data lake data. | https://azure.microsoft.com/ru-ru/products/databricks | ALL DATA | PAID | datalake, azure | defense |
35 | HiddenLayer AI Detection & Response | A tool for detecting and responding to AI security incidents. | https://hiddenlayer.com/ | Model | PAID | TIDR | defense |
36 | HiddenLayer AISec Platform | The AISec Platform provides automated and scalable protection specifically designed for GenAI, enabling rapid deployment and proactive response to attacks without the need to access private data or models. | https://hiddenlayer.com/aisec-platform/ | Model | PAID | hidden layer | defense |
37 | Guardrails AI | A firewall between the AI API and user interaction that prevents PII leaks. | Guardrails AI | Text | PAID | llm, firewall | defense |
38 | Privacy Meter | A library for auditing the privacy risks of machine learning models and the data they are trained on. | https://github.com/privacytrustlab/ml_privacy_meter | text,image,docs | FREE | privacy, data | defense |
39 | ARX - Data Anonymization Tool | A tool for anonymizing datasets. | https://arx.deidentifier.org/ | image,text | FREE | anonymization, data | defense |
40 | xstest-v2-copy | A dataset for testing exaggerated safety behaviour (over-refusal) in LLMs. | https://huggingface.co/datasets/natolambert/xstest-v2-copy | text | FREE | dataset | defense |
41 | Syft | Syft decouples private data from model training using techniques such as federated learning, differential privacy, and encrypted computation. | https://github.com/OpenMined/PySyft | all data | FREE | library | defense |
42 | differential-privacy-library | A library for differential privacy and machine learning, designed to allow experimentation, simulation and implementation of differentially private models (see the usage sketch after the table). | https://github.com/IBM/differential-privacy-library | all data | FREE | library | defense |
43 | Data-Veil | Data masking and anonymization tool | https://veil.ai/ | image | FREE | anonymization, data | defense |
44 | Private AI | Synthetic data generation and anonymization tool. | https://private-ai.com/ | image,text,audio,video,docs | PAID | anonymization, data | defense |
45 | Databricks Delta Live Tables | A tool to check the quality of streaming data | https://www.databricks.com/product/delta-live-tables | streaming,batch | PAID | quality, data | defense |
46 | dvc | Data versioning tool | https://dvc.org/ | all data | FREE | version | defense |
47 | Watermark papers | A repository of papers on watermarking in LLMs and image generators. | https://github.com/hzy312/Awesome-LLM-Watermark | image, text | FREE | arxiv, watermarking | defense |
48 | Adversarial Watermarking Transformer (AWT) | Tracing text provenance via data hiding (adversarial watermarking). | https://github.com/S-Abdelnabi/awt | text | FREE | watermarking | defense |
49 | lsb-watermarking | Image watermarking using the least-significant-bit (LSB) method. | https://github.com/saltyprogrammer/lsb-watermarking | image | FREE | watermarking | defense |
50 | DWT-DCT image watermarking | A digital image watermarking algorithm based on combining two transforms, DWT and DCT. | https://github.com/diptamath/DWT-DCT-Digital-Image-Watermarking | image | FREE | watermarking | defense |
51 | Convolutional-Neural-Network-Based-Image-Watermarking-using-Discrete-Wavelet-Transform | CNN-based image watermarking using the discrete wavelet transform. | https://github.com/alirezatwk/Convolutional-Neural-Network-Based-Image-Watermarking-using-Discrete-Wavelet-Transform | image | FREE | watermarking | defense |
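
Usage sketches for a few of the listed libraries follow. They are minimal, illustrative sketches only: the models, data, file names and parameter values are placeholders, not part of the tools' documentation.

A minimal Foolbox (row 6) sketch: wrap a PyTorch classifier and run an L-inf PGD evasion attack. The tiny model and random batch below are placeholders for a real trained model and dataset.

```python
import torch
import torch.nn as nn
import foolbox as fb

# Placeholder classifier and data; swap in a real trained model and dataset.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
images = torch.rand(8, 3, 32, 32)        # batch of inputs scaled to [0, 1]
labels = torch.randint(0, 10, (8,))      # ground-truth class indices

fmodel = fb.PyTorchModel(model, bounds=(0, 1))   # wrap the model for Foolbox
attack = fb.attacks.LinfPGD()                    # L-inf projected gradient descent

# Returns raw adversarials, adversarials clipped to the epsilon ball,
# and a boolean mask of which inputs were successfully attacked.
raw, clipped, success = attack(fmodel, images, labels, epsilons=8 / 255)
print(f"attack success rate: {success.float().mean().item():.2%}")
```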
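
A minimal Safetensors (row 8) sketch: convert a Pickle-based PyTorch checkpoint into the safetensors format, which can be loaded without executing arbitrary code. The file names are illustrative.

```python
import torch
from safetensors.torch import save_file, load_file

# Load an existing Pickle-based checkpoint (unsafe: Pickle can execute code on load).
state_dict = torch.load("model.bin", map_location="cpu")

# safetensors stores a flat dict of tensors; make sure tensors are contiguous.
state_dict = {k: v.contiguous() for k, v in state_dict.items()}
save_file(state_dict, "model.safetensors")       # safe serialization

# Loading safetensors never executes code embedded in the file.
restored = load_file("model.safetensors")
```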
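
A minimal Adversarial Robustness Toolbox (row 14) sketch: wrap a scikit-learn classifier and craft evasion examples with the Fast Gradient Method. The random data and logistic-regression model are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Placeholder data and model; swap in a real dataset and classifier.
x_train = np.random.rand(100, 20).astype(np.float32)
y_train = np.random.randint(0, 2, 100)
model = LogisticRegression().fit(x_train, y_train)

classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))

# Craft adversarial examples with the Fast Gradient Method (an evasion attack).
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_train)

acc = (classifier.predict(x_adv).argmax(axis=1) == y_train).mean()
print(f"accuracy on adversarial examples: {acc:.2%}")
```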
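
A minimal differential-privacy-library (row 42) sketch: train a differentially private naive Bayes classifier. The random data, feature bounds and epsilon value are placeholders.

```python
import numpy as np
from diffprivlib.models import GaussianNB

# Placeholder data; swap in a real dataset.
X = np.random.rand(200, 4)
y = np.random.randint(0, 2, 200)

# bounds give the feature ranges up front so they are not inferred from the data
# (which would itself leak information); epsilon is the privacy budget.
clf = GaussianNB(epsilon=1.0, bounds=(np.zeros(4), np.ones(4)))
clf.fit(X, y)
print("accuracy:", clf.score(X, y))
```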