Code for ML Doctor
Code for "CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples" (NDSS 2020)
Official Source Code of "Exploring Effective Data for Surrogate Training Towards Black-box Attack" and "STDatav2: Accessing Efficient Black-Box Stealing for Adversarial Attacks".
Implementations of security and privacy attacks on ML: evasion attacks, model stealing, model poisoning, membership inference attacks, ...
An implementation of ActiveThief for stealing cloud-hosted models.
Repository for my Bachelor Thesis at Karlsruhe Institute of Technology.
Official implementation of "Stealthy Imitation: Reward-guided Environment-free Policy Stealing" (ICML 2024)
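The repositories above implement variants of model-stealing attacks, which share a common core: query the victim model as a black box, collect its outputs, and train a surrogate on the stolen labels. A minimal sketch of that query-then-train loop, assuming a toy linear victim (the oracle, its weights, and all names here are illustrative, not taken from any of the repositories above):

```python
import numpy as np

# Hypothetical victim: a black-box binary classifier we can only query.
# In a real attack this would be a cloud prediction API returning labels.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])

def victim_predict(x):
    """Black-box oracle: returns hard labels only, no gradients or weights."""
    return (x @ w_true > 0).astype(float)

# Attacker step 1: query the oracle on attacker-chosen synthetic inputs.
X_query = rng.normal(size=(500, 2))
y_query = victim_predict(X_query)

# Attacker step 2: fit a logistic-regression surrogate on the stolen labels.
w_sur = np.zeros(2)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X_query @ w_sur)))       # surrogate probabilities
    grad = X_query.T @ (p - y_query) / len(y_query)    # cross-entropy gradient
    w_sur -= 0.5 * grad

# Evaluate: how often does the surrogate agree with the victim on fresh inputs?
X_test = rng.normal(size=(1000, 2))
agreement = np.mean((X_test @ w_sur > 0) == (victim_predict(X_test) > 0))
print(f"agreement: {agreement:.2f}")
```

Methods such as ActiveThief refine step 1 by choosing queries actively (e.g. by surrogate uncertainty) rather than sampling at random, which reduces the query budget needed for a given agreement level.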