About ModelScan
ModelScan is an open-source static scanner from Protect AI that detects model serialization attacks by scanning model files for embedded or unsafe code before they are loaded into an ML environment.
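To make the threat concrete, the sketch below (illustrative only, not part of ModelScan) shows how a pickled object can execute an arbitrary command the moment it is deserialized. This is the class of embedded code that a static scan is meant to flag before the file ever reaches `pickle.load()`.

```python
import os
import pickle
import tempfile


class MaliciousModel:
    """Stand-in for a 'model' whose pickle payload runs code on load."""

    def __reduce__(self):
        # pickle calls os.system("echo pwned") during deserialization,
        # before any application code ever sees the object.
        return (os.system, ("echo pwned",))


# Serialize the object the same way many ML tools save models.
path = os.path.join(tempfile.mkdtemp(), "model.pkl")
with open(path, "wb") as f:
    pickle.dump(MaliciousModel(), f)

# Simply loading the file executes the embedded command; no attribute
# access or method call is needed.
with open(path, "rb") as f:
    pickle.load(f)
```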
Key Features
- Static scanning of model files (Pickle, H5, SavedModel, and others) to detect unsafe embedded code.
- CLI and Python API for local use and programmatic integration into CI/CD and ML pipelines (see the pipeline sketch after this list).
- Severity ranking of findings (CRITICAL, HIGH, MEDIUM, LOW) and configurable reporting formats.
- Designed to be embedded into ML pipelines to scan models pre-load, post-train, and pre-deploy.
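As one way to wire this into a pipeline, the snippet below shells out to the documented `modelscan -p <path>` CLI and blocks promotion when the scan does not come back clean. This is a sketch, not the project's official integration pattern: the `MODEL_PATH` value, the helper name, and the exit-code handling are assumptions to verify against the installed version.

```python
import subprocess
import sys

# Hypothetical path to a model artifact produced earlier in the pipeline.
MODEL_PATH = "artifacts/model.pkl"


def scan_model(path: str) -> bool:
    """Run ModelScan's CLI against a model file and report whether it is clean.

    Assumes `modelscan` is installed (pip install modelscan) and exits nonzero
    when it finds issues or fails; check the exit-code convention for your
    installed version.
    """
    result = subprocess.run(
        ["modelscan", "-p", path],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    return result.returncode == 0


if __name__ == "__main__":
    # Gate the pipeline: refuse to promote a model that fails the scan.
    if not scan_model(MODEL_PATH):
        sys.exit("ModelScan reported findings; blocking deployment.")
```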
Use Cases & Best For
- Vetting third-party or downloaded model files before loading them into an environment.
- Gating models in CI/CD and ML pipelines at the pre-load, post-train, and pre-deploy stages.
- Security reviews that need severity-ranked findings in configurable report formats.
About Model Security
Serialized model files (for example, Python pickles) can carry executable code that runs as soon as they are loaded. Scanning models before loading helps protect AI systems and pipelines from these serialization attacks.