Protect your model environment from untrusted or malicious data inputs, whether they are introduced during pre-training, fine-tuning, or the ingestion of embedding data.
To protect the model environment from untrusted data, the following sandboxing measures are applied:

- The model runs in isolated containers or VMs with resource limits and read-only file systems.
- Network and file access are tightly controlled, allowing only whitelisted sources.
- Runtime behavior is enforced through policy-as-code tools such as OPA, restricting data loading to approved locations (see the sketch below).
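As an illustration of the policy-as-code enforcement described above, the following is a minimal Rego policy sketch for OPA that denies data loading unless the requested source matches an approved prefix. The package name, the `input.source` field, and the prefixes are hypothetical placeholders, not the service's actual policy.

```rego
package model_env.data_loading

import rego.v1

# Deny all data loading by default.
default allow := false

# Hypothetical allowlist of approved data locations;
# replace with your organization's approved sources.
approved_prefixes := {
	"s3://approved-training-data/",
	"https://datasets.internal.example.com/"
}

# Permit loading only when the requested source starts
# with one of the approved prefixes.
allow if {
	some prefix in approved_prefixes
	startswith(input.source, prefix)
}
```

In a setup like this, the model runtime would query OPA (for example, via a sidecar) with the requested source before loading any data, and proceed only when `allow` evaluates to true.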
This requirement is verified in the following services:
| Plan      | Supported |
|-----------|-----------|
| Essential | 🔴        |
| Advanced  | 🟢        |