Sandboxing to limit model exposure to unverified data sources

Summary

Protect your model environment from untrusted or malicious data inputs, whether they are introduced during pre-training, fine-tuning, or the use of embedding data.

Description

To protect the model environment from untrusted data, apply sandboxing measures: run the model in isolated containers or virtual machines with resource limits and read-only file systems; restrict network and file access so that only allowlisted sources are reachable; and enforce runtime behavior with policy-as-code tools such as Open Policy Agent (OPA) so that data loading is limited to approved locations.
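As a minimal sketch of the "restrict data loading to approved locations" part, the check below gates dataset loading behind a host allowlist. The allowlist entries and function names are illustrative assumptions, not part of any specific product or API:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of approved data-source hosts (assumed for illustration).
APPROVED_SOURCES = {
    "datasets.internal.example.com",
    "storage.example.com",
}

def is_approved(url: str) -> bool:
    """Return True only if the URL's host is on the allowlist."""
    host = urlparse(url).hostname
    return host in APPROVED_SOURCES

def load_dataset(url: str) -> bytes:
    """Refuse to fetch training or embedding data from unapproved locations."""
    if not is_approved(url):
        raise PermissionError(f"Blocked: {url!r} is not an approved data source")
    # The actual (sandboxed) fetch would go here.
    raise NotImplementedError("fetch omitted in this sketch")
```

In practice this kind of check would be expressed as an OPA/Rego policy evaluated by the runtime rather than inlined in application code, so that the allowlist can be audited and updated independently of the model-serving code.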

Supported In

This requirement is verified in the following services:

Plan Supported
Essential 🔴
Advanced 🟢
