The application builds LLM prompts using external input, but the structure fails to distinguish the line between user input and system instructions.
The consequences are entirely contextual, depending on the system that the model is integrated into. Leakage of sensitive information, remote code execution, execution of unauthorized actions, etc.
- Ensure proper sanitization of user-controllable input.
- User-controllable input must be identified as untrusted and potentially dangerous.
- Separate instructions from user input.
- Audit interactions.
- The model could be fine-tuned to better control and neutralize potentially dangerous inputs.
- Limit critical functions exposed to the model.
Authenticated attacker from the Internet.
⌚ 60 minutes.
Default score using CVSS 3.1. It may change depending on the context of the src.
Default score using CVSS 4.0. It may change depending on the context of the src.