Fix request: Automatic fixes can be requested from Retrieves via the 'Get Custom Fix' and 'Apply Suggested Fix' functionalities, or from Views via the 'How to fix' option in the vulnerability modal.
Subscriptions: The functionalities mentioned above always use one of two GraphQL subscriptions, getCustomFix or getSuggestedFix, passing the required parameters accordingly.
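As a reference, here is a minimal sketch of how a client such as Retrieves could consume one of these subscriptions using the Python `gql` library. The endpoint URL and the parameter names (`groupName`, `vulnerabilityId`) are illustrative assumptions, not the actual schema:

```python
# Minimal sketch of a client consuming the getCustomFix subscription.
# Endpoint and variable names are assumptions for illustration.
from gql import Client, gql
from gql.transport.websockets import WebsocketsTransport

transport = WebsocketsTransport(url="wss://app.example.com/api")  # assumed endpoint
client = Client(transport=transport)

GET_CUSTOM_FIX = gql(
    """
    subscription GetCustomFix($groupName: String!, $vulnerabilityId: ID!) {
      getCustomFix(groupName: $groupName, vulnerabilityId: $vulnerabilityId)
    }
    """
)

# Each yielded item is one chunk of the streamed fix.
for result in client.subscribe(
    GET_CUSTOM_FIX,
    variable_values={"groupName": "demo", "vulnerabilityId": "vuln-123"},
):
    print(result["getCustomFix"], end="", flush=True)
```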
Validation and prompt construction: After validating the provided inputs, the backend (Integrates) gathers the vulnerability context, which includes the URL of the S3 object where the vulnerable file is located.
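One common way to share that S3 object safely with downstream consumers is a pre-signed GET URL. A minimal boto3 sketch, with hypothetical bucket and key names:

```python
# Sketch of building the S3 portion of the vulnerability context.
# Bucket and key are placeholders; the real naming scheme is internal.
import boto3

s3 = boto3.client("s3")

def get_vulnerable_file_url(bucket: str, key: str, expires_in: int = 300) -> str:
    # A pre-signed GET URL lets the Fixes Lambda fetch the file
    # without needing direct bucket permissions.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires_in,
    )
```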
Fixes API: Integrates sends a request to the Fixes WebSocket API Gateway, authenticating via a pre-signed URL and transmitting the previously gathered vulnerability context.
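A common way to pre-sign a WebSocket API Gateway URL is SigV4 query-string signing. A hedged sketch with botocore (the actual Fixes API authentication scheme may differ):

```python
# Sketch of SigV4 query-string signing for a WebSocket API Gateway URL.
# This is one plausible pre-signing approach, not necessarily the exact one used.
import boto3
from botocore.auth import SigV4QueryAuth
from botocore.awsrequest import AWSRequest

def presign_ws_url(url: str, region: str, expires: int = 60) -> str:
    credentials = boto3.Session().get_credentials().get_frozen_credentials()
    request = AWSRequest(method="GET", url=url)
    # "execute-api" is the service name for API Gateway endpoints.
    SigV4QueryAuth(credentials, "execute-api", region, expires=expires).add_auth(request)
    return request.url

# e.g. presign_ws_url("wss://abc123.execute-api.us-east-1.amazonaws.com/prod", "us-east-1")
```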
Fixes Lambda: Through the API, the Fixes Lambda is instructed to perform the steps below (sketched in code after the list):
Retrieve the vulnerable code from S3.
Analyze the code.
Extract the vulnerable snippet.
Generate a prompt with instructions for the AI model, either to produce a remediation guide or to directly remediate the vulnerable code snippet.
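A condensed sketch of those four steps in a Lambda handler; the event shape, snippet window, and field names are assumptions for illustration:

```python
# Condensed sketch of the four Lambda steps listed above.
# Event shape, snippet bounds, and field names are illustrative assumptions.
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    ctx = event["vulnerability_context"]  # assumed shape

    # 1. Retrieve the vulnerable code from S3.
    obj = s3.get_object(Bucket=ctx["bucket"], Key=ctx["key"])
    source = obj["Body"].read().decode("utf-8")

    # 2-3. Analyze the code and extract the snippet around the reported
    # line, keeping a few lines of surrounding context.
    lines = source.splitlines()
    index = ctx["line"] - 1
    snippet = "\n".join(lines[max(0, index - 5) : index + 6])

    # 4. Generate the prompt: either a remediation guide or a direct
    # rewrite of the snippet, depending on the requested fix type.
    instruction = (
        "Rewrite the snippet with the vulnerability remediated."
        if ctx.get("fix_type") == "custom"
        else "Write a step-by-step remediation guide for this finding."
    )
    return (
        f"The following code is affected by: {ctx['finding_title']}\n\n"
        f"{snippet}\n\n{instruction}"
    )
```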
Sending the prompt to the LLM: From the Lambda, the prompt is sent via the Boto client to a large language model (LLM) hosted on Amazon Bedrock, using an inference profile.
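A minimal sketch of this call with boto3's `bedrock-runtime` client, assuming an Anthropic-style request body and passing the inference profile ARN as the model id:

```python
# Sketch of a streaming Bedrock invocation through an inference profile.
# The request body follows the Anthropic Messages format as an assumed example.
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

def invoke_streaming(prompt: str, inference_profile_arn: str):
    response = bedrock.invoke_model_with_response_stream(
        modelId=inference_profile_arn,  # inference profiles are passed as the model id
        body=json.dumps(
            {
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 2048,
                "messages": [{"role": "user", "content": prompt}],
            }
        ),
    )
    return response["body"]  # an event stream of response chunks
```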
LLM response: The LLM processes the input and generates a response. Since producing the complete output can take several seconds, the response is streamed over WebSockets and delivered progressively in chunks, which improves the user experience by letting partial results be displayed as they are generated.
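A sketch of how the Lambda side could push each decoded chunk back over the WebSocket connection via the API Gateway Management API; the chunk payload shape assumes the Anthropic streaming format used above:

```python
# Sketch of relaying streamed chunks over the WebSocket connection.
# Endpoint URL, connection id, and payload fields are assumptions.
import json
import boto3

def relay_stream(stream, endpoint_url: str, connection_id: str) -> None:
    gateway = boto3.client("apigatewaymanagementapi", endpoint_url=endpoint_url)
    for event in stream:
        chunk = event.get("chunk")
        if chunk:
            payload = json.loads(chunk["bytes"])
            # Anthropic streaming emits content_block_delta events with delta.text.
            text = payload.get("delta", {}).get("text", "")
            if text:
                gateway.post_to_connection(
                    ConnectionId=connection_id,
                    Data=text.encode("utf-8"),
                )
```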
Transmission to the final client: Integrates relays the streamed response to the Retrieves or Views client through the initial GraphQL subscription.
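Server-side, this relay can be expressed as an async-generator subscription resolver. A hedged sketch using Strawberry as an illustrative GraphQL library (the actual resolver in Integrates may differ); `fixes_stream` is a hypothetical stand-in for the consumer of the Fixes WebSocket API:

```python
# Sketch of a subscription resolver that relays the stream to the client.
# Strawberry is used illustratively; fixes_stream is a hypothetical helper.
from typing import AsyncGenerator

import strawberry

async def fixes_stream(
    group_name: str, vulnerability_id: str
) -> AsyncGenerator[str, None]:
    # Hypothetical placeholder for the Fixes WebSocket API consumer.
    yield "example chunk"

@strawberry.type
class Subscription:
    @strawberry.subscription
    async def get_custom_fix(
        self, group_name: str, vulnerability_id: str
    ) -> AsyncGenerator[str, None]:
        # Forward each chunk to the subscribed client as it arrives.
        async for chunk in fixes_stream(group_name, vulnerability_id):
            yield chunk
```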
Displaying the result: The response is shown to the user either:
As a Markdown-formatted remediation guide, or
As structured text containing the remediated code snippet, with placeholders for replacing the vulnerable code.