Fixes

The fixes feature allows users to remediate vulnerabilities by leveraging GenAI. When requesting a fix, users can get either a detailed step-by-step remediation guide or the code modifications that should close the vulnerability.

Caution
As with all GenAI answers, the accuracy of the solution should be reviewed prior to being committed to the codebase.

Public Oath

Fluid Attacks offers a GenAI-assisted vulnerability remediation service; no sensitive or customer-specific information will be used or stored by a third party, customer code will not be used to train an LLM, and any results will be viewable by the customer only.

Architecture


  1. Fix request: Automatic fixes can be requested from Retrieves via the 'Get Custom Fix' and 'Apply Suggested Fix' functionalities, or from Views via the 'How to fix' option in the vulnerability modal.

  2. Subscriptions: The functionalities mentioned above use one of two GraphQL subscriptions, getCustomFix or getSuggestedFix, passing the required parameters accordingly.

  3. Validation and prompt construction: After validating the provided inputs, the backend (Integrates) gathers the vulnerability context, which includes the URL of the S3 object where the vulnerable file is located.

  4. Fixes API: Integrates sends a request to the Fixes WebSocket API Gateway, authenticating with a pre-signed URL and transmitting the previously obtained vulnerability context.

  5. Fixes Lambda: Through the API, the Fixes Lambda is instructed to:

    • Retrieve the vulnerable code from S3.

    • Analyze the code.

    • Extract the vulnerable snippet.

    • Generate a prompt with instructions for the AI model, either to produce a remediation guide or to directly remediate the vulnerable code snippet.

  6. Sending the prompt to the LLM: From the Lambda, the prompt is sent via the Boto client to a large language model (LLM) hosted on Amazon Bedrock, using an inference profile.

  7. LLM response: The LLM processes the input and generates a response.
    Since the complete output may take several seconds, it is returned as a streamed response over WebSockets, where the response is progressively delivered in chunks.
    This technique improves the user experience by enabling partial results to be displayed as they are generated.

  8. Transmission to the final client: Integrates relays the streamed response to the Retrieves or Views client through the initial GraphQL subscription.

  9. Displaying the result: The response is shown to the user either:

    • As a Markdown-formatted remediation guide, or

    • As structured text containing the remediated code snippet, with placeholders to replace the vulnerable code.
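To make step 2 concrete, here is a minimal sketch of the subscription a client could start over the WebSocket connection. The field name getCustomFix comes from the step above, but the argument names, the chunk field, and the graphql-ws framing shown here are assumptions, not the exact Integrates schema:

```python
import json

# Hypothetical shape of the getCustomFix subscription document; argument
# and selection names are illustrative, not the real Integrates schema.
CUSTOM_FIX_SUBSCRIPTION = """
subscription GetCustomFix($groupName: String!, $vulnerabilityId: ID!) {
  getCustomFix(groupName: $groupName, vulnerabilityId: $vulnerabilityId) {
    chunk
  }
}
"""

def subscribe_message(group: str, vuln_id: str, op_id: str = "1") -> str:
    # graphql-ws "subscribe" frame carrying the query and its variables.
    return json.dumps({
        "id": op_id,
        "type": "subscribe",
        "payload": {
            "query": CUSTOM_FIX_SUBSCRIPTION,
            "variables": {"groupName": group, "vulnerabilityId": vuln_id},
        },
    })
```

Each chunk of the streamed LLM response then arrives as a "next" frame on the same operation id until the server completes the subscription.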
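The Lambda side of the flow (steps 5 through 8) can be sketched as follows. The function names, the snippet-extraction heuristic, the prompt wording, and the model identifier are illustrative assumptions rather than the actual implementation; the boto3 calls (S3 get_object, bedrock-runtime invoke_model_with_response_stream, and the API Gateway Management post_to_connection) are real APIs:

```python
def extract_snippet(source: str, line: int, context: int = 5) -> str:
    """Return the vulnerable line plus `context` lines around it."""
    lines = source.splitlines()
    start = max(0, line - 1 - context)
    end = min(len(lines), line + context)
    return "\n".join(lines[start:end])

def build_prompt(snippet: str, finding_title: str, want_guide: bool) -> str:
    # Either ask for a remediation guide or for a direct code fix.
    task = (
        "Write a step-by-step remediation guide"
        if want_guide
        else "Rewrite the snippet with the vulnerability fixed"
    )
    return (
        f"Vulnerability: {finding_title}\n"
        f"{task} for the following code:\n```\n{snippet}\n```"
    )

def stream_fix(bucket: str, key: str, line: int, title: str,
               connection_id: str, endpoint_url: str) -> None:
    # boto3 is available in the Lambda runtime; imported lazily so the
    # pure helpers above stay testable without AWS credentials.
    import json
    import boto3

    # Step 5: fetch the vulnerable file from S3 and extract the snippet.
    s3 = boto3.client("s3")
    source = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode()
    prompt = build_prompt(extract_snippet(source, line), title, want_guide=True)

    # Steps 6-8: stream the Bedrock response back over the WebSocket,
    # one chunk at a time. The model id is a placeholder inference profile.
    bedrock = boto3.client("bedrock-runtime")
    ws = boto3.client("apigatewaymanagementapi", endpoint_url=endpoint_url)
    response = bedrock.invoke_model_with_response_stream(
        modelId="us.anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumed
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    for event in response["body"]:
        chunk = json.loads(event["chunk"]["bytes"])
        ws.post_to_connection(ConnectionId=connection_id,
                              Data=json.dumps(chunk).encode())
```

Streaming each chunk as it arrives, rather than waiting for the full completion, is what enables the progressive display described in step 7.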

Data security and privacy

As this service requires sending user code to a third-party GenAI model, measures must be taken to ensure the safety of the whole process:

Amazon Bedrock

AWS infrastructure hosts the LLMs used by this service.

Amazon Bedrock doesn’t store or log prompts and completions, nor does it use them to train AWS models or distribute them to third parties. See the Bedrock data protection guide.

Data, both at rest and in transit, is also encrypted. See the data encryption guide.

As an additional precaution, this service has been disabled for vulnerabilities related to leaked secrets in code.

To Do

  1. Use Amazon Bedrock Guardrails to sanitize code snippets and remove sensitive information before feeding the prompt to the LLM.
  2. Instead of getting the context from criteria and adding it to the prompt, use RAG to give the model a knowledge base to consult, improve the quality of the results, and simplify the prompt.
  3. Consider using a provisioned, open-source LLM on transparency grounds.
Tip
Have an idea to simplify our architecture or noticed docs that could use some love? Don't hesitate to open an issue or submit improvements.