Caution
For this experiment, we do not use any real client data. All testing data is either artificially generated by our developers or extracted from open-source projects to ensure privacy and security.
Introduction
Sifts is a security analysis tool that leverages large language models (LLMs) to detect vulnerabilities in client code. The project relies on a knowledge base built from vulnerabilities reported by our penetration testers. By referencing these known vulnerabilities, the algorithm analyzes previously unflagged code to identify similar security risks. This speeds up vulnerability detection and enables a proactive stance on securing software systems.
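The retrieval idea behind this can be sketched as follows. This is a minimal illustration, not the production implementation: the function and record names are assumptions, and simple token overlap stands in for whatever similarity measure Sifts actually uses to select known vulnerabilities before handing them to the LLM.

```python
import re

def tokenize(code: str) -> set[str]:
    """Split source code into a set of lowercase identifier tokens."""
    return set(re.findall(r"[A-Za-z_]\w*", code.lower()))

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity between two token sets (0.0 when both are empty)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def top_matches(candidate: str, knowledge_base: list[dict], k: int = 2) -> list[dict]:
    """Return the k known vulnerability records most similar to the candidate code."""
    return sorted(
        knowledge_base,
        key=lambda record: jaccard(tokenize(candidate), tokenize(record["code"])),
        reverse=True,
    )[:k]

def build_prompt(candidate: str, matches: list[dict]) -> str:
    """Assemble an LLM prompt that cites the retrieved known vulnerabilities."""
    context = "\n".join(f"- {m['title']}: {m['code']}" for m in matches)
    return (
        "Known vulnerabilities with similar code:\n"
        f"{context}\n\n"
        "Does the following function share any of these risks?\n"
        f"{candidate}"
    )
```

In this sketch, retrieval narrows the knowledge base to the few most relevant reports, so the LLM only reasons over a focused set of precedents rather than the whole corpus.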
Public Oath
At Fluid Attacks, we are committed to enhancing code security through AI-driven vulnerability detection. We pledge to continuously refine our detection algorithms, expand our knowledge base with real-world vulnerabilities, and ensure our analysis remains accurate and actionable.
Sifts will always adhere to the following principles:
- Utilize AI ethically and responsibly for security analysis
- Maintain a continuously updated knowledge base of reported vulnerabilities
- Ensure transparency and accuracy in vulnerability detection
- Prioritize the security and privacy of client data; our agreements with LLM providers explicitly state that they do not store, use, or retain any data we send them
Architecture
Sifts operates in two phases: a knowledge base feeding phase and a function evaluation phase.
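The two phases can be sketched as a minimal in-memory pipeline. All names here are illustrative assumptions, and a plain substring check stands in for the real LLM comparison; the point is only the shape of the flow: first ingest pentester reports, then evaluate individual functions against them.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeBase:
    """Toy stand-in for Sifts' knowledge base (illustrative, not the real design)."""
    records: list[dict] = field(default_factory=list)

    def feed(self, reports: list[dict]) -> None:
        """Phase 1 (feeding): ingest vulnerability reports from penetration testers."""
        self.records.extend(reports)

    def evaluate(self, function_source: str) -> list[str]:
        """Phase 2 (evaluation): return titles of known vulnerabilities whose
        indicative code pattern appears in the function under analysis.
        (A substring match replaces the LLM-based comparison for brevity.)"""
        return [r["title"] for r in self.records if r["signature"] in function_source]
```

Usage under these assumptions: feed the base once with reported findings, then call `evaluate` on each previously unflagged function.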