This page provides answers to frequently asked questions about Fluid Attacks' API and plugin, especially privacy concerns regarding the features powered by GPT-4.
To begin using the API, we recommend you read our step-by-step guide in our Documentation. Bear in mind that to make requests to the API you will need prior knowledge of the GraphQL language.
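As a starting point, a GraphQL request is an HTTP POST with a JSON body containing the query. The sketch below shows this shape in Python; the endpoint URL and the query fields are illustrative assumptions, so consult the official documentation for the exact schema before relying on them.

```python
# Minimal sketch of an authenticated GraphQL request.
# The endpoint URL and query below are illustrative assumptions,
# not a confirmed part of the Fluid Attacks API schema.
import json
import urllib.request

API_URL = "https://app.fluidattacks.com/api"  # assumed endpoint

def build_request(token: str, query: str) -> urllib.request.Request:
    """Package a GraphQL query as an authenticated HTTP POST."""
    payload = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

# Hypothetical query; actual field names may differ.
query = "{ me { userEmail } }"
request = build_request("API_Token", query)
# response = urllib.request.urlopen(request)  # uncomment to send
```

Replace "API_Token" with the token generated from your account before sending the request.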
When using the Fluid Attacks plugin with the Get Custom Fix and Autofix features connected to GPT-4, you may have questions about data security and privacy. Here are some frequent questions to consider when using these features.
We use Large Language Model (LLM) technology, an artificial intelligence designed for advanced text processing and generation that can understand and produce natural-language content, handling a variety of linguistic tasks with accuracy and consistency.
The OpenAI GPT-4 model is used.
The GPT-4 functionality in our extension plays a crucial role in generating code-based remediation guidelines (the Custom Fix feature) and automatic code correction (the Autofix feature). The process begins by extracting the specific code fragment from the selected file that exhibits the vulnerability. This fragment is sent securely to GPT-4 through a safe API-backend connection, and the response returns a new version of the code with the vulnerability fixed.
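The extraction step described above can be sketched as follows. The function name and the size of the context window are assumptions made for illustration, not Fluid Attacks' actual implementation.

```python
# Illustrative sketch of extracting a vulnerable code fragment before
# it is sent to a model. The context window size and helper function
# are assumptions, not the plugin's real implementation.

def extract_fragment(source: str, vuln_line: int, context: int = 10) -> str:
    """Return the lines surrounding the vulnerable line (1-indexed)."""
    lines = source.splitlines()
    start = max(0, vuln_line - 1 - context)
    end = min(len(lines), vuln_line + context)
    return "\n".join(lines[start:end])

# Example: a 30-line file with a vulnerability reported on line 15.
source = "\n".join(f"line {i}" for i in range(1, 31))
fragment = extract_fragment(source, vuln_line=15, context=3)
# Only this fragment, never the whole repository, would be forwarded
# to GPT-4 through the backend connection.
```

This mirrors the privacy point made below: the model only ever receives a small, function-level slice of code.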
The code sent to GPT-4 is interpreted using the context provided at the function/class level, specifically around the line of code containing the vulnerability. This AI has no global knowledge of the codebase or of the business logic of the entire application; its access to the code is limited to a small fragment representing a specific function.
It is worth noting that GPT-4 does not use the information sent for purposes other than generating solutions for correcting vulnerabilities. In addition, the transmitted data is retained for approximately 30 days to monitor for potential abuse of the artificial intelligence. This approach upholds our privacy and security policy, guaranteeing that the information provided is treated with the utmost care and used exclusively for the established purposes.
We understand the importance of maintaining the confidentiality and security of our customers' code. We ensure that we follow strict privacy and data security policies by employing artificial intelligence technology, such as the GPT-4 API, for vulnerability detection and remediation.
Fluid Attacks' hacking team uses this tool in their daily work in vulnerability reporting.
It is crucial to note that the Fluid Attacks plugin only uses the repositories hosted in Scope. The functionality of this plugin is based on the information available in the associated repositories on the platform. If you want details on creating a root and managing repositories in Scope, you can find detailed information by clicking this link.
We use the GPT-4 API, which is independent of the Enterprise version OpenAI offers. This choice ensures that we have the necessary control over the information processed and that it is not subject to the data storage policies of the web or Enterprise versions.
Our development initiative started with Visual Studio Code (VSCode), which is recognized as one of the industry's most widely used integrated development environments (IDE). This choice is supported by its outstanding popularity, extensibility, and robustness within the development community.
It creates a false sense of security: although it is technically possible to test from the IDE, as several automated tools do, such tools often lack security rigor.
Security testing falls outside the control of the management and security teams, as it occurs at whatever frequency the developer chooses.
What automated tools manage to automate is usually no more than 30% of the total severity of the system.
You can open the VS Code configuration file (JSON) and manually add the configuration key and value.
"fluidattacks.apiToken": "API_Token"