
Do AI code assistants give developers a false sense of security?

A new study of AI code assistants has found that developers who rely on such tools may be more likely to introduce security vulnerabilities into their code. The study found that those who had access to an AI assistant produced less secure code than those without access. In addition, participants with access to the AI code assistant were found to have a false sense of security, believing their code was more secure than it actually was. The results suggest that while AI assistants can increase productivity and lower the inhibition threshold for inexperienced users, they can also pose a risk to code security.

Security

AI code assistants like GitHub Copilot, Tabnine, or Captain Stack have the potential to lower the barrier to programming and increase developer productivity. However, they also raise concerns about security vulnerabilities and copyright issues. To address these concerns, a user study was conducted to examine how developers interact with AI code assistants and what security risks this interaction poses.

The study focused on three main research questions:

1) Does the distribution of security vulnerabilities introduced by users differ depending on the use of an AI assistant?

2) Do users trust AI assistants to write secure code?

3) How do users' language and behavior when interacting with an AI assistant influence the extent of security vulnerabilities in their code?

The study used the OpenAI Codex-Davinci-002 model as the AI code assistant and had participants solve five security-related programming tasks in different languages. Participants with access to the AI code assistant were more likely to produce code containing security vulnerabilities than participants without such access. In addition, participants with access to the assistant were more likely to believe they had written secure code than those without access.
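
The article only describes the tasks as security-related, so as a hypothetical illustration of the kind of vulnerability such a task can surface, consider a symmetric-encryption helper (the function names and the choice of task are ours, not the study's). An assistant-style completion might reach for AES in ECB mode, which leaks patterns in the plaintext and is unauthenticated, whereas an authenticated mode such as AES-GCM with a fresh random nonce is the safer choice. A minimal sketch in Python, assuming the `cryptography` package is installed:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_ecb(key: bytes, plaintext: bytes) -> bytes:
    """Insecure variant: AES-ECB leaks plaintext structure and provides no integrity."""
    # Plaintext must be padded to the 16-byte block size; naive zero-padding is itself sloppy.
    padded = plaintext + b"\x00" * (-len(plaintext) % 16)
    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return encryptor.update(padded) + encryptor.finalize()

def encrypt_gcm(key: bytes, plaintext: bytes) -> bytes:
    """Safer variant: AES-GCM with a random nonce; the ciphertext is authenticated."""
    nonce = os.urandom(12)                 # never reuse a nonce with the same key
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return nonce + ciphertext              # prepend the nonce so decryption can recover it

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)
    print(encrypt_gcm(key, b"attack at dawn").hex())
```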

To better understand these results, the study analyzed participants' language and behavior when interacting with the AI assistant. Participants who trusted the AI less and paid more attention to the language and format of their prompts (e.g., rephrasing, adjusting parameters) tended to produce code with fewer security vulnerabilities. This suggests that while AI code assistants lower the barrier to programming for inexperienced users, they can also give a false sense of security.

One possible reason for this false sense of security is that participants with access to the AI assistant may have relied more heavily on its output and paid less attention to the language and format of their own prompts. This behavior can lead to a lack of understanding of, and critical thinking about, the generated code, and thus to further security vulnerabilities. Users of AI assistants should be aware of this pitfall and engage with their prompts in order to better understand the generated code and ensure its security.
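
As a hedged illustration of why this review matters (this example is not from the study), a casually phrased request such as "look up a user by name" often yields string-interpolated SQL, which is vulnerable to injection; a user who reads the suggestion critically, or rephrases the prompt to ask for a parameterized query, ends up with the safer form. A short sketch using Python's built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_naive(name: str):
    # Typical first suggestion: string interpolation, vulnerable to SQL injection
    # (e.g. name = "' OR '1'='1" returns every row).
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_reviewed(name: str):
    # After critical review or a rephrased prompt: a parameterized query,
    # so the driver handles escaping and the input cannot alter the statement.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_naive("' OR '1'='1"))    # leaks all rows
print(find_user_reviewed("' OR '1'='1")) # returns nothing
```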

Another factor that may have contributed to the higher rate of security vulnerabilities is that these participants were less likely to modify the AI's output or to adjust parameters and return values. This could indicate that too much is delegated to the AI, for example by letting it choose parameters automatically, which can make users less diligent about guarding against security vulnerabilities. Code assistants should therefore be designed to encourage users to proactively verify the security of their code rather than relying solely on the AI's output.
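
To make the idea of adjusting parameters and return values concrete, here is a hypothetical example (the `load_config_*` functions and the `/etc/myapp` base directory are ours, not the study's): an accepted suggestion that loads a YAML config file as-is, and a reviewed version that restricts the input path, switches to PyYAML's safe loader, and makes the return type explicit.

```python
import os
import yaml  # PyYAML

# Hypothetical assistant suggestion, accepted as-is: no input checks, and yaml.load
# with the full loader can instantiate arbitrary Python objects from the file.
def load_config_suggested(path):
    with open(path) as f:
        return yaml.load(f, Loader=yaml.Loader)

# Revised version after reviewing parameters and return values: restrict the input
# path, use the safe loader, and make the failure modes explicit.
def load_config_reviewed(path: str, base_dir: str = "/etc/myapp") -> dict:
    full_path = os.path.realpath(os.path.join(base_dir, path))
    if not full_path.startswith(os.path.realpath(base_dir) + os.sep):
        raise ValueError("config path escapes the allowed directory")
    with open(full_path) as f:
        data = yaml.safe_load(f)            # refuses arbitrary object construction
    if not isinstance(data, dict):
        raise ValueError("config root must be a mapping")
    return data
```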

Evaluation of the study

This study has several limitations that should be considered when interpreting the results. One is that the participant group consisted primarily of university students (66%), who may not represent the population most likely to use AI assistants on a regular basis, such as professional software developers. It is possible that developers with more experience and a stronger security background are less likely to introduce security vulnerabilities when using an AI assistant. In addition, the user interface for the study was designed to be as generic as possible, but aspects such as the placement of the AI assistant or query latency could have affected the results. Finally, a larger sample size would be needed to evaluate more subtle effects, such as the impact of a user's background or native language on their ability to interact successfully with the AI assistant and produce secure code.

Despite these limitations, the results of this study provide important insights into the potential security risks associated with the use of AI code assistants. Users of AI code assistants, especially those with less experience, tend to have a false sense of security and are more likely to introduce security vulnerabilities into their code. It is important that developers are aware of this risk and take steps to ensure the security of their code, such as paying attention to the language and format of their prompts and proactively testing and verifying the AI's output. AI code assistant developers, in turn, should consider ways to encourage users to proactively ensure the security of their code, for example by incorporating warnings and validation tests based on the generated code. By taking these aspects into account, it may be possible to maximize the benefits of AI code assistants while minimizing the potential security risks.
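
The article does not prescribe how such validation tests would work; one possible sketch (our assumption, not the study's mechanism) is to run a static analyzer such as Bandit over each accepted suggestion before it is committed. The snippet below assumes the `bandit` CLI is installed and relies on its default behavior of exiting with a non-zero status when it reports findings.

```python
import subprocess
import tempfile

def bandit_check(generated_code: str) -> bool:
    """Write an accepted suggestion to a temp file and scan it with Bandit.

    Returns True if no findings are reported; otherwise prints Bandit's report
    and returns False so the caller can warn the user before accepting the code.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as tmp:
        tmp.write(generated_code)
        path = tmp.name
    result = subprocess.run(["bandit", "-q", path], capture_output=True, text=True)
    if result.returncode != 0:
        print("Potential security issues in generated code:\n", result.stdout)
        return False
    return True

# Example: a suggestion that builds a shell command from user input
suggestion = 'import os\nos.system("ping " + input("host: "))\n'
bandit_check(suggestion)
```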
