Can ChatGPT and OpenAI's Models Replace Application Security Engineers? Pros and Cons

Chevon Phillip
3 min read · Dec 15, 2022


[Image: AI within a computer screen (AI-generated, via Shutterstock)]

As the world becomes increasingly reliant on technology and the internet, the need for skilled application security engineers to protect our online systems and data grows. But can chatbots and large language models, such as OpenAI's ChatGPT, replace the need for human application security engineers?

On the one hand, these technologies have the potential to automate many of the tasks currently performed by application security engineers. Models like ChatGPT, in particular, have been trained on vast amounts of data and can understand and respond to natural-language input, making them well suited to tasks such as identifying and flagging potential security vulnerabilities.
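To make that concrete, here is a minimal sketch of how such a review might be wired up, assuming the `openai` Python package (v1 or later) and an API key in the environment. The model name, prompts, and the vulnerable snippet are all illustrative, not a prescribed workflow.

```python
# Hypothetical sketch: asking a chat model to flag vulnerabilities in a snippet.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SNIPPET = '''
def get_user(conn, username):
    return conn.execute(f"SELECT * FROM users WHERE name = '{username}'")
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat-capable model works here
    messages=[
        {"role": "system",
         "content": "You are an application security reviewer. "
                    "List any potential vulnerabilities in the code you are given."},
        {"role": "user", "content": SNIPPET},
    ],
)

print(response.choices[0].message.content)  # likely flags the SQL injection risk
```

Even in this happy path, a human still has to judge whether the model's findings are real issues or noise.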

However, there are also several drawbacks to relying solely on chatbots and large language models for application security. One major issue is that these technologies are only as good as the data they were trained on: if that data is incomplete or biased, they may fail to accurately identify security vulnerabilities. Additionally, they lack the critical-thinking and problem-solving skills that human application security engineers possess.

Research has also shown that chatbots and large language models struggle with tasks that require a deep understanding of context and meaning. For example, when given a sentence with multiple possible meanings, they may not be able to determine which one is intended. In application security, the same limitation leads to false positives or missed vulnerabilities, because whether a piece of code is dangerous often depends on context the model does not fully weigh.
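As a small, hypothetical illustration, the two functions below issue queries of nearly identical shape; a shallow, pattern-level review can easily flag both or neither, even though only one is injectable.

```python
import sqlite3

def safe_lookup(conn: sqlite3.Connection, name: str):
    # Parameterized query: the user's input never becomes part of the SQL text.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,))

def unsafe_lookup(conn: sqlite3.Connection, name: str):
    # Same query shape, but f-string interpolation makes it SQL-injectable.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'")
```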

Furthermore, chatbots and large language models cannot handle the full complexity and nuance of real-world security threats. They are trained on a fixed snapshot of data and may not adapt to new or evolving threats. In contrast, human application security engineers can think critically and apply their expertise to a wide range of security challenges.

Another concern is that, while chatbots and large language models can automate many routine tasks, they cannot replace the human element in application security. Security is a constantly evolving field, and human application security engineers are needed to stay up to date on the latest threats and to develop new strategies to protect against them.

In fact, over-reliance on chatbots and large language models can even increase the risk of a security breach. These tools operate on surface-level text, which attackers can game with tactics such as typosquatted package names or homoglyph domains that look legitimate at a glance. Human application security engineers, on the other hand, are trained to spot these types of attacks and can take appropriate measures to prevent them.
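Here is a quick, hypothetical demonstration of the homoglyph problem: the two domain strings below render almost identically, but one hides a Cyrillic character, so a system matching on surface text sees an unremarkable, unfamiliar name rather than an impersonation attempt.

```python
import unicodedata

real = "paypal.com"
spoof = "pаypal.com"  # second character is CYRILLIC SMALL LETTER A (U+0430)

print(real == spoof)  # False: visually similar, but different code points

# Flag and name any non-ASCII characters, a common first-pass defence.
for ch in spoof:
    if ord(ch) > 127:
        print(f"suspicious character {ch!r}: {unicodedata.name(ch)}")
```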

In conclusion, while chatbots and large language models such as OpenAI's ChatGPT can assist with application security tasks, they cannot fully replace human application security engineers. These technologies can automate routine tasks and flag potential vulnerabilities, but they lack the critical thinking and adaptability that only humans possess. It is important to keep investing in human application security experts to ensure the safety and security of our online systems and data.


Written by Chevon Phillip

Application Security Engineer. I have helped secure top companies and organizations.
