Adding Generative AI as an API Protection Tool
Jul 26, 2023
The security world is in the early stages of figuring out how to best use generative AI to enhance cybersecurity—while simultaneously defending against it as an emerging threat vector.
Take API security, for example. In a recent Security Boulevard feature, Bill Doerrfeld argued that generative AI will be a threat to API security. One concern, Doerrfeld wrote, is that generative AI can be used to abuse APIs because the attack surface is so large: many organizations now deploy hundreds of APIs and already struggle with visibility into them. Attackers know that organizations have trouble protecting their APIs, giving them an opening to swoop in and use AI themselves to find vulnerabilities.
“Also, generative AI could be used to compromise credentials in many ways,” Doerrfeld wrote, adding that generative AI could be used to detect patterns to discover passwords or be used for credential stuffing.
Those security threats must be addressed sooner rather than later. However, just as threat actors are coming up with ways to use generative AI to launch attacks against APIs, security researchers are developing ways to use that same technology to protect APIs.
“Generative AI is clearly the buzzword today in many fields, including security,” said Subbu Iyer, VP of product management at Cequence Security, in a recent webinar. But the technology can be used to automate what were once manual tasks and to test API security.
Before you begin to apply generative AI as a security solution for APIs, you need a good understanding of what API protection requires. Iyer explained that a unified API protection approach is built on three pillars: discovery, which means learning and classifying your API attack surface; compliance, which means assessing the security posture of APIs and ensuring they conform to security best practices; and protection, which means monitoring traffic coming to the APIs and blocking potential attacks.
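To make those pillars concrete, here is a minimal Python sketch of how the three stages could fit together. Everything in it is hypothetical: the endpoint model, the single compliance rule and the blocking check illustrate the concepts Iyer described, not any vendor's actual implementation.

```python
# Hypothetical sketch of the three pillars: discovery, compliance, protection.
from dataclasses import dataclass, field

@dataclass
class Endpoint:
    path: str
    method: str
    requires_auth: bool
    returns_pii: bool
    findings: list = field(default_factory=list)

def discover(spec: dict) -> list[Endpoint]:
    """Pillar 1: enumerate and classify the attack surface from an OpenAPI spec."""
    endpoints = []
    for path, methods in spec.get("paths", {}).items():
        for method, operation in methods.items():
            endpoints.append(Endpoint(
                path=path,
                method=method.upper(),
                requires_auth=bool(operation.get("security")),
                returns_pii="pii" in operation.get("tags", []),
            ))
    return endpoints

def check_compliance(endpoints: list[Endpoint]) -> list[Endpoint]:
    """Pillar 2: flag endpoints that violate a basic best practice."""
    for ep in endpoints:
        if ep.returns_pii and not ep.requires_auth:
            ep.findings.append("returns PII without requiring authentication")
    return [ep for ep in endpoints if ep.findings]

def should_block(request: dict, endpoints: list[Endpoint]) -> bool:
    """Pillar 3: block traffic to endpoints that were never discovered (shadow APIs)."""
    known = {(ep.method, ep.path) for ep in endpoints}
    return (request["method"], request["path"]) not in known
```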
Unfortunately, API protection is broken in many organizations. “You’d be surprised to find how many APIs we discover that are completely unattended and exposed publicly and exposing customer data,” Iyer said.
Finding vulnerabilities in APIs requires application security testing, but as Iyer pointed out in a blog post, it is very challenging “to generate test cases customized for the apps being tested so that their relevant business functionality can be tested before releasing them to production.”
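To see why, consider what even one hand-written security test looks like. The sketch below, against a hypothetical endpoint and using the `requests` library, checks a single endpoint for broken object-level authorization; multiply it by hundreds of endpoints, each with its own business logic, and the scale of the manual effort becomes clear.

```python
# A single hand-written API security test (hypothetical endpoint and IDs).
# Writing tests like this for every endpoint is the manual work described above.
import requests

BASE_URL = "https://api.example.com"  # the app under test (placeholder)

def test_order_lookup_rejects_other_users_orders(session_token: str) -> None:
    """BOLA check: a logged-in user must not read other users' orders by ID."""
    headers = {"Authorization": f"Bearer {session_token}"}
    for order_id in ("1", "2", "9999"):  # IDs belonging to other users
        resp = requests.get(f"{BASE_URL}/orders/{order_id}", headers=headers)
        assert resp.status_code in (403, 404), (
            f"order {order_id} was readable by the wrong user "
            f"(HTTP {resp.status_code})"
        )
```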
This is where generative AI becomes a useful API security tool. You can ask the generative AI application to do things that would otherwise require a tremendous amount of manual work, Iyer explained.
For example, Cequence has built generative AI use cases around API security testing using an intelligent mode that automatically creates tests based on the type of API. With generative AI, you can string together the right test cases for each of the API endpoints within the application, said Iyer.
“This will eliminate hours and weeks of work that would be required from the security engineer to build that test case or that test plan manually,” said Iyer.
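As an illustration of the idea, the sketch below asks a large language model to draft a test plan for one endpoint taken from an OpenAPI spec. It assumes the `openai` Python package; the model name and the prompt are placeholders, and this is a sketch of the general technique, not Cequence's implementation.

```python
# Hypothetical LLM-assisted test generation for a single API endpoint.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_test_plan(endpoint_spec: dict) -> str:
    """Ask the model to propose security test cases for one OpenAPI operation."""
    prompt = (
        "You are an API security tester. For the following OpenAPI operation, "
        "list concrete test cases covering authentication bypass, broken "
        "object-level authorization, injection and rate limiting, with an "
        "example request and expected response for each:\n"
        + json.dumps(endpoint_spec, indent=2)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

Looping a function like this over every operation in a spec is, in effect, the "stringing together" of per-endpoint test cases that Iyer described.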
For better or worse, generative AI is going to change security. API security is already tricky because of its complexity and the sheer volume of APIs in use across an organization. If generative AI can automate the work of seeking out potential vulnerabilities and improving testing, that will be a genuine step forward.