Editor’s note: This is the fourth and final part in a series on generative AI in cybersecurity. 

Since ChatGPT took the world by storm in the spring, the excitement for and the velocity of artificial intelligence (AI) product announcements have only grown. The hype and adoption surrounding AI are unlike those of any previous disruptive technology (e.g., the iPhone, tablets, cloud). We are in the very early stages: businesses are adopting AI quickly, and security leaders must secure it for the first time. In this blog, I want to give CISOs and other security leaders six recommendations for minimizing risk and maximizing AI’s potential, regardless of whether your organization builds or buys AI solutions.

[Figure: The six steps to safely adopting AI. Step 1: Define your initial adoption approach. Step 2: Discover and baseline AI usage. Step 3: Gain understanding of AI technology and its business application. Step 4: Conduct a risk assessment. Step 5: Establish an AI governance model and define policies. Step 6: Establish architecture and applicable security controls.]

Step 1: Define Your Initial Adoption Approach

I see a few potential approaches to initial adoption, although your mileage may vary with each.

  • Option 1: Block AI altogether. For most companies, this won’t be a viable long-term strategy. Your company will be at a competitive disadvantage, and you run the risk of employees turning to “shadow AI.”
  • Option 2: Permit the usage of AI while you work through these six steps. Your company culture may not allow a default-block approach; of course, the risk here is that the longer you take to work through this process, the more potential exposure you have.
  • Option 3: Block the use of AI until you determine the best way forward, informing the business that you are assessing the new technology and working toward a path of enablement. (A minimal sketch of the deny-list logic behind Options 1 and 3 follows this list.)
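For teams choosing Option 1 or Option 3, here is a minimal sketch of the deny-list idea, assuming an illustrative list of AI domains. In practice, the enforcement point would be your NGFW, secure web gateway, or DNS resolver policy rather than application code.

```python
# The domains below are illustrative placeholders; real enforcement would
# live in your NGFW, secure web gateway, or DNS resolver, not app code.
AI_DENY_LIST = {
    "chat.openai.com",
    "bard.google.com",
}

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname or any parent domain is on the deny list."""
    parts = hostname.lower().split(".")
    # Match the full hostname and every parent domain
    # (e.g., api.chat.openai.com matches chat.openai.com).
    return any(".".join(parts[i:]) in AI_DENY_LIST for i in range(len(parts)))

print(is_blocked("api.chat.openai.com"))   # True
print(is_blocked("intranet.example.com"))  # False
```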

Step 2: Discover and Baseline AI Usage

In this phase, you should establish a baseline of AI usage within your organization. You cannot assess the risk of something you cannot observe, so look at the traffic in your next-generation firewall (NGFW), proxy, CASB, or DNS logs. Distinguish between someone merely visiting the webpage of an AI application and someone actually using it; otherwise you will inflate your metrics. Look beyond ChatGPT, too; there are many other AI models that employees could leverage. Once you capture this data (a log-parsing sketch follows below), feed it into the risk assessment in Step 4.
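As an illustration, here is a minimal sketch of building that baseline from a proxy log export. The CSV layout (user, method, host, path columns) and the domain list are assumptions to adapt to your own NGFW, proxy, or CASB logs; the key idea is separating page views from actual usage.

```python
import csv
from collections import Counter

AI_DOMAINS = {"chat.openai.com", "bard.google.com"}  # extend well beyond ChatGPT

page_visits = Counter()   # users merely loading an AI app's web page
active_usage = Counter()  # users actually submitting prompts

with open("proxy_log.csv", newline="") as f:
    for row in csv.DictReader(f):  # expected columns: user, method, host, path
        if row["host"] not in AI_DOMAINS:
            continue
        # Rough heuristic: a POST usually carries a prompt or API call,
        # while a GET is often just a page view.
        if row["method"].upper() == "POST":
            active_usage[row["user"]] += 1
        else:
            page_visits[row["user"]] += 1

print(f"{len(active_usage)} users actively using AI applications")
print(f"{len(page_visits)} users who only visited an AI page")
```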

Step 3: Gain Understanding of AI Technology and Its Business Application

As with any technology, you can only defend AI effectively if you understand it. Security teams must ramp up their general knowledge of AI models and implementations.

Another critical component is understanding the potential AI business applications for your company and industry. Connect with your line-of-business colleagues to learn how they see AI’s benefits and use cases. Leverage information security liaisons or Business Information Security Officers if you have them.

Step 4: Conduct a Risk Assessment

Now that you understand the initial exposure and the underlying technology, it is time to conduct a risk assessment. Use whichever risk assessment methodology your organization has adopted; for example, you might leverage NIST Special Publication 800-30, “Guide for Conducting Risk Assessments.” CISA also provides a helpful document, “Guide to Getting Started with a Cybersecurity Risk Assessment.” Take the risk assessment results to the risk committee (a simple scoring sketch follows below) so the business can collectively decide how to proceed with AI adoption.
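To make this concrete, here is a minimal sketch of a qualitative risk register in the spirit of NIST SP 800-30, scoring each scenario by likelihood and impact. The scenarios, scales, and scores are illustrative assumptions, not findings.

```python
# Qualitative likelihood x impact scoring; swap in your own scales.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

scenarios = [  # hypothetical AI risk scenarios for the risk committee
    {"risk": "Sensitive data pasted into public AI chatbots",
     "likelihood": "high", "impact": "high"},
    {"risk": "Inaccurate AI output used in business decisions",
     "likelihood": "moderate", "impact": "moderate"},
    {"risk": "Malicious third-party AI plug-in",
     "likelihood": "moderate", "impact": "high"},
]

def score(s: dict) -> int:
    return LEVELS[s["likelihood"]] * LEVELS[s["impact"]]

# Present the register to the committee, highest risk first.
for s in sorted(scenarios, key=score, reverse=True):
    print(f"{score(s)}: {s['risk']} "
          f"(likelihood={s['likelihood']}, impact={s['impact']})")
```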

Step 5: Establish an AI Governance Model and Define Policies

Approving AI usage isn’t a one-time activity; you’ll have to provide ongoing AI governance. Align this governance to your organization’s overall governance model. Establish an AI governance subcommittee with key stakeholders from across the business, data, IT, and security teams. This group will be responsible for any material changes to AI strategy and policy for the organization.

Step 6: Establish Architecture and Applicable Security Controls

At this stage, you have identified AI threats, vulnerabilities, likelihoods, and impacts. You’ve shared the assessment results, and the business has decided to permit or enable AI usage. Now it’s time to consider your architecture.

For guidance on architecting your AI solutions, see “Building a Generative AI Strategy for SecOps” by my colleague Dylan Hancock. Be prepared to secure a hybrid world where some AI runs on-premises while other AI solutions live in the cloud, and apply the security controls relevant to each deployment architecture; a sketch of one such mapping follows below.
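As one way to keep a hybrid estate consistent, the sketch below maps each deployment model to a baseline control set. The control names are illustrative assumptions; align them with whatever control framework your organization already uses.

```python
# Illustrative baseline controls per AI deployment model; replace these
# with controls from your organization's framework (e.g., NIST, CIS).
CONTROL_BASELINES = {
    "saas":    ["SSO/MFA", "CASB/DLP inspection", "tenant data-retention review"],
    "cloud":   ["network segmentation", "KMS-managed encryption", "API gateway auth"],
    "on_prem": ["model/weights access control", "host hardening", "physical security"],
}

COMMON_CONTROLS = ["logging and monitoring", "vendor/model risk review",
                   "acceptable-use policy enforcement"]

def controls_for(deployment: str) -> list[str]:
    """Return the baseline controls for a given AI deployment model."""
    return CONTROL_BASELINES[deployment] + COMMON_CONTROLS

# A hybrid estate is the union of its deployment models:
for d in ("saas", "on_prem"):
    print(d, "->", controls_for(d))
```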

Bonus Step: Monitor AI Integrations and Plug-ins

There are many risks to consider when it comes to leveraging AI—too many to include in this blog—but I want to point out third-party risks, which I’m particularly concerned about. The addition of agents and plug-ins increases your potential attack surface.

OpenAI has released twelve initial ChatGPT plug-ins, and Google Bard will have similar integrations. These third-party integrations are here to stay and will only grow in number and capability. OpenAI says this of plug-ins: “Plugins can be eyes and ears for language models, giving them access to information that is too recent, too personal, or too specific to be included in the training data.”

There are real benefits to these plug-ins, but just as malicious Chrome extensions and WordPress plug-ins introduce risk into their respective ecosystems, third-party integrations will introduce risk to the AI models that leverage them. Whether you use generic or custom models, the need to interface with external services will remain, so be sure to risk-assess any third-party plug-in before enabling it; the sketch below shows one place to start.
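As a starting point for that review, here is a minimal sketch that pulls a ChatGPT plug-in’s manifest (published at /.well-known/ai-plugin.json) and surfaces the fields most relevant to a third-party risk assessment, such as the authentication type and the API endpoint your data would flow to. The field names reflect the manifest schema OpenAI published at launch, and the example domain is hypothetical.

```python
import json
import urllib.request

def fetch_plugin_manifest(domain: str) -> dict:
    """Download a plug-in's manifest from its well-known location."""
    url = f"https://{domain}/.well-known/ai-plugin.json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def summarize_for_review(manifest: dict) -> dict:
    """Pull the fields a reviewer should weigh before approving a plug-in."""
    return {
        "name": manifest.get("name_for_human"),
        "auth_type": manifest.get("auth", {}).get("type"),  # e.g., none vs. OAuth
        "api_url": manifest.get("api", {}).get("url"),      # where your data flows
        "contact": manifest.get("contact_email"),
        "legal": manifest.get("legal_info_url"),
    }

# Hypothetical example:
# print(summarize_for_review(fetch_plugin_manifest("plugin.example.com")))
```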

Keeping an Eye on AI

In conclusion, you must get ahead of AI as soon as possible. Find a way to implement AI securely so that it is transparent to employees. Finally, stay aware of the latest news and trends; the pace of change is unprecedented. You will need to stay current on the technology and on how adversaries are targeting AI.