13 May 2025
by David Brauchler III

Building Security-First AI Applications: Best Practices for CISOs 

Guest blog from David Brauchler III, Technical Director at NCC Group, as part of our #SeizingTheAIOpportunity campaign week 2025.

AI Adoption: New Challenges, Same Fundamentals 

With 2 out of 3 organisations now regularly using generative AI in their business and 1 in 5 Dev Ops professionals leveraging it across the software development lifecycle, AI adoption is deeply woven into enterprise solutions. 

The skyrocketing deployment of AI is exposing a critical need for a security-first approach in AI. While it might feel like uncharted territory, the reality is the way we approach security hasn’t changed. The fundamentals are the same; it’s just a different application. 

Here’s a quick look at some of the risks: 

  • Statistical Variance: We can count on traditional applications to behave the same way every single time (barring any bugs or other exotic situations). But AI is a statistical model by definition, so it introduces statistical variance to our applications. That means we can have a general idea of how it will behave but can never be certain because state-of-the-art models are constantly evolving. Not to mention, consistency can be a concern. Models can respond in many different ways to a prompt, but they have to pick one, and it may not always be the same way, especially if the question changes even slightly. And that’s not even considering the risk of someone intentionally manipulating it to misbehave. 

  • Software Supply Chain Considerations: A growing number of third-party business solutions rely on AI, and users may not even realise it. If these systems are being used to handle sensitive, personal, protected information, that’s a significant risk to your organisation. Do you know what systems are in use across your company? How well do you trust those third parties to handle your data responsibly? What if their systems get compromised? The chain of liability in the event of a breach can get extremely messy for both parties. 

  • Data Permanence: Not only do organisations need to be aware of how the models and/or vendors they choose use their data, but also the fact that, after training, data is permanently baked into the model. With a traditional application, if a mistake happens, you can contact the vendor and ask them to remove that log. But with AI, your data is training the model, and it becomes ingrained. Removing it is virtually impossible. 

  • Lack of Standards: While there are no industry-wide compliance standards in place, there are emerging frameworks like ISO/IEC 42001, NIST AI Standards, and the EU AI Act, for example. But until these are solidified, that leaves organisations to parse out their own model and limits the “hive mind” or crowdsourcing benefits of industry standardisation. Instead, organisations should lean on proven security best practices and architecture guidelines for designing secure AI applications. 

  • Copyright Implications: Currently, the U.S. Copyright Office has ruled that all output of AI systems is public domain unless you can prove that a human was instrumental in its generation. That means for creative organisations, or companies that use AI to generate assets like a logo or content, this content is likely not protected. 

 

How to build a secure AI adoption framework 

Adopting AI can feel overwhelming, but you're not alone. Many CISOs are aware of the risks but often lack the resources or consultancies to guide them. Let's explore some best practices for secure AI adoption. As a leader in AI application and integration security, we can help illuminate a path forward for your organisation, even if you're already well on your way. 

Here are eight best practices for adopting AI securely: 

  1. Start with a Vision: Establish clear, value-driven AI integration goals around how you intend to “do AI” in your organisation. What are you trying to achieve? Think about how you’ll protect data, review and select tools, and what architectures you’ll use. Just having an awareness of what you’re doing, why, and a thorough understanding of the risks will stop you from putting AI in place just for the sake of AI. 

 

  1. Draft a Security Reference Architecture: Create a document that outlines the policy and standard protocols for AI implementation in your organisation. It should set rules around security boundaries and controls, integration patterns, and best practices. There are some great models that already exist, which you can customise for your own unique situation. NCC Group can help to provide that foundation and customisation guidance. 

 

  1. Determine Data Management and Governance: Set parameters for model selection criteria and track changes over time to ensure data provenance. Understand how your data is being used by models you rely on—or conversely, if you’re building the model, set policies around how you’ll use your customers’ data. Create a process for handling sensitive data, including extraction if necessary. 

 

  1. Establish Model Output and Behaviour Monitoring Strategies: Since AI models are unpredictable (for reasons noted above), you’ll want to create a process for validating that your model is staying true to its expected scope and guidelines. You can do this manually by prompting it with a set of benchmarks to evaluate and classify the output by hand. Another option is to design a flag or complaint button into the UX that allows users to manually report non-compliant output. Some organisations set their model to capture and log outputs at regular intervals (every 30 prompts, for example) and manually review them for quality. Or, you could randomly sample the outputs it provides to verify they’re in line with your expectations. 

 

  1. Build in Trust by Design: How you structure AI systems can make or break the security posture. Start from a position of trust with some basic principles, including data-code separation that segregates trusted and untrusted models to prevent their interaction. This can prevent exposing your entire application assets to a compromised model or data set. 

 

  1. Conduct Threat Modelling: Make sure your DevOps team understands how AI changes threat models and that every project goes through a threat modelling exercise to ensure you’re managing risks the right way before you put it into production. Understanding how the model might behave can help you prepare for the unexpected. 

 

  1. Perform Dynamic Testing: Bring in AI red teamers to put your model and applications through some paces with a wide scope of analysis. Because bias is a well-known concern, too often organisations focus on testing their model for bias but ignore the potential for misuse of its capabilities. To be frank, you should be more concerned about your AI model enabling a bad actor access to delete your user accounts or manipulate your model than about it insulting someone. 

 

  1. Train and Validate: AI can be uncharted territory for even the most experienced engineers. You’ll want to bring in AI application security experts to train your team on the risks and strategies for protecting their builds from a holistic perspective. Once you’ve established foundational knowledge, validate protocol compliance with checklists and integration review processes. 

 

Remember the Fundamentals 

While AI is exciting, and it’s tempting to jump on the bandwagon to incorporate this emerging technology into your applications, products, and software stack, it’s essential that CISOs and their organisations balance innovation with security. Engineers, developers, and IT security managers need to proactively think about security from both data trust and access control points of view, rather than blindly throwing these components into their environments. 

While AI is changing in real-time and organisations may need to constantly remodel their architectures to secure their environments, the good news is that none of this is new from a security perspective. It changes our approach, but the fundamentals are the same. 

Authors

David Brauchler III

David Brauchler III

Technical Director , NCC Group

David Brauchler III is an NCC Group Technical Director in Dallas, Texas. He is an adjunct professor for the Cyber Security graduate program at Southern Methodist University with a master's degree in Security Engineering and the Offensive Security Certified Professional (OSCP) certification