Agentic AI

DAMSIC v1.0 - A Secure Agentic AI Adoption Framework

Max Corbridge
Cofounder
July 24, 2025

I’ve been closely following agentic AI for some time now, and the pace at which it is moving fills me with excitement for the future. Huge players like JP Morgan and CrowdStrike have already revealed how their agentic AI systems are solving real business problems and interfacing with clients. Only last week, OpenAI rolled out ‘ChatGPT Agent’, a general-purpose agentic assistant that promises to help automate any number of tasks.

These successes and leaps forward prove one thing: autonomy is the future. But autonomy had its start in the worlds of social media, marketing and sales, where the ‘worst case scenario’ for something going wrong might be a dodgy social media post or a double-booked sales meeting. When we are talking about production environments that underpin our nation’s infrastructure and daily life, we have an entirely different (and much smaller) appetite for risk.

So, what would it take for this technology to be widely adopted in enterprise production environments, including those within Critical National Infrastructure (CNI)? Well, I present to you version 1 of my ‘secure agentic AI adoption framework’, which is based on modern standards (regulation, codes of conduct, guidelines), previous research, and over six years of delivering security testing work in these production environments. Together, the six domains spell DAMSIC, for easy recall. I’d like to caveat the below with the fact that this is 1) a work-in-progress and 2) a very high-level version, in keeping with a newsletter / blog format.

1. Data privacy & sovereignty - a non-negotiable

The first question on everyone’s mind regarding AI usage is, “Where does the data go?”. Experience testing production AI systems for CNI clients says two approaches are acceptable. First, the DIY approach: run open-source models inside your own boundary, take the pain of deploying everything yourself and accepting less performant AI, but sleep easy knowing you have retained total data sovereignty. Second, leverage existing trust boundaries: services such as Azure OpenAI keep the model and data inside your existing Microsoft tenancy, giving you access to the latest frontier models without introducing a new data leakage avenue. Most CNI clients already trust cloud providers like Microsoft and Google with their data, and the NCSC even encourages it, so option 2 is by far the more common and performant choice.
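
To make option 2 concrete, here is a minimal Python sketch of calling a model through an Azure OpenAI deployment living inside your own tenancy. The endpoint, key source, and deployment name are placeholders I’ve invented for illustration, not real values.

```python
# Minimal sketch, assuming an Azure OpenAI resource already deployed inside
# your tenancy. Endpoint, key, and deployment name are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-tenant-openai.openai.azure.com",  # stays inside your Azure boundary
    api_key="<key-from-your-key-vault>",
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="gpt-4o-deployment",  # your deployment name, not the public API
    messages=[{"role": "user", "content": "Summarise today's incident tickets."}],
)
print(response.choices[0].message.content)
```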

2. Autonomous security - limiting the blast-radius

I’m yet to come across an AI system which I cannot jailbreak. Sadly, that is just the current state of play with all frontier AI models. That doesn’t stop us using them, but it does mean that we need to be extremely careful when giving these systems the ability to take actions in production environments.

My rule here is incremental autonomy with a ton of security controls added on top. Human-in-the-loop for critical and irreversible actions; strict guardrails around agents (limited tools, access control, role management, failsafes, kill switches); and utility-based agents which assess actions against a list of desirable outcomes, rather than goal-based agents with superficial goals, for fear of reward hacking. Start small (single tool, minimal impact) and grow from there. As we develop into multi-agent systems, we’ll need to double down on things like secure communication and handoffs, and scopes of authority.
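
As a rough illustration of that incremental-autonomy rule, the sketch below gates irreversible tools behind human sign-off while letting low-impact tools run autonomously. The tool names and the approval mechanism are assumptions for the example, not a prescribed design.

```python
from datetime import datetime, timezone

# Illustrative low-impact tool; a real agent would register many of these.
def lookup_ticket(ticket_id: str) -> str:
    return f"ticket {ticket_id}: open"

TOOL_REGISTRY = {"lookup_ticket": lookup_ticket}
IRREVERSIBLE_TOOLS = {"delete_record", "restart_service"}  # require sign-off

def audit_log(tool: str, args: dict, approved_by: str | None) -> None:
    # In production this record would feed the SIEM; printing keeps the sketch runnable.
    print(f"{datetime.now(timezone.utc).isoformat()} tool={tool} args={args} approver={approved_by}")

def execute_tool(tool: str, args: dict, approved_by: str | None = None):
    # Human-in-the-loop: irreversible actions never run without a named approver.
    if tool in IRREVERSIBLE_TOOLS and approved_by is None:
        raise PermissionError(f"'{tool}' is irreversible: human sign-off required")
    audit_log(tool, args, approved_by)
    return TOOL_REGISTRY[tool](**args)

execute_tool("lookup_ticket", {"ticket_id": "INC-1234"})  # allowed autonomously
```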

This is perhaps the most intricate of the domains and the one which will require the most work to secure, but years of red teaming in production environments makes me believe that with the right approach this is certainly possible.

3. Monitoring & red teaming - assume compromise, detect fast

On the topic of red teaming, let’s start with an assumption: no control survives first contact with production. Logs and monitoring for anomalous behaviour therefore matter more than models. A forensic audit trail is not a nice-to-have, but a core requirement for secure usage, incident response, compliance and much more. Continuous logs should feed straight into the SIEM, and clients will need custom detections for anomalous tool sequences, spikes in usage, and use-case-specific concerns.
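
As a toy illustration of what such custom detections might look like, the sketch below flags tool-call pairs never seen during a baselining period, plus simple usage spikes. The baseline set and thresholds are invented for the example; real detections would live in your SIEM.

```python
from collections import deque

# Toy detection rules: flag unseen tool-call bigrams and call-rate spikes.
# Baseline set and thresholds are illustrative assumptions.
BASELINE_BIGRAMS = {("lookup_ticket", "summarise"), ("summarise", "notify_oncall")}
MAX_CALLS = 30
WINDOW_SECONDS = 60

recent_calls: deque = deque(maxlen=MAX_CALLS)
last_tool = None

def observe(tool: str, timestamp: float) -> list[str]:
    """Record a tool call and return any alerts to forward to the SIEM."""
    global last_tool
    alerts = []
    if last_tool is not None and (last_tool, tool) not in BASELINE_BIGRAMS:
        alerts.append(f"anomalous tool sequence: {last_tool} -> {tool}")
    recent_calls.append(timestamp)
    # Spike: the last MAX_CALLS calls all landed inside one window.
    if len(recent_calls) == MAX_CALLS and timestamp - recent_calls[0] < WINDOW_SECONDS:
        alerts.append("usage spike: call rate exceeded baseline window")
    last_tool = tool
    return alerts
```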

The second pillar is adversarial testing. This is where we stress-test the system, introducing novel or unforeseen operating conditions: the spanner in the works that provokes the agent behaviours that keep us up at night. Quarterly red teams probing the agentic system will uncover behaviours normal unit tests never touch, and give us the opportunity to test and improve our incident response playbooks.

4. Supply-chain & SBOM - securing the weakest link

Agentic systems can be made up of any number of components: vector databases, document stores, third-party tools, Python packages, and foundational models. Each dependency adds to the attack surface. To keep tabs on this we’ll follow the NCSC’s guidance around secure AI system development and maintain a Software Bill of Materials (SBOM) for our AI systems. Once we map out each component of our agentic system we can adopt Supply-chain Levels for Software Artifacts (SLSA) controls which prevent tampering or unauthorised modification of these critical pieces. Finally, always have a manual fallback: when we are dealing with potential life-or-death scenarios, ‘that service is down’ is not an acceptable outcome if everything grinds to a halt.
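
As a rough sketch of what that integrity checking could look like in practice, the snippet below refuses to load any component whose hash doesn’t match a pinned SBOM entry. The simplified SBOM format and the hash value are placeholders, not a real CycloneDX/SPDX document.

```python
import hashlib
import pathlib

# Simplified, pinned SBOM: component filename -> expected SHA-256 digest.
# The digest here is a placeholder for illustration only.
PINNED_SBOM = {
    "model.safetensors": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_component(path: str) -> bool:
    """Refuse to load any artifact that doesn't match its pinned SBOM hash."""
    digest = hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()
    expected = PINNED_SBOM.get(pathlib.Path(path).name)
    if digest != expected:
        raise RuntimeError(f"{path} does not match pinned SBOM hash: refusing to load")
    return True
```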

5. Model Integrity & robustness - resilience against bad actors

As previously mentioned, the uncomfortable truth of using AI systems right now is that, given time, every frontier model can be prompt injected, jailbroken and misaligned. Don’t forget, whilst agentic AI promises autonomy via tool usage, it is still using frontier LLMs under the hood for action planning, reasoning, tool calling and more. Until the research world solves this issue, we compensate with layers of security controls:

  1. Input and output guardrails (topical alignment, malicious input sanitisation, enforced languages, hallucination detection)

  2. Adaptive lockout based upon nature and quantity of interactions

  3. Hashed and signed training data

  4. Secure memory and state management

  5. Secure authentication and authorisation of agent identities

  6. Autonomous action verification, perhaps via supervisor agent

  7. Anti-automation (rate limiting, contextual quotas)

  8. Prompt engineering (system message, data spotlighting, etc.)

Whilst this isn’t a perfect solution, layering these controls massively reduces an attacker’s operating space when targeting the underlying LLMs.
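
To make one of those layers concrete, here is a minimal sketch combining input guardrails (layer 1) with anti-automation (layer 7). The injection patterns and session quota are illustrative assumptions; a production filter would be far more sophisticated.

```python
import re

# Illustrative input guardrail + contextual quota. Patterns and limits are
# assumptions for the sketch, not a complete injection filter.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.I),
    re.compile(r"you are now", re.I),
]
REQUESTS_PER_SESSION = 50
session_counts: dict[str, int] = {}

def screen_input(session_id: str, user_input: str) -> str:
    session_counts[session_id] = session_counts.get(session_id, 0) + 1
    if session_counts[session_id] > REQUESTS_PER_SESSION:
        raise PermissionError("contextual quota exceeded: adaptive lockout triggered")
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_input):
            raise ValueError("potential prompt injection blocked at the input guardrail")
    return user_input  # only screened input reaches the underlying LLM
```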

6. Compliance & Regulation - rapidly approaching

Regulators are currently scrambling to put appropriate regulation around AI in place, as we’re advancing and leveraging the technology at a pace which far exceeds our ability to use it safely. In Europe the EU AI Act adds sweeping requirements from 2026, NIS 2 tightens operational controls, and ISO 42001 is already the audit checklist of choice. Whilst UK clients don’t face mandatory compliance right now, sector-specific codes of conduct are expected to become mandatory in the short term, and it is safe to assume that compliance requirements are coming thick and fast for AI usage. DAMSIC calls for evidence by design: risk assessments, auditable design and compliance artefacts baked into SDLC practices should allow us to proactively and continually evidence against relevant compliance requirements.
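
As a loose sketch of what ‘evidence by design’ could look like in an SDLC, the snippet below emits a compliance artefact per release that can be committed alongside the code for auditors. The field names are assumptions rather than an official ISO 42001 schema.

```python
import json
from datetime import datetime, timezone

# Sketch only: field names are illustrative assumptions, not a formal schema.
def emit_compliance_artifact(system: str, risk_assessment_ref: str, approver: str) -> str:
    artifact = {
        "system": system,
        "risk_assessment": risk_assessment_ref,
        "approved_by": approver,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "controls_in_place": ["human-in-the-loop", "SBOM pinned", "SIEM detections live"],
    }
    # In CI, this JSON would be committed alongside the release as audit evidence.
    return json.dumps(artifact, indent=2)

print(emit_compliance_artifact("triage-agent", "RA-2025-014", "security-lead"))
```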

Putting it together

DAMSIC is not perfect, and nor is it finished. I see it as a minimum viable safety-net. Each of these domains is an old problem applied to a new technology. However, when the stakes are this high we cannot afford to be picky: all of these domains should be in place before widescale adoption of agentic AI can take place. If you think I’m missing something I’d love to hear it! Please ping me on LinkedIn and we can start a conversation.

If you made it this far and enjoyed this blog, then check out my other posts around all things AI security. If you like what you see, consider subscribing (I’m trying to hit 100 subscribers!) to get weekly updates on AI security delivered straight to your inbox. Until next time folks!
