The global economy of the 21st century revolves around industries rapidly adopting technology, particularly artificial intelligence (AI), to enhance productivity and ensure future readiness. However, if this adoption is not handled correctly, both businesses and their clients may face “digital risks,” primarily concerning cybersecurity. To tackle these issues, there has been a surge of start-ups specialising in a field known as “Security for AI.”
We have Israeli start-up Noma and United States-based competitors Hidden Layer and Protect AI. However, in today’s episode of the “Start-up of the Week,” International Finance will talk about British University spinoff Mindgard.
In the words of Professor Peter Garraghan, the CEO and CTO of the start-up, “AI is still software, so all the cyber risks that you probably heard about also apply to AI. But, if you look at the opaque nature and intrinsically random behaviour of neural networks and systems.”
The Mindgard Way Of Ensuring AI Security
Established in 2022, the start-up first hit the headlines in 2024, as it emerged as the winner of the “Cyber Innovation Prize,” at Infosecurity Europe 2024. Mindgard’s approach to ensuring “Security for AI” is a thing called “Dynamic Application Security Testing for AI” (DAST-AI), which targets vulnerabilities that can only be detected during runtime. The process involves continuous and automated red teaming, a way to simulate attacks based on Mindgard’s threat library.
Mindgard’s technology has been a brainchild of Professor Garraghan’s academic background as a researcher focused on AI security. For him, LLMs (Large Language Models, type of AI programme that can generate and recognise texts) are rapidly changing, and so do the threats around these models. Using his ties with Lancaster University, Professor Garraghan envisions Mindgard automatically own the IP to the work of 18 additional doctorate researchers for the next few years.
While it has ties to research and development activities in the “Security for AI” field, Mindgard has very much become a commercial product already, and more precisely, a SaaS (Software-as-a-Service) platform. Despite having enterprises as clients, Professor Garraghan’s company also works with AI start-ups, with many from the United States, that need to show their customers they do AI risk prevention.
After raising a 3-million-pound seed round in 2023, Mindgard is now announcing a new USD 8 million round led by Boston-based .406 Ventures, with participation from Atlantic Bridge, WillowTree Investments, and existing investors IQ Capital and Lakestar. The funding will help with building the team, product development, research and development, but also expand into the United States.
Key Products And Services
Talking about Mindgard’s R&D activities, we have DAST-AI or “Dynamic Application Security Testing for AI” to begin with, which, powered by the world’s largest attack library for AI, enables red teams (group of security professionals who simulate cyber-attacks to test an organisation’s security), security and developers to swiftly identify and remediate AI security vulnerabilities.
Tech professionals can find and remediate their AI vulnerabilities on a proactive basis, by integrating into existing CI/CD automation and all SDLC stages, as DAST-AI provides extensive model coverage beyond LLMS, including image, audio and multi-modal, thereby empowering the red teams to Identify AI risks that static code or manual testing cannot detect.
Also, DAST-AI helps its users to reduce testing times on their AI models from months to minutes, by helping them to gain actionable visibility with the most accurate AI security insights, thereby empowering teams to swiftly address emerging threats.
To access DAST-AI, the users need to point the Mindgard platform to their existing AI products and environments, following which the tool starts its things by effortlessly running custom or scheduled tests on the client’s AI models, generating a detailed view of scenarios and threats to the model, apart from quickly analysing them. The clients can integrate report viewing smoothly into their existing systems and SIEM (Security Information and Event Management).
DAST-AI works on the “Testing, Remediation and Training” model where world-class AI expertise from academia and industry is providing continuous security testing across the technology lifecycle, apart from integrating into existing organisational workflow and automation, thereby helping Mindgard’s clients to safeguard their AI assets by continuously testing and remediating security risks, ensuring the security of both third-party AI models and in-house solutions.
Next is “Artifact Scanning,” which ensures AI systems are secure and function as intended in live environments. It’s a real-time threat response tool that protects AI models with continuous monitoring and advanced security testing. Mindgard’s “Run-Time Artifact Scanning” identifies vulnerabilities, analyses risks, and integrates seamlessly into the user’s workflows to keep AI investments secure and compliant.
If a client connects his/her AI models with Mindgard for run-time artifact scanning, the process supports a variety of frameworks and deployment environments. “Artifact Scanning” carries out comprehensive tests on the AI model including adversarial attacks and configuration checks, to identify weaknesses in real-time, apart from getting a detailed view of scenarios and threats. The tool then integrates results into the client’s existing systems for streamlined monitoring and incident response, helping businesses gain immediate visibility into their AI security posture.
Artifact Scanning’s offline profiling leverages analytics and Mindgard’s AI threat intelligence repository to identify vulnerabilities and attack patterns that can be addressed before deployment. Run-time testing builds on this foundation by evaluating ML model artifacts in a secure staging environment, detecting dynamic risks such as prompt injection that static analysis cannot uncover.
Together, these processes ensure that both known and emerging threats are addressed, providing robust protection for businesses’ AI investments. Continuous monitoring ties everything together, enabling proactive threat detection and ongoing security assurance.
AI Red Teaming And Pentesting As A Service
Mindgard’s red teaming services combine deep expertise in cybersecurity, AI security, and threat research to complement its DAST-AI solution. The start-up’s security experts specialise in adversarial testing techniques that are tailored to 21st century enterprises’ specific business objectives and AI environments. By leveraging its unique skill set, Mindgard is empowering its clients’ data science and security teams with actionable insights to strengthen defences and fully protect commercial AI systems.
Mindgard conducts a thorough analysis of a business’ AI/ML operations lifecycle, along with a deep review of the client’s most critical models to identify risks that could threaten the organisation. The findings are mapped to industry best practices, including NIST, MITRE ATLAS, and OWASP, delivering actionable guidance to strengthen cyber defences and reduce organisational risk.
Mindgard delivers a training programme designed to equip data science and security personnel with a deep understanding of adversarial machine learning tactics, techniques, and procedures (TTPs), along with the most effective countermeasures to defend against them. The training includes actionable insights on integrating ML model testing into a company’s internal processes and an overview of leading offensive AI tools, such as PyRIT, Garak, PINCH and more.