Artificial Intelligence isn’t new. It’s been around for over 50 years, but only recently has it become accessible, powerful, and, let’s be honest, marketable. The rise of large language models (LLMs) like ChatGPT and Gemini has sparked a wave of adoption across all industries, including health and safety.
These tools are undoubtedly powerful. But they are not without limitations, and in safety-critical environments those limitations carry real consequences.
The Power and the Problem with LLMs
LLMs are trained on vast swathes of publicly available data. They’re excellent for general tasks: writing summaries, drafting emails, even explaining technical concepts. But there’s a fundamental issue: they often hallucinate.
A hallucination is when the AI confidently gives you information that sounds plausible but simply isn’t true. That might be acceptable in casual or marketing contexts. But in a safety-critical function? It’s unacceptable.
Take British Standards as an example. Many are paywalled by BSI, so LLMs like ChatGPT haven’t actually read them. When asked about them, these tools might return an answer with apparent authority, but it’s often based on third-party blogs or user forums. Those summaries might be wrong. And if you’re using that information in a fire risk assessment, a method statement, or policy development, you could be making decisions based on flawed data.
The Data Privacy Question
Another serious concern is data security. Uploading your internal documents, reports, or risk assessments to a public LLM platform raises major unknowns:
- Where is your data stored?
- Who has access to it?
The truth is, it’s hard to get clear, enforceable answers to these questions. For organisations dealing with confidential or commercially sensitive material, that’s a risk most can’t afford.
Why Intuety Took a Different Path
We built Intuety with these realities in mind. Our AI is not a generic LLM; it’s a custom AI platform built on a structured knowledge graph, trained exclusively on trusted, verified content:
- Your own policies and procedures
- Legislation from sources like the HSE
- Sector-specific guidance and standards
- Your past risk assessments and reports
This means we can ensure the accuracy and traceability of everything the system outputs. You know exactly what the AI is referencing, and it never hallucinates or guesses. If the data isn’t there, it doesn’t pretend it is.
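To make that principle concrete, here is a minimal, hypothetical sketch of grounded retrieval: an answer is only returned when it can be traced to a verified source document, and the system declines rather than guesses when nothing matches. The document names, threshold, and keyword-overlap matching below are illustrative assumptions for this sketch, not Intuety’s actual implementation.

```python
# Illustrative sketch only: answer questions solely from a fixed set of
# verified documents, and refuse to answer when no source covers the topic.

VERIFIED_SOURCES = {
    "Working at Height Policy v3": "Ladders may only be used for short-duration, low-risk tasks.",
    "HSE INDG401": "The Work at Height Regulations 2005 require that work at height is properly planned and supervised.",
}

MIN_OVERLAP = 3  # assumed minimum keyword overlap before an answer is returned


def answer_from_sources(question: str) -> dict:
    """Return an answer only when it can be traced to a verified source."""
    terms = set(question.lower().split())
    best_title, best_overlap = None, 0
    for title, text in VERIFIED_SOURCES.items():
        overlap = len(terms & set(text.lower().split()))
        if overlap > best_overlap:
            best_title, best_overlap = title, overlap
    if best_title is None or best_overlap < MIN_OVERLAP:
        # If the data isn't there, say so rather than guessing.
        return {"answer": None, "source": None, "note": "No verified source covers this question."}
    return {"answer": VERIFIED_SOURCES[best_title], "source": best_title}


print(answer_from_sources("What do the regulations say about planning work at height?"))
```

In a real platform the crude keyword matching would be replaced by the knowledge graph and proper retrieval, but the behaviour that matters is the same: every answer carries its source, and "no answer" is a valid output.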
We’re also ISO 27001 compliant and operate in a tightly controlled, auditable environment. You know where your data is stored. You retain ownership. And you maintain oversight.
The Bottom Line: Act Now
Staff are already using tools like ChatGPT every day, including for work that affects safety-critical decisions.
That means LLMs are being used, often without oversight, to draft assessments, review technical content, and even make operational recommendations. This is a significant governance risk, and it’s happening right now inside your organisation, whether you’ve sanctioned it or not.
Now is the time to act.
This isn’t just about embracing AI; it’s about ensuring the AI your teams use is safe, accurate, and aligned with your legal and operational responsibilities.
Waiting is no longer an option. If you’re serious about safety, it’s time to bring AI under control.
See how Intuety can support safe, structured, and compliant AI use in your organisation.