Not 100% Accurate, but 100% Useful

Here’s a little secret about AI: it is not 100% accurate.

Shock horror, I know. This means it regularly outputs information that is just plain wrong. While that might not truly be a secret, it is a critical fact that often gets buried under the hype of ever-improving models. If these models were perfect, they wouldn’t need to keep improving.

As you integrate Large Language Models (LLMs) or other AI into your business systems, you have to account for one core truth: AI is probabilistic in nature.

This is fundamentally different from the deterministic software we’ve used for decades. In traditional coding, a specific input leads to a clearly defined, predictable output. With AI, you get the output that is statistically most likely according to a probability distribution. Sometimes, that most likely answer isn't the right one.

I don’t point this out to be negative - AI is unquestionably powerful and can bring immense value to your business. Rather, I want to highlight the importance of thinking differently. To succeed, you have to build deterministic systems around probabilistic engines.

The Notiv Lesson: 100% Useful

When we started developing our AI meeting assistant at Notiv back in 2018, we had to face a harsh reality: speech-to-text transcription at that time was only about 80-85% correct. Put differently: roughly two words out of every ten were garbage.

We coined a phrase that framed our entire product design: How do you take something that isn’t 100% accurate and make it 100% useful?

The answer lay in focusing on the “job to be done”. Our users weren’t actually looking for a perfect, word-for-word transcript. Their real goal was to quickly review key points they missed or follow up on action items. A less-than-perfect transcript became 100% useful when combined with thoughtful user experience (UX) design that highlighted summaries and tasks rather than just raw text.

The Magic of Aggregated Statistics

Another powerful tool in the quest for utility is aggregation. To understand the reliability of an AI system, it is helpful to distinguish between individual precision and aggregate accuracy.

Think of AI predictions like a large crowd guessing the weight of an object. Any one person’s guess might be off by 20%, with some guessing too high and others too low, so you wouldn’t base a critical business decision on that single data point. However, when you average the guesses of 1,000 people, the high and low errors tend to cancel each other out. This “wisdom of crowds” effect produces an average that can be remarkably close to the truth.
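This averaging effect is easy to simulate. A minimal sketch, with invented numbers (a 500-gram object and a ±20% guess spread) purely for illustration:

```python
import random

random.seed(42)  # reproducible demo

TRUE_WEIGHT = 500  # the object's actual weight, in grams

def one_guess():
    # Each person's guess is off by up to 20% in either direction.
    return TRUE_WEIGHT * random.uniform(0.8, 1.2)

single = one_guess()
crowd_average = sum(one_guess() for _ in range(1000)) / 1000

print(f"single guess error:  {abs(single - TRUE_WEIGHT) / TRUE_WEIGHT:.1%}")
print(f"crowd average error: {abs(crowd_average - TRUE_WEIGHT) / TRUE_WEIGHT:.1%}")
```

A single guess can miss by the full 20%, while the average of 1,000 independent guesses typically lands within a percent or two of the truth.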

This same principle can be applied to AI model outputs. For example, imagine a system that detects customer complaints on business calls with 80% accuracy. While the system may have a margin of error on any single call, those inaccuracies smooth out when analysed over a large dataset. Whether you are estimating total weekly complaint volumes to staff your support centre or comparing complaint rates across different store locations to identify training needs, the aggregate statistics average out the noise to reveal the underlying signal of your business performance.
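The store-comparison case can be sketched in a few lines. All numbers below are invented for illustration: a hypothetical detector that labels each call correctly 80% of the time, and two stores with genuinely different complaint rates. Each measured rate carries some bias from false positives, but over thousands of calls the per-call noise averages out, so the comparison between stores remains reliable:

```python
import random

random.seed(0)  # reproducible demo

def measured_rate(true_rate, n_calls, accuracy=0.80):
    """Fraction of calls flagged by a detector that labels each
    call correctly with probability `accuracy`."""
    flagged = 0
    for _ in range(n_calls):
        is_complaint = random.random() < true_rate
        correct = random.random() < accuracy
        # A wrong label flips the truth (false positive or false negative).
        flagged += is_complaint if correct else not is_complaint
    return flagged / n_calls

# Store A genuinely gets more complaints than store B.
rate_a = measured_rate(true_rate=0.15, n_calls=5000)
rate_b = measured_rate(true_rate=0.05, n_calls=5000)

print(f"store A measured complaint rate: {rate_a:.1%}")
print(f"store B measured complaint rate: {rate_b:.1%}")
```

Even though no single call's label can be fully trusted, the aggregated rates consistently rank store A above store B, which is the decision-relevant signal.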

Old-Fashioned Engineering (Guardrails)

A further way to handle AI inaccuracy is through “guardrails” - system and technical controls that ensure outputs remain within safe, predictable bounds.

Using the same example of a complaint-detection system: if you simply ask an LLM whether a call contained a complaint, the accuracy will fluctuate. However, if you augment the task by requiring the AI to provide a direct citation from the transcript, you can wrap this in a layer of deterministic engineering.

By using a simple text-parsing block to verify that the citation actually exists in the source transcript, you gain certainty. You can then use UX design to allow a human user to click that citation and listen to the audio at that exact timecode. This turns an uncertain AI output into a verifiable, grounded insight.
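The deterministic check itself can be a few lines. A minimal sketch, assuming the LLM has been prompted to return its verdict alongside a verbatim quote (the dict shape here is invented for illustration):

```python
def verify_citation(llm_output: dict, transcript: str) -> bool:
    """Accept a complaint verdict only if its citation is a verbatim
    substring of the source transcript (whitespace-normalised)."""
    citation = llm_output.get("citation", "")
    if not citation:
        return False

    def norm(s):
        # Collapse whitespace and ignore case so trivial formatting
        # differences don't reject a genuine quote.
        return " ".join(s.split()).lower()

    return norm(citation) in norm(transcript)

transcript = "Caller: I have been waiting two weeks and nobody called me back."
good = {"complaint": True, "citation": "nobody called me back"}
bad = {"complaint": True, "citation": "I want a full refund"}  # hallucinated

print(verify_citation(good, transcript))  # True  -> safe to surface
print(verify_citation(bad, transcript))   # False -> discard or flag for review
```

The parsing block is fully deterministic: a hallucinated citation can never pass, regardless of how confident the model sounded.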

Final Thoughts

In delivering the benefits of AI to your business, you must remember that probabilistic models are an entirely different beast from traditional software.

Any given output may be imprecise. Your job isn't to chase a 100% accuracy rate that may never come. Your job is to ensure the system is 100% useful. Reframing your perspective through this lens is another powerful way to ensure your AI adoption actually succeeds.
