AI Trust and Use Case Thoughts 1
How much we can or should trust AI has been on my mind a lot this week. It’s been relevant at work in going through risk assessments of AI tools and features. It grabs my attention even more outside of work because there’s just so much noise about this topic all the time. There has been since ChatGPT exploded as the fastest-growing consumer app in history, and it seems to keep getting louder.
It’s a critically important topic, no doubt; but it’s also overblown and confused in some areas, I think.
It’s critically important …
That organizations apply AI governance principles when selecting and using AI tools or AI features in their environments: risk assessments of the tools, features, and vendors to ensure their data and people are safe, and that their usage does everything possible to avoid biased results and support fair, responsible, ethical AI use cases.
That businesses and individuals make sure their data is not being used to train the AI model, and that features that allow AI tools to remember things about them come with clear guidance on how to opt in or out, and make it easy to delete all of that “memory” at any time.
For individuals to remember that “trust, but verify” is essential. AI tools, like human beings, do not always get things right. Sometimes they get them badly wrong. If an AI tool gives you a brilliant response that says tortoises can run faster than greyhounds, verify that. Often the tool will cite source links that can be reviewed. When it doesn’t, we can do our own searches to learn more.
To recognize boundaries in AI assistant usage. Remember that they are software, not close friends or our most trusted advisors.
There are more examples of course, but most of them are pretty basic things like the ones I’ve listed, if we just take time to think about them.
It’s overblown and confused when …
AI is blamed for human error, and error is a gentle word in many cases. A lawyer gets torn apart by a judge for showing up with legal documents and key components of their case created entirely by AI. That shouldn’t register on the scoreboard as untrustworthy AI; it should be scored under some sort of “Dummy Lawyer” category. The same goes for journalists and writers who get caught publishing something as their own work with chunks that are easily discovered to have been written by AI. Dumb writers, if they don’t state that AI was used in X portion of their piece.
Reactions to hallucinations - confidently stated, wildly wrong responses from AI - are overly dramatic. I think most of us know quite a few human beings - friends, colleagues, teachers, “influencers” - who are outstanding at presenting wildly wrong and sometimes dangerous information.
AI tools are not perfect. That shouldn’t be a shock to any of us. This is why working with AI - not letting it do all of our work, make all of our decisions, or count for more than our own judgement and critical thinking - is the best approach.
Of course there are some much bigger, much more complex issues around trusting AI. Top of that list might be interpretability, the ability to understand how AI models work internally. That’s something even the creators of the best models have not yet fully solved.
But … a lot of this can still be figured out with common sense and the use of long-standing principles and practices for measuring and assessing software (hat tip to my boss, who will never read this, for reminding me of this).

I saw a stat comparing the mistakes (hallucinations) AI makes today with the mistakes (misdiagnoses) doctors make. Doctors win.