In conversations about artificial intelligence, most attention goes to capability. What can the system do? How should it be used?
But capability is only half the equation.
The other half is human judgment, and judgment does not come pre-calibrated.
When someone brings home a new Nintendo Switch, one of the first things to do is calibrate the controllers before starting a game.
Nothing is broken; the system just needs a proper center point so movement reads correctly. Without calibration, stick drift creeps in, aim goes off, and the game turns frustrating fast. Human judgment in AI collaboration works the same way.
Without a clear understanding of what generative AI actually is, judgment drifts. Fluency gets mistaken for truth. Confidence gets mistaken for competence. Convenience quietly replaces thinking. The real problem is not AI itself, but the mismatch between human expectations and how the system actually operates.
So what does calibration actually require?
Four things. Understanding the nature of AI – what it is and how it produces output. Understanding its potential – what it can genuinely do well. Understanding its limits – where it fails, hallucinates, or misleads. And understanding its risks – the exposures that emerge when judgment is not applied. None of this requires technical depth. It requires the kind of informed awareness we bring to any new tool or environment before relying on it.
Once judgment is calibrated, AI collaboration changes. You know when to trust the output and when to challenge it. You know what questions to ask and what to verify. You use AI to expand what you can do – not to replace the thinking you should be doing yourself.
In coming columns, I will explore each of these four dimensions in turn – and what they mean for all of us navigating the AI era.
Frank Ng is a retired NASDAQ CEO who co-authors this column with his son Ryan, following the publication of their book Hey AI, Let’s Talk!