Ask someone what a calculator does and they will answer instantly: It computes. Ask what a search engine does: It finds. Ask what generative AI does, and most people pause.
Some say it thinks. Others say it knows things. A few say it predicts.
Only the last answer is close.
That gap is not trivial. The mental model you bring to artificial intelligence determines what you trust, what you verify, and how badly it can mislead you without your noticing.
Most people have spent their careers with systems that are deterministic: input goes in, instructions execute, consistent output comes out. That behavior is stable by design.
Generative AI breaks from that paradigm entirely. It does not execute instructions. It infers. Given what has come before, it generates the most statistically plausible continuation. Its own creators acknowledge that in crucial ways these models remain black boxes, even to them.
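A minimal sketch of "most statistically plausible continuation," using a toy bigram word model. Real large language models use neural networks over tokens rather than raw word counts, but the underlying principle is the same: the next output is whatever the training data makes most likely, not whatever is true.

```python
from collections import Counter, defaultdict

# Toy training corpus: count which word follows which.
corpus = "the cat sat on the mat the cat ate the fish".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_plausible_next(word):
    """Return the continuation seen most often after `word` in training."""
    return following[word].most_common(1)[0][0]

print(most_plausible_next("the"))  # "cat" -- it followed "the" most often
```

Note that the model answers "cat" not because cats are relevant to anything, but because that pattern dominated its training data. Scale this idea up by many orders of magnitude and you have the core of generative AI: plausibility, learned from patterns, with no concept of correctness.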
This is why the familiar mental models fail. Treat AI as a search engine, and you assume it is retrieving facts from a reliable source. Treat it as a database, and you assume the answer already exists in stored form. Treat it as a domain expert, and you assume the conclusion rests on verified reasoning. None of these hold.
The system encodes patterns, not truth. It can surface a useful answer and a confident fabrication in exactly the same tone, with exactly the same fluency.
What matters is the structural implication. Generative AI industrializes pattern exploration, surfacing possibilities at a scale and speed no human or team could match. That is the source of its power.
But it has no internal mechanism for determining whether what it generates is correct. Plausibility and accuracy are not the same thing. The system cannot tell them apart. You have to.
Understanding the nature of AI is not a technical exercise. It is the difference between knowing what instrument you are playing and assuming it works like the last one.
Frank Ng is a retired NASDAQ CEO. He co-authors this column with his son Ryan, with whom he published the book Hey AI, Let’s Talk!