The work looks better now. That is not proof the person is better.
I call it “synthetic competence” – high-quality output with low ownership. A bilingual deck that sounds sophisticated. A memo that reads board-ready. Then the first real question lands: Why this assumption? What did you trade off? What would change your mind? Silence is the tell.
Artificial intelligence does not force this. It invites it. The model absorbs ambiguity, fills gaps with plausible guesses, and delivers polished text in seconds. The temptation is not to cheat. It is to skip. Skip defining success. Skip testing your own logic. Skip wrestling with uncertainty long enough to form a real position. Because the output looks right, people stop checking. Managers confuse fluency with competence. Individuals start believing the voice on the page is their own. Then they are asked to defend it, live, in front of someone who matters.
The deepest risk is not that AI becomes capable. It is that humans outsource judgment, not just output.
The alternative is what I call “augmented agency” – higher-complexity outcomes with AI, while retaining ownership of goals, reasoning, and responsibility. Ownership is proven, not claimed: define the target before the model runs, explain choices under critique, revise independently when the logic fails.
The test is uncomfortable. Before opening the model, know what decision the work must support. After it responds, demand the skeleton: assumptions, evidence, failure modes. Then try to break it. Change the audience. Flip a constraint. Withhold a key input. If you cannot reconstruct the logic without the model, the competence is synthetic. If you cannot explain it, do not claim it.
The next promotion may still go to the polished deck. The lasting winner will be the one who can defend it.
Frank Ng is a retired NASDAQ CEO who co-authors this column with his son Ryan after publishing their book, Hey AI, Let’s Talk!