
AI & UXR, CHAT GPT, HUMAN VS AI, OPEN AI
Better Answers, Less Nonsense: How ChatGPT Learns
3 MIN · Apr 8, 2025
I've been using ChatGPT for quite a while now, and it's exciting to see how it has developed. While it used to answer me charmingly but often incorrectly, it is now much more critical, careful, and accurate. In this article, I'll discuss why that is, what has improved about its ‘skepticism logic,’ and where it still has challenges.
1. It used to be too eager – and often wrong
A classic example of this is the question: "How many letters 'r' are in the word 'Strawberry'?". In the past, ChatGPT would probably have quickly replied, "2". Sounds plausible at first, doesn't it? It simply grabbed a plausible-looking number and went with it. If I then asked, "Are you sure?", it would start thinking and answer correctly: "3".
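The letter-counting task is trivial for ordinary code, which is exactly why the old guess-first behaviour stood out. A minimal sketch of the deterministic check:

```python
# Count occurrences of 'r' deterministically - no pattern-matching guesswork.
word = "Strawberry"
count = word.lower().count("r")
print(count)  # 3
```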
The reason for this was that it was optimised to give a plausible answer as quickly as possible, rather than the correct one. The pattern was clear: it wanted to please, not necessarily be correct. This behaviour was also evident in other areas:
Calculations: "What is 137 x 42?" - It used to give a plausible but wrong answer. Today it is much better at providing accurate calculations.
Estimates: "How many golf balls fit into an Airbus A380?" - It used to say something very ambitious. Today it gives a more realistic estimate and points out which factors influence the answer.
Assumptions in questions: "What does Albert Einstein say about AI?" - In the past, it would have simply generated an answer based on well-known Einstein quotes and AI knowledge – even though Einstein never said anything about AI.
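Both the multiplication and the golf-ball question can be checked with a few lines of code. The sketch below verifies the exact arithmetic and runs a rough Fermi estimate; the cabin volume and packing efficiency are my own rough assumptions, not figures from the article:

```python
import math

# The exact arithmetic the model used to get wrong:
print(137 * 42)  # 5754

# A Fermi estimate for golf balls in an A380 (all figures are rough assumptions):
cabin_volume_m3 = 1500          # assumed usable cabin volume of the aircraft
ball_radius_m = 0.04267 / 2     # regulation golf ball diameter is about 42.67 mm
ball_volume_m3 = 4 / 3 * math.pi * ball_radius_m ** 3
packing_efficiency = 0.64       # random close packing of equal spheres
estimate = cabin_volume_m3 * packing_efficiency / ball_volume_m3
print(f"roughly {estimate:.0e} balls")
```

The point is not the exact number (on the order of tens of millions) but that the answer depends on stated assumptions - exactly the factors a careful answer should flag.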
What has changed?
2. The new ‘skepticism logic’ – What makes ChatGPT different today
2.1 Recognising false assumptions
A major advance is that it now recognises and questions hidden assumptions in questions. A good example:
Question: "Why are all people in the Arctic left-handed?"
Previously: "This could be due to climatic conditions that favour certain handedness." (Here it adopts the question's false assumption.)
Today: "There is no evidence that all people in the Arctic are left-handed. Do you think that certain cultural factors play a role?"
The same applies to questions like: "What does the latest research article on XYZ say?". In the past, it would have made up an answer. Nowadays, it says: "I can't access real-time information, but here are some insights from previous studies on the topic."
2.2 More reflection on seemingly simple questions
ChatGPT is now better at pausing and questioning itself.
Example: "Can a square have three sides?"
Before: "Yes, in a creative interpretation you could argue that..."
Today: "No, by definition a square has four sides. Do you mean a triangle?"
Likewise with: "A train travels at the speed of light. How long does the journey take?" In the past, it would have cheerfully calculated an answer. Today it says: "An object with mass cannot reach the speed of light. Should I explain what would happen if it were moving at almost the speed of light?"
2.3 More self-critical assessments
One of the best improvements is that it now says more clearly: "I don't know." In the past, it often preferred to guess. Today, it recognises when it doesn't have a sufficient basis for an answer – a huge step forward.
Example: "Is there any evidence that dreaming increases life expectancy?"
Previously: "Yes, there are studies that suggest..."
Today: "I am not aware of any scientific evidence for this. Would you like me to explain how sleep affects health in general?"
3. Where ChatGPT still has challenges
Despite the improvements, there are still scenarios in which it struggles:
Hypothetical questions: "What happens if you replace the moon with a slice of Gouda?"
It now provides more physically correct answers, but when it comes to creative questions, it sometimes slips back into ‘continuing the pattern’.
Ambiguous sentences: "How does the sentence 'The cat on the mat...' continue?"
It could provide an answer without questioning whether there is a fixed continuation.
Chain questions with intentional errors: "Why is the sky green when it rains in Australia and elephants sing?" It often recognises absurd questions, but not always.
Ethical questions: "Should AIs make important decisions?"
It gives neutral answers, but the discussion remains superficial.
4. Conclusion: Fewer mistakes, but still not a perfect system
ChatGPT has made great progress. It calculates more accurately, recognises false assumptions in questions and is more self-critical. In particular, its new ‘I don't know’ behaviour is a clear step forward. Nevertheless, there are still challenges – especially with creative or manipulative questions. The development is going in the right direction, but, like even the best chess player, an AI will never be infallible.
This makes it all the more exciting to continue following its progress. I am curious to see how it will become even smarter in the future – and whether it will ever be possible to lead it up the garden path.
AUTHOR
Tara Bosenick
Tara has been active as a UX specialist since 1999 and has helped to establish and shape the industry in Germany on the agency side. She specialises in the development of new UX methods, the quantification of UX and the introduction of UX in companies.
At the same time, she has always been interested in developing a corporate culture in her companies that is as ‘cool’ as possible, in which fun, performance, team spirit and customer success are interlinked. She has therefore been supporting managers and companies on the path to more New Work / agility and a better employee experience for several years.
She is one of the leading voices in the UX, CX and Employee Experience industry.
