
AI & UXR
Everything You Need to Know About Tokens, Data Volumes and Processing in ChatGPT
4 min read | Nov 26, 2024
Introduction to tokens and processable data sets
When working with ChatGPT, you will quickly come across a central concept: tokens. But what exactly are tokens, and why are they important? Tokens are the smallest units of information that the model can process – they can be whole words, parts of words or even punctuation marks. The length of a token therefore varies with language and context, but on average a token corresponds to roughly 4 characters, or about 0.75 words.
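If you want to check these rules of thumb yourself, OpenAI's open-source tiktoken library lets you count tokens locally. Here is a minimal sketch (the example sentence and the choice of encoding are only illustrative; cl100k_base is the encoding used by GPT-4, while GPT-4o uses o200k_base):

```python
# pip install tiktoken
import tiktoken

# cl100k_base is the tokenizer used by GPT-4; GPT-4o uses o200k_base
encoding = tiktoken.get_encoding("cl100k_base")

text = "Tokens are the smallest units of information that the model can process."
tokens = encoding.encode(text)

print(f"Characters: {len(text)}")
print(f"Words:      {len(text.split())}")
print(f"Tokens:     {len(tokens)}")
# On typical English text, the words-to-tokens ratio lands near the ~0.75 rule of thumb
```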
Why is this relevant? Because the maximum number of tokens that ChatGPT can process in a conversation or analysis determines how much information the model can handle at once. Currently, the token limit is 8,192 tokens for ChatGPT 4 and 128k tokens for ChatGPT 4o.
This means that the entire content – including your questions or data and the answers that ChatGPT generates – must not exceed this limit. This token limit naturally affects how long a single conversation can be before older parts of the conversation are ‘forgotten’.
For comparison: 8,192 tokens correspond to about 16 to 20 book pages and 128k tokens correspond to about 250 to 300 pages of an average book, with a book page containing between 250 and 300 words. This calculation shows you that the models can process quite a bit of information in one go – but with long texts or complex data, this limit can also be quickly reached.
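A quick way to apply these rules of thumb in practice is to estimate the token count of a text from its word count before you paste it into the chat. A minimal sketch, using the ~0.75 words-per-token average from above (the 6,000-word chapter is just an illustrative figure):

```python
def estimate_tokens(word_count, words_per_token=0.75):
    """Rough token estimate from a word count, using the ~0.75 words-per-token rule of thumb."""
    return int(word_count / words_per_token)

chapter_words = 6_000                              # e.g. a longish book chapter
print(estimate_tokens(chapter_words))              # ~8,000 tokens
print(estimate_tokens(chapter_words) <= 8_192)     # True – but it leaves barely any room for your question and the answer
```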
Dealing with large amounts of data in ChatGPT
Let's say you want to analyse an entire chapter of a book – in principle, no problem! But what happens if the chapter is longer than 8,192 or 128k tokens? In such cases, ChatGPT cannot process the data in one go. A common assumption is that the model will simply split the data into digestible sections on its own – but this does not happen automatically.
You have to split the data into smaller sections yourself and manage the flow manually – a short code sketch after the tips below shows one way to do this.
Here are a few tips on how best to do this:
Segment your text into thematically meaningful sections: Instead of sending everything at once, divide the text into smaller blocks that are coherent and easier to digest.
Link the sections together: So that the context is not lost, briefly summarise what has been covered so far at the beginning of each new section.
Identify key information: If you know that certain parts of the text are more important than others, focus on them first. This way you can use the token limit more efficiently.
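How exactly you do this splitting is up to you, but it is easy to script. The sketch below is one possible approach (not a built-in ChatGPT feature): it uses tiktoken to cut a long text into paragraph-based chunks that each stay under a chosen token budget, leaving room for your question and the model's answer.

```python
import tiktoken

def chunk_text(text, max_tokens=3_000, encoding_name="cl100k_base"):
    """Split text into paragraph-based chunks of roughly max_tokens tokens each."""
    enc = tiktoken.get_encoding(encoding_name)
    chunks, current, current_tokens = [], [], 0

    for paragraph in text.split("\n\n"):
        n = len(enc.encode(paragraph))
        # start a new chunk if adding this paragraph would exceed the budget
        if current and current_tokens + n > max_tokens:
            chunks.append("\n\n".join(current))
            current, current_tokens = [], 0
        current.append(paragraph)   # note: a single over-long paragraph still becomes its own chunk
        current_tokens += n

    if current:
        chunks.append("\n\n".join(current))
    return chunks

# Usage: send each chunk in its own message, prefixed with a short recap of the previous ones
# chunks = chunk_text(open("chapter.txt", encoding="utf-8").read())
```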
Strategies for optimising the use of the token limit
Focus on important data: To use the tokens efficiently, you should identify the most important points before sending the text. This will save you space and quickly get responses on the topics that really matter.
Summarise where possible: If you have a huge amount of data, condense the text as far as you can. The aim is to fit as much as possible within the token limit without losing context.
Iterative processing: If all the context is important but the data set is getting too large, process the information iteratively. That is, submit the data in parts and provide a brief summary of the most important information after each section so that the overall context is preserved – one way to script this is sketched below.
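One way to implement this iterative approach outside the chat window is a ‘rolling summary’ via the OpenAI API: after each chunk, you ask the model to update a compact summary, and you only ever send the current chunk plus that summary instead of the whole history. A minimal sketch, assuming the chunks come from a splitter like the one above; the prompts and the 200-word target are only illustrative:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def analyse_in_chunks(chunks, model="gpt-4o"):
    """Process chunks one by one, carrying a rolling summary as context."""
    summary = ""
    for i, chunk in enumerate(chunks, start=1):
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system",
                 "content": "You are analysing a long document that arrives in parts."},
                {"role": "user",
                 "content": f"Summary of the parts so far:\n{summary}\n\n"
                            f"Part {i}:\n{chunk}\n\n"
                            "Update the summary so that it covers everything so far, "
                            "in no more than about 200 words."},
            ],
        )
        summary = response.choices[0].message.content
    return summary
```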
Time dependency of processing
You may be wondering: ‘What happens if I take a long break in a chat? Will ChatGPT forget everything?’ The good news is that processing is not time-dependent.
Whether you respond in minutes, hours, or even days, as long as the chat remains open and the token limit is not reached, the context will be preserved.
This means that long breaks won't affect the chat. Nevertheless, in very long chats, it may happen that earlier information is ‘forgotten’. Why? Because the token limit also applies to the entire chat history.
When the limit of 8,192 or 128k tokens is reached, older parts of the conversation are dropped to make room for new content – a kind of ‘memory loss’. That's why it makes sense to summarise the chat regularly or repeat important points.
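If you talk to the models via the API rather than the chat interface, this ‘memory loss’ does not happen automatically – the API simply rejects requests that exceed the context window – so a common pattern is to trim the oldest messages yourself before each call. A minimal sketch, again using tiktoken; the per-message overhead of a few tokens is only an approximation and varies by model:

```python
import tiktoken

def trim_history(messages, max_tokens=8_192, encoding_name="cl100k_base"):
    """Drop the oldest non-system messages until the history fits the token budget."""
    enc = tiktoken.get_encoding(encoding_name)
    count = lambda m: len(enc.encode(m["content"])) + 4   # +4 ≈ rough per-message overhead

    trimmed = list(messages)
    while sum(count(m) for m in trimmed) > max_tokens and len(trimmed) > 1:
        trimmed.pop(1)   # keep the system prompt at index 0, drop the oldest message after it
    return trimmed
```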
Another detail: If you process large amounts of data in smaller sections, it is helpful to always clearly indicate how the sections relate to each other. This helps ChatGPT to understand the context and process the data correctly.
Feedback at token limit
There is one important thing you should know: as soon as the token limit is reached, ChatGPT will let you know. This is so that you are informed in good time and the context is not unexpectedly lost. You then have the option of summarising parts of the conversation, removing irrelevant information or taking other measures to ensure that the conversation can continue efficiently.
Practical tips and best practices
To get the most out of ChatGPT, it helps to focus on the context and relevance of the information. The accuracy and precision of the data you send to ChatGPT directly affect the quality of the analysis. Therefore, it is worth preparing the data well before sharing it in the chat.
If you're working with particularly large amounts of data, it can be useful to use external tools to analyse, shorten or summarise the data before sending it to ChatGPT. This way, you can make optimal use of the space in the token limit.
For long chats, it's always a good idea to repeat key points or provide summaries periodically. This keeps the context clear and ensures that ChatGPT stays on top of things.
If you're wondering, there is no hard rule for token usage per message.
Sometimes a simple question consumes only a few tokens, while a complex question or a long answer can require several hundred. The important thing is simply to keep track of your usage so that you don't hit the token limit earlier than necessary.
Outlook for future developments
Of course, it would be nice if we never reached the token limit. In fact, there are already plans to increase the amount of data that can be processed in future versions of ChatGPT. Let's see what the ‘4o3’ model brings us ;-)
Technical statistics and details of this chat
By the way: this text is about 1,600 tokens long, and the chat I used to develop this post used about 1,000 tokens. A good tool for ‘token counting’ is https://platform.openai.com/tokenizer – ChatGPT itself sometimes cannot count tokens reliably.
AUTHOR
Tara Bosenick
Tara has been active as a UX specialist since 1999 and has helped to establish and shape the industry in Germany on the agency side. She specialises in the development of new UX methods, the quantification of UX and the introduction of UX in companies.
At the same time, she has always been interested in developing a corporate culture in her companies that is as ‘cool’ as possible, in which fun, performance, team spirit and customer success are interlinked. She has therefore been supporting managers and companies on the path to more New Work / agility and a better employee experience for several years.
She is one of the leading voices in the UX, CX and Employee Experience industry.
