AI tokens are the basic units of text that large language models (LLMs) process when generating or understanding language. A token may be a single character, a sub-word fragment, or a whole word, depending on the model’s tokenizer. For example, the sentence “Hello world” might be split into two tokens (“Hello” and “world”), while rarer words or punctuation may be broken into several pieces.
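The splitting described above can be illustrated with a toy tokenizer. This is a simplified sketch using a word-and-punctuation regex, not the tokenizer any real model uses; production LLM tokenizers typically use learned sub-word schemes such as byte-pair encoding.

```python
import re

def toy_tokenize(text: str) -> list[str]:
    # Simplified illustration: split into runs of word characters
    # and individual punctuation marks. Real LLM tokenizers learn
    # sub-word merges from data, so rare words often become pieces.
    return re.findall(r"\w+|[^\w\s]", text)

print(toy_tokenize("Hello world"))    # → ['Hello', 'world']  (two tokens)
print(toy_tokenize("Hello, world!"))  # → ['Hello', ',', 'world', '!']
```

Note how adding punctuation raises the token count even though the visible word count stays the same, which is why character or word counts are poor proxies for token counts.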
In AI systems like GPT models, tokens are used to measure input and output size, determine processing cost, and manage context limits. The combined token count of a prompt and its response must fit within the model’s context window, which caps how much text the model can handle in a single interaction.
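The context-window constraint above can be sketched as a simple budget check. The function name and the 8192-token limit here are illustrative assumptions; actual context sizes vary by model.

```python
def fits_in_context(prompt_tokens: int, max_response_tokens: int,
                    context_limit: int = 8192) -> bool:
    # The prompt plus the budget reserved for the response must both
    # fit inside the model's context window. context_limit=8192 is an
    # assumed example value, not a specific model's real limit.
    return prompt_tokens + max_response_tokens <= context_limit

print(fits_in_context(6000, 2000))  # → True  (8000 <= 8192)
print(fits_in_context(7000, 2000))  # → False (9000 >  8192)
```

In practice, applications use a check like this to decide whether to truncate or summarize the prompt before sending a request.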
