Do you know how AI works?
A beginner's guide.

Let’s talk AI.
Or rather, let’s talk about what’s really under the hood when people say AI these days.
To most people, it’s ChatGPT, Gemini, Claude, or just generative AI. And the tech underneath all of that is an LLM, a Large Language Model.
So for now, we’ll stick to talking about those.
Before we can understand what LLMs are doing, we need a quick primer on how they work.
When an LLM is trained, it’s not memorizing facts; it’s building relationships between words. Imagine it learning from a piece of text about breakfast. It picks up words like toast, cereal, table, and morning. It even connects ideas about diets: people skipping breakfast or making it a feast.
Then, it learns from texts about weddings and carpentry. Suddenly, toast takes on a new meaning, so does table. The relationships between words grow more complex and we need more space to map them.
Picture our words as little pieces of paper laid out on a table. You group related ones together, but as more words are added, some need to be close to multiple words that are far apart.
The table isn’t enough.
So, you lift the words into the air with strings, arranging them in three dimensions.
Now, toast can sit near both bread and weddings, and table can link to dining and meetings.
But even 3D space isn’t enough… New words and meanings keep coming, and the connections expand into more dimensions—ones we can’t easily visualize.
And that’s what LLMs do. They build intricate relationships in N-dimensional space, far beyond human perception. We don’t need to grasp every detail—just knowing this structure exists helps us understand how they generate words that make sense in context.
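If it helps to see that idea in miniature, here’s a toy sketch in Python. Every number below is invented purely for illustration; a real LLM learns these positions itself, across hundreds or thousands of dimensions rather than three.

```python
# Toy "words as points in space" sketch. The vectors are made up for
# illustration only; real models learn far higher-dimensional ones.
import numpy as np

embeddings = {
    "toast":   np.array([0.9, 0.1, 0.6]),   # breakfast-ish AND celebration-ish
    "bread":   np.array([0.95, 0.05, 0.1]),
    "cereal":  np.array([0.85, 0.0, 0.05]),
    "wedding": np.array([0.05, 0.9, 0.7]),
    "speech":  np.array([0.1, 0.8, 0.75]),
}

def closeness(a, b):
    """Cosine similarity: higher means the two words sit closer together."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

for word in ("bread", "cereal", "wedding", "speech"):
    print(f"toast vs {word}: {closeness(embeddings['toast'], embeddings[word]):.2f}")
```

The point isn’t the numbers. It’s that toast can sit reasonably close to bread and to wedding at the same time, which a flat layout of paper slips on a table could never manage.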
Now you’ve got an idea of how it learns.
The relationships aren’t just between one word and another. They’re the relationships between fifty words, or a thousand, all at once. It’s how the words we use to talk about a court case differ from the words we use to talk about particle physics, even though they’re both English. How some words are specific to those areas, while others exist in both.
When you ask an LLM a question, or have it handle a query, it’s building an answer based on that math: what is the most likely word, or words, to come next? And right there, hidden in that sentence, is the key. The flag everyone buying AI needs to know, and the flag everyone selling AI doesn’t want to mention.
The Most Likely.
Your LLM doesn’t know things. It doesn’t know anything.
It has the relationships between words. It has its N-dimensional vectors to connect things together and it finds the most likely answer. Because it always, always has an answer.
The problem with not knowing things is that there’s also nothing you DON’T know. There’s always an answer. There’s always something that can fit.
And because it has learned from billions of pieces of text, it’s very good at sounding smart.
It’s scarily good at sounding smart…
By selecting the most likely word to come next (based on the topic, the prompt, the words already written, and so on), you can generate text that sounds exactly right for a given topic.
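Here’s an equally toy sketch of that step. The candidate words and their probabilities are invented for illustration; a real LLM scores tens of thousands of possible tokens, using the learned relationships described above.

```python
# Toy "pick the most likely next word" sketch. The probabilities are
# invented; a real model computes them from its learned relationships.
prompt = "For breakfast I had two slices of"

next_word_probabilities = {
    "toast":   0.46,
    "bacon":   0.21,
    "bread":   0.18,
    "cake":    0.09,
    "granite": 0.0001,  # nonsense is never impossible, just unlikely
}

# The model always has an answer: whatever scores highest.
best_guess = max(next_word_probabilities, key=next_word_probabilities.get)
print(prompt, best_guess)
```

Whatever scores highest comes out, because it sounds right.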
Sounds though…
And that’s the tough part. It sounds exactly right, but it might not be. When it’s writing about a topic you know, the errors could be obvious. Suggesting tomatoes for a fruit salad. It’s a fruit, right? A hundred thousand pages tell you it’s a fruit. You raise an eyebrow; you make a little correction.
You might even notice when it talks about a topic you’re not as close to. You might have spent enough time around your legal team to know copyright from trademark. At least in a general sense. But if you’re relying on an LLM to create content or copy, even to summarise something you don’t understand, you’ve got no way of knowing when it gets things wrong.
Those errors can compound pretty quickly. A mistake in a sentence doesn’t change the meaning too much. But if that sentence is the summary of several pages of text, every word matters. If the summary talks about the points mentioned again and again, it’s doing a good job of averaging things out for you. But what if most of the text was a primer? What if the author wrote it to prime you into a certain way of thinking, and the key takeaway was only mentioned once? AI can’t really know what’s important. It can’t really think or reason to a conclusion.
None of this is an argument against AI, or even against using an LLM. But I think it’s critical information that we all need to have. In the end, tools like DeepSeek, ChatGPT and Gemini are exactly that: tools. And there’s going to be a right and a wrong way to use them.
Knowing how it works, and where those errors come from, helps you make better decisions. It helps you achieve your goals and keep your brand safe.
My advice?
AI plays a fantastic role in boosting your productivity. Not replacing you.
It can help turn a list of bullet points into two sentences each, and then you can write brilliant content from that. Correcting errors as you go.
It can help brainstorm ideas and spit out ten quick pieces of information based on the topic you describe. So that even if you use none of them, you’ve got a starting point to inspire you or tell you what to avoid.
AI tools and LLMs are a fantastic way to work smarter. A personal assistant who’s never tired or busy, ready to be a muse… but one that still leaves you to do what you do best, and really do the work.