Dan Minea's Blog

How Large Language Models Work And What You Can Do With Them: A Casual Look at AI & GPTs

So, have you ever wondered how those super-smart text tools (the ones that finish your sentences or write full essays) actually do their thing? They’re called large language models (LLMs), and while they seem like magic, they’re really just very good at one thing: guessing the next word.

Yep, it’s like a game of high-stakes Mad Libs where the model fills in blanks based on patterns it’s seen before. Pretty wild, right?
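To make that "guessing from patterns" idea concrete, here's a toy sketch (nothing like a real LLM, which uses neural networks, not word counts): it tallies which word follows which in a tiny made-up corpus, then guesses the most frequent follower.

```python
from collections import Counter, defaultdict

# Toy "training corpus" — real models read billions of words, not twelve.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def guess_next(word):
    # Return the most frequently observed follower — the "most likely" next word.
    return follows[word].most_common(1)[0][0]

print(guess_next("sat"))  # "on" — in this corpus, "sat" is always followed by "on"
```

A real model does the same kind of thing, just with vastly richer patterns than one-word-back counting.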

Big and Getting Bigger

Let’s talk about that word, “large.” What makes these models so big? In simple terms, it’s all about the insane number of connections they have inside, called parameters.

These are like the knobs and dials the model adjusts to figure out the next word in a sentence—or even the next pixel in an image.

Models like GPT-3 have a jaw-dropping 175 billion of these, and the newer ones? Even more.

And sure, bigger models tend to be smarter, but they’re also harder to train. Think of it as trying to teach a classroom of 175 billion students—things can get messy fast.
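If you're curious where numbers like "175 billion" come from, here's a back-of-the-envelope parameter count for a tiny fully connected network. The layer sizes are made up for illustration; the point is just that parameters are mostly weights plus biases, and they multiply fast.

```python
# Hypothetical layer widths, chosen only to show the arithmetic.
layer_sizes = [512, 1024, 1024, 512]

params = 0
for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]):
    params += fan_in * fan_out + fan_out  # weight matrix plus bias vector

print(f"{params:,} parameters")  # about 2.1 million — and this is a *tiny* network
```

Scale those widths up and stack dozens of layers, and you can see how billions happen.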

So, How Do They Learn Stuff?

Here’s how it usually works. First, the model gets trained on a mountain of text—like, almost the entire internet (or close enough).

It plays fill-in-the-blank games to learn patterns, something like: “The cat sat on the ___.” The model tries “chair,” “table,” and finally “mat.”

Over time, it gets better at figuring out the most likely answer based on what it’s read before. Once it’s decent at that, it gets more specialized training.
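The fill-in-the-blank game can be sketched crudely like this (again a toy, using counts where a real model uses learned probabilities): count how often each candidate word completes the context in a tiny corpus, then rank the candidates.

```python
from collections import Counter

# Toy corpus: three sightings of the same context with different endings.
corpus_sentences = [
    "the cat sat on the mat",
    "the cat sat on the mat",
    "the cat sat on the chair",
]

# Count how often each word fills the blank in "the cat sat on the ___".
completions = Counter(s.split()[-1] for s in corpus_sentences)

def rank(candidates):
    # Most frequently seen completion first — the model's "best guess" order.
    return sorted(candidates, key=lambda w: -completions[w])

print(rank(["chair", "table", "mat"]))  # "mat" comes out on top
```

Training a real model is essentially this, repeated over trillions of blanks, with the "counts" stored in those billions of parameters.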

Say you want it to sort customer reviews into “positive” and “negative” categories. You’d show it examples, like “This product rocks!” (positive) or “This thing is trash” (negative).

It learns to connect phrases to vibes, which is kind of impressive, but also a bit spooky.

What Makes These Models So Good?

OK, the secret sauce is something called a transformer. It’s a type of neural network design that’s really good at keeping track of context.

For instance, in the sentence, “The car didn’t start because it was out of gas,” the model knows “it” refers to the car.

But if you said, “The party didn’t happen because it was canceled,” the same word “it” now means “the party.”

This is what transformers do—they weigh the connections between words, a trick called attention. And because they can keep track of long stretches of text, not just single sentences, they’re great for things like writing essays, translating languages, or even generating code.
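A very rough peek at how attention resolves a word like "it": give each word a vector (the 2-d vectors below are hand-picked for illustration, not real embeddings), score "it" against each candidate with a dot product, and turn the scores into weights with a softmax. "It" ends up attending most to the word it matches best.

```python
import math

def softmax(xs):
    # Turn raw scores into weights that sum to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Hand-crafted 2-d "embeddings", purely for illustration.
embeddings = {
    "car": [1.0, 0.1],
    "gas": [0.2, 0.9],
    "it":  [0.9, 0.2],  # deliberately close to "car"
}

query = embeddings["it"]
words = ["car", "gas"]
scores = [sum(q * k for q, k in zip(query, embeddings[w])) for w in words]
weights = softmax(scores)

best = words[weights.index(max(weights))]
print(best)  # the word "it" attends to most
```

Real transformers learn those vectors (and separate query/key/value projections) from data; the dot-product-then-softmax core is the same idea.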

Building Your Own AI Buddy

Now, let’s say you want your very own GPT assistant to help you with, I don’t know, brainstorming product names or crafting tweets. You don’t need to be a coding genius to make that happen.

With tools like OpenAI’s GPT Builder, you can whip something up by just answering a few questions.

Here’s a rough guide to get started:

  1. Give it a personality: Tell it to act like a marketing guru or a science nerd, depending on what you need.
  2. Be specific about tasks: Instead of “help me write something,” say, “Write a short Instagram caption about coffee.”
  3. Show it examples: If you’ve got a favorite style, give it a sample. It’ll learn fast.
  4. Set boundaries: Want responses short and sweet? Let it know. These models love to ramble unless you rein them in.
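The four steps above boil down to writing good instructions. Here's one way to sketch them as a system prompt you could paste into any chat-style model — the wording and field names are just examples, not a fixed recipe.

```python
# Each key maps to one of the four steps: personality, task, example, boundaries.
assistant_spec = {
    "personality": "You are an upbeat marketing guru.",
    "task": "Write short Instagram captions about coffee.",
    "example": 'Sample caption: "Mondays are brew-tal. Coffee helps."',
    "boundaries": "Keep every caption under 15 words. No hashtag spam.",
}

# Stitch the pieces into one set of instructions.
system_prompt = "\n".join(assistant_spec.values())
print(system_prompt)
```

Swap in your own personality, task, sample, and limits, and you've got the core of a custom assistant.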

Cool Things You Can Do

Here’s where it gets fun. These AI tools are like the Swiss Army knives of text:

  • Got a research paper? Upload it, and ask for a quick summary.
  • Writer’s block? Have it throw out ideas for blog posts or ad campaigns.
  • Need analysis? Use it to pick apart data and deliver easy-to-digest insights.
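The bullets above could be turned into reusable prompt templates, for instance like this (the template wording is illustrative; tweak it to taste):

```python
# One template per use case: summarizing, brainstorming, analyzing.
templates = {
    "summarize": "Summarize the following paper in five bullet points:\n{text}",
    "brainstorm": "Give me ten blog post ideas about {topic}.",
    "analyze": "Here is some data:\n{data}\nExplain the three main takeaways simply.",
}

def build_prompt(task, **kwargs):
    # Fill the chosen template with the user's content.
    return templates[task].format(**kwargs)

print(build_prompt("brainstorm", topic="home coffee roasting"))
```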

Honestly, it’s like having a super-smart intern, but one that's fully online.

Where They Fall Short (Yeah, They’re Not Perfect)

Let’s not kid ourselves—these things aren’t flawless. Sometimes they “hallucinate,” meaning they just make stuff up. Ask about something obscure, and they might spit out a confident-sounding answer that’s totally wrong.

So, yeah, double-check their work, especially if you’re using them for serious stuff.

The Bottom Line

So, whether you’re a casual user or someone who loves geeking out over tech, these tools are worth exploring. They’re creative, versatile, and honestly, kind of fun to mess with.

The trick? Don’t overthink it: just dive in and see what you can make.


A bit of context: there’s a great community out there called Small Bets, “a support network ready to help you get your first small wins.” As a member of Small Bets, I’m learning as much as I can and always try to improve using the knowledge from the series of webinars hosted for members. What I’m trying to do in this article is distill the information I’ve learned: firstly for myself, to help me remember and implement it, but also for anyone who might be interested in this topic. The webinar I’ve drawn on here is called “Enough GPT to be Dangerous.” Obviously, if you want the full experience, I encourage you to sign up to Small Bets and watch the webinar for yourself; you won’t regret it. Enjoy!
