Learn Agentic AI: A Beginner's Guide

Demystifying AI buzzwords for techies and non-techies alike

💡

Why This Guide?

Agentic AI comes with its own dictionary of mysterious buzzwords: RAG, embeddings, tools, MCPs, prompting, pre-prompting, fine-tuning…

What do they actually mean? And how do they fit together?

When I first started exploring Agentic AI, I realised I was guilty of nodding along, pretending I knew what half these words meant. 🙃

💡 This post is my attempt to demystify them in plain English, based on what I've learned while building at AI ALCHEMY.

📚

The Hidden Dictionary of AI Agents

Here are the key terms that keep popping up in conversations around Agentic AI:

• Prompting → system prompts, user rules, context files
• Tools → functions an agent can call
• MCPs → collections of functions packaged together
• RAG → retrieval-augmented generation (context injection)
• Embeddings & chunking → breaking knowledge into searchable pieces
• Fine-tuning & training → adapting models for specific use cases

Let's break down how they all work together.

[Image: AI Agent Toolkit, a visual overview of AI agent components and tools]
🎭

1. Pre-Prompting: Setting the Stage

When you send a prompt to an AI agent, you're not just sending your text. Behind the scenes, the agent is bundled with:

• System prompt → general rules and personality
• User rules → your constraints and preferences
• Tools list → functions it can call programmatically
• Context files → any data or documents you've attached

This pre-prompting phase defines how the agent behaves.
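To make the bundling concrete, here is a minimal sketch of the payload an agent framework might assemble before your message reaches the model. The field names are illustrative, loosely modeled on typical chat-completion request formats, not any specific provider's API:

```python
import json

# Hypothetical pre-prompting bundle: everything the model sees
# alongside your actual message.
system_prompt = "You are a helpful coding assistant. Be concise."
user_rules = "Always answer in British English."
tools = [
    {
        "name": "list_files",
        "description": "List all files in a folder",
        "parameters": {"folder": "string"},
    }
]
context_files = {"README.md": "# My Project\nA demo repository."}

request = {
    "messages": [
        # System prompt + user rules define behaviour before any user text.
        {"role": "system", "content": system_prompt + "\n" + user_rules},
        {"role": "user", "content": "List all files in this folder."},
    ],
    "tools": tools,          # functions the agent may call
    "context": context_files, # attached documents/data
}

print(json.dumps(request, indent=2))
```

The key point: your one-line question is the smallest part of what actually gets sent.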

💡 In practice, getting the mix of tools and instructions right can cut error rates by 5–10x.

⚙️

2. Function Calling: How Agents Actually Act

LLMs (like GPT, Claude, Mistral) are text generators. On their own, they can't send emails, write to files, or query databases.

That's where function calling comes in. The process looks like this:

1. The agent is given a list of functions it can use.

2. It generates a function call (in JSON).

3. The function is executed programmatically.

4. The result is returned to the LLM, which uses it to continue the conversation.

Example:

You ask, "List all files in this folder."
The agent calls a list_files(folder) function → runs it → gets the results → and then responds to you with the output.

This is how AI agents step out of text-only mode and start acting like digital assistants.
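The four-step loop above can be sketched in a few lines. Here `fake_llm` is a stand-in for a real model (in practice the model itself decides when to emit a tool call), and `list_files` is the hypothetical tool from the example:

```python
import json
import os

def list_files(folder: str) -> list[str]:
    # Step 3: the function is executed programmatically.
    return sorted(os.listdir(folder))

# Step 1: the agent is given a list of functions it can use.
TOOLS = {"list_files": list_files}

def fake_llm(user_message: str) -> str:
    # Step 2: the "model" generates a function call as JSON.
    # A real LLM would choose the tool and arguments itself.
    return json.dumps({"tool": "list_files", "args": {"folder": "."}})

def run_agent(user_message: str) -> str:
    call = json.loads(fake_llm(user_message))
    result = TOOLS[call["tool"]](**call["args"])
    # Step 4: the result is returned to the LLM, which uses it to reply.
    return f"The folder contains: {', '.join(result)}"

print(run_agent("List all files in this folder."))
```

Swap `fake_llm` for a real model API and this skeleton is essentially how agent frameworks wire tools in.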

🛠️

3. MCPs: Tool Libraries for Agents

MCPs (servers built on the Model Context Protocol) have been hyped as the next big revolution. But in reality, they're simply libraries of functions.

Think of them like npm packages in JavaScript or Python libraries.

In TypeScript → you import packages for new functions.
In Agentic AI → you import MCPs for new tools.

โš ๏ธ Important: MCPs don't bring context. They only bring tools.

At AI ALCHEMY, we go beyond MCPs by also integrating data context, so our agents don't just have tools, but also awareness of the data they're acting on.
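As an analogy only (this is a toy sketch, not the real MCP SDK), "importing" a tool package into an agent might look like this:

```python
# Illustrative analogy: an MCP is like a package you plug in to
# extend an agent's tool list. All names here are hypothetical.

def read_file(path: str) -> str:
    with open(path) as f:
        return f.read()

def write_file(path: str, content: str) -> int:
    with open(path, "w") as f:
        return f.write(content)

# A "tool package": a bundle of related functions, like an npm package.
filesystem_tools = {"read_file": read_file, "write_file": write_file}

class Agent:
    def __init__(self) -> None:
        self.tools: dict = {}

    def use(self, package: dict) -> None:
        # "Importing" an MCP: the agent gains tools, not context.
        self.tools.update(package)

agent = Agent()
agent.use(filesystem_tools)
print(sorted(agent.tools))  # → ['read_file', 'write_file']
```

Note what's missing: nothing here tells the agent *which* files matter or what the data means. That's the context gap the section above describes.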

🎣

4. RAG: Retrieval-Augmented Generation

Here's where things get spicy.

LLMs have a context window limit: they can't just take in your entire codebase or database. Enter RAG.

RAG solves the problem by:

1. Chunking → breaking files into smaller pieces.

2. Embedding → turning each chunk into a searchable vector.

3. Retrieving → finding only the relevant chunks when you ask a question.

Example:

You ask an agent, "Update the login API."
Instead of feeding it all 10,000 files, RAG fetches just the relevant files and injects them into the prompt.

That's how AI stays efficient, scalable, and accurate.
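The chunk/embed/retrieve pipeline can be shown end to end in a toy example. Real systems use neural embeddings and a vector database; here a simple bag-of-words vector and cosine similarity stand in so the sketch is self-contained:

```python
import math
from collections import Counter

def chunk(text: str, size: int = 8) -> list[str]:
    # Step 1: break the text into fixed-size word chunks.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    # Step 2: a toy "embedding" — word counts instead of a neural vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# A stand-in "codebase" as one blob of text.
codebase = (
    "the login api validates user credentials and issues a session token "
    "the billing module charges customers monthly via the payment gateway "
    "the logging service writes structured events to cloud storage"
)

# Index: chunk the corpus and embed each chunk once, up front.
index = [(c, embed(c)) for c in chunk(codebase)]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Step 3: rank chunks by similarity to the query, return the top k.
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

print(retrieve("update the login api"))
```

Asking about "the login api" surfaces only the login chunk, not billing or logging. That selected snippet, rather than the whole corpus, is what gets injected into the prompt.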

[Image: RAG Process Flow, a visual representation of the Retrieval-Augmented Generation workflow]
🔮

5. Fine-Tuning & Training

These terms often get mixed up, but they're very different:

Training a model → building a foundational model from scratch (massive data + GPUs)
Fine-tuning a model → adapting an existing model on a smaller dataset for a specific task

Example:

• Training → OpenAI building GPT using billions of tokens.

• Fine-tuning → A company adapting GPT on 50 examples of "text → SQL" queries.

Most businesses won't train models (too costly). Instead, they'll fine-tune them for their use case.
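For a feel of what a fine-tuning dataset looks like, here is a sketch in the chat-style JSONL format that several hosted fine-tuning APIs accept (the exact schema varies by provider, and the "text → SQL" examples below are invented for illustration):

```python
import json

# Two hypothetical "English → SQL" training examples.
examples = [
    ("List all active users",
     "SELECT * FROM users WHERE active = true;"),
    ("Count orders from 2024",
     "SELECT COUNT(*) FROM orders WHERE year = 2024;"),
]

lines = []
for question, sql in examples:
    # One JSON object per line = one training example.
    lines.append(json.dumps({
        "messages": [
            {"role": "system", "content": "Translate English to SQL."},
            {"role": "user", "content": question},
            {"role": "assistant", "content": sql},
        ]
    }))

jsonl = "\n".join(lines)
print(jsonl)
```

A real fine-tune would use dozens to thousands of such lines, but the shape stays this simple: show the model the behaviour you want, example by example.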

✨

Final Thoughts

Agentic AI isn't magic. It's a careful combination of:

💬 Prompts
🔧 Tools
📦 MCPs
🎣 RAG
🎯 Fine-tuning

…all glued together to help LLMs go beyond text and into action.

We're still early, like the early days of programming languages, but the abstractions are forming fast.

At AI ALCHEMY, we're experimenting with blending tools + context so agents don't just talk smart; they act smart.

👉

Did this post help clarify things for you?

Let me know what topics you'd love us to unpack next! We're always looking to demystify more AI concepts and share practical insights from our real-world experience.
