Why This Guide?
Agentic AI comes with its own dictionary of mysterious buzzwords: RAG, embeddings, tools, MCPs, prompting, pre-prompting, fine-tuning…
What do they actually mean? And how do they fit together?
When I first started exploring Agentic AI, I realised I was guilty of nodding along, pretending I knew what half these words meant.
💡 This post is my attempt to demystify them in plain English, based on what I've learned while building at AI ALCHEMY.
The Hidden Dictionary of AI Agents
Here are the key terms that keep popping up in conversations around Agentic AI:
• Pre-prompting
• Function calling (tools)
• MCPs
• RAG (and embeddings)
• Fine-tuning
Let's break down how they all work together.

1. Pre-Prompting: Setting the Stage
When you send a prompt to an AI agent, you're not just sending your text. Behind the scenes, the agent is bundled with:
• A system prompt (its instructions and persona)
• The list of tools it's allowed to call
• Any extra context the platform injects
This pre-prompting phase defines how the agent behaves.
💡 In practice, the right mix of tools and instructions can cut error rates by 5–10x.
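As a sketch, here's what that bundle might look like in code. The field names follow the common chat-completion style and are illustrative assumptions, not any specific provider's API:

```python
# A sketch of what "pre-prompting" wraps around your message before it
# ever reaches the model. SYSTEM_PROMPT, TOOL_DEFS, and build_request
# are illustrative names, not a real SDK.

SYSTEM_PROMPT = "You are a helpful coding assistant. Prefer concise answers."

TOOL_DEFS = [
    {"name": "list_files", "description": "List files in a folder"},
]

def build_request(user_message):
    """Assemble the full payload: instructions + tools + the user's text."""
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},  # the pre-prompt
            {"role": "user", "content": user_message},     # what you typed
        ],
        "tools": TOOL_DEFS,  # the tools the agent is allowed to call
    }

request = build_request("List all files in this folder.")
print(request["messages"][0]["role"])
```

Your one-line question travels with all of this scaffolding, which is why two agents given the same message can behave completely differently.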
2. Function Calling: How Agents Actually Act
LLMs (like GPT, Claude, Mistral) are text generators. On their own, they can't send emails, write to files, or query databases.
That's where function calling comes in. The process looks like this:
The agent is given a list of functions it can use.
It generates a function call (in JSON).
The function is executed programmatically.
The result is returned to the LLM, which uses it to continue the conversation.
Example:
You ask, "List all files in this folder."
The agent calls a list_files(folder) function → runs it → gets the results → and then responds to you with the output.
This is how AI agents step out of text-only mode and start acting like digital assistants.
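That loop can be sketched in a few lines of Python. Here list_files is a hypothetical local tool, and the JSON call is hard-coded; in a real agent it would be generated by the LLM:

```python
import json
import os

def list_files(folder):
    """The (hypothetical) tool the agent is allowed to call."""
    return sorted(os.listdir(folder))

# Step 1: the registry of functions the agent is given.
TOOLS = {"list_files": list_files}

def execute_tool_call(raw_call):
    """Step 3: parse the model's JSON tool call and run the matching function."""
    call = json.loads(raw_call)
    fn = TOOLS[call["name"]]        # look up the registered tool
    return fn(**call["arguments"])  # execute it with the model's arguments

# Step 2: the LLM emits a function call as JSON (hard-coded for this sketch).
model_output = '{"name": "list_files", "arguments": {"folder": "."}}'

# Steps 3-4: execute it; the result is handed back to the LLM to continue.
result = execute_tool_call(model_output)
print(result)
```

The key design point: the model never runs anything itself. It only emits structured JSON, and your code decides whether and how to execute it.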
3. MCPs: Tool Libraries for Agents
MCPs (servers built on the Model Context Protocol) have been hyped as the next big revolution. But in reality, they're simply libraries of functions.
Think of them like npm packages in JavaScript or Python libraries.
⚠️ Important: MCPs don't bring context. They only bring tools.
At AI ALCHEMY, we go beyond MCPs by also integrating data context โ so our agents don't just have tools, but also awareness of the data they're acting on.
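As a rough sketch of the idea, an MCP-style tool library is just a self-describing bundle of functions. FileTools and describe are made-up names for illustration, not the real MCP wire protocol:

```python
# A toy sketch of the idea behind an MCP server: a package of related
# tools that can describe itself to an agent. Like an npm package, it
# ships functions -- not data, and not context.

class FileTools:
    """A bundle of related tools an agent can discover and call."""

    @staticmethod
    def read_file(path: str) -> str:
        """One concrete tool in the bundle."""
        with open(path) as f:
            return f.read()

    @staticmethod
    def describe():
        """What the agent sees: tool names and signatures only."""
        return [{"name": "read_file", "args": {"path": "str"}}]

print(FileTools.describe())
```

Notice what describe() returns: function names and argument types, nothing about your data. That's the gap the "tools vs. context" warning above is pointing at.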
4. RAG: Retrieval-Augmented Generation
Here's where things get spicy.
LLMs have a context window limit โ they can't just take in your entire codebase or database. Enter RAG.
RAG solves the problem by:
Chunking: breaking files into smaller pieces.
Embedding: turning each chunk into a searchable vector.
Retrieving: finding only the relevant chunks when you ask a question.
Example:
You ask an agent, "Update the login API."
Instead of feeding it all 10,000 files, RAG fetches just the relevant files and injects them into the prompt.
That's how AI stays efficient, scalable, and accurate.
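The chunk → embed → retrieve pipeline can be sketched with a toy example. A bag-of-words vector stands in for a real embedding model here, and the document text is invented; real systems use neural embeddings and a vector database:

```python
import math
from collections import Counter

def chunk(text, size=50):
    """Chunking: split a document into fixed-size word windows."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(chunk_text):
    """Embedding: turn a chunk into a (toy) word-count vector."""
    return Counter(chunk_text.lower().split())

def cosine(a, b):
    """Similarity between two sparse vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, top_k=1):
    """Retrieving: return only the chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:top_k]

document = ("login api validates user credentials and issues a session token "
            "dashboard page draws charts for the admin overview")
chunks = chunk(document, size=10)
print(retrieve("update the login api", chunks))
```

Only the top-scoring chunk (the login one) would be injected into the prompt; the dashboard chunk never reaches the model.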

5. Fine-Tuning & Training
These terms often get mixed up, but they're very different: training builds a model from scratch on massive amounts of data, while fine-tuning adapts an already-trained model using a small, task-specific dataset.
Example:
• Training → OpenAI building GPT using billions of tokens.
• Fine-tuning → A company adapting GPT on 50 examples of "text → SQL" queries.
Most businesses won't train models (too costly). Instead, they'll fine-tune them for their use case.
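As an illustration, a small fine-tuning dataset is often just prompt/completion pairs serialized as JSONL. The field names vary by provider, so treat these as assumptions rather than a specific API:

```python
import json

# A sketch of a tiny "text -> SQL" fine-tuning dataset: one JSON object
# per line. Field names (prompt/completion) follow a common convention
# and are illustrative; the examples themselves are invented.

examples = [
    {"prompt": "Show all users who signed up this week",
     "completion": "SELECT * FROM users WHERE signup_date >= date('now', '-7 days');"},
    {"prompt": "Count orders per customer",
     "completion": "SELECT customer_id, COUNT(*) FROM orders GROUP BY customer_id;"},
]

# Serialize to JSONL: the typical upload format for fine-tuning jobs.
jsonl = "\n".join(json.dumps(e) for e in examples)
print(jsonl.splitlines()[0])
```

Fifty well-chosen rows in this format is a fine-tuning job; billions of tokens of raw text is training. Same file shape, wildly different scale and cost.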
Final Thoughts
Agentic AI isn't magic. It's a careful combination of:
• Pre-prompting
• Function calling
• MCPs
• RAG
• Fine-tuning
…all glued together to help LLMs go beyond text and into action.
We're still early, like the early days of programming languages, but the abstractions are forming fast.
At AI ALCHEMY, we're experimenting with blending tools + context so agents don't just talk smart; they act smart.
Did this post help clarify things for you?
Let me know what topics you'd love us to unpack next! We're always looking to demystify more AI concepts and share practical insights from our real-world experience.