Artificial Intelligence (AI) is continuously changing how we interact with technology. While building your own AI agent might appear challenging, you can create one for free using Ollama and Python. This post will guide you through developing your first AI agent, one capable of checking the real response time of any website, giving you a solid foundation to customize and extend for your own use cases. This tutorial will also serve as the basis for future tutorials that I will share here on my blog.
What is Ollama?
Ollama is an open-source project designed to provide a powerful and user-friendly platform for running large language models (LLMs) on your local machine, including Meta's openly released Llama models. (Thanks, Meta.)
Installing Ollama is as easy as running this command on your terminal:
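On Linux, the official install script from ollama.com can be fetched and run in one line (macOS and Windows have dedicated installers on the same site):

```shell
curl -fsSL https://ollama.com/install.sh | sh
```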
First, let's pull the model we want to use for our project. I recommend either mistral or llama3; both have agent capabilities.
```shell
ollama pull llama3
```
Then, we must create a virtual environment in which to install the necessary Python packages.
```shell
python -m venv venv
```
You can then activate it from the folder where you ran the previous command:
```shell
source venv/bin/activate
```
Now we just need to install two libraries for our project: ollama (to interact with the LLM) and requests (to check the response time of a website).
```shell
pip install ollama requests
```
Implementation
Let's create three Python files to organize our project: prompts.py, actions.py, and main.py.
prompts.py will contain our predefined prompts, like the system prompt that tells the LLM to act as an agent and what process it should follow.
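A minimal version of such a system prompt might look like the sketch below. The exact wording, the action name get_response_time, and the JSON format are my own choices here, so adapt them to your use case:

```python
# prompts.py
# System prompt describing the agent loop and the actions the model may call.
system_prompt = """
You run in a loop of Thought, Action, PAUSE, Action_Response.
At the end of the loop you output an Answer.

Use Thought to understand the question you have been asked.
Use Action to run one of the actions available to you, then return PAUSE.
Action_Response will be the result of running that action.

Your available actions are:

get_response_time:
e.g. get_response_time: example.com
Returns the response time of a website in seconds.

When you choose an action, respond with a JSON object and nothing else,
for example:
{"function_name": "get_response_time", "function_params": {"website": "example.com"}}
"""
```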
Even though we explicitly tell the LLM to return just the JSON response and nothing else, it sometimes adds extra text anyway. Don't worry, we have a workaround for that.
Then, in actions.py, we will define all of the functions that the LLM can execute to achieve its goal. We will parse the LLM's response, check whether it names an action defined in actions.py, and if so, execute that function with the provided parameters.
For this demo, we'll just create a simple function to check the response time of a website, but your actions can be as complex as you want.
Then, in main.py, we define a function that tries to extract the JSON object from the LLM's response, together with the main agent loop of our script.
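A sketch of what main.py could look like, assuming prompts.py exposes a system_prompt string and actions.py exposes get_response_time; the JSON field names function_name and function_params must match whatever format your system prompt asks for:

```python
# main.py
import json
import re


def extract_json(text: str):
    """Try to pull the first JSON object out of the LLM's reply.

    This is the workaround for replies that wrap the JSON in extra text.
    """
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match:
        try:
            return json.loads(match.group())
        except json.JSONDecodeError:
            return None
    return None


def run_agent(user_prompt: str, model: str = "llama3", max_turns: int = 5) -> str:
    # Imported here so extract_json can be reused without a running Ollama server.
    import ollama
    from prompts import system_prompt
    from actions import get_response_time

    available_actions = {"get_response_time": get_response_time}
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]
    reply = ""
    for _ in range(max_turns):
        response = ollama.chat(model=model, messages=messages)
        reply = response["message"]["content"]
        action = extract_json(reply)
        if action and action.get("function_name") in available_actions:
            # The model asked for an action: run it and feed the result back.
            function = available_actions[action["function_name"]]
            result = function(**action.get("function_params", {}))
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "user", "content": f"Action_Response: {result}"})
        else:
            # No recognizable action: treat the reply as the final answer.
            return reply
    return reply


if __name__ == "__main__":
    print(run_agent("What is the response time of example.com?"))
```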
Result
Running python main.py, we can ask for the response time of any website on the internet.
Example for my blog:
Example for google.com:
Conclusion
As you continue to explore the capabilities of AI and Ollama, remember that the possibilities are vast. Experiment with different models, fine-tune your agent, and stay curious. The knowledge gained from this project will serve as a valuable asset in your AI journey.
If you found this guide helpful and want to stay updated with more tutorials and insights, consider subscribing to my newsletter for free. For those looking to support my work and gain access to exclusive content, you can become a Plus or Pro paid member. Your support helps me continue to provide high-quality, valuable content for the tech community.
Let's innovate and grow together. Subscribe today and take the next step in your AI adventure!