4 February 2025
Key takeaways
What if AI could do more than wait for our instructions – what if it could take the initiative, make its own decisions, and learn from the experience?
This is the promise of Agentic AI: autonomous systems that can pursue specified goals without constant human intervention. Unlike traditional Large Language Models (LLMs), which rely solely on their training data to generate responses, AI agents actively use external tools such as APIs, web searches, datasets, and other agents (Gutowska, 2024). As a result, AI agents can access real-time information, plan actions effectively, and deliver results tailored to complex tasks and goals.
History of AI agents
The history of agentic systems can be traced back to the 1950s and 1960s, when AI pioneers such as Alan Turing and John McCarthy laid the groundwork for machines that simulate intelligent behaviour. Early AI agents were simple rule-based systems, such as ELIZA (1966), which emulated human conversation through pattern matching. By the 1970s and 1980s, AI had progressed to decision-making agents such as MYCIN, an expert system for medical diagnosis. The 1990s saw a shift toward learning agents, with systems like TD-Gammon employing reinforcement learning to master games. In the 2010s, intelligent agents such as AlphaGo emerged, capable of operating in dynamic situations through deep learning. Today, Agentic AI drives innovation in healthcare, self-driving cars, and robotics, pointing us toward a future with entirely autonomous systems (Deep Mind Systems, 2024).
Characteristics of AI agents
To tackle such complex tasks, AI agents use a combination of reasoning, tool interaction, and memory, allowing them to refine their responses continuously.
Guided by human-defined objectives, these AI agents are reactive and proactive, opening paths for more dynamic and intelligent technical solutions (Forbes, 2024).
Example
To understand an AI agent’s process in more detail, consider an AI agent assigned to help a user decide the ideal time to visit a particular Parisian museum. The agent must determine when the museum is most likely to be less crowded. While the LLM may not have access to the museum’s crowd data, the agent can query a public API containing historical visitor data and determine when museum traffic was lower. Additionally, the agent may communicate with another agent specialized in tourism to find out when fewer significant events are happening in the city. Combining this information, the agent can determine the best week to visit and suggest it to the user.
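To make this flow more concrete, here is a minimal Python sketch of how such an agent might combine a visitor-data API with input from a tourism agent. The endpoint URL, the `ask_tourism_agent` helper, and the response format are all hypothetical, invented purely for illustration.

```python
import requests
from collections import defaultdict

# Hypothetical endpoint exposing historical visitor counts; the URL and the
# JSON layout are invented for this example.
VISITOR_API = "https://example.org/api/museum/visitors"

def fetch_weekly_visitors(museum_id: str) -> dict[int, float]:
    """Return the average visitor count per ISO week for a museum."""
    response = requests.get(VISITOR_API, params={"museum": museum_id}, timeout=10)
    response.raise_for_status()
    weekly = defaultdict(list)
    for record in response.json():          # e.g. {"week": 14, "visitors": 8200}
        weekly[record["week"]].append(record["visitors"])
    return {week: sum(counts) / len(counts) for week, counts in weekly.items()}

def ask_tourism_agent(city: str) -> set[int]:
    # Stub standing in for a call to a tourism-specialist agent; a real system
    # would send the other agent a message and parse its reply.
    return {14, 27, 33}  # ISO weeks with major city events (made-up data)

def recommend_week(museum_id: str, city: str) -> int:
    """Suggest the quietest week that avoids major city events."""
    visitors = fetch_weekly_visitors(museum_id)
    busy_weeks = ask_tourism_agent(city)
    quiet_weeks = {week: avg for week, avg in visitors.items() if week not in busy_weeks}
    return min(quiet_weeks, key=quiet_weeks.get)
```

In practice, the reasoning steps (which tool to call and how to combine the results) would be driven by the LLM itself, but the overall loop of gathering evidence from tools and other agents before answering is the same.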
Types of Agents
Depending on the complexity of the task at hand, AI agents can be designed with different levels of capability. While simpler agents are ideal for straightforward goals, more sophisticated agents are necessary for tackling more complex and dynamic challenges. From the simplest to the most advanced, the following are the main types of agents, with a short code sketch contrasting two of them after the list (Petrova-Dimitrova, 2022; Gutowska, 2024; AWS, 2024):
Simple Reflex Agents: These are the most basic type of AI agents, suitable for specific tasks with predefined rules. They don’t have memory or advanced reasoning capabilities, so they simply perform preprogrammed actions as reflexes. For example, a thermostat adjusting the temperature based on a set threshold is a simple reflex agent.
Model-Based Agents: A model-based agent is like a simple reflex agent, but it uses memory to maintain an internal model of the world. It can use the stored information, its reflexes, and its state to support its decisions. For example, as it moves around, a robot vacuum cleaner detects obstacles like furniture, navigates around them, and stores a model of the areas it has already cleaned.
Goal-Based Agents: These agents maintain an internal model of their working environment and are designed to achieve specific, complex goals. By assessing and anticipating different possible outcomes, they can search for and plan a sequence of actions to complete the task and meet the final goal, always choosing the optimal path. This search and planning enhance their performance compared with simple reflex and model-based agents. For example, a navigation system evaluates several routes to reach a destination. Since its rule is to select the fastest route, the agent identifies the best option and recommends it to the user.
Utility-Based Agents: Unlike goal-based agents that focus solely on achieving a particular objective, utility-based agents rely on a utility function to evaluate and compare the potential outcomes of different options. This utility function provides a way to measure success, especially when dealing with uncertainty, and helps the agent select the path that maximizes the overall benefit. For example, a utility-based agent can search for a flight that optimizes the travel time and minimizes the ticket’s total price.
Learning Agents: These agents are designed to improve over time by learning from experience while leveraging their base knowledge. While holding the same capabilities as other agents, and potentially being goal- or utility-based, they differentiate themselves by continuously using input and feedback mechanisms (reinforcement learning) to adapt to unfamiliar situations. Additionally, they employ a problem generator to create new tasks and propose innovative solutions, enabling continuous learning and adaptation. For example, a learning agent in a movie streaming service might make recommendations based on a user’s viewing history. As the user watches more movies and provides ratings, the agent adapts its suggestions, refining the recommendations to match the user’s preferences.
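As a rough illustration of how the decision logic differs across these types, the sketch below contrasts a simple reflex rule (the thermostat) with a utility-based choice (the flight search). The threshold, prices, and utility weights are invented for the example and would normally encode real user preferences.

```python
from dataclasses import dataclass

# Simple reflex agent: a fixed rule mapping a percept directly to an action.
def thermostat(temperature_c: float, threshold_c: float = 20.0) -> str:
    return "heat_on" if temperature_c < threshold_c else "heat_off"

# Utility-based agent: score every option and pick the best trade-off.
@dataclass
class Flight:
    price_eur: float
    duration_h: float

def utility(flight: Flight, price_weight: float = 1.0, time_weight: float = 50.0) -> float:
    # Higher is better: penalise both ticket price and travel time.
    return -(price_weight * flight.price_eur + time_weight * flight.duration_h)

def choose_flight(options: list[Flight]) -> Flight:
    return max(options, key=utility)

if __name__ == "__main__":
    print(thermostat(18.5))  # -> heat_on
    flights = [Flight(120, 5.5), Flight(200, 2.0), Flight(90, 9.0)]
    print(choose_flight(flights))  # -> Flight(price_eur=200, duration_h=2.0)
```

A goal-based agent would add search and planning over sequences of actions on top of such rules, and a learning agent would adjust the rule or the utility weights from feedback over time.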
Multi-Agent Systems
Beyond the types of agents described previously, the most complex tasks can require the combined knowledge of multiple AI agents. Multi-agent systems consist of several decision-making agents, each specialized in a distinct part of the main task, working collaboratively to achieve a common goal.
In multi-agent systems, all agents interact in a shared environment, taking each other’s goals, memories, and action plans into account. Communication in these systems can occur either directly, through messages exchanged between agents, or indirectly, by modifying the environment in which they operate.
Furthermore, unlike single agents, which become more challenging to manage and scale as their complexity increases, multi-agent systems provide a more efficient alternative. Single agents frequently encounter challenges such as managing an overwhelming number of tools, which might result in poor decision-making regarding which tool to use. Additionally, as tasks get more complex, a single agent may struggle to track and manage many specialization areas, such as planning, research, or mathematical problem-solving, leading to inefficiency and limited performance.
In contrast, the decentralized design of multi-agent systems provides significant advantages in accuracy, adaptability, and scalability. These systems outperform single-agent systems as they distribute work among specialized agents and use a shared pool of resources. Furthermore, rather than having several agents redundantly learn the same policies, multi-agent systems can exchange learned experiences, saving time and increasing efficiency and overall productivity. This method also enables additional agents or components to be smoothly incorporated, avoiding the integration issues that traditional systems frequently face.
In multi-agent systems, agents can be connected in various ways. These connections enable communication and coordination between agents and affect the system’s efficiency and scalability. Commonly used architectures include network designs, in which any agent can communicate with any other; supervisor designs, in which a central agent routes work to specialists; and hierarchical designs, which nest supervisors within supervisors (LangGraph).
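As a minimal, framework-free sketch of the supervisor pattern, the code below routes a task between two specialist “agents” implemented as plain functions. The keyword-based routing and the agent roles are invented for illustration; in a real system each specialist would wrap an LLM with its own tools and memory, and the supervisor itself would typically be an LLM deciding where to delegate.

```python
from typing import Callable

# Specialist agents modelled as plain functions for the sake of the sketch.
def research_agent(task: str) -> str:
    return f"[research] collected background material for: {task}"

def math_agent(task: str) -> str:
    return f"[math] computed the quantities required by: {task}"

SPECIALISTS: dict[str, Callable[[str], str]] = {
    "research": research_agent,
    "math": math_agent,
}

def supervisor(task: str) -> str:
    """Delegate the task to a specialist using a crude keyword heuristic."""
    route = "math" if any(word in task.lower() for word in ("calculate", "sum", "average")) else "research"
    return SPECIALISTS[route](task)

if __name__ == "__main__":
    print(supervisor("Calculate the average weekly visitors for the Louvre"))
    print(supervisor("Find recent reports on museum attendance trends"))
```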
Benefits and Limitations
Agentic systems, whether single- or multi-agent, offer significant improvements over traditional AI models such as LLMs. While LLMs are very good at generating responses from static training data, they lack the dynamic, collaborative capabilities of agentic systems.
As seen previously, these advancements in agentic systems bring substantial benefits across industries, enhancing efficiency, adaptability, and decision-making (Forbes, 2024):
- One significant advantage is autonomy, as these systems can complete tasks independently, eliminating the need for constant human supervision. This allows for quicker reactions in essential environments like healthcare and autonomous cars.
- Another advantage is adaptability – agent systems may learn from their experiences, improve their performance over time using techniques such as reinforcement learning, and provide highly tailored experiences and solutions.
- They are also remarkably successful at solving complicated problems because they can analyse vast databases, discover trends, and make real-time judgments.
This enables companies to streamline processes and explore new automation and intelligent decision-making opportunities.
Despite all the benefits, there are still some limitations (Forbes, 2024):
- For instance, trust and transparency are significant challenges: because these systems make independent decisions, users may find it difficult to understand their reasoning.
- Another limitation is reliability in unknown conditions, as even well-trained systems can fail when confronted with situations they have never seen before.
- Furthermore, there are ethical and safety concerns, particularly in critical fields such as healthcare and autonomous driving, where poor decisions can have catastrophic consequences. This highlights the critical yet challenging need for clear legal and ethical regulations as agentic systems become more autonomous.
- Finally, agentic systems demand extensive resources to develop and deploy, such as processing power, massive datasets, and specialized skills, making them expensive for many companies.
Future of AI agents
The future of AI agents is promising, with AI systems becoming more proactive and capable of solving problems on their own. Rather than simply waiting for instructions, they will anticipate needs and suggest helpful solutions. With a better understanding of emotions, AI agents will create more natural and personalized interactions, and their ability to handle text, voice, and images will enhance communication and customer service. As their roles expand, ensuring that their actions are fair and transparent will be essential, allowing them to contribute positively while maintaining trust.