Overview

The LLM Agent is the central component of Evo AI, acting as the “thinking” part of your application. It leverages the power of a Large Language Model (LLM) for reasoning, natural language understanding, decision making, response generation, and interaction with tools.

Unlike deterministic workflow agents that follow predefined execution paths, the LLM Agent's behavior is non-deterministic. It uses the LLM to interpret instructions and context, dynamically deciding how to proceed, which tools to use (if any), and whether to transfer control to another agent.
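This control flow can be sketched in plain Python. Everything below is illustrative, not the Evo AI implementation: `call_llm`, the decision dictionary, and the tool names are hypothetical stand-ins showing how the model's output, rather than hard-coded branches, picks the next action.

```python
# Hypothetical sketch: the LLM's decision, not fixed code paths,
# determines whether the agent answers, calls a tool, or transfers.
def run_agent(user_message, tools, sub_agents, call_llm):
    """One reasoning step: the LLM decides the next action."""
    decision = call_llm(user_message, list(tools), list(sub_agents))
    if decision["action"] == "use_tool":
        result = tools[decision["tool"]](**decision["args"])
        return f"tool:{decision['tool']} -> {result}"
    if decision["action"] == "transfer":
        return f"transferred to {decision['agent']}"
    return decision["reply"]

# Toy model stand-in that routes weather questions to a tool.
def toy_llm(message, tools, sub_agents):
    if "weather" in message and "get_weather" in tools:
        return {"action": "use_tool", "tool": "get_weather",
                "args": {"city": "Lisbon"}}
    return {"action": "respond", "reply": "How can I help?"}

tools = {"get_weather": lambda city: f"sunny in {city}"}
print(run_agent("What's the weather?", tools, [], toy_llm))
# -> tool:get_weather -> sunny in Lisbon
```

The same user message could take a different path on another run with a real model, which is exactly what "non-deterministic" means here.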

Based on Google ADK: This implementation follows the standards established by the Google Agent Development Kit, ensuring compatibility and best practices.

Key Features

Dynamic Reasoning

Uses LLMs for contextual interpretation and intelligent decision making

Tool Usage

Integrates with APIs, databases, and external services through tools

Multi-turn

Maintains context across long and complex conversations

Flexibility

Adapts to different scenarios without reprogramming
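The multi-turn feature above boils down to carrying conversation history into every model call. A minimal sketch, with a toy stand-in for the model:

```python
# Minimal sketch of multi-turn context: each turn is appended to a history
# list that is sent back to the model, so later answers can use earlier
# facts. `fake_llm` is a toy stand-in, not a real model call.
def fake_llm(history):
    for turn in history:
        if turn["role"] == "user" and turn["content"].startswith("My name is"):
            name = turn["content"].removeprefix("My name is ").rstrip(".")
            if history[-1]["content"] == "What is my name?":
                return f"Your name is {name}."
    return "Noted."

class Conversation:
    def __init__(self, llm):
        self.llm = llm
        self.history = []

    def send(self, text):
        self.history.append({"role": "user", "content": text})
        reply = self.llm(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

chat = Conversation(fake_llm)
chat.send("My name is Ada.")
print(chat.send("What is my name?"))  # the name survives across turns
```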

Creating Your First LLM Agent

Step-by-Step Platform Guide

Let’s create a complete LLM agent using the Evo AI interface.

Multi-Agent Systems (Sub-Agents)

Fundamental Concepts

Based on the Google Agent Development Kit, multi-agent systems let you build complex applications by composing multiple specialized agents.

transfer_to_agent function: When you configure sub-agents for an LLM agent, a transfer_to_agent tool is automatically made available. This function allows the main agent to delegate session execution to one of its specialized sub-agents, transferring complete control of the conversation.
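The mechanics of that delegation can be sketched in plain Python. This is a hypothetical illustration of the concept, not the ADK or Evo AI implementation; the `Agent` class and agent names are invented for the example.

```python
# Illustrative sketch of transfer_to_agent: when sub-agents are configured,
# the parent exposes a transfer tool; calling it hands complete control of
# the session to the named sub-agent.
class Agent:
    def __init__(self, name, sub_agents=()):
        self.name = name
        self.sub_agents = {a.name: a for a in sub_agents}
        self.active = self  # who currently owns the conversation

    def transfer_to_agent(self, agent_name):
        """Auto-provided tool: delegate full control to a sub-agent."""
        self.active = self.sub_agents[agent_name]
        return f"control transferred to {agent_name}"

billing = Agent("billing_specialist")
support = Agent("support_coordinator", sub_agents=[billing])

support.transfer_to_agent("billing_specialist")
print(support.active.name)  # billing_specialist now handles the session
```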

Agent Hierarchy

Parent-child structure where agents coordinate specialized sub-agents

Workflow Agents

Orchestrators that manage execution flow between sub-agents

Communication

Mechanisms to share state and delegate tasks between agents

Specialization

Each agent focuses on a specific responsibility

Configuring Sub-Agents on the Platform

Communication Mechanisms

Common Multi-Agent Patterns

1. Coordinator/Dispatcher Pattern
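In this pattern one coordinator agent classifies each request and dispatches it to a specialist sub-agent. A toy sketch, where `classify` stands in for the LLM's routing decision and the specialist names are hypothetical:

```python
# Hypothetical coordinator/dispatcher sketch: classify, then dispatch.
def classify(message):
    # Stand-in for the LLM's routing decision.
    if "invoice" in message or "refund" in message:
        return "billing"
    return "general"

specialists = {
    "billing": lambda m: f"billing agent handles: {m}",
    "general": lambda m: f"general agent handles: {m}",
}

def coordinator(message):
    return specialists[classify(message)](message)

print(coordinator("I need a refund"))
# -> billing agent handles: I need a refund
```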

2. Sequential Pipeline Pattern
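Here each sub-agent's output becomes the next one's input, in a fixed order. A minimal sketch with plain functions standing in for the pipeline stages (the stage names are illustrative):

```python
# Sequential pipeline sketch: stages run in order, each consuming
# the previous stage's output.
def run_pipeline(stages, payload):
    for stage in stages:
        payload = stage(payload)
    return payload

extract = lambda text: text.split(":")[1].strip()   # pull out the value
normalize = lambda value: value.lower()             # clean it up
summarize = lambda value: f"order item = {value}"   # final report

print(run_pipeline([extract, normalize, summarize], "Item: WIDGET"))
# -> order item = widget
```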

3. Generator-Critic Pattern
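In this pattern one agent drafts output and a second agent reviews it, looping until the critic approves or a retry budget runs out. Both roles below are toy stand-ins for LLM agents:

```python
# Generator-critic sketch: draft, review, and revise until approved.
def generator(attempt):
    drafts = ["draft without citation", "draft with citation [1]"]
    return drafts[min(attempt, len(drafts) - 1)]

def critic(draft):
    return "approve" if "[1]" in draft else "revise: add a citation"

def generate_with_review(max_rounds=3):
    for attempt in range(max_rounds):
        draft = generator(attempt)
        if critic(draft) == "approve":
            return draft, attempt + 1
    return draft, max_rounds

draft, rounds = generate_with_review()
print(draft, "| rounds:", rounds)
# -> draft with citation [1] | rounds: 2
```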

Testing Your Agent

First Conversation

Essential Components in the Interface

1. Identity and Purpose

2. Instructions

3. Advanced Configurations

Common Use Cases

Customer Service

Recommended configuration:

  • Model: GPT-3.5-turbo (fast)
  • Temperature: 0.3 (consistent)
  • Sub-agents: Specialists by area
  • Tools: Knowledge base
  • Agent Settings:
    • Load Memory: ✅ (remember preferences)
    • Load Knowledge: ✅ (FAQ and policies)
    • Output Schema: ✅ (structured tickets)
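The "Output Schema: structured tickets" setting above means the agent's reply must conform to a declared shape instead of free text. A hypothetical sketch of what such a ticket schema and check could look like (the field names are invented, not an Evo AI schema):

```python
# Hypothetical ticket schema: the agent's structured output is validated
# against this shape before being returned to the caller.
TICKET_SCHEMA = {
    "ticket_id": str,
    "category": str,
    "priority": str,
    "summary": str,
}

def validate_ticket(payload):
    missing = [k for k in TICKET_SCHEMA if k not in payload]
    wrong_type = [k for k, t in TICKET_SCHEMA.items()
                  if k in payload and not isinstance(payload[k], t)]
    return not missing and not wrong_type

ticket = {"ticket_id": "T-1001", "category": "billing",
          "priority": "high", "summary": "Duplicate charge on invoice"}
print(validate_ticket(ticket))  # True
```

Structured output like this is what makes the agent's replies safe to feed into downstream systems such as a ticketing API.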

Sales Assistant

Recommended configuration:

  • Model: GPT-4 (advanced reasoning)
  • Temperature: 0.7 (creative)
  • Sub-agents: Qualifier, demonstrator
  • Tools: Product catalog
  • Agent Settings:
    • Load Memory: ✅ (customer history)
    • Planner: ✅ (sales process)
    • Output Schema: ✅ (structured data)

Data Analysis

Recommended configuration:

  • Model: Claude-3-Sonnet (analytical)
  • Temperature: 0.2 (precise)
  • Sub-agents: Data collectors
  • Tools: Data APIs
  • Agent Settings:
    • Planner: ✅ (complex analyses)
    • Output Schema: ✅ (standardized reports)
    • Load Knowledge: ✅ (methodologies)

Personal Assistant

Recommended configuration:

  • Model: Gemini-Pro (multimodal)
  • Temperature: 0.5 (balanced)
  • Sub-agents: Calendar, tasks
  • Tools: Calendar, email
  • Agent Settings:
    • Load Memory: ✅ (personal preferences)
    • Preload Memory: ✅ (complete context)
    • Planner: ✅ (task organization)

Best Practices

Next Steps


LLM agents are the foundation for creating truly intelligent and adaptable AI experiences. With proper configuration via the platform interface, you can build powerful assistants that meet your business’s specific needs.