
LLM Generates Tokens, Agent Generates Messages, AgentLauncher Generates Agents

Understanding the fundamental differences between these three components is crucial for building effective AI systems. Each operates at a different abstraction level, with distinct inputs and outputs that work together to create intelligent behavior.


LLM (Large Language Model)

An LLM generates tokens: given a sequence of input tokens, it predicts the next token, one at a time. It has no memory, no tools, and no loop; it is purely token in, token out.

Agent

An agent generates messages: given a list of input messages, it runs a loop of LLM calls and tool calls and returns the resulting messages. The agent adds state (the conversation) and actions (tools) on top of the LLM.

AgentLauncher

An AgentLauncher generates agents: given a configuration, it assembles the components (messages, LLM call, tools) and launches the agent lifecycle. It operates one level above the agent, producing customized agents on demand.

What Needs to be Done

Since the AgentLauncher serves as the orchestrator of agents, we need to standardize the agent lifecycle and all interfaces within this lifecycle. This standardization ensures consistency and interoperability across different agent implementations.

Standardize the Message Interface

The first step is standardizing the message interface, which defines the input and output formats for agents. The message types commonly used in modern systems include:

- System message: instructions that set the agent's behavior
- User message: the task or question from the user
- Assistant message: text generated by the LLM
- Tool call message: a request from the LLM to execute a tool
- Tool result message: the output returned by a tool execution

These message types form the communication backbone that enables agents to interact effectively with users, systems, and tools.
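
To make the interface concrete, here is a minimal sketch of these message types as Python dataclasses. The class and field names are illustrative assumptions, not a fixed standard; real systems often use tagged dictionaries instead.

```python
from dataclasses import dataclass, field

# Illustrative message types; field names are assumptions, not a standard.

@dataclass
class SystemMessage:
    content: str  # instructions that set the agent's behavior

@dataclass
class UserMessage:
    content: str  # the task or question from the user

@dataclass
class AssistantMessage:
    content: str  # text generated by the LLM

@dataclass
class ToolCallMessage:
    tool_name: str                               # which tool to run
    arguments: dict = field(default_factory=dict)  # arguments for the tool

@dataclass
class ToolResultMessage:
    tool_name: str  # which tool produced this result
    result: str     # the tool's output

msg = ToolCallMessage(tool_name="search", arguments={"query": "weather"})
```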

Standardize the LLM Call Interface

The LLM call drives the agent’s decision-making process, so we need to standardize this interface for consistency across different implementations.

Input: A list of messages containing:

- The system message and user message that define the task
- Any assistant, tool call, and tool result messages accumulated so far

Output: A list of messages containing:

- An assistant message with generated text, and/or
- One or more tool call messages requesting tool execution

The key insight is that if the output contains a tool call message, it indicates the agent’s logic should continue—the agent will execute the tool and receive a tool result message. If no tool call message is present, the agent’s task is complete.
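
This stop condition can be sketched in a few lines. The dict-based message shape and the `has_tool_call` helper are assumptions for illustration:

```python
# Deciding whether the agent loop continues, based solely on whether
# the LLM output contains a tool call message.

def has_tool_call(messages):
    """Return True if any output message is a tool call."""
    return any(m.get("type") == "tool_call" for m in messages)

# An output that requests a tool: the agent keeps looping.
output_with_tool = [{"type": "tool_call", "tool": "calculator", "args": {"expr": "2+2"}}]
# A plain assistant reply: the task is complete.
output_plain = [{"type": "assistant", "content": "The answer is 4."}]

print(has_tool_call(output_with_tool))  # prints True (keep looping)
print(has_tool_call(output_plain))      # prints False (stop)
```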

Standardize the Tool Interface

Tool calls represent the agent’s actions and form the core of the agent’s interaction with its environment. We need to standardize the tool interface to ensure consistent behavior.

Tool Schema Components:

- Name: a unique identifier for the tool
- Description: what the tool does, written for the LLM
- Parameters: a schema (typically JSON Schema) describing the tool's arguments

Tool Execution Process:

- Parse the tool call message to get the tool name and arguments
- Look up the tool in the tool set and execute it with those arguments
- Wrap the output in a tool result message and append it to the conversation
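
The sketch below shows one way these pieces fit together, assuming the JSON-Schema-style name/description/parameters layout common in tool-calling APIs. The registry, message shapes, and `tool_call` signature are illustrative assumptions:

```python
# Hypothetical tool schema: name, description, and a JSON-Schema-style
# parameters block describing the arguments.
add_tool = {
    "name": "add",
    "description": "Add two integers.",
    "parameters": {
        "type": "object",
        "properties": {
            "a": {"type": "integer"},
            "b": {"type": "integer"},
        },
        "required": ["a", "b"],
    },
}

# Registry mapping tool names to their implementations (assumed layout).
TOOL_IMPLS = {"add": lambda a, b: a + b}

def tool_call(tool_call_message, tool_set):
    """Execute the requested tool and wrap the output as a tool result message."""
    name = tool_call_message["tool"]
    known_names = {t["name"] for t in tool_set}
    if name not in known_names:
        return {"type": "tool_result", "tool": name, "result": "error: unknown tool"}
    result = TOOL_IMPLS[name](**tool_call_message["args"])
    return {"type": "tool_result", "tool": name, "result": str(result)}

msg = tool_call({"tool": "add", "args": {"a": 2, "b": 3}}, [add_tool])
```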

Standardize the Agent Lifecycle

The agent lifecycle is a standard loop that iterates through LLM calls and tool calls until the task is completed. We can define the agent lifecycle with these key components:

Core Components:

- System message and user message: the initial conversation
- LLM call: the decision-making step
- Tool call: the action-execution step
- Tool set: the tools available to the agent

Implementation:

def agent_life_cycle(system_message, user_message, llm_call, tool_call, tool_set):
    # system_message is optional; skip it when absent
    conversation = [m for m in (system_message, user_message) if m is not None]

    while True:
        llm_output_messages = llm_call(conversation, tool_set)
        conversation.extend(llm_output_messages)
        # is_tool_call checks the message type (implementation omitted)
        tool_call_messages = [m for m in llm_output_messages if is_tool_call(m)]
        if not tool_call_messages:
            break  # no tool call: the task is complete
        for tool_call_message in tool_call_messages:
            tool_result_message = tool_call(tool_call_message, tool_set)
            conversation.append(tool_result_message)
    return conversation
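
To see the lifecycle in action, here is a self-contained toy run with stub components. It includes a compact version of the loop so the example runs on its own; the dict-based message shape, the scripted `llm_call`, and the stub `tool_call` are all assumptions for illustration:

```python
# Compact lifecycle loop over dict-based messages (assumed shape).
def agent_life_cycle(system_message, user_message, llm_call, tool_call, tool_set):
    conversation = [m for m in (system_message, user_message) if m is not None]
    while True:
        outputs = llm_call(conversation, tool_set)
        conversation.extend(outputs)
        tool_calls = [m for m in outputs if m["type"] == "tool_call"]
        if not tool_calls:
            return conversation  # no tool call: the task is complete
        for tc in tool_calls:
            conversation.append(tool_call(tc, tool_set))

def scripted_llm_call(conversation, tool_set):
    # First turn: request the clock tool; second turn: give the final answer.
    if not any(m["type"] == "tool_result" for m in conversation):
        return [{"type": "tool_call", "tool": "clock", "args": {}}]
    return [{"type": "assistant", "content": "It is noon."}]

def stub_tool_call(tc, tool_set):
    return {"type": "tool_result", "tool": tc["tool"], "result": "12:00"}

history = agent_life_cycle(
    None,  # system message is optional
    {"type": "user", "content": "What time is it?"},
    scripted_llm_call,
    stub_tool_call,
    tool_set=[{"name": "clock"}],
)
# history now holds: user, tool_call, tool_result, assistant
```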

Benefits of Standardization

With this standardized agent lifecycle, we can generate different agents by simply changing the components. For example, we can implement an agent launcher that builds each component based on configuration parameters, then launches the agent lifecycle with these customized components.
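
One way such a launcher might look is sketched below. The config layout, builder functions, and stub components are hypothetical; a real launcher would construct a model client and tool registry from the configuration instead:

```python
# Hypothetical AgentLauncher: build lifecycle components from a config dict,
# then run the standardized loop with them.

def build_llm_call(config):
    def llm_call(conversation, tool_set):
        # Stand-in for a real model client selected by config["model"].
        return [{"type": "assistant", "content": f"[{config['model']}] done"}]
    return llm_call

def build_tool_call(config):
    def tool_call(message, tool_set):
        return {"type": "tool_result", "tool": message["tool"], "result": "ok"}
    return tool_call

def launch_agent(config, user_message):
    llm_call = build_llm_call(config)
    tool_call = build_tool_call(config)
    conversation = [user_message]
    while True:
        outputs = llm_call(conversation, config["tool_set"])
        conversation.extend(outputs)
        tool_calls = [m for m in outputs if m["type"] == "tool_call"]
        if not tool_calls:
            return conversation
        for tc in tool_calls:
            conversation.append(tool_call(tc, config["tool_set"]))

config = {"model": "toy-model", "tool_set": []}
result = launch_agent(config, {"type": "user", "content": "hi"})
```

Swapping the model, tools, or prompts then only means changing the configuration; the lifecycle itself stays untouched.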

This modular approach enables:

- Swapping LLMs, tools, or prompts without changing the lifecycle
- Generating many specialized agents from one standardized loop
- Consistent, interoperable behavior across agent implementations