Agent Runtime

Thinking in Agent Runtime

Agent Runtime models AI agents after humans. You deal with concepts like tasks, tools, linguistics, communication styles, and memory, instead of graphs of nodes and edges. This is core to Agent Runtime's API design philosophy. The sooner you adopt this mental model, the more productive you'll be at developing agents.

Remember, unlike traditional workflow-based LLM calls, agents are productive because of, not despite, their ability to self-determine the best actions to take to accomplish their goals.

Always try to build the minimal programmable agent. Think of Agent Runtime as "programmer-in-the-loop" instead of "agent-in-the-loop". Your job is to let the AI agent plan and act on most tasks to accomplish its goal. Only program what you need to steer the agent onto the right path faster.

Understanding agent lifecycle

An agent has a goal, tools, and an LLM model.

const agent = createAgent({
  goal: structuredGoal(``),
  model: "claude-3-5-sonnet-latest",
  tools: [webSearch], // optional
});

Agent Runtime provides a structuredGoal function that refines your prompt to work better for agents. If you want to hard-code a specific prompt, omit structuredGoal and pass the prompt as a plain string.
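For example, a hard-coded prompt would look like the following. The stub definitions at the top are stand-ins so the snippet is self-contained; in a real project, createAgent and webSearch come from Agent Runtime.

```typescript
// Stand-ins for illustration only; the real exports come from Agent Runtime.
type Tool = { name: string };
const webSearch: Tool = { name: "webSearch" };
const createAgent = (config: { goal: string; model: string; tools?: Tool[] }) =>
  config;

// Hard-coded prompt: pass a plain string instead of wrapping it in structuredGoal.
const agent = createAgent({
  goal: "Research the topic and summarize the three most relevant findings.",
  model: "claude-3-5-sonnet-latest",
  tools: [webSearch],
});
```

Use a plain string when you have already iterated on the exact wording; use structuredGoal when you want the runtime to refine the prompt for you.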

A client, such as a human in a chat, another agent, or a REST API, can invoke the agent to create a new session.

Each session consists of:

  • state - "What the agent knows right now": the current context and information the agent has access to during this interaction
  • action - "What the agent wants to do next": the specific operation or response the agent is currently executing
  • tasks - "What else the agent will do after": a queue of planned steps the agent will take to accomplish the goal
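The three parts above can be sketched as a TypeScript shape. The field names follow the list, but the exact types here are illustrative assumptions, not Agent Runtime's real definitions.

```typescript
// Illustrative sketch of a session's shape; the types are assumptions.
type AgentAction = {
  type: "TOOL_USE" | "RESPOND";
  tool?: string;
  input?: unknown;
};

interface AgentSession {
  state: { context: string[] }; // what the agent knows right now
  action: AgentAction | null;   // what the agent wants to do next
  tasks: AgentAction[];         // what else the agent will do after
}

// A session mid-flight: the agent is searching, then plans to respond.
const session: AgentSession = {
  state: { context: ["User asked for a weather summary."] },
  action: { type: "TOOL_USE", tool: "webSearch", input: { query: "weather" } },
  tasks: [{ type: "RESPOND" }],
};
```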

Use a reducer to influence the agent session. You can think of this as "managing the agent's thought process". For example, suppose your agent needs to draft an email and you want to prepend [Drafted by AI] to the subject.

const reducer = (state: AgentState, action: AgentAction) => {
  if (action.type === "TOOL_USE" && action.tool === "draftEmail") {
    // Replace the pending draftEmail call with one whose subject is prefixed.
    state.tasks.pop();
    const subject = `[Drafted by AI] ${action.input.subject}`;
    state.tasks.push(
      callTool(draftEmail, {
        ...action,
        input: { ...action.input, subject },
      })
    );
  }

  return state;
};

This is how you can use the deterministic benefits of code to enforce specific behaviors, while working with the non-deterministic benefits of LLMs to self-determine the best action sequence to complete the goal.
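The reducer can be exercised in isolation with stand-in definitions. Here, AgentState, AgentAction, callTool, and draftEmail are simplified stand-ins (not the real Agent Runtime exports) so the subject-prefixing behavior can be tested outside the runtime.

```typescript
// Stand-ins so the reducer runs outside Agent Runtime.
type AgentAction = {
  type: string;
  tool: string;
  input: { subject: string };
};
type AgentState = { tasks: AgentAction[] };
const draftEmail = "draftEmail";
const callTool = (tool: string, action: AgentAction): AgentAction => ({
  ...action,
  tool,
});

const reducer = (state: AgentState, action: AgentAction): AgentState => {
  if (action.type === "TOOL_USE" && action.tool === "draftEmail") {
    // Swap the pending draftEmail call for one with a prefixed subject.
    state.tasks.pop();
    const subject = `[Drafted by AI] ${action.input.subject}`;
    state.tasks.push(callTool(draftEmail, { ...action, input: { ...action.input, subject } }));
  }
  return state;
};

// Simulate a session where the agent is about to draft an email.
const pending: AgentAction = {
  type: "TOOL_USE",
  tool: "draftEmail",
  input: { subject: "Q3 report" },
};
const next = reducer({ tasks: [pending] }, pending);
console.log(next.tasks[0].input.subject); // "[Drafted by AI] Q3 report"
```

Because the reducer is a plain function of state and action, it can be unit-tested like this without ever calling an LLM.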
