
Laravel AI SDK Advanced: Tools, Structured Output, and Conversational Agents

Go beyond basic prompting with the Laravel AI SDK — build agents that call tools, return typed structured data, remember multi-turn conversations, and run in the background via Laravel queues.

Gurpreet Singh
March 18, 2026

Beyond Simple Prompting

In the previous guide we covered installing laravel/ai and building a basic agent with prompt() and stream(). Once that foundation is in place, the next step is making agents genuinely useful in production: agents that can call your application's own functions as tools, return structured data instead of free-form text, remember what was said earlier in a conversation, and process long-running tasks in the background.

This guide covers all four of those capabilities — along with the middleware pipeline and event system — so you can build production-grade AI features rather than toy demos.

Tools: Letting Your Agent Take Action

A tool is a PHP class that the language model can invoke when it determines it needs external information or needs to perform an action. The model decides when to use a tool based on your agent's instructions and the user's message — you do not hardcode the decision.

Common use cases for tools:

  • Fetching real-time data (stock prices, weather, order status) that the model was not trained on
  • Querying your own database based on what the user asked
  • Sending emails, creating records, or triggering other application actions
  • Performing calculations that an LLM might get wrong if done purely through generation

Create a tool class with an execute() method and descriptive docblock — the SDK uses the description to tell the model what the tool does and when to use it:

namespace App\Ai\Tools;

use App\Models\Order;

class GetOrderStatus
{
    /**
     * Retrieve the current status and estimated delivery date for a customer order.
     *
     * @param  string  $orderId  The order reference number, e.g. "ORD-12345"
     */
    public function execute(string $orderId): string
    {
        $order = Order::where('reference', $orderId)->first();

        if (! $order) {
            return "No order found with reference {$orderId}.";
        }

        return "Order {$orderId} is currently {$order->status}. "
             . "Estimated delivery: {$order->estimated_delivery->format('D, d M Y')}.";
    }
}

Register the tool on your agent by implementing HasTools:

use Laravel\Ai\Contracts\Agent;
use Laravel\Ai\Contracts\HasTools;
use Laravel\Ai\Promptable;
use App\Ai\Tools\GetOrderStatus;
use App\Ai\Tools\GetProductInfo;

class SupportBot implements Agent, HasTools
{
    use Promptable;

    public function instructions(): string
    {
        return 'You are a customer support agent. Use the available tools to look up '
             . 'real-time order status and product information before answering.';
    }

    public function tools(): iterable
    {
        return [
            new GetOrderStatus,
            new GetProductInfo,
        ];
    }
}

Now when a user asks "Where is my order ORD-99821?", the model will call GetOrderStatus with $orderId = "ORD-99821", receive the result from your database, and incorporate it into a natural-language response — all automatically, within a single prompt() call.

Tools can call external APIs, run Eloquent queries, dispatch jobs, or invoke any PHP code. The return value should be a string (or something castable to a string) that the model can read and use. If your tool fails, return an error string — the model can then tell the user it was unable to fetch the information.
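For example, a tool that wraps an external HTTP API follows the same shape as GetOrderStatus. This is a hedged sketch: the weather endpoint and its response fields are illustrative, not a real service — only the execute() convention and error-string pattern come from the SDK examples above:

```php
namespace App\Ai\Tools;

use Illuminate\Support\Facades\Http;

class GetCurrentWeather
{
    /**
     * Look up the current weather for a city so the agent can answer
     * weather-related delivery questions. (Endpoint and fields illustrative.)
     *
     * @param  string  $city  The city name, e.g. "Berlin"
     */
    public function execute(string $city): string
    {
        $response = Http::timeout(5)->get('https://example.com/api/weather', [
            'city' => $city,
        ]);

        if ($response->failed()) {
            // Return an error string so the model can explain the failure.
            return "Weather data for {$city} is currently unavailable.";
        }

        return "Current weather in {$city}: {$response->json('summary')}, "
             . "{$response->json('temp_c')}°C.";
    }
}
```

The timeout matters: a hung external API inside a tool call blocks the whole prompt() round trip.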

Structured Output: Typed Responses from LLMs

By default, agent responses are strings — useful for chat, but not when you need the AI to populate a report, extract entities from text, or classify content. Structured output instructs the model to return a JSON object that matches a schema you define, and the SDK maps it to an array-accessible response object.

Implement HasStructuredOutput and define your schema using the JsonSchema builder:

use Illuminate\Contracts\JsonSchema\JsonSchema;
use Laravel\Ai\Contracts\Agent;
use Laravel\Ai\Contracts\HasStructuredOutput;
use Laravel\Ai\Promptable;

class SentimentAnalyser implements Agent, HasStructuredOutput
{
    use Promptable;

    public function instructions(): string
    {
        return 'Analyse the sentiment of the provided customer review. '
             . 'Return a sentiment label, a confidence score, and a one-sentence summary.';
    }

    public function schema(JsonSchema $schema): array
    {
        return [
            'sentiment'  => $schema->enum(['positive', 'neutral', 'negative'])->required(),
            'confidence' => $schema->number()->min(0)->max(1)->required(),
            'summary'    => $schema->string()->required(),
        ];
    }
}

Call the agent and access the response like an array:

$response = (new SentimentAnalyser)->prompt(
    'This product is absolutely terrible. Broke after two days and support ignored me.'
);

$sentiment  = $response['sentiment'];  // "negative"
$confidence = $response['confidence']; // 0.97
$summary    = $response['summary'];    // "Customer experienced product failure and poor support."

// Store in database
Review::find($id)->update([
    'ai_sentiment'   => $sentiment,
    'ai_confidence'  => $confidence,
    'ai_summary'     => $summary,
]);

The schema builder supports strings, integers, numbers, booleans, enums, arrays, and nested objects. Because the SDK enforces the schema at the model level (using provider-native structured output features), responses reliably match the schema, so you should not need to parse or repair malformed JSON yourself — though a cheap sanity check before writing to the database never hurts on critical paths.
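Nested structures compose the same way. In this sketch the object() and items() combination is an assumption extrapolated from the flat example above, not confirmed API, so treat the exact method names as illustrative:

```php
public function schema(JsonSchema $schema): array
{
    return [
        'invoice_number' => $schema->string()->required(),
        'total'          => $schema->number()->min(0)->required(),
        'line_items'     => $schema->array()->items(
            $schema->object([
                'description' => $schema->string()->required(),
                'quantity'    => $schema->integer()->min(1)->required(),
                'unit_price'  => $schema->number()->min(0)->required(),
            ])
        )->required(),
    ];
}
```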

Practical uses for structured output:

  • Lead qualification: Extract company size, use case, and budget band from a free-text form submission
  • Document processing: Parse invoices, contracts, or CVs into structured fields
  • Content moderation: Return a category, severity score, and reason for flagged content
  • Code review: Return issues as a structured array with file, line, severity, and description

Conversational Agents with Memory

A stateless agent forgets everything the moment a request ends. For chat interfaces, support bots, or any multi-step interaction, you need agents that remember the conversation. The SDK handles this with the RemembersConversations trait and automatic database persistence.

Add the trait to your agent alongside the Conversational interface:

use Laravel\Ai\Concerns\RemembersConversations;
use Laravel\Ai\Contracts\Agent;
use Laravel\Ai\Contracts\Conversational;
use Laravel\Ai\Promptable;

class SupportBot implements Agent, Conversational
{
    use Promptable, RemembersConversations;

    public function instructions(): string
    {
        return 'You are a helpful support agent. Remember details the user has shared '
             . 'earlier in the conversation and refer back to them naturally.';
    }
}

Start a new conversation by calling forUser():

$response = (new SupportBot)
    ->forUser(auth()->user())
    ->prompt('Hi, my name is Sarah and I ordered the Pro plan last week.');

// Save the conversation ID so you can continue it
session(['support_conversation' => $response->conversationId]);

Continue the same conversation by calling continue():

$conversationId = session('support_conversation');

$response = (new SupportBot)
    ->continue($conversationId, as: auth()->user())
    ->prompt('I still have not received my invoice.');

The SDK automatically loads all previous messages from the agent_conversation_messages table and includes them in the request, so the model knows that "Sarah" is talking about her Pro plan invoice. New messages — both the user turn and the assistant reply — are stored after each interaction.

You can implement a fully custom message history (e.g. loading from your own chat_messages table) by implementing the messages() method on your agent instead of using the RemembersConversations trait:

use Laravel\Ai\Messages\Message;

public function messages(): iterable
{
    return ChatMessage::where('conversation_id', $this->conversationId)
        ->orderBy('created_at')
        ->get()
        ->map(fn($m) => new Message($m->role, $m->content))
        ->all();
}

Queueing Long-Running Agents

Some AI tasks take too long to run within a synchronous HTTP request — processing a 50-page PDF, generating a 2,000-word report, or running an agent that calls ten tools in sequence. For these, dispatch the agent as a queued job:

// In a controller or action class
GenerateReportAgent::make(report: $report)->queue();

// Or with explicit queue configuration
GenerateReportAgent::make(report: $report)
    ->onQueue('ai-reports')
    ->onConnection('redis')
    ->queue();

The agent processes asynchronously, and you can use Laravel Broadcasting to push the result to the user's browser via WebSockets when it completes — giving you a seamless background-processing UI with real-time delivery.
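One way to close that loop, as a hedged sketch: listen for the AgentResponseReceived event (covered in the Events section) and broadcast the result. ReportReady is an illustrative broadcast event you would define in your own application, and the assumption that the agent instance still carries its $report constructor argument is mine, not the SDK's:

```php
use App\Events\ReportReady;
use Illuminate\Support\Facades\Event;
use Laravel\Ai\Events\AgentResponseReceived;

Event::listen(AgentResponseReceived::class, function ($event) {
    // Only react to the background report agent.
    if ($event->agent instanceof GenerateReportAgent) {
        // Assumes the agent kept a public reference to its report.
        broadcast(new ReportReady(
            reportId: $event->agent->report->id,
            content: (string) $event->response,
        ));
    }
});
```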

Middleware

Agent middleware lets you intercept and modify requests and responses at the pipeline level — useful for logging, rate limiting, injecting context, or transforming output. Middleware is registered in config/ai.php and applies globally across all agents, or you can define per-agent middleware by overriding the middleware() method:

use Laravel\Ai\Middleware\LogAgentInteraction;
use Laravel\Ai\Middleware\RateLimitPerUser;

public function middleware(): array
{
    return [
        new RateLimitPerUser(maxPerMinute: 10),
        new LogAgentInteraction,
    ];
}

A custom middleware class receives the outgoing request and a next closure, making the pattern identical to Laravel HTTP middleware:

class LogAgentInteraction
{
    public function handle(AgentRequest $request, Closure $next): AgentResponse
    {
        $start = microtime(true);

        $response = $next($request);

        AiLog::create([
            'agent'      => $request->agent::class,
            'prompt'     => $request->prompt,
            'provider'   => $request->provider,
            'tokens'     => $response->usage->totalTokens,
            'duration_ms' => round((microtime(true) - $start) * 1000),
        ]);

        return $response;
    }
}
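The same handle() signature lets middleware short-circuit a request before it ever reaches the provider. In this sketch, RateLimiter and ThrottleRequestsException are standard Laravel; the AgentRequest/AgentResponse types follow the logging example above and the class itself is illustrative:

```php
use Closure;
use Illuminate\Http\Exceptions\ThrottleRequestsException;
use Illuminate\Support\Facades\RateLimiter;

class ThrottleAgentCalls
{
    public function handle(AgentRequest $request, Closure $next): AgentResponse
    {
        $key = 'agent-calls:'.auth()->id();

        // Reject before spending any provider tokens.
        if (RateLimiter::tooManyAttempts($key, maxAttempts: 10)) {
            throw new ThrottleRequestsException('Too many AI requests, slow down.');
        }

        RateLimiter::hit($key, decaySeconds: 60);

        return $next($request);
    }
}
```

Throwing is the simplest way to short-circuit here, since it avoids having to construct an AgentResponse by hand.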

Events

The SDK dispatches events at key points in the agent lifecycle. Listen to them in EventServiceProvider or via Event::listen() for observability, cost tracking, or triggering side effects:

use Laravel\Ai\Events\AgentPrompted;
use Laravel\Ai\Events\AgentResponseReceived;

Event::listen(AgentResponseReceived::class, function ($event) {
    // Track token usage per user for billing
    $event->agent->user?->increment('ai_tokens_used', $event->response->usage->totalTokens);

    // Alert if cost exceeds threshold
    if ($event->response->usage->totalTokens > 10_000) {
        Log::warning('High token usage', [
            'agent'  => $event->agent::class,
            'tokens' => $event->response->usage->totalTokens,
        ]);
    }
});

Putting It All Together: A Real Example

Here is a complete SalesCoach agent that combines tools, structured output, and conversation memory — the canonical example from the official documentation:

class SalesCoach implements Agent, Conversational, HasTools, HasStructuredOutput
{
    use Promptable, RemembersConversations;

    public function __construct(public User $user) {}

    public function instructions(): string
    {
        return 'You are a sales coach. Analyse sales call transcripts and provide '
             . "specific, actionable feedback. Score the rep's performance objectively.";
    }

    public function tools(): iterable
    {
        return [
            new RetrievePreviousTranscripts($this->user),
        ];
    }

    public function schema(JsonSchema $schema): array
    {
        return [
            'feedback' => $schema->string()->required(),
            'score'    => $schema->integer()->min(1)->max(10)->required(),
            'strengths'   => $schema->array()->items($schema->string())->required(),
            'improvements'=> $schema->array()->items($schema->string())->required(),
        ];
    }
}

// Usage
$response = SalesCoach::make(user: $user)
    ->forUser($user)
    ->prompt($transcript);

$score        = $response['score'];        // int: 7
$feedback     = $response['feedback'];     // string
$strengths    = $response['strengths'];    // array of strings
$improvements = $response['improvements']; // array of strings

This single agent class handles tool orchestration, response structuring, and conversation persistence — all with standard Laravel patterns and zero custom HTTP client code.

Frequently Asked Questions

How do I limit which users can use which agents?

Use standard Laravel authorization. Add a Gate check or policy at the controller level before instantiating the agent. The SDK does not enforce access control — that is your application's responsibility, which keeps the AI layer clean and reusable across different authorization contexts.
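A minimal sketch of that pattern, where 'use-support-bot' is an illustrative Gate ability you would define in your own application:

```php
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Gate;

public function chat(Request $request)
{
    Gate::authorize('use-support-bot'); // aborts with 403 if denied

    $response = (new SupportBot)
        ->forUser($request->user())
        ->prompt($request->input('message'));

    return response()->json(['reply' => (string) $response]);
}
```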

Can agents call other agents as tools?

Yes. You can wrap an agent's prompt() call inside a tool class, making it callable by a parent orchestrator agent. This is the foundation of multi-agent pipelines — a routing agent receives the user's request, decides which specialist agent to invoke, calls that agent's tool, and synthesises the results. The Laravel AI SDK does not impose a specific multi-agent architecture, so you compose agents using plain PHP class composition.
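A hedged sketch of that wrapper, reusing the SentimentAnalyser agent from earlier. The tool class and its namespace are illustrative; the assumption is simply that the nested agent's prompt() result is array-accessible as shown above:

```php
namespace App\Ai\Tools;

class AnalyseSentiment
{
    /**
     * Delegate sentiment analysis of a piece of text to a specialist agent.
     *
     * @param  string  $text  The text to analyse
     */
    public function execute(string $text): string
    {
        // The nested agent runs its own full prompt cycle; the parent
        // orchestrator only sees the final string, like any tool result.
        $result = (new \App\Ai\SentimentAnalyser)->prompt($text);

        return "Sentiment: {$result['sentiment']} "
             . "(confidence {$result['confidence']}). {$result['summary']}";
    }
}
```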

What happens to token usage costs in a long conversation?

Each request to the API includes the entire conversation history, so token costs grow with conversation length. For high-volume applications, implement a summarisation strategy: after N turns, prompt the model to summarise the conversation so far, store the summary, and use it as the starting context for subsequent turns instead of the full message history. The messages() method on your agent is the right place to implement this — it gives you full control over what context is sent with each request.
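To make that growth concrete, here is a small illustrative calculation in plain PHP (no SDK involved; the 200-tokens-per-turn figure is an assumption for the sake of arithmetic). Because each request resends all prior turns, cumulative input tokens grow quadratically with conversation length:

```php
<?php

// Turn $i resends all $i turns of history, so it costs roughly $i * $perTurn
// input tokens. The cumulative total over $turns turns is therefore
// $perTurn * $turns * ($turns + 1) / 2 — quadratic in conversation length.
function cumulativeInputTokens(int $turns, int $perTurn): int
{
    $total = 0;
    for ($i = 1; $i <= $turns; $i++) {
        $total += $i * $perTurn;
    }

    return $total;
}

echo cumulativeInputTokens(10, 200), "\n"; // 10 turns:  11,000 tokens sent
echo cumulativeInputTokens(50, 200), "\n"; // 50 turns: 255,000 tokens sent
```

Five times the turns costs over twenty times the tokens, which is why summarisation pays off quickly.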

Is the Laravel AI SDK production-ready?

Yes. It launched alongside Laravel 12 with full first-party support, comprehensive test utilities, and the same long-term support commitment as the rest of the framework. For greenfield Laravel 12 projects, it is the recommended approach over community packages. For existing projects on Laravel 10 or 11, consider whether upgrading makes sense or whether continuing with your current AI integration library is more practical.

#Laravel #AI #Laravel AI SDK #Agents #Tools #Structured Output #OpenAI #Anthropic #LLM #PHP
Gurpreet Singh

Senior Full Stack Developer — Laravel, Vue.js, Nuxt.js & AI. Available for freelance projects.
