Getting Started with Laravel AI SDK: Build Intelligent Agents in Laravel
The official Laravel AI SDK gives you a unified, expressive interface for building AI agents, generating text, and working with images, audio, and embeddings — all across nine providers, with zero boilerplate.
What is the Laravel AI SDK?
Laravel 12 ships with a first-party AI package — laravel/ai — that gives you a clean, Laravel-native way to build AI-powered features. Rather than wiring up raw HTTP clients to OpenAI or Anthropic by hand, you work with Agent classes, facades, and Eloquent-style fluent APIs that feel like they belong in a Laravel application.
The SDK supports nine providers out of the box: OpenAI, Anthropic, Gemini, Azure OpenAI, Groq, xAI, DeepSeek, Mistral, and Ollama. You can switch providers with a single method call and even configure automatic failover so your application keeps running if one provider goes down.
In this guide I will walk you through installation, your first agent, streaming responses, multimodal capabilities, provider failover, and testing — everything you need to ship a production AI feature in a Laravel application.
Installation
Install the package via Composer and publish the configuration file:
composer require laravel/ai
php artisan vendor:publish --provider="Laravel\Ai\AiServiceProvider"
php artisan migrate
The migration creates two tables — agent_conversations and agent_conversation_messages — which the SDK uses when you want agents to remember conversation history across requests.
Next, add your API keys to .env. You only need to add keys for the providers you actually use:
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=AI...
MISTRAL_API_KEY=...
The package reads these from config/ai.php, which maps each provider to its driver, key, and optional custom base URL. The custom URL option is useful if you route requests through a proxy, use Azure OpenAI's endpoint, or point Ollama at a self-hosted model:
// config/ai.php
'providers' => [
    'openai' => [
        'driver' => 'openai',
        'key' => env('OPENAI_API_KEY'),
        'url' => env('OPENAI_BASE_URL'), // optional
    ],
    'anthropic' => [
        'driver' => 'anthropic',
        'key' => env('ANTHROPIC_API_KEY'),
    ],
    'ollama' => [
        'driver' => 'ollama',
        'url' => 'http://localhost:11434/api',
    ],
],
Understanding Agents
The central concept in the Laravel AI SDK is the Agent. An agent is a PHP class that encapsulates everything an AI interaction needs: system instructions, conversation history, tools the model can call, and optionally a schema for structured output. Agents are the Laravel equivalent of what other frameworks call "assistants" or "chains".
Create your first agent with the Artisan command:
php artisan make:agent SupportBot
This scaffolds a class in app/Ai/Agents/SupportBot.php. A minimal agent looks like this:
namespace App\Ai\Agents;

use Laravel\Ai\Contracts\Agent;
use Laravel\Ai\Promptable;

class SupportBot implements Agent
{
    use Promptable;

    public function instructions(): string
    {
        return 'You are a helpful customer support agent for Acme Corp. '
            .'Answer questions about our products politely and concisely. '
            .'If you cannot answer a question, tell the user to email support@acme.com.';
    }
}
The Agent interface requires an instructions() method — the system prompt. The Promptable trait adds the prompt() method and handles all the HTTP communication with the provider.
Your First Prompt
Calling your agent is a single line:
$response = (new SupportBot)->prompt('What is your return policy?');
return (string) $response;
The response object is stringable, so you can cast it directly wherever you need the text. Behind the scenes, the SDK sends your instructions and the user message to the configured default provider.
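To put that in context, here is a minimal sketch of a route that forwards a user's question to the agent. The route path and the request field name are illustrative assumptions, not part of the SDK:

```php
use App\Ai\Agents\SupportBot;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

// Hypothetical endpoint: the '/support/ask' path and the 'question'
// input name are assumptions for illustration.
Route::post('/support/ask', function (Request $request) {
    $response = (new SupportBot)->prompt(
        (string) $request->input('question'),
    );

    // The response is stringable, so a cast yields the plain text.
    return response((string) $response);
});
```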
You can override the provider, model, and timeout per-call without changing the agent class:
use Laravel\Ai\Lab;
$response = (new SupportBot)->prompt(
    'What is your return policy?',
    provider: Lab::Anthropic,
    model: 'claude-haiku-4-5-20251001',
    timeout: 30,
);
This is useful for A/B testing different models, using a cheaper model for simple queries and a more capable model for complex ones, or falling back to a different provider during an outage.
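A minimal sketch of the cheap-versus-capable routing idea: a small helper picks a model per query. The helper name, the length heuristic, and the "capable" model string are assumptions, not part of the SDK:

```php
// Hypothetical helper: route long or comparison-style questions to a
// more capable (pricier) model, everything else to a cheap one. The
// heuristic and the capable-model name are illustrative assumptions.
function chooseModel(string $question): string
{
    $isComplex = str_word_count($question) > 40
        || str_contains(strtolower($question), 'compare');

    return $isComplex
        ? 'claude-sonnet-4-5'           // assumed "capable" model name
        : 'claude-haiku-4-5-20251001';  // cheaper model from the example above
}
```

The result can then be passed as the `model:` argument shown above.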
You can also use static make() for dependency injection — handy when your agent constructor accepts a model:
// Agent constructor
public function __construct(public User $user) {}
// Instantiation
$agent = SupportBot::make(user: $user);
$response = $agent->prompt('Track my order #1234.');
Streaming Responses
For chat interfaces or any UI where you want text to appear as it is generated — rather than waiting for the full response — use stream() instead of prompt():
$stream = (new SupportBot)->stream('Explain our premium plan.');
foreach ($stream as $chunk) {
    echo $chunk;
    ob_flush();
    flush();
}
In a real application you would broadcast these chunks over a WebSocket or Server-Sent Events channel. The Laravel AI SDK integrates with Laravel Broadcasting, so you can broadcast agent responses to a channel with minimal configuration — useful for building ChatGPT-style interfaces within your Laravel app.
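As a concrete sketch, a Server-Sent Events endpoint can wrap the loop above using Laravel's streamed responses. The route path and the SSE framing are illustrative assumptions; only `stream()` comes from the SDK:

```php
use App\Ai\Agents\SupportBot;
use Illuminate\Support\Facades\Route;

// Hypothetical SSE endpoint: flush each chunk to the browser as it
// arrives. The '/support/stream' path is an assumption.
Route::get('/support/stream', function () {
    return response()->stream(function () {
        $stream = (new SupportBot)->stream('Explain our premium plan.');

        foreach ($stream as $chunk) {
            echo 'data: '.$chunk."\n\n";
            ob_flush();
            flush();
        }
    }, 200, [
        'Content-Type'      => 'text/event-stream',
        'Cache-Control'     => 'no-cache',
        'X-Accel-Buffering' => 'no', // disable proxy buffering behind nginx
    ]);
});
```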
Supported Capabilities Per Provider
Not every provider supports every feature, and some capabilities are handled by specialty providers (ElevenLabs, Cohere, Jina, VoyageAI) beyond the nine core chat providers. The SDK documents capability support clearly:
- Text generation: OpenAI, Anthropic, Gemini, Azure, Groq, xAI, DeepSeek, Mistral, Ollama
- Image generation: OpenAI (DALL-E), Gemini, xAI
- Text-to-speech: OpenAI, ElevenLabs
- Speech-to-text: OpenAI Whisper, ElevenLabs, Mistral
- Embeddings: OpenAI, Gemini, Azure, Cohere, Mistral, Jina, VoyageAI
- Reranking: Cohere, Jina
- File / document uploads: OpenAI, Anthropic, Gemini
This means you can use Anthropic for your chat agent, OpenAI for embeddings, ElevenLabs for voice responses, and Cohere for reranking search results — all within the same Laravel application, all through the same consistent facade.
Images, Audio, and Embeddings
The Ai facade exposes these capabilities with a fluent builder API:
use Laravel\Ai\Facades\Ai;
use Laravel\Ai\Lab;
// Generate an image
$image = Ai::image('A minimalist logo for a SaaS product called Acme')
    ->provider(Lab::OpenAI)
    ->generate();

// Text-to-speech
$audio = Ai::audio('Your order has been shipped and will arrive in two days.')
    ->provider(Lab::ElevenLabs)
    ->generate();

// Transcribe audio
$transcript = Ai::transcribe(storage_path('app/recordings/call.mp3'))
    ->generate();

// Generate embeddings for vector search
$embedding = Ai::embeddings('The quick brown fox jumps over the lazy dog')
    ->provider(Lab::OpenAI)
    ->generate();
Embeddings return a float array that you can store in pgvector, Pinecone, or any other vector database for semantic search, clustering, or RAG (retrieval-augmented generation).
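For example, ranking stored embeddings against a query vector typically comes down to cosine similarity, which is plain arithmetic and needs no SDK at all:

```php
// Cosine similarity between two equal-length embedding vectors:
// 1.0 means identical direction, 0.0 means unrelated.
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;

    foreach ($a as $i => $value) {
        $dot   += $value * $b[$i];
        $normA += $value ** 2;
        $normB += $b[$i] ** 2;
    }

    return $dot / (sqrt($normA) * sqrt($normB));
}
```

In production the vector database usually computes this for you, but the helper is handy for small in-memory comparisons.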
Provider Failover
Production AI features need resilience. The SDK lets you define a failover chain so that if your primary provider is unavailable or rate-limited, requests automatically retry against the next provider in the list:
// config/ai.php
'failover' => [
    Lab::OpenAI,
    Lab::Anthropic,
    Lab::Gemini,
],
This means a transient OpenAI outage does not take down your application — the SDK silently routes the request to Anthropic instead.
Testing AI Features
The SDK ships with a testing facade that lets you fake AI responses without making real API calls — so your test suite is fast, free, and deterministic:
use Laravel\Ai\Facades\Ai;
Ai::fake();
// Optionally stub a specific response
Ai::fake([
    'Our return policy allows returns within 30 days.',
]);
$response = (new SupportBot)->prompt('What is your return policy?');
Ai::assertPrompted('What is your return policy?');
The assertPrompted() helper verifies the correct prompt reached the AI layer. You can also assert on the provider used, the model, and whether streaming was requested — giving you confidence that your agent is configured correctly without paying per-token in CI.
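Putting those pieces together, a feature test might look like this. The Pest test style is an assumption (a PHPUnit test method works the same way); only `Ai::fake()` and `Ai::assertPrompted()` come from the examples above:

```php
use App\Ai\Agents\SupportBot;
use Laravel\Ai\Facades\Ai;

it('answers return policy questions', function () {
    // Stub a deterministic response so no real API call is made.
    Ai::fake(['Our return policy allows returns within 30 days.']);

    $response = (new SupportBot)->prompt('What is your return policy?');

    expect((string) $response)->toContain('30 days');

    Ai::assertPrompted('What is your return policy?');
});
```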
Frequently Asked Questions
Is laravel/ai the same as Laravel Prism?
No. Laravel Prism is a popular community package for AI integration in Laravel. The laravel/ai SDK is an official first-party package maintained by the Laravel team, introduced with Laravel 12. Both solve similar problems, but the official SDK has deeper framework integration (Artisan commands, migrations, broadcasting, queueing) and is the recommended choice for new projects.
Can I use Ollama for local development?
Yes. The Ollama driver lets you run models like Llama 3, Mistral, and Phi locally during development, with no API key required. Set the driver to ollama and point it at your local Ollama server URL. This means zero API costs during development and no risk of accidentally leaking sensitive data to a cloud provider while building features.
Does the SDK support multimodal inputs — images, PDFs, audio?
Yes. The attachment API lets you send files along with a prompt. Supported file types depend on the provider — GPT-4o accepts images and documents, Claude accepts PDFs and images, Gemini accepts video. The SDK abstracts the provider-specific upload and attachment format so your application code stays the same regardless of which provider you use.
How does queueing work for long-running agents?
Agents can be dispatched to a queue via the standard Laravel job system. The SDK provides a QueueableAgent concern that handles serialization of the agent state. This is useful for tasks like batch document processing, overnight report generation, or any AI task where the user does not need to wait for a real-time response.
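The details of the QueueableAgent concern are beyond this guide, but the general pattern is an ordinary Laravel job that runs the prompt inside handle(). Everything here except prompt() is standard Laravel; the job name and payload are assumptions:

```php
namespace App\Jobs;

use App\Ai\Agents\SupportBot;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;

// Hypothetical job: answers a support question in the background
// so the user's request does not block on the AI provider.
class AnswerSupportQuestion implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    public function __construct(public string $question) {}

    public function handle(): void
    {
        $answer = (new SupportBot)->prompt($this->question);

        // Persist or broadcast the answer however your app needs.
    }
}

// Dispatch from a controller: AnswerSupportQuestion::dispatch($question);
```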
Senior Full Stack Developer — Laravel, Vue.js, Nuxt.js & AI. Available for freelance projects.