borah / llm-monitoring-laravel
Advanced LLM monitoring using LLM Port and Filament panel
Requires
- php: ^8.2
- borah/llm-port-laravel: ^1.0.4
- filament/filament: ^3.2
- flowframe/laravel-trend: ^0.3.0
- illuminate/contracts: ^10.0||^11.0
- spatie/laravel-package-tools: ^1.16
Requires (Dev)
- larastan/larastan: ^2.9
- laravel/pint: ^1.14
- nunomaduro/collision: ^8.1.1||^7.10.0
- orchestra/testbench: ^9.0.0||^8.22.0
- pestphp/pest: ^2.34
- pestphp/pest-plugin-arch: ^2.7
- pestphp/pest-plugin-laravel: ^2.3
- phpstan/extension-installer: ^1.3
- phpstan/phpstan-deprecation-rules: ^1.1
- phpstan/phpstan-phpunit: ^1.3
README
A comprehensive monitoring solution for Large Language Model usage in Laravel applications using LLM Port and Filament.
Features
- Track LLM API calls and usage statistics
- Evaluate LLM responses using built-in metrics
- Monitor token usage and costs
- Integrated Filament dashboard
- Extensible architecture for custom metrics
Installation
You can install the package via composer:
composer require borah/llm-monitoring-laravel
Then run the installation command:
php artisan llm-monitoring:install
This will:
- Publish the config file
- Run migrations to create necessary tables
- Copy the LlmPortCall model to your app
- Set up Filament resources and dashboard components
Configuration
After installation, you can configure the package in config/llm-monitoring.php:
return [
    'llmport' => [
        'driver' => null, // one of the llmport.php drivers
        'model' => null,
    ],
    'probability' => 100, // 0 to 100. Chance of a response being evaluated. 100 is always.
    'evaluations' => [
        \Borah\LlmMonitoring\Evaluations\AnswerRelevance::class,
        \Borah\LlmMonitoring\Evaluations\ContextRelevanceChainOfThought::class,
    ],
];
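As an illustration of the probability setting's semantics (0 never evaluates, 100 always does), the decision is equivalent to a weighted coin flip per response. The sketch below is a hypothetical stand-alone function, not the package's actual implementation:

```php
<?php

// Illustrative sketch only: applying a 0-100 "probability" value to decide
// whether a given LLM response should be evaluated. This mirrors the
// documented config semantics but is not the package's internal code.
function shouldEvaluate(int $probability): bool
{
    // Clamp to the documented 0-100 range, then roll against it.
    $probability = max(0, min(100, $probability));

    return random_int(1, 100) <= $probability;
}

var_dump(shouldEvaluate(100)); // bool(true): every response is evaluated
var_dump(shouldEvaluate(0));   // bool(false): evaluation is disabled
```

Lowering the value trades evaluation coverage for fewer extra LLM calls, since each evaluation issues its own request through the configured llmport driver.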
Dashboard Setup
During installation, a Filament dashboard will be set up. Make sure to register it in your Filament panel provider:
// in app/Providers/Filament/AdminPanelProvider.php
public function panel(Panel $panel): Panel
{
    return $panel
        // ... other configuration
        ->pages([
            \App\Filament\Pages\LlmDashboard::class,
        ])
        ->discoverWidgets(in: app_path('Filament/LlmMonitoring/Widgets'), for: 'App\\Filament\\LlmMonitoring\\Widgets');
}
Adding Custom Widgets
You can extend the dashboard with your own custom widgets. Create a new class that extends the LlmDashboard page:
namespace App\Filament\Pages;

use App\Filament\LlmMonitoring\Widgets\LlmCallsChart;
use App\Filament\LlmMonitoring\Widgets\LlmStats;
use App\Filament\LlmMonitoring\Widgets\LlmTokenConsumption;
use App\Filament\Widgets\MyCustomWidget;

class CustomLlmDashboard extends \App\Filament\Pages\LlmDashboard
{
    public function getWidgets(): array
    {
        return [
            LlmStats::class,
            LlmCallsChart::class,
            LlmTokenConsumption::class,
            MyCustomWidget::class,
        ];
    }
}
Then update your panel configuration to use your custom dashboard:
->pages([
    \App\Filament\Pages\CustomLlmDashboard::class,
])
Creating Custom Evaluations
You can create custom evaluation metrics by extending the BaseEvaluation class:
namespace App\Evaluations;

use Borah\LlmMonitoring\Evaluations\BaseEvaluation;
use Borah\LlmMonitoring\ValueObjects\EvaluationData;
use Borah\LlmMonitoring\ValueObjects\EvaluationResult;
use Borah\LLMPort\ValueObjects\ChatResponse;

class MyCustomEvaluation extends BaseEvaluation
{
    public function identifier(): string
    {
        return 'my-custom-evaluation';
    }

    public function description(): string
    {
        return 'Evaluates something custom about the LLM response';
    }

    public function systemPrompt(EvaluationData $data): string
    {
        return 'You are evaluating the quality of an AI response.';
    }

    public function userPrompt(EvaluationData $data): string
    {
        return "User Query: {$data->query}\n\nAI Response: {$data->response}";
    }

    protected function evaluate(EvaluationData $data, mixed $response): EvaluationResult
    {
        if ($response instanceof ChatResponse) {
            // Process the response and return a result
            return new EvaluationResult(
                value: 0.85,
                formattedValue: '85%',
                metadata: ['details' => 'Additional evaluation details']
            );
        }

        return new EvaluationResult(value: 0, formattedValue: '0%');
    }
}
Then add your custom evaluation to the config:
'evaluations' => [
    \Borah\LlmMonitoring\Evaluations\AnswerRelevance::class,
    \Borah\LlmMonitoring\Evaluations\ContextRelevanceChainOfThought::class,
    \App\Evaluations\MyCustomEvaluation::class,
],
Testing
composer test
Changelog
Please see CHANGELOG for more information on what has changed recently.
Credits
License
The MIT License (MIT). Please see License File for more information.