sourceability / openai-client
PHP 8.0+ OpenAI API client with fully typed and documented request/response models, guzzle and symfony/http-client support, and async/parallel requests.
Installs: 7 387
Dependents: 1
Suggesters: 0
Security: 0
Stars: 18
Watchers: 0
Forks: 1
Open Issues: 1
Requires
- php: >8.0
- brick/money: ^0.8.0
- jane-php/open-api-runtime: ^7.0
- php-http/discovery: ^1.0
- php-http/httplug: ^2.0
- php-http/message: ^1.0
Requires (Dev)
- guzzlehttp/guzzle: ^7.5
- guzzlehttp/psr7: ^2.4
- jane-php/open-api-3: ^7.4
- php-http/guzzle7-adapter: ^1.0
- php-http/logger-plugin: ^1.3
- phpunit/phpunit: ^10.0
- rector/rector: ^0.15.16
- slevomat/coding-standard: ^8.8
- symplify/easy-coding-standard: ^11.2
README
PHP 8.0+ OpenAI API client with fully typed and documented request/response models, guzzlehttp/guzzle and symfony/http-client support through HTTPlug, and async/parallel requests.

The client is generated from OpenAI's OpenAPI specification using jane-php.
Features:
- The request models are typed and include descriptions from the OpenAPI documentation.
- Uses HTTPlug as the HTTP abstraction layer.
- Async/parallel requests.
This is a community-maintained/unofficial library.
Installation
```
composer require sourceability/openai-client
```
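HTTPlug relies on php-http/discovery to locate an installed HTTP client, so if your project does not already ship one you may also need to install a client and its adapter. As a sketch (assuming Guzzle 7, which matches the adapter listed in this package's dev requirements; any HTTPlug-compatible client should work):

```shell
# Example only: install an HTTP client for HTTPlug to discover.
# Any PSR-18/HTTPlug-compatible client + adapter works.
composer require guzzlehttp/guzzle php-http/guzzle7-adapter
```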
Getting started
```php
<?php

require __DIR__ . '/vendor/autoload.php';

use Sourceability\OpenAIClient\Client;
use Sourceability\OpenAIClient\Generated\Model\CreateCompletionRequest;

$apiClient = Client::create(
    apiKey: getenv('OPENAI_API_KEY')
);

$requests = [
    (new CreateCompletionRequest())
        ->setModel('text-davinci-003')
        ->setTemperature(0)
        ->setMaxTokens(512)
        ->setPrompt('The jane php library is very useful because'),
    new CreateCompletionRequest(
        model: 'text-davinci-003',
        temperature: 0,
        maxTokens: 512,
        prompt: 'Symfony symfony symfony is like sourceability on a'
    ),
];

$completionResponses = $apiClient->createCompletions($requests);

var_dump($completionResponses);
```
ChatGPT with /v1/chat/completions:
```php
<?php

require __DIR__ . '/vendor/autoload.php';

use Sourceability\OpenAIClient\Client;
use Sourceability\OpenAIClient\Generated\Model\ChatCompletionRequestMessage;
use Sourceability\OpenAIClient\Generated\Model\CreateChatCompletionRequest;

$apiClient = Client::create(
    apiKey: getenv('OPENAI_API_KEY')
);

$requests = [
    new CreateChatCompletionRequest(
        model: 'gpt-3.5-turbo',
        temperature: 0,
        messages: [
            new ChatCompletionRequestMessage(
                role: 'user',
                content: 'The jane php library is very useful because'
            )
        ],
    ),
    new CreateChatCompletionRequest(
        model: 'gpt-3.5-turbo',
        temperature: 0,
        messages: [
            new ChatCompletionRequestMessage(
                role: 'user',
                content: 'Symfony symfony symfony is like sourceability on a'
            )
        ],
    ),
];

$completionResponses = $apiClient->createChatCompletions($requests);

var_dump($completionResponses);
```
Cost calculator
You can use ResponseCostCalculator, which relies on brick/money, to calculate the cost of a response:
```php
use Sourceability\OpenAIClient\Pricing\ResponseCostCalculator;

$responseCostCalculator = new ResponseCostCalculator();

$responseCost = $responseCostCalculator->calculate($myCompletionResponse);

var_dump([
    'total' => $responseCost->getTotal()->formatTo('en_US'),
    'prompt' => $responseCost->getPrompt()->formatTo('en_US'),
    'completion' => $responseCost->getCompletion()->formatTo('en_US'),
]);
```

```
array(3) {
  ["total"]=>
  string(10) "$0.0001280"
  ["prompt"]=>
  string(10) "$0.0000980"
  ["completion"]=>
  string(10) "$0.0000300"
}
```