How to Use ChatGPT-4 in a Laravel Application
ChatGPT was released a few months ago and is setting new records almost daily. Millions of people have used the OpenAI-built chat app that talks to people and answers anything thrown at it.
ChatGPT is an OpenAI language model. From resolving simple math queries to writing poems and code, it can do it all very accurately. Content creators and programmers worldwide already use it to help them create content, provide creative ideas, and analyze existing content.
But OpenAI’s ChatGPT is not the only place to access the AI language model. The best part of ChatGPT is that we can integrate it into our applications and create anything we wish for. In this article, I will show you how to use ChatGPT in Laravel applications.
At the time of writing, the public version of ChatGPT is 3.5. As announced by OpenAI, ChatGPT-4 is in a limited beta. ChatGPT-4 is a significant improvement over its predecessor, ChatGPT-3.5: it can handle larger amounts of data and produce longer responses. The only ways to access ChatGPT-4 right now are through the limited access a few sites provide or by signing up for the waitlist.
Anyway, without further ado, let’s start.
How to Use ChatGPT in Laravel
There are two ways to use the ChatGPT API in Laravel applications. We can either install a library that makes API requests easy, or we can build our own package using the API reference and documentation.
First, we will use the openai-php/laravel package. It is community-maintained and actively updated.
Install openai-php in Laravel
Open a terminal, move into your app’s root directory, and use Composer to pull the package into the application.
composer require openai-php/laravel
Next we can publish the configuration with the following command –
php artisan vendor:publish --provider="OpenAI\Laravel\ServiceProvider"
It will create the config file at config/openai.php. In the config we can set our OpenAI API key. By default it reads the OPENAI_API_KEY variable from the .env file, so open the .env file and set the OpenAI API key.
OPENAI_API_KEY=sk-....
Once the API key is set, our application is ready to communicate with the OpenAI API.
We can create a test route that points to a controller method to verify the connection works.
Route::get('openai', [AiController::class, 'process_request']);
Create a controller.
php artisan make:controller AiController
Create a test method process_request. It’s quick and dirty, but it’s enough to check whether the integration works properly.
<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use OpenAI\Laravel\Facades\OpenAI;

class AiController extends Controller
{
    // Send a completion request to OpenAI
    public function process_request(Request $request)
    {
        $result = OpenAI::completions()->create([
            'model'  => 'text-davinci-003',
            'prompt' => 'Laravel is a ',
        ]);

        // Convert the response object into a plain JSON-serializable structure
        $json_result = json_decode(json_encode($result));

        return $json_result;
    }
}
We can use the OpenAI facade to call the API. In the above method, we imported the facade and called the completions resource to complete the prompt. We can pass several parameters with the request, including which model to use and the prompt itself.
The API returns various data about our request. The response contains a choices array with the actual completion of our prompt, and a usage object that shows the tokens consumed to process the request.
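For instance, a slightly fuller request might cap the output with max_tokens and tune randomness with temperature, then read the first choice straight from the response. This is only a sketch; the parameter values are arbitrary examples, and the response is read array-style as the package allows.

$result = OpenAI::completions()->create([
    'model'       => 'text-davinci-003',
    'prompt'      => 'Laravel is a ',
    'max_tokens'  => 64,   // limit the length of the completion
    'temperature' => 0.7,  // lower values make output more deterministic
]);

// The generated text and token usage described above
$text        = $result['choices'][0]['text'];
$totalTokens = $result['usage']['total_tokens'];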
Call the API without openai-php/laravel
If you want to build your own library or set of classes to call the resources, you can call the API directly using Laravel’s built-in HTTP client. We can rewrite our controller method to call the resource directly.
<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Http;

class AiController extends Controller
{
    // Request a completion from the OpenAI REST API directly
    public function process_request(Request $request)
    {
        $result = Http::withHeaders([
            'Content-Type'  => 'application/json',
            // Consider reading the key from a config file instead of env()
            // so it still works when the configuration is cached.
            'Authorization' => 'Bearer ' . env('OPENAI_API_KEY'),
        ])->post('https://api.openai.com/v1/completions', [
            'model'  => 'text-davinci-003',
            'prompt' => 'Laravel is a ',
        ]);

        return $result;
    }
}
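If we only need the generated text rather than the whole payload, Laravel’s HTTP client can pluck values out of the JSON response with dot notation. A minimal sketch of how the controller could return just the interesting parts:

// Pull only the completion text and the token usage out of the response
$text        = $result->json('choices.0.text');
$totalTokens = $result->json('usage.total_tokens');

return response()->json([
    'completion' => $text,
    'tokens'     => $totalTokens,
]);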
All available API resources can be found on the API Reference page.
There are many more resources we can call from the API. For example, we can use the chat resource to create chat completions and build apps similar to ChatGPT, the audio resource to transcribe audio into text, and the fineTunes resource to create our own fine-tuned model.
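For instance, a chat completion request through the same facade might look like the sketch below. The model name and messages are placeholders to adapt to your own app.

use OpenAI\Laravel\Facades\OpenAI;

// Ask the chat model a question, the way ChatGPT-style apps do
$result = OpenAI::chat()->create([
    'model'    => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'system', 'content' => 'You are a helpful assistant.'],
        ['role' => 'user', 'content' => 'Explain Laravel middleware in one sentence.'],
    ],
]);

// The assistant's reply is in the first choice's message
$reply = $result['choices'][0]['message']['content'];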
Fine-tuning
Fine-tuning is one of the best features of OpenAI. The base models developed by OpenAI are trained on data from the Internet. With the fine-tuning resource, we can train any of the base models on our own data. Fine-tuning lets us use shorter prompts, which saves cost, and get more accurate output.
How to Fine-tune a Model?
We can fine-tune any of the four available base models: ada, babbage, curie, and davinci. Fine-tuning mainly involves three steps: preparing the data file(s), uploading the file to OpenAI using the files API, and initiating the fine-tuning job.
Depending on the data file size, it may take minutes or hours for the job to complete.
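The first two steps can be done with the files resource. The sketch below assumes a training.jsonl file of prompt/completion pairs stored in the app’s storage directory; the path is only an example.

use OpenAI\Laravel\Facades\OpenAI;

// Each line of the JSONL file is a pair such as:
// {"prompt": "Question ->", "completion": " answer END"}
$upload = OpenAI::files()->upload([
    'purpose' => 'fine-tune',
    'file'    => fopen(storage_path('app/training.jsonl'), 'r'),
]);

// The returned file id (file-...) is what the fine-tune request below needs
$trainingFileId = $upload['id'];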
The following example shows how we can train the davinci base model on our custom data –
$result = OpenAI::fineTunes()->create([
    'training_file'                  => 'file-iS2Uy3ZCGE42eeZn6u6VxyKy',
    'validation_file'                => 'file-ToDZxB6XEsJv942NRGj8ZfC9',
    'model'                          => 'davinci',
    'n_epochs'                       => 4,
    'batch_size'                     => null,
    'learning_rate_multiplier'       => null,
    'prompt_loss_weight'             => 0.01,
    'compute_classification_metrics' => false,
    'classification_n_classes'       => null,
    'classification_positive_class'  => null,
    'classification_betas'           => [],
    'suffix'                         => null,
]);
The above request creates a custom fine-tuned davinci model from the training_file that we uploaded using the files API resource.
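Once the job is submitted, we can check on it and, after it succeeds, use the resulting model in completion requests. A rough sketch, where the fine-tune id is a placeholder taken from the create response:

// Check the status of the fine-tuning job (id comes from the create response)
$job = OpenAI::fineTunes()->retrieve('ft-AF1WoRqd3aJAHsqc9NY7iL8F');

if ($job['status'] === 'succeeded') {
    // The fine-tuned model gets its own name, e.g. "davinci:ft-personal-2023-..."
    $completion = OpenAI::completions()->create([
        'model'  => $job['fine_tuned_model'],
        'prompt' => 'Laravel is a ',
    ]);
}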
Conclusion
So there you have it. We can easily use OpenAI in our Laravel applications. Recently there has been a tsunami of applications that use OpenAI, most of them focused on creating content. Before you jump into building your own ChatGPT-like app with the OpenAI API, keep in mind that it is not free of charge. OpenAI provides an $18 one-time credit to test the API, after which each request is charged according to the size of the prompt and the output.