Flutter ChatGPT with the REST API (using GetX)

Mohamed Abdo Elnashar
6 min read · Feb 15, 2023


Machine learning and artificial intelligence are transformative technologies that are already changing many aspects of our lives and are expected to play an even more important role in the future. Natural language processing is a subfield of machine learning that enables computers to understand and generate human language, and it has a wide range of applications.

OpenAI is a leading organization in the field of artificial intelligence, and its GPT-3 language model is one of the most advanced and well-known examples of natural language processing technology. GPT-3 is capable of generating human-like text in a variety of scenarios, such as writing articles, answering questions, and even creating original stories or poems. This technology has the potential to revolutionize the way we communicate, learn, and interact with machines, and it is expected to lead to many new and exciting applications in the years to come.

OpenAI provides a variety of natural language processing APIs that enable developers to integrate machine learning features into their applications. By calling these APIs from your Flutter apps, you can add advanced natural language processing capabilities without having to train or host any models yourself.

With OpenAI’s APIs, you can quickly build powerful and intelligent natural language processing features into your Flutter app.

Some of the features that you can add to your app using OpenAI’s APIs include language translation, sentiment analysis, text completion, and more. These features can enhance the user experience of your app, making it more engaging and useful for your users.

Let’s get started.

First, we need to create an API key on the OpenAI website:

1- Log in to the website and go to “View API keys”.
2- Create a new secret key.
3- Copy the generated key.
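Later in the article, the controller imports a `global_config.dart` that is never shown; it presumably holds the base URL and the key. A minimal sketch (the key value is a placeholder) could be:

```dart
// global_config.dart -- hypothetical contents; the controller imports
// this file, but the article never shows it.
// Never ship a real secret key inside a client app: in a real project,
// proxy the request through a backend or inject it with --dart-define.
const String baseURL = "https://api.openai.com/v1";
const String OPEN_AI_KEY = "YOUR_API_KEY";
```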

Create an API Request Class

Create a completion

To get a completion, we call POST https://api.openai.com/v1/completions and pass three parameters:

1- model: the model name, here “text-davinci-003”.
2- prompt: the input string, e.g. “Say this is a test”.
3- Authorization: a request header with the value ‘Bearer YOUR_API_KEY’.

Parameters

{
  "model": "text-davinci-003",
  "prompt": "Say this is a test",
  "max_tokens": 7,
  "temperature": 0,
  "top_p": 1,
  "n": 1,
  "stream": false,
  "logprobs": null,
  "stop": "\n"
}

Response

{
  "id": "cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7",
  "object": "text_completion",
  "created": 1589478378,
  "model": "text-davinci-003",
  "choices": [
    {
      "text": "\n\nThis is indeed a test",
      "index": 0,
      "logprobs": null,
      "finish_reason": "length"
    }
  ],
  "usage": {
    "prompt_tokens": 5,
    "completion_tokens": 7,
    "total_tokens": 12
  }
}

In Dart

We create a Dart method called ‘getTextCompletion’ that takes the query and calls the API to get a completion.

void getTextCompletion(String query) async {
  final response = await http.post(
    Uri.parse("$baseURL/completions"),
    body: json.encode({
      "model": "text-davinci-003",
      "prompt": query,
    }),
    headers: {
      "Content-Type": "application/json",
      'Authorization': 'Bearer $OPEN_AI_KEY',
    },
  );
  if (kDebugMode) {
    print("Response : ${response.body}");
  }
  if (response.statusCode == 200) {
    /// add the new message
    for (TextCompletionData element
        in TextCompletionModel.fromJson(json.decode(response.body)).choices) {
      messages.add(
        ChatItem(item: element.text, isSendMassage: false, isText: true),
      );
    }
  }
  _loading.value = false;
  update();
}
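The method above parses the response with a `TextCompletionModel` that the article never defines. A minimal sketch, assuming we only keep the fields this app actually reads, could look like this:

```dart
import 'dart:convert';

/// Hypothetical minimal model for the /completions response; the
/// article uses TextCompletionModel but does not show its code.
class TextCompletionData {
  final String text;
  TextCompletionData({required this.text});

  factory TextCompletionData.fromJson(Map<String, dynamic> json) =>
      TextCompletionData(text: json['text'] as String);
}

class TextCompletionModel {
  final List<TextCompletionData> choices;
  TextCompletionModel({required this.choices});

  factory TextCompletionModel.fromJson(Map<String, dynamic> json) =>
      TextCompletionModel(
        choices: (json['choices'] as List)
            .map((e) => TextCompletionData.fromJson(e as Map<String, dynamic>))
            .toList(),
      );
}

void main() {
  // Parse the sample response shown earlier in the article.
  const sample =
      '{"choices": [{"text": "\\n\\nThis is indeed a test", "index": 0}]}';
  final model = TextCompletionModel.fromJson(
      json.decode(sample) as Map<String, dynamic>);
  print(model.choices.first.text.trim()); // This is indeed a test
}
```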

Image generation

The image generations endpoint allows you to create an original image given a text prompt. Generated images can have a size of 256x256, 512x512, or 1024x1024 pixels. Smaller sizes are faster to generate. You can request 1–10 images at a time using the n parameter.

To generate an image, we call POST https://api.openai.com/v1/images/generations and pass three parameters:

1- n: the number of images to generate.
2- prompt: a text description of the image, e.g. ‘a white siamese cat’.
3- size: the image size, e.g. 256x256.

Parameters

{
  "prompt": "A cute baby sea otter",
  "n": 2,
  "size": "1024x1024"
}

Response

{
  "created": 1589478378,
  "data": [
    {
      "url": "https://..."
    },
    {
      "url": "https://..."
    }
  ]
}
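Given the response shape above, pulling the image URLs out of the `data` list in Dart is straightforward (the URLs below are placeholders, since the real ones are elided in the docs):

```dart
import 'dart:convert';

void main() {
  // Sample response in the shape documented above; placeholder URLs.
  const sample = '{"created": 1589478378, "data": ['
      '{"url": "https://example.com/a.png"}, '
      '{"url": "https://example.com/b.png"}]}';
  final data = json.decode(sample)['data'] as List;
  final urls = data.map((e) => e['url'] as String).toList();
  print(urls.length); // 2
  print(urls.first); // https://example.com/a.png
}
```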

In Dart

We create a Dart method called ‘createImagesGenerations’ that takes the query and calls the API to generate an image.

createImagesGenerations(String query) async {
  final response = await http.post(
    Uri.parse("$baseURL/images/generations"),
    body: json.encode({"prompt": query, "n": 1, "size": "256x256"}),
    headers: {
      "Content-Type": "application/json",
      'Authorization': 'Bearer $OPEN_AI_KEY',
    },
  );
  if (kDebugMode) {
    print("Response : ${response.body}");
  }
  if (response.statusCode == 200) {
    Map<String, dynamic> jsonData = json.decode(response.body);
    List<dynamic> list = jsonData["data"];

    /// add the new message
    messages.add(
      ChatItem(
        item: list.first["url"],
        isSendMassage: false,
        isText: false,
      ),
    );
  }
}

Image variation

The image variations endpoint allows you to create a variation of a given image.

To generate a variation, we call POST https://api.openai.com/v1/images/variations (sent as multipart form data, since it uploads an image file) and pass three parameters:

1- n: the number of variations to generate.
2- image: the image file to vary.
3- size: the image size, e.g. 256x256.

Parameters

{
  "image": image,
  "n": 2,
  "size": "1024x1024"
}

Response

{
  "created": 1589478378,
  "data": [
    {
      "url": "https://..."
    },
    {
      "url": "https://..."
    }
  ]
}

In Dart

We create a Dart method called ‘createImagesVariation’ that takes an image file and calls the API to create a variation.

createImagesVariation(File file) async {
  final request = http.MultipartRequest(
    'POST',
    Uri.parse("$baseURL/images/variations"),
  );

  request.files.add(await http.MultipartFile.fromPath('image', file.path));
  request.fields["n"] = "1";
  request.fields["size"] = "256x256";
  request.headers["Authorization"] = 'Bearer $OPEN_AI_KEY';

  var streamedResponse = await request.send();
  Map<String, dynamic> responseData =
      json.decode(await streamedResponse.stream.bytesToString());

  if (streamedResponse.statusCode == 200) {
    List<dynamic> list = responseData["data"];

    /// add the new message
    messages.add(
      ChatItem(
        item: list.first["url"],
        isSendMassage: false,
        isText: false,
      ),
    );
  }
}
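All three request methods append `ChatItem` objects to a `messages` list, but the article never shows that model. A minimal sketch consistent with how it is used (keeping the article's `isSendMassage` spelling) would be:

```dart
/// Hypothetical ChatItem model, inferred from how the article uses it:
/// `item` holds message text, an image URL, or a local file path, and
/// the flags tell the UI who sent the message and how to render it.
class ChatItem {
  final String item;
  final bool isSendMassage; // spelling kept to match the article's code
  final bool isText;

  ChatItem({
    required this.item,
    required this.isSendMassage,
    required this.isText,
  });
}

void main() {
  final msg = ChatItem(item: 'hello', isSendMassage: true, isText: true);
  print(msg.item); // hello
}
```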

Using GetX for state management, the full GetxController looks like this:

import 'dart:convert';
import 'dart:developer';
import 'dart:io';
import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:get/get.dart';
import 'package:http/http.dart' as http;
import '../../../../model/text_completion_model.dart';
import '../../model/chat_item.dart';
import '../config/global_config.dart';

class GPTController extends GetxController {
  ValueNotifier<bool> get loading => _loading;
  final ValueNotifier<bool> _loading = ValueNotifier(false);

  List<ChatItem> messages = [];

  TextEditingController searchTextController = TextEditingController();

  void getTextCompletion(String query) async {
    try {
      /// add my message
      messages.add(
        ChatItem(
            item: searchTextController.text,
            isSendMassage: true,
            isText: true),
      );
      searchTextController.clear();
      _loading.value = true;
      update();

      final response = await http.post(
        Uri.parse("$baseURL/completions"),
        body: json.encode({
          "model": "text-davinci-003",
          "prompt": query,
        }),
        headers: {
          "Content-Type": "application/json",
          'Authorization': 'Bearer $OPEN_AI_KEY',
        },
      );
      if (kDebugMode) {
        print("Response : ${response.body}");
      }
      if (response.statusCode == 200) {
        /// add the new message
        for (TextCompletionData element
            in TextCompletionModel.fromJson(json.decode(response.body))
                .choices) {
          messages.add(
            ChatItem(item: element.text, isSendMassage: false, isText: true),
          );
        }
      }
      _loading.value = false;
      update();
    } catch (e) {
      _loading.value = false;
    }
  }

  createImagesGenerations(String query) async {
    try {
      /// add my message
      messages.add(
        ChatItem(
          item: searchTextController.text,
          isSendMassage: true,
          isText: true,
        ),
      );

      searchTextController.clear();
      _loading.value = true;
      update();

      final response = await http.post(
        Uri.parse("$baseURL/images/generations"),
        body: json.encode({"prompt": query, "n": 1, "size": "256x256"}),
        headers: {
          "Content-Type": "application/json",
          'Authorization': 'Bearer $OPEN_AI_KEY',
        },
      );
      if (kDebugMode) {
        print("Response : ${response.body}");
      }
      if (response.statusCode == 200) {
        Map<String, dynamic> jsonData = json.decode(response.body);
        List<dynamic> list = jsonData["data"];

        /// add the new message
        messages.add(
          ChatItem(
            item: list.first["url"],
            isSendMassage: false,
            isText: false,
          ),
        );
      }
      _loading.value = false;
      update();
    } catch (e) {
      log("e : ${e.toString()}");
      _loading.value = false;
    }
  }

  createImagesVariation(File file) async {
    try {
      /// add my message
      messages.add(
        ChatItem(
          item: file.path,
          isSendMassage: true,
          isText: false,
        ),
      );

      searchTextController.clear();
      _loading.value = true;
      update();

      final request = http.MultipartRequest(
        'POST',
        Uri.parse("$baseURL/images/variations"),
      );

      request.files.add(await http.MultipartFile.fromPath('image', file.path));
      request.fields["n"] = "1";
      request.fields["size"] = "256x256";
      request.headers["Authorization"] = 'Bearer $OPEN_AI_KEY';

      var streamedResponse = await request.send();
      Map<String, dynamic> responseData =
          json.decode(await streamedResponse.stream.bytesToString());

      if (streamedResponse.statusCode == 200) {
        List<dynamic> list = responseData["data"];

        /// add the new message
        messages.add(
          ChatItem(
            item: list.first["url"],
            isSendMassage: false,
            isText: false,
          ),
        );
      }

      _loading.value = false;
      update();
    } catch (e) {
      log("e : ${e.toString()}");
      _loading.value = false;
    }
  }
}

These are all the main methods we will use in the app.

This is the source code of the full app.

I hope you all liked this blog and it helped you get started with Flutter! Don’t forget to smash that clap button and leave a comment down below.

Meet you at the next one.


Written by Mohamed Abdo Elnashar

Senior Flutter Developer, studying for a master's in computer science at the Faculty of Computer & Information Sciences, Mansoura University.
