Creating Our Own Chat GPT

In June, OpenAI announced function calling, which allows third-party application APIs to be passed into the GPT models, opening up a wide range of possibilities for creating specialized agents. Our team decided to write our own chat for working with GPT-4 from OpenAI and other ML/LLM models, with the ability to customize it for a company’s internal needs. The project is open source and can be downloaded via the link. It is currently in active development, so we would be glad to hear your feedback and suggestions in the comments. Also, send us your pull requests with fixes.

For the backend, we chose Python with Django Rest Framework. On the frontend: React, Redux, Redux-Saga, and Sass. Let’s start with the backend, which Yegor was responsible for. He describes the server side of the project himself.

Creating a chat message

The backend part of the project runs on Django, with Django Rest Framework used for the API. These tools are very popular today and have many ready-made libraries, which sped up development.

Let’s start with the fact that users remember an email address more easily than a username, and an email is unique, whereas a desired username may already be taken. I therefore decided to create my own User model, with a few changes compared to Django’s default one. It’s crucial to think the user model through at the early stages of a project!

Custom User Model

User’s Model

Let’s start with the changes:

1. Create a User model. I added an is_email_confirmed flag so that we can track email confirmation. Authorization is now done via email, but I decided to keep the option of choosing your own username.

2. Add a Manager so that users can be added to the database.

3. Assign the User model in the configuration:
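A minimal sketch of what these three steps might look like (the app name and field details are assumptions; the project’s actual code may differ):

```python
# users/models.py (illustrative)
from django.contrib.auth.models import AbstractUser, BaseUserManager
from django.db import models

class UserManager(BaseUserManager):
    """Step 2: a manager that knows how to add users to the database."""
    def create_user(self, email, username, password=None, **extra_fields):
        if not email:
            raise ValueError("An email address is required")
        user = self.model(email=self.normalize_email(email),
                          username=username, **extra_fields)
        user.set_password(password)
        user.save(using=self._db)
        return user

class User(AbstractUser):
    """Step 1: email becomes the login field; the username stays user-chosen."""
    email = models.EmailField(unique=True)
    is_email_confirmed = models.BooleanField(default=False)

    objects = UserManager()

    USERNAME_FIELD = "email"
    REQUIRED_FIELDS = ["username"]
```

```python
# settings.py — step 3: point Django at the custom model
AUTH_USER_MODEL = "users.User"
```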

I wanted users to be able to edit their personal information, creating the feel of live communication. So I decided to add a profile, where personal information, including an avatar, is stored.

Profile model:
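A possible shape for the Profile model (the exact fields, apart from the avatar mentioned above, are assumptions):

```python
from django.conf import settings
from django.db import models

class Profile(models.Model):
    # one Profile per User
    user = models.OneToOneField(settings.AUTH_USER_MODEL,
                                on_delete=models.CASCADE,
                                related_name="profile")
    first_name = models.CharField(max_length=150, blank=True)
    last_name = models.CharField(max_length=150, blank=True)
    avatar = models.ImageField(upload_to="avatars/", blank=True, null=True)
```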

But if you test the registration, you’ll notice that only the User record is created, without a Profile. To fix this and create a Profile whenever a User is created, you should use a signal. Django uses the receiver decorator to define signal receivers. The following code will automatically create a profile for each new user:
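A sketch of such a signal receiver (assuming the User and Profile models live in the same app):

```python
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=User)
def create_user_profile(sender, instance, created, **kwargs):
    # fires after every User save; create the Profile only on the first save
    if created:
        Profile.objects.create(user=instance)
```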

JWT Authorization

Now, to interact with the system, you need to authenticate. For this I chose JWT authorization, as it’s considered one of the safer ways to transfer information between two parties. The main usage scenario looks like this: with an access token, we can reach private resources; when that token expires, we send the refresh token and receive a new access + refresh token pair. The refresh token can expire too, but it has a much longer lifespan, and when it does, you simply go through the authorization procedure again to get a new pair of JWTs.

JWT Cycle

There is already a ready-made library for this, Simple JWT. It does an excellent job of generating access and refresh tokens, but to protect against CSRF and XSS attacks, I decided to pass the refresh token in an HttpOnly cookie. For this, I had to change the basic JWT settings and add the cookie to the response. Here is sample code that adds the refresh token to the cookie:
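As a simplified sketch, attaching the refresh token can be factored into a helper that works on any response object exposing Django’s set_cookie() (the cookie name and lifetime here are illustrative):

```python
def set_refresh_cookie(response, refresh_token, max_age=14 * 24 * 3600):
    """Attach the refresh token to the response as an HttpOnly cookie."""
    response.set_cookie(
        key="refresh_token",
        value=refresh_token,
        max_age=max_age,
        httponly=True,   # not readable from JavaScript, which mitigates XSS
        secure=False,    # switch to True once the site is served over HTTPS
        samesite="Lax",  # withheld from cross-site subrequests
    )
    return response
```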

I ran into some difficulties here. Working out why the token was not being saved properly and just hung in the response took most of my time. In this case, you should check the settings file and add the required parameter to it:

And don’t forget to change the SIMPLE_JWT settings, as follows:
'AUTH_COOKIE_SECURE': False – set it to True once the site runs over HTTPS;
'AUTH_COOKIE_HTTP_ONLY': True – the HttpOnly flag;
'AUTH_COOKIE_SAMESITE': 'Lax' – the cookie is withheld from cross-site subrequests, but still sent on top-level navigation.
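The settings block might then look like this (the AUTH_COOKIE* keys are custom keys read by the project’s own views, not built-in Simple JWT options):

```python
SIMPLE_JWT = {
    # ...token lifetimes and other default Simple JWT options...
    "AUTH_COOKIE": "refresh_token",
    "AUTH_COOKIE_SECURE": False,     # set True once the site runs over HTTPS
    "AUTH_COOKIE_HTTP_ONLY": True,   # HttpOnly flag
    "AUTH_COOKIE_SAMESITE": "Lax",   # withheld from cross-site subrequests
}
```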
Also, remember that on the frontend, requests should be sent with the withCredentials: true parameter.

Access to NLP (GPT)

Now that we’re authorized, we can query the model and get a response. A query is just a regular message, and what does every message have? Exactly: its own dialog, within which it lives. Based on this logic, you can communicate with the NLP model taking the context of previous messages into account. Perhaps it’s time to write a request. The text of our request will be “Write bubble sort in Python”, because who didn’t start programming by writing bubble sort? :) As soon as you press Enter, a request is sent to our API, from which we get a response. The answer is obtained with the take_answer() function, to which we pass the request:

OpenAI provides access to its API. In the project, interaction with the API is organized according to the following principle:

Diagram of interaction sequence of application layers

The request to OpenAI is made using the ChatCompletion.create() function; let’s study it in more detail:
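As a sketch, the context assembly and the call could look like this (build_messages is an illustrative name; the project’s own entry point is take_answer(), and its internals may differ):

```python
def build_messages(history, prompt, depth=5):
    """Flatten the last `depth` question/answer pairs into OpenAI's
    chat format, then append the new prompt."""
    messages = []
    for question, answer in history[-depth:]:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": prompt})
    return messages

def take_answer_sketch(history, prompt):
    # Requires the `openai` package and an API key; shown for illustration.
    import openai
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=build_messages(history, prompt),
    )
    return response["choices"][0]["message"]["content"]
```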

We take the previous requests and responses, five of each, and send them to OpenAI via ChatCompletion; I decided not to change the model for now, since there was no specific need. Now it’s time to look at the response, which should already have arrived:

And I was pleased to realize that this response can be rendered as Markdown for the user. Otherwise, it would be completely unreadable, and I would have had to invent or find a solution for displaying the answer on screen.


Now I (Nikita) will tell you about my part of the work. I broke the application down into the following Redux modules:

  • Auth (authorization, registration, password recovery);
  • Chat (working with messages, dialogs);
  • User (user data such as username, email, loading, error);
  • Profile (user data such as First Name, Last Name, Photo).

Then, I will go into detail about how the routing, registration, authorization, and chat are organized.


I added routing for the authorization, registration, chat, settings, and profile pages. Routing is particularly useful for a chat, since it allows a link to open a specific state. I split the routing into two files: routes and index. In routes.js I created several lists of routes with different access levels: routes for authorized users, routes for unauthorized users, and public routes whose behavior should not depend on whether the user is authorized. This approach reduces the number of redirects and checks used in the application. And if a user lands on a page they don’t have access to, the router automatically redirects them to the authorization page. Here is a portion of the routes file as an example:
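For illustration, the route lists might be shaped like this (the paths and component names are assumptions, not the project’s exact file):

```javascript
// routes.js (sketch): three lists with different access levels
const publicRoutes = [
  { path: '/email-verification', component: 'EmailVerification' },
];

const authProtectedRoutes = [
  { path: '/chats/:id?', component: 'Chats' },
  { path: '/settings', component: 'Settings' },
  { path: '/profile', component: 'Profile' },
];

const unauthRoutes = [
  { path: '/login', component: 'Login' },
  { path: '/register', component: 'Register' },
  { path: '/forgot-password', component: 'ForgotPassword' },
];
```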

Index.js is the main routing component; it returns the list of routes and applies different rules to each route depending on which list it belongs to. Here is an example route from index.js for an internal dashboard page of the project, which is not available to unauthorized users:
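Stripped of JSX, the guard that index.js applies per route list boils down to logic like the following (a sketch; the real component wraps react-router’s Route):

```javascript
// Decide what a route should do for the current user, per route-list type.
function resolveRoute(listType, isAuthorized) {
  if (listType === 'authProtected' && !isAuthorized) {
    return { redirect: '/login' };   // no access -> authorization page
  }
  if (listType === 'unauth' && isAuthorized) {
    return { redirect: '/chats' };   // already logged in -> dashboard
  }
  return { render: true };           // public routes always render
}
```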

Registration / Authorization

Registration Process

Registration consists of several stages:

  • Registration window, validation of user data.
  • Sending an email with a link to confirm user data.
  • Immediately after registration, the user is redirected to the /chats page with limited access.
  • At this point, the user has a flag "is_email_confirmed" = false.
  • After email confirmation, if the user is already authorized, a notification that the email has been confirmed is displayed in the dashboard; if the user is not authorized, they are redirected to the login page with a message that the email has been confirmed.

State Diagram of the Registration Process

Validation during Registration

Validation is organized with the useFormik hook from the Formik library.
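For example, a validate function of the kind one would pass to useFormik (the field names and rules here are assumptions):

```javascript
// Returns an errors object keyed by field name; empty means valid.
function validate(values) {
  const errors = {};
  if (!values.email) {
    errors.email = 'Required';
  } else if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(values.email)) {
    errors.email = 'Invalid email address';
  }
  if (!values.password) {
    errors.password = 'Required';
  } else if (values.password.length < 8) {
    errors.password = 'Must be at least 8 characters';
  }
  return errors;
}
```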

At the same time, using the useEffect hook, we track errors coming from the server and display them in the view:

Email Confirmation

Email confirmation is implemented in the EmailVerification.js component. If a valid token is received, the component redirects to the login page; otherwise, it displays an error.


Authorization process state diagram

I moved the main logic for working with authorization into the authUtils.js helper. We use an OAuth2-style mechanism with two tokens, access and refresh, to track user authorization. While the access token is valid, the user has access to the internal part of the project; when it expires, I request a token refresh through the API using the refresh token. The jwtDecode module is used to check the token’s expiry date.
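The expiry check amounts to reading the exp claim from the token payload. The project uses jwtDecode for this; the sketch below decodes the payload by hand just to stay dependency-free:

```javascript
function decodeJwtPayload(token) {
  // the middle segment of a JWT is a base64url-encoded JSON object
  const base64 = token.split('.')[1].replace(/-/g, '+').replace(/_/g, '/');
  return JSON.parse(atob(base64));
}

function isTokenExpired(token, nowSeconds = Date.now() / 1000) {
  const { exp } = decodeJwtPayload(token);
  return exp === undefined || exp <= nowSeconds;
}
```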


Available chat functionality

The chat is divided into the following components:

  • Chats.js contains the list of chats in the left column, along with the logic for working with paths such as chats/:id.
  • ChatInput.js is responsible for the input field.
  • Index.js contains the logic for displaying the list of messages.
  • UserHead contains the chat controls: deleting a chat and returning to the dialog list in the mobile version.

The OpenAI API returns responses in Markdown format. To display responses from the chat, which contain formatting, I used the ReactMarkdown component. For code inserts, I used React Syntax Highlighter.

We import the necessary components:

We output responses from GPT using formatting:
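Put together, the rendering might look roughly like this (a sketch; the prop names follow the current react-markdown and react-syntax-highlighter APIs, which may differ across versions):

```jsx
import ReactMarkdown from 'react-markdown';
import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter';

// Render a GPT answer: ordinary Markdown as-is, fenced code blocks
// through the syntax highlighter.
const Message = ({ text }) => (
  <ReactMarkdown
    components={{
      code({ inline, className, children }) {
        const match = /language-(\w+)/.exec(className || '');
        return !inline && match ? (
          <SyntaxHighlighter language={match[1]}>
            {String(children).replace(/\n$/, '')}
          </SyntaxHighlighter>
        ) : (
          <code className={className}>{children}</code>
        );
      },
    }}
  >
    {text}
  </ReactMarkdown>
);
```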

It was not entirely obvious how to correctly switch from the new-chat screen, located at /chats, to a specific chat at /chats/:id.

The logic is now as follows:

  • When creating the first message, a POST request is sent to create a dialogue.
  • Upon receiving a response that the dialogue is created, a new list of dialogs is drawn.
  • A POST request is sent to send a message to the LLM model.
  • After the list of dialogues finishes rendering, the user is redirected to the new dialogue’s route.
  • We wait for an answer from the model and display the message in the chat.

The main part of the business logic, which is responsible for switching to a new dialogue, is implemented in the Chats.js component.

The function in Chat/saga.js that implements the POST requests to the server to create a new dialogue and send the message:
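The saga’s flow, rewritten as a plain async function for readability (the real code is a redux-saga worker built from call/put effects; api here is a hypothetical client object):

```javascript
async function createDialogAndSend(api, text) {
  // 1. POST to create the dialogue
  const dialog = await api.createDialog({ title: text.slice(0, 50) });
  // 2. POST the first message to the LLM model within that dialogue
  const answer = await api.sendMessage({ dialogId: dialog.id, text });
  // the UI then redraws the dialogue list and redirects to /chats/:id
  return { dialog, answer };
}
```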

Near Future Plans

  • Displaying of errors and server messages – currently, this is partially implemented for registration and authorization.
  • Message-loading indicators; streaming of server responses, as implemented in ChatGPT; instant display of user messages.
  • Add logic to work with the user profile.
  • Add the ability to work with different types of agents (ML models).

Github link:

Project link:


Phill Stolyarov December 12, 2023 at 11:12 am

I have been using the fosterflow resource for several months. It has become significantly smarter and more useful during this time. How do you train it?

Nikita Bragin December 12, 2023 at 11:51 am

Hi Philip, thank you for your feedback, I appreciate it! I developed only the interface to the GPT model; OpenAI continually improves its products.
