
Creating a ChatGPT wrapper using Canonic

In this tutorial, we create a simple ChatGPT application end-to-end using low-code. The app we'll build today collects inputs from the user, combines them with pre-defined templates, and generates a response using OpenAI.

Pratham Agrawal

Tue Jan 16, 2024 · 7 min read

ChatGPT provides an incredibly easy way to generate and interact with content. In this article, we'll cover building a wrapper around ChatGPT on Canonic without code. Canonic is a low-code platform for building full-stack apps.

The app we'll build today collects inputs from the user, combines them with pre-defined templates, and generates a response using OpenAI. This can be useful for all kinds of projects, from product description generators, to support responses. Let's get started!

Creating a project

Let's log in to Canonic (you can create a free account here) and create a new project. If we want to protect our app behind a login screen, we can enable that here.

Storing Templates

Creating a table

A template simply consists of a name and the prompt that will be used for our ChatGPT query. Let's create a table called Template to store our templates. Make sure the Get all system endpoint is generated.

Now, create two text fields: one for the name and the other for the prompt. We could add more fields, such as an author or other inputs to customize it further, but we'll skip that for the sake of brevity.

Using the CMS to create templates

Let's deploy the project so that we can use the CMS (Content Management System). Hit the Deploy button at the top right. This deploys the project along with all its workflows and tables, making them ready to interact with.

To create a new template, click on the + button. Let's name it Sample. For the prompt, for the sake of this tutorial, let's just add Speak like a pirate. Hit Save to store the new template in the Template table we just created.
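For reference, you can think of the record we just created as a simple object with two text fields. This is only an illustrative sketch, not Canonic's exact storage format:

// Illustrative shape of a record in the Template table (not Canonic's exact format)
const sampleTemplate = {
  name: "Sample",               // used to look the template up from the workflow
  prompt: "Speak like a pirate" // combined with the user's prompt before calling ChatGPT
};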

Now we're ready to feed this into the OpenAI workflow.

Connecting to ChatGPT

Now, let's create a workflow that takes our template, combines it with a few user inputs, and sends it to ChatGPT. It should return the output returned from the API call.

Creating a workflow

A Canonic workflow acts like an API endpoint that you can either trigger directly over HTTP or use in the frontend builder when building the UI. Let's create a workflow and name it OpenAI Request. We want to trigger it on API Requests.

The workflow should accept two inputs: the name of the template to select, and the prompt that should be appended to the template's prompt. Let's create those two inputs as shown below.
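Since the workflow triggers on API Requests, once the project is deployed you could also call it directly over HTTP. The URL format, auth, and payload shape below are assumptions for illustration only; use the endpoint details shown in your own Canonic project.

// Hypothetical call to the deployed OpenAI Request workflow (URL and payload shape are assumptions)
async function callWorkflow() {
  const res = await fetch("https://<your-project-url>/api/open-ai-request", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      name: "Sample",                        // which template to use
      prompt: "Write a short product update" // prompt appended to the template's prompt
    }),
  });
  console.log(await res.json());
}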

Fetching the template

This should create an empty workflow. Now let's add our first node, which fetches the data for the selected template. Select Function as the Node Type and call it Query Templates. We want to query the Template table created above by the name of the template. A simple piece of JS code does the trick; add the following code inside the code field.

module.exports = async function (params) {
  // Look up the template whose name matches the workflow's "name" input
  return Template.find({ name: params.input.name })
}

Let's quickly test it out. Head over to the Test tab and pass the name of the template we created in the CMS as the input. Hit the Test button to run the workflow. If it runs successfully, you should see the data for the template we created earlier.

Adding the OpenAI Integration

Let's use the OpenAI integration to send the template and its prompt to ChatGPT. For that, we need to create a new node in the workflow. Select Integration as the Node Type and call it OpenAI Query.

In the API Integrations selector, select OpenAI. We want to use the chat completions API. Enter your OpenAI API key and hit the Save button to authenticate.

OpenAI Integration Configuration

Now, we want to send the prompt from the selected template to ChatGPT to generate a response. Let's head over to the Required tab in our configuration and fill out all the fields. Add a single message from the user in the messages section.

The content of the message should be the prompt of the selected template, followed by the user's prompt. Since we are already fetching the template data in the previous node, we can reference both using {{1.$output.prompt}} {{input.prompt}}

Here, 1 refers to the first node in the chain (Query Templates), $output is that node's output (the template that's returned), and prompt is the field we created in our Template table. input.prompt is the user-supplied prompt that we want to append to the request.
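For context, OpenAI's chat completions API expects a model and a list of messages. The integration assembles the request for us, but the payload it sends is roughly along these lines (the model name here is just an example):

// Approximate chat completions payload built by the integration (illustrative)
const payload = {
  model: "gpt-3.5-turbo", // example model; use whichever model you selected in the integration
  messages: [
    {
      role: "user",
      // "{{1.$output.prompt}} {{input.prompt}}" resolves to something like:
      content: "Speak like a pirate Write a short product update"
    }
  ]
};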

Creating an output node for the response

Finally, let's create an output node. It should read the contents of the OpenAI response and simply return the plaintext reply from ChatGPT.

Let's create an Output type node and name it Output. Set the output key to output; the output from this node is added to the workflow response under that key.
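In other words, once this node is in place, a caller of the workflow should receive a JSON body along these lines (the ChatGPT text shown here is made up):

// Approximate shape of the workflow response
{
  "output": "Arr matey, here be yer answer!"
}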

In the code section, we want to simply return the response from the previous node. Here params["2"] refers to the second node in our workflow (OpenAI Query); since 2 isn't a valid JS identifier, we access it with bracket notation.

module.exports = async function (params) {
  // params["2"] holds the output of the second node (OpenAI Query);
  // return just the assistant's message text from the first choice
  return params["2"].$output.choices?.[0]?.message?.content;
}
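For context, the chat completions API nests its reply inside a choices array, which is why the code drills into choices[0].message.content. A trimmed-down response looks roughly like this:

// Trimmed-down example of an OpenAI chat completion response
{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "Arr, here be yer answer, matey!" },
      "finish_reason": "stop"
    }
  ]
}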

Let's test it out! Open the Test tab, fill in the template name and a prompt as inputs, and hit the Test button to run the entire workflow. We should see a response from ChatGPT as the output of our workflow, and it should follow the style of the selected template; in our case, it should sound like a pirate.

Creating the UI

It's time to create a UI so that users can trigger this workflow and see the output. It'll consist of a simple dropdown to select from the list of templates and a text input for the prompt to append to the template. Clicking Submit should show the output from ChatGPT.

Creating pages & components

Let's create a page called App. This will hold all of our components and will act as the homepage for our tool.

Now, it's time to drag in some components. Let's drag in a text component for the title. Set its text to OpenAI App and its size to Title Small.

Now, for the template selection, let's add the dropdown component, snapping it to the bottom of the title component. This will group the two components so that they stick together. Let's also add a rich text component for the prompt and a button for submitting, stacking all the components in a single group.

Let's populate the dropdown with entries from the Template table. To do that, set the dropdown's datasource to a dynamic expression.

{{endpoints.templates}}

Once you set it, you should be able to see the templates you created when you click on the dropdown.

Triggering workflow on submit

When the user clicks on the submit button, we want to execute the workflow we created in the above section. Canonic makes this quite simple. Open the button configuration, and add an onClick handler.

On click, we want to set a variable that should contain the output from the workflow.

  • Set the action to Set Variable
  • Set the variable key to response (This is the name of the variable)
  • Set the variable value to {{endpoints.openAiRequest}}.
  • Click on configure inputs to pass the template & prompt to the workflow.
  • For prompt we set: {{components.richText__XXXX.value}}
  • For name we set: {{components.dropdown__XXXX.value}}

With this in place, clicking the Submit button executes the OpenAI Request workflow with the template name and the prompt as its inputs. On successful completion, a variable called response is set to the output of the workflow.
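Conceptually, the click handler behaves like the following pseudo-JavaScript; the actual wiring is handled by Canonic's Set Variable action, so treat this purely as an illustration:

// Pseudo-code for what the Set Variable action does on click (illustrative only)
async function onSubmitClick() {
  variables.response = await endpoints.openAiRequest({
    name: components.dropdown__XXXX.value,   // selected template name
    prompt: components.richText__XXXX.value, // user prompt to append
  });
  // variables.response.output now holds the plaintext reply (used in the next section)
}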

Showing the output

Finally, let's snap a simple text component to the right of our form. Drop the component when the right edge of the form group glows red. This will ensure that the app is responsive by automatically creating a horizontal group.

Set the value for the text component to the variable we created in the step above.

{{variables.response.output}}

Once set, you can test that everything is working by filling in the form and clicking Submit while holding the Command (Cmd) key (or the Alt key on Windows). You can also click the Trigger click button that appears below the submit component.

That's all folks!

You can preview the app by clicking on the preview icon next to the Deploy button. Once everything is working, deploy the app; it will show a publicly shareable URL that you can then share with your team and beyond.

If you enabled sign-in when creating the app, you will have to create users inside Project settings before you can share your app with the world!


