Integrate with OpenAI
Last updated
ChatGPT is a conversational AI language model trained and designed to provide informative and engaging responses to users in natural language, and it supports multiple languages. It can be used in many different areas, such as customer service, education and training, personal assistance (summaries, translations, planning), content creation, entertainment, and programming.
ChatGPT is created by OpenAI. ChatGPT itself is just the application that is available to the public; it operates on top of the powerful GPT language model (as of April 2023, GPT-4 is the latest addition). OpenAI offers other models as well, such as DALL-E for image creation and Whisper for converting audio to text.
The great thing about OpenAI is that all of this is available through APIs. You may try the OpenAI API platform for free, but in general, it is pay-as-you-go.
The key to getting the most value out of the Chat API is to design the prompt (the input/questions) in an optimal way. You may read more on prompt design here.
First off, you need to sign up (or log in) at https://platform.openai.com
Once inside, you have access to a lot of tutorials. But technically, you only need to go to your Account and locate the API Keys menu. Generate a new API key and store it somewhere safe (for example, as a Secret in Appfarm Create).
The API we will be using in this example is the Chat Completion API. It takes a prompt and a few parameters as input, and returns the response from OpenAI.
The integration is a simple web request.
In the above illustration:
Auth Token: The API Key from OpenAI (stored as a Secret, as mentioned in the Getting Started section above)
Body Content (the input to the API)
You may read more about these and additional parameters here. The most important parameters are:
model: Required. The ID of the model to use. See the overview here.
messages: Required. A list of messages describing the conversation so far. Three roles exist:
system: Typically, your conversation (list of messages) starts with a system message. This message helps you set the default behaviour or context of the "AI assistant".
user: This is the one instructing the assistant.
assistant: This helps you store previous responses. If you have a full conversation between the AI assistant and the user, you may input alternating messages between user and assistant, such as in this example:
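A minimal sketch of such a message list (the content strings are made up for illustration): a system message first, then alternating user and assistant messages.

```python
# The "messages" list for the Chat Completion API: the system message sets
# the context, and the conversation so far alternates between user and
# assistant entries.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of Norway?"},
    {"role": "assistant", "content": "The capital of Norway is Oslo."},
    {"role": "user", "content": "And what is its population?"},
]
```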
temperature: 0 means deterministic: you will get the same response for the same prompt each time. 1 is the opposite: OpenAI will take a higher risk and give you different responses each time. You may also use something in between, such as 0.4, depending on the use case.
max_tokens: The maximum number of tokens to generate in the response. OpenAI operates with its own term here: a token is similar to a word, but a token may be smaller as well.
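Putting the parameters together, the same web request can also be sketched in code outside Appfarm, for example in Python (the API key is a placeholder, and the model name and parameter values are illustrative):

```python
import json
from urllib import request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(api_key, messages, model="gpt-3.5-turbo",
                       temperature=0.4, max_tokens=256):
    """Assemble the HTTP headers and JSON body for the Chat Completion API."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # the Auth Token (API key)
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
    return headers, body

headers, body = build_chat_request("sk-...", [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Say hello."},
])

# Sending the request requires a valid API key, so it is only sketched here:
# req = request.Request(API_URL, data=json.dumps(body).encode(),
#                       headers=headers, method="POST")
# with request.urlopen(req) as resp:
#     print(json.load(resp))
```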
Response
The response contains more information than you would typically need to store or process. An example response for the above input is:
The textual output from the API is found in the choices node, and you may refer to the first entry in the Result Mapping as choices.0.message.content
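In code terms, the mapping choices.0.message.content corresponds to the following lookup. The response below is a trimmed, made-up example of the response structure:

```python
# A trimmed Chat Completion response with made-up values.
response = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant",
                        "content": "The capital of Norway is Oslo."},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 20, "completion_tokens": 9, "total_tokens": 29},
}

# "choices.0.message.content" as a lookup:
text = response["choices"][0]["message"]["content"]
print(text)  # prints: The capital of Norway is Oslo.
```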
Our Appfarm Showroom showcase "Chef" is an app for entering a list of ingredients and getting a suggestion for a dinner matching those ingredients. The end user simply inputs a comma-separated list of ingredients, such as "chicken, egg, pasta", as well as the number of servings.
What we do in this case is save the user input into the App Variables Ingredients List and Servings. We create the prompt by merging a question/prompt with the user inputs, as seen in the function below, where we construct the body (input) to the Completions API.
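The merging step can be sketched as follows. This is not the Appfarm function itself, only an illustrative Python equivalent; the variable names mirror the App Variables, and the prompt wording, model, and parameter values are assumptions:

```python
def build_chef_body(ingredients_list, servings):
    """Merge the user inputs into the body for the Chat Completion API.

    ingredients_list: comma-separated string, e.g. "chicken, egg, pasta"
    servings: number of servings entered by the end user
    """
    prompt = (
        f"Suggest a dinner for {servings} servings using the following "
        f"ingredients: {ingredients_list}."
    )
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": "Act as a professional chef"},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.4,
        "max_tokens": 500,
    }

body = build_chef_body("chicken, egg, pasta", 4)
```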
As seen in the setup above, we also define a system message: "Act as a professional chef".
The concept above is called Prompt Design. Tuning the wording of the prompt, combined with a good system message, will allow you to take more control of the quality of the output from the API.