Integrate with OpenAI
ChatGPT is a conversational AI language model trained and designed to provide informative and engaging responses in natural language, and it supports multiple languages. It may be used in many different areas, such as customer service, education and training, personal assistance (summaries, translations, planning), content creation, entertainment, and programming.
ChatGPT is created by OpenAI. ChatGPT itself is just the application made available to the public; it operates on top of powerful Large Language Models (LLMs).
The great thing about OpenAI is that all of this is available through APIs. You may try the APIs for free, but in general, usage is pay-as-you-go.
The key to getting the most value out of the Chat API is to design the prompt (input/questions) in an optimal way. You may read more on prompt design in the OpenAI documentation.
First off, you need to sign up (or log in) at platform.openai.com.
Once inside, you have access to a lot of tutorials. Technically, though, you only need to go to your account and locate the API keys section. Generate a new API key and store it somewhere safe (for example, as a Secret in Appfarm Create).
The API we will be using in this example is the Chat Completions API. It takes a prompt and a few parameters as input, and returns the response from OpenAI. In the example below, we use text only as input, but the API also supports images and files.
The integration is a simple web request.
In the above illustration:
Auth Token: The API Key from OpenAI (stored as a Secret, as mentioned in the Getting Started section above)
Body Content (the input to the API)
messages: Required. This is a list of messages describing the conversation so far. Three roles exist:
system: Typically, your conversation (list of messages) starts with a system message. This message helps you set the default behaviour or context of the "AI assistant".
user: This is the role instructing the assistant.
assistant: This role helps you store previous responses. If you have a full conversation between the AI assistant and the user, you may alternate messages between user and assistant, such as in this example:
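A minimal sketch of what such a message list could look like (the wording is illustrative, not taken from the showcase):

```json
[
  { "role": "system", "content": "Act as a professional chef." },
  { "role": "user", "content": "Suggest a dinner based on chicken, egg and pasta." },
  { "role": "assistant", "content": "How about a creamy chicken and pasta bake?" },
  { "role": "user", "content": "Can you make it without dairy?" }
]
```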
temperature: 0 means deterministic - you will get the same response for the same prompt each time. 1 is the opposite - OpenAI will take a higher risk and give you different responses each time. You may also use something in between, like 0.4, depending on the use case.
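For reference, a complete request body could look like the sketch below (the model name and values are illustrative; the Auth Token goes in the Authorization header, not in the body):

```json
{
  "model": "gpt-4o",
  "messages": [
    { "role": "system", "content": "Act as a professional chef." },
    { "role": "user", "content": "Suggest a dinner for 4 servings based on chicken, egg and pasta." }
  ],
  "max_tokens": 500,
  "temperature": 0.4
}
```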
Response
The response contains more information than you would typically need to store or process. An example response for the above input is:
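A trimmed-down sketch of the response structure (IDs, token counts and content will of course differ):

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "How about a creamy chicken and pasta bake? You will need..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 42,
    "completion_tokens": 120,
    "total_tokens": 162
  }
}
```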
The textual output from the API is found in the choices node, and you may refer to the first entry in the Result Mapping as choices.0.message.content.
In our showcase app (described further below), we save the user input into the App Variables Ingredients List and Servings.
We create the prompt by merging a question with the user inputs, as seen in the function below, where we construct the body (input) to the Chat Completions API.
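A sketch of what such a function could look like (ingredientsList and servings are placeholders for the App Variables Ingredients List and Servings; the prompt wording and model name are illustrative):

```javascript
// Illustrative sketch: build the Chat Completions request body
// from the two App Variables (parameter names are placeholders).
function buildRequestBody(ingredientsList, servings) {
  const prompt =
    `Suggest a dinner recipe for ${servings} servings ` +
    `using the following ingredients: ${ingredientsList}. ` +
    `Include short cooking instructions.`

  return JSON.stringify({
    model: 'gpt-4o',
    messages: [
      { role: 'system', content: 'Act as a professional chef.' },
      { role: 'user', content: prompt }
    ],
    temperature: 0.4,
    max_tokens: 500
  })
}
```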
As seen in the setup above, we also define a system message as "Act as a professional chef".
The concept above is called Prompt Design. Tuning the wording of the prompt, combined with a good system message, will allow you to take more control of the quality of the output from the API.
In many business use cases, you may want to either
Send files or business data "on-demand" together with the prompt.
Example 1: Give me the business name, amount, and date from this receipt
Example 2: Analyze these data records (represented as CSV). The list represents all action items of a project plan of an implementation project with the following project description and timelines: "....". Give me a list of suggested action items to add, represented as JSON.
Upload a large file or many files to file storage, for the AI to use as a basis for its reasoning.
This is RAG (Retrieval Augmented Generation), where the files are uploaded (and chunked) into a vector store, and you can enable a file-search tool in the API - enabling the AI to search for relevant information to be used together with the prompt.
Example 3: You are an assistant for our in-house HR department. Use file search to provide the user with an answer to the user query. Always ground your reasoning in these files. If you cannot find an answer in the files, inform the user, and give an answer based on best practices and current legislation.
For sending an image, see the following example:
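A sketch of a user message carrying an image, assuming the image is passed as a Base64 data URL (a public image URL also works, and the model must support vision):

```json
{
  "role": "user",
  "content": [
    { "type": "text", "text": "Give me the business name, amount, and date from this receipt." },
    {
      "type": "image_url",
      "image_url": { "url": "data:image/jpeg;base64,<Base64-encoded image>" }
    }
  ]
}
```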
For sending CSV or other file formats, they must be converted to a string and passed along with the prompt as a text message. You may convert a data source (holding all the entries you want to analyze) to a CSV string with a function such as the following (example):
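A sketch of such a function, assuming the entries of the data source are available as an array of objects (the column names are taken from whatever fields your records have):

```javascript
// Illustrative sketch: convert an array of records to a CSV string
// that can be embedded in the prompt as plain text.
function toCsvString(records) {
  if (!records || records.length === 0) return ''

  const headers = Object.keys(records[0])
  const escape = (value) => `"${String(value ?? '').replace(/"/g, '""')}"`

  const rows = records.map((record) =>
    headers.map((header) => escape(record[header])).join(',')
  )

  return [headers.join(','), ...rows].join('\n')
}
```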
For this purpose, you should do the following:
URL: https://api.openai.com/v1/chat/completions
You may read more about these and additional parameters in the OpenAI API reference. The most important parameters are:
model: Required. The ID of the model to use. See the model overview in the OpenAI documentation.
max_tokens: The maximum number of "words" in the response. OpenAI operates with its own term here, where a token is similar to a word, but a token may be smaller as well.
Our Appfarm Showroom showcase is an App for entering a list of ingredients and getting a suggestion for a dinner matching those ingredients. The end user simply inputs a comma-separated list of ingredients, such as "chicken, egg, pasta", as well as the number of servings.
For this purpose, you may use the Chat Completions API described above.
For sending files (other than images), you may only send PDF files. They need to be Base64 encoded before sending, and added as a message such as in the example below.
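A sketch of what such a message could look like, assuming the file content part of the Chat Completions API (check the OpenAI documentation for the exact format and which models support file input):

```json
{
  "role": "user",
  "content": [
    { "type": "text", "text": "Summarize the attached document." },
    {
      "type": "file",
      "file": {
        "filename": "report.pdf",
        "file_data": "data:application/pdf;base64,<Base64-encoded PDF>"
      }
    }
  ]
}
```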
To enable file search (the RAG approach described above), do the following:
Create a vector store (manually from the OpenAI developer portal, or via API). This is the database holding the files you want the AI to use for its reasoning. You may want a separate vector store for different use cases (e.g. an "HR assistant" and an "Accountant assistant" operate best if they each have their own vector store).
Add (update or delete) the files you want to analyze using the OpenAI Files and Vector Store Files APIs.
Use the Responses API instead of chat completion. To enable file search, you must add the tool_choice and tools sections to the input (read more in the OpenAI documentation).
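A sketch of what such a request to the Responses API (https://api.openai.com/v1/responses) could look like; the vector store ID and the question are placeholders:

```json
{
  "model": "gpt-4o",
  "input": "What is our policy on parental leave?",
  "tools": [
    {
      "type": "file_search",
      "vector_store_ids": ["vs_your_vector_store_id"]
    }
  ],
  "tool_choice": "auto"
}
```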