Context handling
To effectively build and modify your application, Appfarm AI needs to understand not just your immediate request, but the current state of your app. This concept is known as context.
Just as a human developer needs to read existing code before adding a new feature, Appfarm AI needs to "read" your application metadata to make valid changes. This chapter explains what information is sent to the AI, how large applications are handled, and how this impacts your AI credit usage.
The context package
When you send a message to Appfarm AI, we compile a "context package" that is sent to the Large Language Model (LLM). This package consists of three main parts:
System instructions and agent knowledge: The core rules Appfarm has defined for the AI, combined with your custom agent knowledge.
Application metadata: The definition of your app (the data model, views, actions, etc.). This is the Appfarm equivalent of "existing code."
Message history: The conversation thread, including your prompts and the AI's previous responses.
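The three parts above can be sketched as a simple data structure. This is purely illustrative: the field names, message shape, and `assemble` helper are hypothetical and do not reflect Appfarm's actual payload format.

```python
from dataclasses import dataclass, field

@dataclass
class ContextPackage:
    # Hypothetical shape of the payload sent to the LLM.
    system_instructions: str            # core rules defined by Appfarm
    agent_knowledge: str                # your custom instructions
    application_metadata: dict          # data model, views, actions, etc.
    message_history: list = field(default_factory=list)

def assemble(package: ContextPackage) -> list:
    # All three parts travel with every request; agent knowledge is
    # folded into the system instructions block.
    return [
        {"role": "system",
         "content": package.system_instructions + "\n" + package.agent_knowledge},
        {"role": "system",
         "content": f"App metadata: {package.application_metadata}"},
        *package.message_history,
    ]

package = ContextPackage(
    system_instructions="You build Appfarm apps from user requests.",
    agent_knowledge="Prefix all object class names with the app name.",
    application_metadata={"object_classes": ["Customer", "Order"]},
    message_history=[{"role": "user", "content": "Add a customer list view"}],
)
messages = assemble(package)
```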
This context is sent to Appfarm AI in both Agent mode and Ask mode, although the system instructions and the amount of metadata may vary between the two.
Application metadata
The most dynamic part of the context is the application metadata. This allows the Agent to understand your object classes, UI structure, and logic flows.
Optimization and on-demand fetching
For small applications, Appfarm AI may read the entire application definition. However, as your solution grows, sending the entire application metadata with every message becomes inefficient and consumes unnecessary tokens.
To solve this, Appfarm AI uses an active context approach:
Global context: The data model (object classes) is almost always part of the context, as it is fundamental to all logic.
Local context: We prioritize the full metadata for the active view or active action you are currently working on in Appfarm Create, along with all data sources and app variables.
On-demand fetching: If you ask the Agent to modify a part of the app that is not currently in the active context (e.g., "Update the navigation menu in the main layout"), the Agent uses Tools to fetch the necessary metadata for that specific component on demand.
Building blocks: When the Agent is building, it also uses tools to collect definitions of specific building blocks (such as how a specific action node or UI Component is structured) to ensure it generates valid metadata.
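The active-context idea can be illustrated with a toy sketch. All names here (`app_metadata`, `build_active_context`, `fetch_view`) are hypothetical, and a plain dict stands in for the real metadata store:

```python
# Hypothetical, heavily simplified metadata store.
app_metadata = {
    "object_classes": {"Customer": ["name", "email"], "Order": ["total"]},
    "views": {
        "customer_list": {"components": ["table"]},
        "main_layout": {"components": ["nav_menu"]},
    },
}

def build_active_context(active_view: str) -> dict:
    return {
        # Global context: the data model is (almost) always included.
        "object_classes": app_metadata["object_classes"],
        # Local context: full metadata for the view you are working on.
        "views": {active_view: app_metadata["views"][active_view]},
    }

def fetch_view(context: dict, view_name: str) -> dict:
    # On-demand fetching: a tool call pulls in metadata for a component
    # that was not part of the initial active context.
    if view_name not in context["views"]:
        context["views"][view_name] = app_metadata["views"][view_name]
    return context

context = build_active_context("customer_list")
# The user then asks about the navigation menu in the main layout:
context = fetch_view(context, "main_layout")
```

The point is that the payload starts small (data model plus the active view) and grows only when the conversation actually needs more of the app.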
Message history and summarization
To maintain a coherent conversation, Appfarm AI sends the chat history along with your new prompt. However, LLMs have a limit on how much information they can process at once (the context window).
As your conversation thread grows, Appfarm AI automatically manages this history:
Summarization: We progressively summarize older parts of the conversation to make room for new messages.
Instruction retention: The system prioritizes keeping your user instructions intact. We are more likely to summarize or drop older intermediate system confirmations than your original requirements.
If a thread becomes very long and mixes many intents, fixes, and comments, the summarization may degrade or drift from your current intent, and an earlier task list may become irrelevant. In these cases, we recommend starting a new thread.
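The retention policy can be sketched as follows. This is a toy illustration under assumed names (`compact_history`, `keep_recent`); in reality the summary is produced by the model, not by string templates:

```python
def compact_history(messages: list, keep_recent: int = 4) -> list:
    # Nothing to do while the thread is still short.
    if len(messages) <= keep_recent:
        return messages
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    # Instruction retention: user prompts survive verbatim;
    # older assistant/system messages are collapsed into a summary.
    kept = [m for m in older if m["role"] == "user"]
    collapsed = len(older) - len(kept)
    summary = {"role": "system",
               "content": f"[summary of {collapsed} earlier assistant messages]"}
    return kept + [summary] + recent
```

Recent messages and original user instructions stay intact, while intermediate confirmations are the first to be compressed away.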
Custom instructions (agent knowledge)
If you have defined custom instructions in the agent knowledge settings, these are treated with high priority.
Unlike the chat history, which may be automatically summarized, your agent knowledge is always included in the system instructions block of every request. This ensures that the AI adheres to your specific naming conventions, architectural patterns, or design guidelines regardless of how long the conversation becomes.
Impact on credit usage
The size of the context directly impacts the credit consumption of your request.
Complexity = tokens: A large app with many components requires more metadata. Therefore, asking the AI to modify a massive, complex app will consume more tokens (and thus more credits) than modifying a simple view. This also applies when the data model is large, since the data model is always included in the application metadata.
History cost: Long threads with a lot of back-and-forth discussion (or tasks) carry a larger payload than a fresh thread. However, this cost is small relative to the application metadata, since messages are automatically summarized once the total size hits a certain limit.
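As a rough intuition for the complexity-equals-tokens point, here is a crude estimate using an assumed heuristic of about four characters per token (actual tokenization varies by model):

```python
def estimate_tokens(text: str) -> int:
    # Crude heuristic: English text averages roughly 4 characters per token.
    return max(1, len(text) // 4)

small_model = '{"object_classes": ["Task"]}'
large_model = ('{"object_classes": ['
               + ", ".join(f'"Class{i}"' for i in range(300))
               + ']}')

# A larger data model means a larger (and costlier) context with every
# request, because the data model is always part of the payload.
small_cost = estimate_tokens(small_model)
large_cost = estimate_tokens(large_model)
```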
By optimizing the metadata sent (sending only the active view/action), Appfarm AI minimizes credit usage while ensuring the Agent has exactly the information it needs to complete the task.