Time series data
Introduction
Time series data support in Appfarm consists of an interface for defining and configuring time series object classes, along with a Time Series API that provides built-in endpoints for programmatically inserting and querying data. Together, these enable efficient storage, retrieval, and analysis of sequential measurements over time.
Time series data support is ideal for various use cases across industries, including:
IoT sensor data
Machine usage and performance metrics
Inventory level tracking
Financial transactions
Key features
Fully integrated solution: Appfarm provides a comprehensive solution for storage, querying, and visualization of time series data.
Flexible metadata and measurements: Define custom metadata labels and measurement values for your time series data.
Configurable time granularity: Developers can configure the frequency of incoming data (seconds, minutes, hours) to optimize performance.
Automatic data deletion: Enable time-based automatic data deletion to manage storage efficiently.
Aggregation support: The API for querying data supports aggregation to customize which data is returned.
Usage
Time series data can be inserted and queried exclusively via built-in API endpoints. To use time series data in an app or service, you must use the Web Request action node to call the appropriate endpoint.
Time series data counts against the database storage allocated to the solution.
Configuration
Enabling time series
Premium feature
Time series support is a premium feature. Availability is determined by your subscription. Please check your current subscription or contact customer success to ensure you have access to this functionality.
To use the Time Series API:
Create a new time series object class under Data Model.
Configure the metadata and measurements for your time series data.
Set the time granularity and, optionally, automatic data deletion.
Activate the time series class to enable the related API endpoints.
Hover over the blue dot next to Activate Time Series Class and note the class's unique endpoints.
The Time Series API is available in all environments when enabled in a solution.
Important
A time series class must be activated to enable the related API endpoints. Once activated, the time series class cannot be modified or deactivated. However, it can be deleted if needed.
Metadata
Metadata in time series data consists of labels or tags that uniquely identify a specific time series. These attributes typically remain constant or change very infrequently over time. Examples of metadata include:
Sensor ID
Location
Device type
Manufacturing batch number
Metadata helps in organizing, filtering, and querying your time series data efficiently.
Measurements
Measurements are the actual data points collected over time. These are the values that change and are recorded at each time interval. Examples of measurements include:
Temperature readings
Humidity levels
Stock prices
Website traffic
Measurements are usually numeric values but can also include other data types depending on your specific use case.
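To illustrate the distinction, a single record for an IoT temperature sensor could combine metadata and measurements like this (the property names sensorId, temperature, and humidity are hypothetical; your time series class defines its own metadata labels and measurement fields):

```json
{
  "timestamp": "2024-05-01T12:00:00Z",
  "metadata": {
    "sensorId": "sensor-42",
    "location": "oslo"
  },
  "temperature": 21.5,
  "humidity": 40.2
}
```

Here, the metadata identifies which series the record belongs to, while temperature and humidity are the measurements recorded at that point in time.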
Time granularity
Time granularity is a crucial concept when working with time series data. It refers to the frequency at which data points are collected and stored. Properly setting the time granularity helps optimize storage and query performance.
When configuring a time series class, you can set the time granularity to:
Seconds
Minutes
Hours
Choose the granularity that best matches the frequency of your incoming data. For example, if you're collecting sensor data every 5 minutes, setting the granularity to "minutes" would be appropriate.
Making requests
Endpoint structure
Each time series object class has unique endpoints for inserting and querying data. The subdomain in the endpoint URL is specific to your deployment environment (e.g., Development, Test, Production).
For Production, the structure of the endpoints is as follows:
Available methods:
insert_one: Insert a single record
insert_many: Insert multiple records in a batch
read_one: Query a single record
aggregate: Query multiple records using an aggregation pipeline
Example of the insert_one endpoint across environments:
Authentication
Requests to the Time Series API require a Bearer token for authentication. To set up the token:
Use an existing service account or create a new one in Appfarm Create.
Ensure the service account has a role with appropriate permissions for the time series class you want to access and the data operations you want to perform.
Generate an API key for the service account. Make sure the API key has the Time Series scope.
Use this API key as a Bearer token in your requests for authentication.
For more details on service accounts and API keys, refer to Service accounts.
Inserting records
Required property
Each record must include a timestamp property with an ISO-8601 formatted date.
Insert one record
POST
/insert_one
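As a sketch, a request body for insert_one could look like the following (the measurement and metadata names are illustrative, not prescribed by the API):

```json
{
  "timestamp": "2024-05-01T12:00:00Z",
  "metadata": { "sensorId": "sensor-42", "location": "oslo" },
  "temperature": 21.5
}
```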
Insert multiple records
POST
/insert_many
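For insert_many, the same record shape is batched; a plausible request body is a JSON array of records (field names are illustrative):

```json
[
  { "timestamp": "2024-05-01T12:00:00Z", "metadata": { "sensorId": "sensor-42" }, "temperature": 21.5 },
  { "timestamp": "2024-05-01T12:05:00Z", "metadata": { "sensorId": "sensor-42" }, "temperature": 21.7 }
]
```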
Querying records
Query one record
GET
/read_one
Query multiple records (aggregation)
For more information on aggregation and the stages and operations available, see Aggregation below.
POST
/aggregate
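As a sketch, assuming the request body is a JSON array of pipeline stages, a minimal aggregate request could be:

```json
[
  { "$match": { "metadata.location": "oslo" } },
  { "$limit": 10 }
]
```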
Aggregation
What is an aggregation pipeline?
From the MongoDB documentation:
An aggregation pipeline consists of one or more stages that process documents:
Each stage performs an operation on the input documents. For example, a stage can filter documents, group documents, and calculate values.
The documents that are output from a stage are passed to the next stage.
An aggregation pipeline can return results for groups of documents. For example, return the total, average, maximum, and minimum values.
Supported aggregation operations
The Time Series API supports the following aggregation operations:
$match
Filters the documents to pass only the documents that match the specified condition(s) to the next pipeline stage.
Example:
{ "$match": { "metadata.location": "oslo" } }
Use this to narrow down the set of documents to process.
Place this stage as early in the aggregation pipeline as possible.
$project
Passes along only the specified fields to the next stage in the pipeline.
Can be used to include, exclude, or add computed fields.
Example:
{ "$project": { "temperature": 1, "humidity": 1 } }
Use this to shape the output documents or create computed fields.
$skip
Skips over the specified number of documents that pass into the stage and passes the remaining documents to the next stage in the pipeline.
Example:
{ "$skip": 10 }
Useful for pagination when combined with $limit.
$limit
Passes only the first n documents to the next stage where n is the specified limit.
Example:
{ "$limit": 5 }
Use this to cap the number of documents in the output or for pagination.
These operations can be combined in various ways to create powerful queries. For example, you could use $match to filter data for a specific time range, $project to select only the fields you need, $skip to ignore the first n results, and $limit to cap the total number of results.
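Such a combined pipeline could be sketched as follows (assuming ISO-8601 strings are accepted in timestamp filters, and that temperature is a measurement field defined on the class):

```json
[
  { "$match": { "timestamp": { "$gte": "2024-05-01T00:00:00Z", "$lt": "2024-05-02T00:00:00Z" } } },
  { "$project": { "timestamp": 1, "temperature": 1 } },
  { "$skip": 10 },
  { "$limit": 5 }
]
```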