Time series data

Introduction

Time series data support in Appfarm consists of an interface for defining and configuring time series object classes, along with a Time Series API that provides built-in endpoints for programmatically inserting and querying data. This support is designed to make it straightforward to integrate time series data into your solution, enabling efficient storage, retrieval, and analysis of sequential measurements over time.

Time series data support is ideal for various use cases across industries, including:

  • IoT sensor data

  • Machine usage and performance metrics

  • Inventory level tracking

  • Financial transactions

Key features

  1. Fully integrated solution: Appfarm provides a comprehensive solution for storage, querying, and visualization of time series data.

  2. Flexible metadata and measurements: Define custom metadata labels and measurement values for your time series data.

  3. Configurable time granularity: Developers can configure the frequency of incoming data (seconds, minutes, hours) to optimize performance.

  4. Automatic data deletion: Enable time-based automatic data deletion to manage storage efficiently.

  5. Aggregation support: The API for querying data supports aggregation to customize which data is returned.

Usage

  • Time series data can be inserted and queried exclusively via built-in API endpoints. To use time series data in an app or service, you must use the Web Request action node to call the appropriate endpoint.

  • Time series data counts against the database storage allocated to the solution.

Configuration

Enabling time series

Premium feature

Time series support is a premium feature. Availability is determined by your subscription. Please check your current subscription or contact customer success to ensure you have access to this functionality.

To use the Time Series API:

  1. Create a new time series object class under Data Model.

  2. Configure the metadata and measurements for your time series data.

  3. Set the time granularity and optionally automatic data deletion options.

  4. Activate the time series class to enable the related API endpoints.

  5. Hover over the blue dot next to Activate Time Series Class and note the class's unique endpoints.

The Time Series API is available in all environments when enabled in a solution.

Important

A time series class must be activated to enable the related API endpoints. Once activated, the time series class cannot be modified or deactivated. However, it can be deleted if needed.

Metadata

Metadata in time series data consists of labels or tags that uniquely identify a specific time series. These attributes typically remain constant or change very infrequently over time. Examples of metadata include:

  • Sensor ID

  • Location

  • Device type

  • Manufacturing batch number

Metadata helps in organizing, filtering, and querying your time series data efficiently.
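
For illustration, the metadata for a single temperature sensor could look like the object below. The field names are examples only; you define your own metadata labels when configuring the time series class.

// Illustrative metadata object identifying one specific time series
{
    "metadata": {
        "id": 5578,
        "location": "oslo",
        "deviceType": "temperature-sensor"
    }
}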

Measurements

Measurements are the actual data points collected over time. These are the values that change and are recorded at each time interval. Examples of measurements include:

  • Temperature readings

  • Humidity levels

  • Stock prices

  • Website traffic

Measurements are usually numeric values but can also include other data types depending on your specific use case.
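
To illustrate the distinction, a single record (as in the insert examples later in this article) combines the stable metadata with the measurements captured at one point in time:

// Illustrative record: "metadata" identifies the series,
// "temperature" and "humidity" are the measurements recorded at this timestamp
{
    "metadata": { "id": 5578, "location": "oslo" },
    "temperature": 19.3,
    "humidity": 25.9,
    "timestamp": "2024-08-20T15:48:09.000Z"
}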

Time granularity

Time granularity is a crucial concept when working with time series data. It refers to the frequency at which data points are collected and stored. Properly setting the time granularity helps optimize storage and query performance.

When configuring a time series class, you can set the time granularity to:

  • Seconds

  • Minutes

  • Hours

Choose the granularity that best matches the frequency of your incoming data. For example, if you're collecting sensor data every 5 minutes, setting the granularity to "minutes" would be appropriate.

Making requests

Endpoint structure

Each time series object class has unique endpoints for inserting and querying data. The subdomain in the endpoint URL is specific to your deployment environment (e.g., Development, Test, Production).

For Production, the structure of the endpoints is as follows:

https://{SOLUTION_SHORTNAME}.appfarm.app/api/time_series/{OBJECT_CLASS_ID}/{METHOD}

Available methods:

  • insert_one (POST): insert a single record

  • insert_many (POST): insert multiple records

  • read_one (GET): query a single record

  • aggregate (POST): query multiple records using an aggregation pipeline

Example of the insert_one endpoint across environments:

// Development
https://thirty50-dev.appfarm.app/api/time_series/inZkFo/insert_one

// Test
https://thirty50-test.appfarm.app/api/time_series/inZkFo/insert_one

// Staging
https://thirty50-stage.appfarm.app/api/time_series/inZkFo/insert_one

// Production
https://thirty50.appfarm.app/api/time_series/inZkFo/insert_one

Authentication

Requests to the Time Series API require a Bearer token for authentication. To set up the token:

  1. Use an existing service account or create a new one in Appfarm Create.

  2. Ensure the service account has a role with appropriate permissions for the time series class you want to access and the data operations you want to perform.

  3. Generate an API key for the service account. Make sure the API key has the Time Series scope.

  4. Use this API key as a Bearer token in your requests for authentication.

For more details on service accounts and API keys, refer to Service accounts.
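
As a minimal sketch, assuming the API key is sent in the standard Authorization header, an authenticated insert request could look like this (the token value is a placeholder):

POST https://thirty50.appfarm.app/api/time_series/inZkFo/insert_one
Content-Type: application/json
Authorization: Bearer {YOUR_API_KEY}

{
    "data": {
        "metadata": { "id": 5578, "location": "oslo" },
        "temperature": 19.3,
        "timestamp": "2024-08-20T15:48:09.000Z"
    }
}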

Inserting records

Required property

Each record must include a timestamp property with an ISO-8601 formatted date.

Insert one record

POST /insert_one

Example request
POST https://thirty50.appfarm.app/api/time_series/inZkFo/insert_one
Content-Type: application/json

{
    "data": {
        "metadata": { "id": 5578, "location": "oslo" },
        "temperature": 19.3,
        "humidity": 25.9,
        "timestamp": "2024-08-20T15:48:09.000Z"
    }
}
Response on success
204 No Content

Insert multiple records

POST /insert_many

Example request
POST https://thirty50.appfarm.app/api/time_series/inZkFo/insert_many
Content-Type: application/json

{
    "data": [
        {
            "metadata": { "id": 5578, "location": "oslo" },
            "temperature": 21.1,
            "humidity": 29.9,
            "timestamp": "2024-08-20T16:45:45.000Z"
        },
        {
            "metadata": { "id": 5577, "location": "oslo" },
            "temperature": 21.1,
            "humidity": 29.9,
            "timestamp": "2024-08-20T16:45:45.000Z"
        }
        // Additional records...
    ]
}
Response on success
204 No Content

Querying records

Query one record

GET /read_one

Example request
GET https://thirty50.appfarm.app/api/time_series/DsAFBA/read_one?filter={"temperature": {"$eq": 19.2}}
Example response
{
    "data": {
        "timestamp": "2024-08-20T15:43:23.000Z",
        "metadata": {
            "id": "5578",
            "location": "oslo"
        },
        "humidity": 26.3,
        "_id": "66c49e3bbc6780f566ad9e24",
        "__v": 0,
        "temperature": 19.2
    }
}

Query multiple records (aggregation)

For more information on aggregation and the available stages and operations, see Aggregation below.

POST /aggregate

Example request
POST https://thirty50.appfarm.app/api/time_series/DsAFBA/aggregate
Content-Type: application/json

{
  "aggregation": [
    {
      "$match": { 
        "metadata.location": "oslo"
      }
    },
    {
      "$project" : {
        "temperature": 1
      }
    },
    {
      "$skip": 1
    },
    {
      "$limit": 20
    }
  ]
}
Example response
{
    "data": [
        {
            "_id": "66c49e3bbc6780f566ad9e24",
            "temperature": 19.2
        },
        {
            "_id": "66c49eb6bc6780f566ad9e2e",
            "temperature": 19.3
        }
        // Additional records...
    ]
}

Aggregation

What is an aggregation pipeline?

From the MongoDB documentation:

An aggregation pipeline consists of one or more stages that process documents:

  • Each stage performs an operation on the input documents. For example, a stage can filter documents, group documents, and calculate values.

  • The documents that are output from a stage are passed to the next stage.

  • An aggregation pipeline can return results for groups of documents. For example, return the total, average, maximum, and minimum values.

Supported aggregation operations

The Time Series API supports the following aggregation operations:

  1. $match

  • Filters the documents to pass only the documents that match the specified condition(s) to the next pipeline stage.

  • Example: { "$match": { "metadata.location": "oslo" } }

  • Use this to narrow down the set of documents to process.

  • Place this stage as early in the aggregation pipeline as possible.

  2. $project

  • Passes along only the specified fields to the next stage in the pipeline.

  • Can be used to include, exclude, or add computed fields.

  • Example: { "$project": { "temperature": 1, "humidity": 1 } }

  • Use this to shape the output documents or create computed fields.

  3. $skip

  • Skips over the specified number of documents that pass into the stage and passes the remaining documents to the next stage in the pipeline.

  • Example: { "$skip": 10 }

  • Useful for pagination when combined with $limit.

  4. $limit

  • Passes only the first n documents to the next stage where n is the specified limit.

  • Example: { "$limit": 5 }

  • Use this to cap the number of documents in the output or for pagination.

These operations can be combined in various ways to create powerful queries. For example, you could use $match to filter data for a specific time range, $project to select only the fields you need, $skip to ignore the first n results, and $limit to cap the total number of results.
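
As a sketch of that pattern (assuming the endpoint accepts ISO-8601 strings in timestamp comparisons; the time range, fields, and page size below are illustrative):

POST https://thirty50.appfarm.app/api/time_series/DsAFBA/aggregate
Content-Type: application/json

{
  "aggregation": [
    // 1. Filter to one location and a specific time range
    {
      "$match": {
        "metadata.location": "oslo",
        "timestamp": {
          "$gte": "2024-08-20T00:00:00.000Z",
          "$lt": "2024-08-21T00:00:00.000Z"
        }
      }
    },
    // 2. Keep only the fields needed
    { "$project": { "temperature": 1, "timestamp": 1 } },
    // 3. Skip the first 20 documents (e.g. page 2 with a page size of 20)
    { "$skip": 20 },
    // 4. Cap the result at 20 documents
    { "$limit": 20 }
  ]
}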
