# Time series data

## Introduction

Time series data support in Appfarm consists of an interface for defining and configuring time series object classes, along with a Time Series API that provides built-in endpoints for programmatically inserting and querying data. Together, these enable efficient storage, retrieval, and analysis of sequential measurements over time.

Time series data support is ideal for various use cases across industries, including:

* IoT sensor data
* Machine usage and performance metrics
* Inventory level tracking
* Financial transactions

## Key features

1. **Fully integrated solution:** Appfarm provides a comprehensive solution for storage, querying, and visualization of time series data.
2. **Flexible metadata and measurements:** Define custom metadata labels and measurement values for your time series data.
3. **Configurable time granularity:** Developers can configure the frequency of incoming data (seconds, minutes, hours) to optimize performance.
4. **Automatic data deletion:** Enable time-based automatic data deletion to manage storage efficiently.
5. **Aggregation support:** The query API supports aggregation pipelines, so you can filter and shape the data that is returned.

## Usage

* Time series data can be inserted and queried exclusively via the built-in API endpoints. To use time series data in an app or service, call the appropriate endpoint with the Web Request action node.
* Time series data counts against the database storage allocated to the solution.
* Requests to the Time Series API must comply with the solution's allocated number of inbound HTTP requests as per the [Product Glossary](https://app.gitbook.com/s/FbVp2DRRmZbcxUUGXqd5/glossary/product-glossary) and [Acceptable Use Policy](https://app.gitbook.com/s/FbVp2DRRmZbcxUUGXqd5/service-and-usage/acceptable-use-policy).

## Configuration

### Enabling time series

{% hint style="info" %}
**Premium feature**

Time series support is a premium feature. Availability is determined by your subscription. Please check your current subscription or contact customer success to ensure you have access to this functionality.
{% endhint %}

To use the Time Series API:

1. Create a new time series object class under **Data Model**.
2. Configure the [metadata](#metadata) and [measurements](#measurements) for your time series data.
3. Set the [time granularity](#time-granularity) and optionally automatic data deletion options.
4. Activate the time series class to enable the related [API endpoints](#endpoint-structure).
5. Hover over the blue dot next to **Activate Time Series Class** and note the class' unique endpoints.

The Time Series API is available in all environments when enabled in a solution.

{% hint style="warning" %}
**Important**

A time series class must be activated to enable the related API endpoints. Once activated, the time series class cannot be modified or deactivated. However, it can be deleted if needed.
{% endhint %}

### Metadata

Metadata in time series data consists of labels or tags that uniquely identify a specific time series. These attributes typically remain constant or change very infrequently over time. Examples of metadata include:

* Sensor ID
* Location
* Device type
* Manufacturing batch number

Metadata helps in organizing, filtering, and querying your time series data efficiently.

### Measurements

Measurements are the actual data points collected over time. These are the values that change and are recorded at each time interval. Examples of measurements include:

* Temperature readings
* Humidity levels
* Stock prices
* Website traffic

Measurements are usually numeric values but can also include other data types depending on your specific use case.

### Time granularity

Time granularity refers to the frequency at which data points are collected and stored. Setting the granularity to match your data helps optimize storage and query performance.

When configuring a time series class, you can set the time granularity to:

* Seconds
* Minutes
* Hours

Choose the granularity that best matches the frequency of your incoming data. For example, if you're collecting sensor data every 5 minutes, setting the granularity to "minutes" would be appropriate.

## Making requests

### Endpoint structure

Each time series object class has unique endpoints for inserting and querying data. The subdomain in the endpoint URL is specific to your [deployment environment](https://docs.appfarm.io/operations/deploy#environments) (e.g., Development, Test, Production).

For Production, the structure of the endpoints is as follows:

```http
https://{SOLUTION_SHORTNAME}.appfarm.app/api/time_series/{OBJECT_CLASS_ID}/{METHOD}
```

Available methods:

* `insert_one`: [Insert a single record](#insert-one-record)
* `insert_many`: [Insert multiple records](#insert-multiple-records) in a batch
* `read_one`: [Query a single record](#query-one-record)
* `aggregate`: [Query multiple records](#query-multiple-records-aggregation) using an [aggregation pipeline](#aggregation)

Example of the `insert_one` endpoint across environments:

<pre class="language-http"><code class="lang-http"><strong>// Development
</strong>https://thirty50-dev.appfarm.app/api/time_series/inZkFo/insert_one

// Test
https://thirty50-test.appfarm.app/api/time_series/inZkFo/insert_one

// Staging
https://thirty50-stage.appfarm.app/api/time_series/inZkFo/insert_one

// Production
https://thirty50.appfarm.app/api/time_series/inZkFo/insert_one
</code></pre>
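As a sketch, these URLs can be assembled programmatically. The helper below mirrors the examples above; the function name and `env` keys are illustrative, not part of the product:

```python
def time_series_url(shortname: str, class_id: str, method: str, env: str = "prod") -> str:
    """Build a Time Series API endpoint URL for a given environment.

    Production URLs have no subdomain suffix; other environments append one.
    """
    suffix = {"dev": "-dev", "test": "-test", "stage": "-stage", "prod": ""}[env]
    return f"https://{shortname}{suffix}.appfarm.app/api/time_series/{class_id}/{method}"

# The Production insert_one endpoint from the example above
url = time_series_url("thirty50", "inZkFo", "insert_one")
```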

### Authentication

Requests to the Time Series API require a Bearer token for authentication. To set up the token:

1. Use an existing [service account](https://docs.appfarm.io/reference/security/service-accounts) or create a new one in Appfarm Create.
2. Ensure the service account has a role with appropriate [permissions](https://docs.appfarm.io/security/permissions#object-classes) for the time series class you want to access and data operations you want to perform.
3. Generate an [API key](https://docs.appfarm.io/security/service-accounts#api-keys) for the service account. Make sure the API key has the **Time Series** scope.
4. Use this API key as a Bearer token in your requests for authentication.

For more details on service accounts and API keys, refer to [Service accounts](https://docs.appfarm.io/reference/security/service-accounts).
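In practice, step 4 amounts to adding an `Authorization: Bearer` header to every request. A minimal sketch using Python's standard library (the key value and helper name are placeholders):

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # an API key with the Time Series scope

def build_request(url: str, payload: dict) -> urllib.request.Request:
    """Build an authenticated POST request for a Time Series API endpoint."""
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To send: urllib.request.urlopen(build_request(url, payload))
```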

### Inserting records

{% hint style="info" %}
**Required property**

Each record must include a `timestamp` property with an ISO-8601 formatted date.
{% endhint %}

#### Insert one record

<mark style="color:green;">`POST`</mark> `/insert_one`

{% code title="Example request" %}

```http
POST https://thirty50.appfarm.app/api/time_series/inZkFo/insert_one
Content-Type: application/json

{
    "data": {
        "metadata": { "id": 5578, "location": "oslo" },
        "temperature": 19.3,
        "humidity": 25.9,
        "timestamp": "2024-08-20T15:48:09.000Z"
    }
}
```

{% endcode %}

{% code title="Response on success" %}

```http
204 No Content
```

{% endcode %}

#### Insert multiple records

<mark style="color:green;">`POST`</mark> `/insert_many`

{% code title="Example request" %}

```http
POST https://thirty50.appfarm.app/api/time_series/inZkFo/insert_many
Content-Type: application/json

{
    "data": [
        {
            "metadata": { "id": 5578, "location": "oslo" },
            "temperature": 21.1,
            "humidity": 29.9,
            "timestamp": "2024-08-20T16:45:45.000Z"
        },
        {
            "metadata": { "id": 5577, "location": "oslo" },
            "temperature": 21.1,
            "humidity": 29.9,
            "timestamp": "2024-08-20T16:45:45.000Z"
        }
        // Additional records...
    ]
}
```

{% endcode %}

{% code title="Response on success" %}

```http
204 No Content
```

{% endcode %}
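When backfilling a large history, it can help to split the records into batches and send each batch as one `insert_many` request. A sketch (the batch size of 500 is illustrative, not a documented limit):

```python
def chunked(records: list, size: int):
    """Yield successive batches of at most `size` records."""
    for i in range(0, len(records), size):
        yield records[i:i + size]

records = [
    {"metadata": {"id": 5578, "location": "oslo"},
     "temperature": 21.1, "timestamp": "2024-08-20T16:45:45.000Z"},
    # ...more records
]

# Each batch becomes the body of one insert_many request
payloads = [{"data": batch} for batch in chunked(records, 500)]
```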

### Querying records

#### Query one record

<mark style="color:green;">`GET`</mark> `/read_one`

{% code title="Example request" overflow="wrap" %}

```http
GET https://thirty50.appfarm.app/api/time_series/DsAFBA/read_one?filter={"temperature": {"$eq": 19.2}}
```

{% endcode %}

{% code title="Example response" %}

```json
{
    "data": {
        "timestamp": "2024-08-20T15:43:23.000Z",
        "metadata": {
            "id": "5578",
            "location": "oslo"
        },
        "humidity": 26.3,
        "_id": "66c49e3bbc6780f566ad9e24",
        "__v": 0,
        "temperature": 19.2
    }
}
```

{% endcode %}
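The example above shows the `filter` parameter unencoded for readability. Depending on your HTTP client, the JSON document may need to be URL-encoded before being placed in the query string; a sketch using Python's standard library (the helper name is illustrative):

```python
import json
import urllib.parse

def read_one_url(endpoint: str, filter_doc: dict) -> str:
    """Append a URL-encoded `filter` query parameter to a read_one endpoint."""
    query = urllib.parse.urlencode({"filter": json.dumps(filter_doc)})
    return f"{endpoint}?{query}"

url = read_one_url(
    "https://thirty50.appfarm.app/api/time_series/DsAFBA/read_one",
    {"temperature": {"$eq": 19.2}},
)
```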

#### Query multiple records (aggregation)

For more information on aggregation and the available stages and operators, see [Aggregation](#aggregation) below.

<mark style="color:green;">`POST`</mark> `/aggregate`

{% code title="Example request" %}

```http
POST https://thirty50.appfarm.app/api/time_series/DsAFBA/aggregate
Content-Type: application/json

{
  "aggregation": [
    {
      "$match": { 
        "metadata.location": "oslo"
      }
    },
    {
      "$project" : {
        "temperature": 1
      }
    },
    {
      "$skip": 1
    },
    {
      "$limit": 20
    }
  ]
}
```

{% endcode %}

{% code title="Example response" %}

```json
{
    "data": [
        {
            "_id": "66c49e3bbc6780f566ad9e24",
            "temperature": 19.2
        },
        {
            "_id": "66c49eb6bc6780f566ad9e2e",
            "temperature": 19.3
        }
        // Additional records...
    ]
}
```

{% endcode %}

### Aggregation

#### What is an aggregation pipeline?

From the [MongoDB documentation](https://www.mongodb.com/docs/manual/core/aggregation-pipeline/#std-label-aggregation-pipeline):

> An aggregation pipeline consists of one or more stages that process documents:
>
> * Each stage performs an operation on the input documents. For example, a stage can filter documents, group documents, and calculate values.
> * The documents that are output from a stage are passed to the next stage.
> * An aggregation pipeline can return results for groups of documents. For example, return the total, average, maximum, and minimum values.

#### Supported aggregation operations

The Time Series API supports the following aggregation operations:

1. **$match**

* Filters the documents to pass only the documents that match the specified condition(s) to the next pipeline stage.
* Example: `{ "$match": { "metadata.location": "oslo" } }`
* Use this to narrow down the set of documents to process.
* Place this stage as early in the aggregation pipeline as possible.
* [MongoDB documentation](https://www.mongodb.com/docs/manual/reference/operator/aggregation/match/)

2. **$project**

* Passes along only the specified fields to the next stage in the pipeline.
* Can be used to include, exclude, or add computed fields.
* Example: `{ "$project": { "temperature": 1, "humidity": 1 } }`
* Use this to shape the output documents or create computed fields.
* [MongoDB documentation](https://www.mongodb.com/docs/manual/reference/operator/aggregation/project/)

3. **$skip**

* Skips over the specified number of documents that pass into the stage and passes the remaining documents to the next stage in the pipeline.
* Example: `{ "$skip": 10 }`
* Useful for pagination when combined with $limit.
* [MongoDB documentation](https://www.mongodb.com/docs/manual/reference/operator/aggregation/skip/)

4. **$limit**

* Passes only the first n documents to the next stage where n is the specified limit.
* Example: `{ "$limit": 5 }`
* Use this to cap the number of documents in the output or for pagination.
* [MongoDB documentation](https://www.mongodb.com/docs/manual/reference/operator/aggregation/limit/)

These operations can be combined in various ways to create powerful queries. For example, you could use `$match` to filter data for a specific time range, `$project` to select only the fields you need, `$skip` to ignore the first n results, and `$limit` to cap the total number of results.
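The combination described above can be sketched as a small pipeline builder. The function name and zero-based page numbering are assumptions for illustration; the resulting list is what goes into the `aggregation` field of an `/aggregate` request:

```python
def paginated_pipeline(location: str, fields: list, page: int, page_size: int) -> list:
    """Build an aggregation pipeline: filter early, shape, then paginate."""
    return [
        {"$match": {"metadata.location": location}},  # narrow the input set first
        {"$project": {f: 1 for f in fields}},         # keep only the needed fields
        {"$skip": page * page_size},                  # pages are zero-based here
        {"$limit": page_size},                        # cap the number of results
    ]

# Second page of 20 temperature readings from Oslo
pipeline = paginated_pipeline("oslo", ["temperature"], page=1, page_size=20)
# POST {"aggregation": pipeline} to the /aggregate endpoint
```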
