Overview

In addition to our unified API, Terminal can automatically replicate normalized data to an Amazon S3 bucket in your AWS account. This enables:
  • Building downstream data pipelines using your existing S3-based infrastructure and tools
  • Complete data archival to your AWS account to ensure data ownership and backup
  • High-volume processing where hourly / daily deliveries are sufficient for your use case

How it Works

Terminal ingests data from TSPs on an ongoing basis and normalizes it into our Common Models. Once the data is ingested, we make it available in our APIs and push it to your S3 bucket based on your configured delivery triggers. Terminal uses AWS IAM roles to authorize access to your S3 bucket: our S3 Delivery Service assumes an IAM role that you grant permissions to through your bucket policy, ensuring secure and controlled access to your data.

Delivery Triggers

Initial sync / backfill (single connection)

We automatically trigger a backfill to your S3 bucket in two scenarios:
  1. when a new connection completes its first sync
  2. when you manually request a sync to backfill additional data
Example: after a new connection completes its first sync, we trigger a backfill to your S3 bucket for that connection covering the requested time window.

Scheduled incremental (multiple connections)

In addition to the initial sync / backfill, we schedule an incremental delivery to your S3 bucket on your configured schedule (rate or cron) that includes all connections with new or updated data since the last run. Example: if configured to run daily at 12:00 AM, we trigger an incremental delivery to your S3 bucket for all rows added or updated since the last run, across all connections.

Manual (on-demand)

You can request manual deliveries for specific connections and time ranges by reaching out to our team. In the future, you’ll be able to trigger these deliveries yourself through the Terminal dashboard. This is useful for scenarios like backfilling connections that were ingested before your destination was configured.

Object format

Data models

Terminal delivers normalized datasets using our standard Common Models, which provide consistent schemas across all telematics providers. Each object contains rows that match the canonical fields and data types defined in the model reference.
In addition to the fields documented in the Common Models, each row includes a connectionId field that identifies the source connection. This field is not documented in the public model reference but is included in all S3 deliveries.
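For illustration only, rows in a VehicleLocation part object might look like the lines below. The connectionId values reuse the conn_ identifiers from the key format examples further down; every other field name here is a placeholder rather than the authoritative schema, so refer to the Common Model reference for the actual fields.
{"connectionId": "conn_01ARZ3NDEKTSV4RRFFQ69G5FAV", "id": "loc_example_1", "latitude": 37.7749, "longitude": -122.4194, "locatedAt": "2024-01-15T00:00:00Z"}
{"connectionId": "conn_01ARZ3NDEKTSV4RRFFQ69G5FEV", "id": "loc_example_2", "latitude": 40.7128, "longitude": -74.0060, "locatedAt": "2024-01-15T00:00:05Z"}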

File format

Files are delivered in JSON Lines format (one JSON object per line, UTF-8 encoded). GZIP compression is optional and can be enabled during configuration. File extensions indicate the format:
  • .jsonl for uncompressed files
  • .jsonl.gz for GZIP-compressed files
Each file contains rows from a single model type. Large or unbounded sources may be split into multiple part objects to ensure reliable writes and predictable downstream consumption.
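As a minimal sketch of consuming a delivered object, the Python snippet below reads one part file with boto3 and iterates over its rows. The bucket name and object key are placeholders taken from the key format examples below; adjust both to your own configuration.
import gzip
import json

import boto3

s3 = boto3.client("s3")

# Placeholder bucket and key: substitute your bucket name and an actual part object key.
bucket = "your-company-terminal-data"
key = "VehicleLocation/conn_01ARZ3NDEKTSV4RRFFQ69G5FAV/dlv_01ARZ3NDEKTSV4RRFFQ69G5FBV/part_01ARZ3NDEKTSV4RRFFQ69G5FCV.jsonl.gz"

obj = s3.get_object(Bucket=bucket, Key=key)
body = obj["Body"].read()

# .jsonl.gz objects are GZIP-compressed; .jsonl objects can be decoded directly.
if key.endswith(".gz"):
    body = gzip.decompress(body)

for line in body.decode("utf-8").splitlines():
    row = json.loads(line)
    # Every row includes connectionId alongside the Common Model fields.
    print(row["connectionId"])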

Key formats

We deliver data incrementally in an append-only manner. All IDs in S3 object keys (delivery_id, connection_id, and part_id) are ULIDs, which are time-sortable identifiers, so parts can be ordered chronologically by when their IDs were generated.
Terminal supports several key format options to organize your data. The examples below show common patterns, but if you have a specific objective in mind, we encourage you to reach out to discuss options. Common key format patterns:
s3://your-company-terminal-data/{optional_prefix}/
  VehicleLocation/
    conn_01ARZ3NDEKTSV4RRFFQ69G5FAV/
      dlv_01ARZ3NDEKTSV4RRFFQ69G5FBV/
        part_01ARZ3NDEKTSV4RRFFQ69G5FCV.jsonl.gz
        part_01ARZ3NDEKTSV4RRFFQ69G5FDV.jsonl.gz
    conn_01ARZ3NDEKTSV4RRFFQ69G5FEV/
      dlv_01ARZ3NDEKTSV4RRFFQ69G5FFV/
        part_01ARZ3NDEKTSV4RRFFQ69G5FGV.jsonl.gz
  Vehicle/
    conn_01ARZ3NDEKTSV4RRFFQ69G5FAV/
      dlv_01ARZ3NDEKTSV4RRFFQ69G5FHV/
        part_01ARZ3NDEKTSV4RRFFQ69G5FIV.jsonl.gz
  Trip/
    conn_01ARZ3NDEKTSV4RRFFQ69G5FEV/
      dlv_01ARZ3NDEKTSV4RRFFQ69G5FJV/
        part_01ARZ3NDEKTSV4RRFFQ69G5FKV.jsonl.gz
This format groups files by model first, then connection ID, making it easy to locate all data for a specific model and connection.
s3://your-company-terminal-data/{optional_prefix}/
  VehicleLocation/
    dlv_01ARZ3NDEKTSV4RRFFQ69G5FBV/
      conn_01ARZ3NDEKTSV4RRFFQ69G5FAV/
        part_01ARZ3NDEKTSV4RRFFQ69G5FCV.jsonl.gz
        part_01ARZ3NDEKTSV4RRFFQ69G5FDV.jsonl.gz
    dlv_01ARZ3NDEKTSV4RRFFQ69G5FFV/
      conn_01ARZ3NDEKTSV4RRFFQ69G5FEV/
        part_01ARZ3NDEKTSV4RRFFQ69G5FGV.jsonl.gz
  Vehicle/
    dlv_01ARZ3NDEKTSV4RRFFQ69G5FHV/
      conn_01ARZ3NDEKTSV4RRFFQ69G5FAV/
        part_01ARZ3NDEKTSV4RRFFQ69G5FIV.jsonl.gz
  Trip/
    dlv_01ARZ3NDEKTSV4RRFFQ69G5FJV/
      conn_01ARZ3NDEKTSV4RRFFQ69G5FEV/
        part_01ARZ3NDEKTSV4RRFFQ69G5FKV.jsonl.gz
This format organizes files by model first, then delivery ID, making it easy to track individual deliveries.
s3://your-company-terminal-data/{optional_prefix}/
  conn_01ARZ3NDEKTSV4RRFFQ69G5FAV/
    VehicleLocation/
      dlv_01ARZ3NDEKTSV4RRFFQ69G5FBV/
        part_01ARZ3NDEKTSV4RRFFQ69G5FCV.jsonl.gz
        part_01ARZ3NDEKTSV4RRFFQ69G5FDV.jsonl.gz
    Vehicle/
      dlv_01ARZ3NDEKTSV4RRFFQ69G5FHV/
        part_01ARZ3NDEKTSV4RRFFQ69G5FIV.jsonl.gz
  conn_01ARZ3NDEKTSV4RRFFQ69G5FEV/
    VehicleLocation/
      dlv_01ARZ3NDEKTSV4RRFFQ69G5FFV/
        part_01ARZ3NDEKTSV4RRFFQ69G5FGV.jsonl.gz
    Trip/
      dlv_01ARZ3NDEKTSV4RRFFQ69G5FJV/
        part_01ARZ3NDEKTSV4RRFFQ69G5FKV.jsonl.gz
This format groups files by connection ID first, making it easy to locate all data for a specific connection across all models.
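Because deliveries are append-only and all IDs are ULIDs, a downstream job can list a prefix and process part objects in lexicographic key order, which matches the order their IDs were generated. Below is a minimal sketch using the first (model-then-connection) layout, with placeholder bucket and prefix values.
import boto3

s3 = boto3.client("s3")
bucket = "your-company-terminal-data"
# Model-first layout: all deliveries for one model and connection live under this prefix.
prefix = "VehicleLocation/conn_01ARZ3NDEKTSV4RRFFQ69G5FAV/"

paginator = s3.get_paginator("list_objects_v2")
keys = []
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for item in page.get("Contents", []):
        keys.append(item["Key"])

# ULID-based delivery and part IDs sort lexicographically in creation order,
# so sorted keys approximate chronological processing order.
for key in sorted(keys):
    print(key)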

Sandbox environment

Sandbox destinations are configured the same way as production destinations but deliver data from your sandbox environment to a separate S3 bucket. This lets you test your pipelines against anonymized mock data before deploying to production.
Production environments connected to real providers include more data variability and edge cases than sandbox data.

Setup and configuration

Create your S3 bucket

  1. Log in to the AWS Management Console
  2. Navigate to S3 under Storage services
  3. Create a new bucket:
    • Click “Create bucket”
    • Enter a unique bucket name (e.g., your-company-terminal-data)
    • Choose your region
    • Keep default security settings to block public access
    • Enable default encryption
    • Click “Create bucket”
  4. (Optional) Enable S3 versioning to protect against accidental deletions or overwrites:
    • Go to bucket Properties
    • Find and enable Versioning to protect against accidental changes
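If you prefer to script the console steps above, the boto3 sketch below creates an equivalent bucket. The bucket name and region are placeholders; the calls mirror the settings described above (public access blocked, default encryption, optional versioning).
import boto3

bucket = "your-company-terminal-data"  # placeholder: choose a globally unique name
region = "us-east-1"                   # placeholder: choose your region

s3 = boto3.client("s3", region_name=region)

# us-east-1 rejects an explicit LocationConstraint; other regions require it.
if region == "us-east-1":
    s3.create_bucket(Bucket=bucket)
else:
    s3.create_bucket(
        Bucket=bucket,
        CreateBucketConfiguration={"LocationConstraint": region},
    )

# Block all public access (matches the default console setting).
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Enable default encryption with SSE-S3.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# Optional: enable versioning to protect against accidental deletions or overwrites.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)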

Grant Terminal access

Terminal uses AWS IAM roles to securely access your S3 bucket. To authorize our S3 Delivery Service, you’ll need to grant permissions to our IAM role through your bucket policy.
  1. Go to your bucket’s Permissions tab
  2. Add the bucket policy:
    • Click “Edit” under Bucket Policy
    • Paste the relevant policy provided below depending on your environment
    • Save changes
Production bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowTerminalObjectAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::616545015708:role/prod-terminal-s3-delivery"
      },
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::your-company-terminal-data/*"
    },
    {
      "Sid": "AllowTerminalBucketList",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::616545015708:role/prod-terminal-s3-delivery"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::your-company-terminal-data"
    }
  ]
}
Sandbox bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowTerminalObjectAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::115722544334:role/sandbox-terminal-s3-delivery"
      },
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::your-company-terminal-data/*"
    },
    {
      "Sid": "AllowTerminalBucketList",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::115722544334:role/sandbox-terminal-s3-delivery"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::your-company-terminal-data"
    }
  ]
}
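If you manage bucket configuration in code rather than the console, the same policy can be attached with boto3’s put_bucket_policy. The sketch below uses the production policy from above with a placeholder bucket name; swap in the sandbox role ARN for a sandbox bucket.
import json

import boto3

bucket = "your-company-terminal-data"  # placeholder: your bucket name

# Production policy from above; use the sandbox role ARN for a sandbox bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowTerminalObjectAccess",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::616545015708:role/prod-terminal-s3-delivery"},
            "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
        },
        {
            "Sid": "AllowTerminalBucketList",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::616545015708:role/prod-terminal-s3-delivery"},
            "Action": "s3:ListBucket",
            "Resource": f"arn:aws:s3:::{bucket}",
        },
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))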

Share your destination details

After configuring your bucket and bucket policy, you’ll need to set up your S3 destination with Terminal. Contact our team with the following information:
  • Bucket name
  • Environment (production or sandbox)
For the full list of configuration options and defaults, see Configuration options. Our team is here to help recommend the best settings for your needs.
Self-service destination management through the Terminal dashboard is coming soon.

Configuration options

  • Bucket name: Name of your S3 bucket.
  • AWS region: AWS region for your S3 bucket.
  • Sources: Models and data sources to deliver to your S3 bucket.
  • Key format: Format pattern for S3 object keys. See Key formats above for available options.
  • Key prefix: Optional prefix for object keys (for example, terminal-data/).
  • Include raw: Optional. Include raw data with common models. Note: this increases data volume by 2-3x.
  • Compression: Optional. Enable GZIP compression for objects.
  • Schedule: Schedule for incremental deliveries to your S3 bucket (rate or cron expression).
  • Size limit: Advanced. Maximum uncompressed bytes per uploaded object for unbounded sources and streaming splits. Most use cases can omit this.