

Overview

In addition to our unified API, Terminal can automatically replicate normalized data to an Amazon S3 bucket in your AWS account. This enables:
  • Building downstream data pipelines using your existing S3-based infrastructure and tools
  • Complete data archival to your AWS account to ensure data ownership and backup
  • High-volume processing where hourly / daily deliveries are sufficient for your use case

How it Works

Terminal ingests data from TSPs on an ongoing basis and normalizes it into our Common Models. Once ingested, the data is made available in our APIs and pushed to your S3 bucket based on your configured triggers. Terminal uses AWS IAM roles to authorize access to your S3 bucket: our S3 Delivery Service assumes an IAM role that you grant permissions to through your bucket policy, ensuring secure and controlled access to your data.

Delivery Triggers

Initial sync / backfill (single connection)

We automatically trigger a backfill to your S3 bucket in two scenarios:
  1. when a new connection completes its first sync
  2. when you manually request a sync to backfill additional data
Example: after a new connection’s first backfill completes, we will trigger a backfill to your S3 bucket for that connection for the requested time window.

Scheduled incremental (multiple connections)

In addition to the initial sync / backfill, Terminal schedules incremental deliveries to your S3 bucket on your configured schedule (rate or cron). Each run includes every connection that has new or updated data since the last run. Example: if configured to run daily at 12:00 AM, we will trigger an incremental delivery covering all rows added or updated since the previous run, across all connections.
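The "new or updated since the last run" filter can be sketched as follows. This is a minimal illustration, not Terminal's implementation; the `updatedAt` field name and row shape are assumptions for the example.

```python
from datetime import datetime, timezone

# Hypothetical rows shaped like Common Model records with an updatedAt timestamp.
rows = [
    {"id": "veh_1", "connectionId": "conn_A", "updatedAt": "2024-01-01T06:00:00Z"},
    {"id": "veh_2", "connectionId": "conn_B", "updatedAt": "2024-01-02T03:00:00Z"},
]

# The previous scheduled run: daily at 12:00 AM UTC.
last_run = datetime(2024, 1, 2, 0, 0, tzinfo=timezone.utc)

def updated_since(row: dict, cutoff: datetime) -> bool:
    """True if the row was added or updated after the cutoff."""
    ts = datetime.fromisoformat(row["updatedAt"].replace("Z", "+00:00"))
    return ts > cutoff

# Only rows touched after the last run go into the incremental delivery.
incremental = [r for r in rows if updated_since(r, last_run)]
print([r["id"] for r in incremental])  # ['veh_2']
```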

Manual (on-demand)

You can request manual deliveries for specific connections and time ranges by reaching out to our team. In the future, you’ll be able to trigger these deliveries yourself through the Terminal dashboard. This is useful for scenarios like requesting a backfill of all connections that were ingested prior to destination configuration.

Object format

Data models

Terminal delivers normalized datasets using our standard Common Models, which provide consistent schemas across all telematics providers. Each object contains rows that match the canonical fields and data types defined in the model reference. In addition to the fields documented in the Common Models, each row includes a connectionId field that identifies the source connection.
The JSON Schema files include the connectionId field and can be used for validation, code generation, or building typed data pipelines. A schema index listing all models is also available.
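As a minimal sketch of consuming a delivered row, the check below confirms the `connectionId` field and a model field are present. In practice you would validate against Terminal's published JSON Schema files with a full validator; the `id` field here is only an illustrative stand-in for a model's canonical fields.

```python
import json

def check_row(line: str, required_fields: set) -> dict:
    """Parse one JSON Lines row and verify it carries the expected fields."""
    row = json.loads(line)
    missing = required_fields - row.keys()
    if missing:
        raise ValueError(f"row missing fields: {sorted(missing)}")
    return row

# Every delivered row includes connectionId alongside the model's fields.
line = '{"id": "veh_123", "connectionId": "conn_01ARZ3NDEKTSV4RRFFQ69G5FAV"}'
row = check_row(line, {"id", "connectionId"})
print(row["connectionId"])  # conn_01ARZ3NDEKTSV4RRFFQ69G5FAV
```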

File format

Files are delivered in JSON Lines format (one JSON object per line, UTF-8 encoded). GZIP compression is optional and can be enabled during configuration. File extensions indicate the format:
  • .jsonl for uncompressed files
  • .jsonl.gz for GZIP-compressed files
Each file contains rows from a single model type. Large or unbounded sources may be split into multiple part objects to ensure reliable writes and predictable downstream consumption.
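Reading either file extension reduces to the same loop: optionally decompress, then parse one JSON object per line. A stdlib-only sketch:

```python
import gzip
import json

def read_jsonl(data: bytes, compressed: bool):
    """Yield one dict per line from a .jsonl or .jsonl.gz payload."""
    raw = gzip.decompress(data) if compressed else data
    for line in raw.decode("utf-8").splitlines():
        if line.strip():  # tolerate a trailing newline
            yield json.loads(line)

# Round-trip a tiny two-row file to show both formats.
body = b'{"id": "veh_1"}\n{"id": "veh_2"}\n'
rows = list(read_jsonl(gzip.compress(body), compressed=True))
print([r["id"] for r in rows])  # ['veh_1', 'veh_2']
```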

Key formats

We deliver data incrementally in an append-only manner. All IDs in S3 object keys (delivery_id, connection_id, and part_id) are ULIDs, which are time-sortable identifiers, so parts can be ordered chronologically by ID generation.
Terminal supports several key format options to organize your data. The examples below show common patterns; if you have a specific objective in mind, we encourage you to reach out to discuss options. Common key format patterns:
s3://your-company-terminal-data/{optional_prefix}/
  VehicleLocation/
    conn_01ARZ3NDEKTSV4RRFFQ69G5FAV/
      dlv_01ARZ3NDEKTSV4RRFFQ69G5FBV/
        part_01ARZ3NDEKTSV4RRFFQ69G5FCV.jsonl.gz
        part_01ARZ3NDEKTSV4RRFFQ69G5FDV.jsonl.gz
    conn_01ARZ3NDEKTSV4RRFFQ69G5FEV/
      dlv_01ARZ3NDEKTSV4RRFFQ69G5FFV/
        part_01ARZ3NDEKTSV4RRFFQ69G5FGV.jsonl.gz
  Vehicle/
    conn_01ARZ3NDEKTSV4RRFFQ69G5FAV/
      dlv_01ARZ3NDEKTSV4RRFFQ69G5FHV/
        part_01ARZ3NDEKTSV4RRFFQ69G5FIV.jsonl.gz
  Trip/
    conn_01ARZ3NDEKTSV4RRFFQ69G5FEV/
      dlv_01ARZ3NDEKTSV4RRFFQ69G5FJV/
        part_01ARZ3NDEKTSV4RRFFQ69G5FKV.jsonl.gz
This format groups files by model first, then connection ID, making it easy to locate all data for a specific model and connection.
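Under this model-first layout, an object key splits cleanly into its components, and because ULIDs sort lexicographically in creation order, a plain string sort orders parts chronologically. A sketch, assuming the Model/connection/delivery/part layout above:

```python
def parse_key(key: str, prefix: str = "") -> dict:
    """Split a model-first object key into its components."""
    if prefix:
        key = key.removeprefix(prefix.rstrip("/") + "/")
    model, conn, dlv, part = key.split("/")
    return {"model": model, "connection_id": conn, "delivery_id": dlv, "part": part}

keys = [
    "VehicleLocation/conn_01ARZ3NDEKTSV4RRFFQ69G5FAV/dlv_01ARZ3NDEKTSV4RRFFQ69G5FBV/part_01ARZ3NDEKTSV4RRFFQ69G5FDV.jsonl.gz",
    "VehicleLocation/conn_01ARZ3NDEKTSV4RRFFQ69G5FAV/dlv_01ARZ3NDEKTSV4RRFFQ69G5FBV/part_01ARZ3NDEKTSV4RRFFQ69G5FCV.jsonl.gz",
]

# ULIDs are time-sortable, so a lexicographic sort yields chronological order.
for key in sorted(keys):
    print(parse_key(key)["part"])
```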
s3://your-company-terminal-data/{optional_prefix}/
  VehicleLocation/
    dlv_01ARZ3NDEKTSV4RRFFQ69G5FBV/
      conn_01ARZ3NDEKTSV4RRFFQ69G5FAV/
        part_01ARZ3NDEKTSV4RRFFQ69G5FCV.jsonl.gz
        part_01ARZ3NDEKTSV4RRFFQ69G5FDV.jsonl.gz
    dlv_01ARZ3NDEKTSV4RRFFQ69G5FFV/
      conn_01ARZ3NDEKTSV4RRFFQ69G5FEV/
        part_01ARZ3NDEKTSV4RRFFQ69G5FGV.jsonl.gz
  Vehicle/
    dlv_01ARZ3NDEKTSV4RRFFQ69G5FHV/
      conn_01ARZ3NDEKTSV4RRFFQ69G5FAV/
        part_01ARZ3NDEKTSV4RRFFQ69G5FIV.jsonl.gz
  Trip/
    dlv_01ARZ3NDEKTSV4RRFFQ69G5FJV/
      conn_01ARZ3NDEKTSV4RRFFQ69G5FEV/
        part_01ARZ3NDEKTSV4RRFFQ69G5FKV.jsonl.gz
This format organizes files by model first, then delivery ID, making it easy to track individual deliveries.
s3://your-company-terminal-data/{optional_prefix}/
  conn_01ARZ3NDEKTSV4RRFFQ69G5FAV/
    VehicleLocation/
      dlv_01ARZ3NDEKTSV4RRFFQ69G5FBV/
        part_01ARZ3NDEKTSV4RRFFQ69G5FCV.jsonl.gz
        part_01ARZ3NDEKTSV4RRFFQ69G5FDV.jsonl.gz
    Vehicle/
      dlv_01ARZ3NDEKTSV4RRFFQ69G5FHV/
        part_01ARZ3NDEKTSV4RRFFQ69G5FIV.jsonl.gz
  conn_01ARZ3NDEKTSV4RRFFQ69G5FEV/
    VehicleLocation/
      dlv_01ARZ3NDEKTSV4RRFFQ69G5FFV/
        part_01ARZ3NDEKTSV4RRFFQ69G5FGV.jsonl.gz
    Trip/
      dlv_01ARZ3NDEKTSV4RRFFQ69G5FJV/
        part_01ARZ3NDEKTSV4RRFFQ69G5FKV.jsonl.gz
This format groups files by connection ID first, making it easy to locate all data for a specific connection across all models.

Manifest files

Manifest files are an optional feature that can be enabled during configuration. When enabled, Terminal writes a single JSON manifest object to your bucket after a delivery finishes uploading all of its data files. The manifest lists every file produced by that delivery, and can be used to:
  • Signal that a delivery is complete and ready to consume downstream
  • Drive manifest-based ETL tools (for example, Snowflake COPY with a FILES list, or Redshift COPY with a manifest)
  • Audit which files belong to a given delivery without having to list the bucket
Manifest files are not written unless you explicitly opt in during configuration. If you don’t configure manifests, Terminal will only write data files. Each manifest is a JSON object with the delivery ID and an entry for every file in the delivery:
{
  "deliveryId": "dlv_01ARZ3NDEKTSV4RRFFQ69G5FBV",
  "files": [
    {
      "key": "VehicleLocation/conn_01ARZ3NDEKTSV4RRFFQ69G5FAV/dlv_01ARZ3NDEKTSV4RRFFQ69G5FBV/part_01ARZ3NDEKTSV4RRFFQ69G5FCV.jsonl.gz",
      "model": "VehicleLocation",
      "count": 12450
    },
    {
      "key": "Vehicle/conn_01ARZ3NDEKTSV4RRFFQ69G5FAV/dlv_01ARZ3NDEKTSV4RRFFQ69G5FBV/part_01ARZ3NDEKTSV4RRFFQ69G5FIV.jsonl.gz",
      "model": "Vehicle",
      "count": 312
    }
  ]
}
Each entry contains:
  • key — the S3 object key of the data file (relative to your bucket, including any configured key prefix)
  • model — the Common Model the file belongs to
  • count — the number of rows written to the file
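A downstream consumer can use the manifest to drive loading without listing the bucket. This sketch, using the manifest shown above, totals the rows and groups file keys by model:

```python
import json

# The manifest shape shown above: deliveryId plus one entry per data file.
manifest = json.loads("""{
  "deliveryId": "dlv_01ARZ3NDEKTSV4RRFFQ69G5FBV",
  "files": [
    {"key": "VehicleLocation/conn_01ARZ3NDEKTSV4RRFFQ69G5FAV/dlv_01ARZ3NDEKTSV4RRFFQ69G5FBV/part_01ARZ3NDEKTSV4RRFFQ69G5FCV.jsonl.gz",
     "model": "VehicleLocation", "count": 12450},
    {"key": "Vehicle/conn_01ARZ3NDEKTSV4RRFFQ69G5FAV/dlv_01ARZ3NDEKTSV4RRFFQ69G5FBV/part_01ARZ3NDEKTSV4RRFFQ69G5FIV.jsonl.gz",
     "model": "Vehicle", "count": 312}
  ]
}""")

total_rows = sum(f["count"] for f in manifest["files"])
keys_by_model = {}
for f in manifest["files"]:
    keys_by_model.setdefault(f["model"], []).append(f["key"])

print(total_rows)             # 12762
print(sorted(keys_by_model))  # ['Vehicle', 'VehicleLocation']
```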
Manifests are written with the key {deliveryId}.json. A manifest is written for every delivery, both successful and failed, so downstream consumers can use its presence as a reliable "delivery finished" signal regardless of outcome. On a failed delivery, the manifest reflects whatever files were successfully uploaded before the failure. In the rare case a delivery is re-driven, we write to the same key, updating the manifest in place with the final set of files for that delivery.
Because this is a flat naming scheme, you should tell us what prefix you'd like manifests delivered under when you enable the feature (for example, manifests/) so they land in a predictable location inside your bucket:
s3://your-company-terminal-data/{your_manifest_prefix}/
  dlv_01ARZ3NDEKTSV4RRFFQ69G5FBV.json
  dlv_01ARZ3NDEKTSV4RRFFQ69G5FFV.json
  dlv_01ARZ3NDEKTSV4RRFFQ69G5FJV.json
The manifest prefix is configured independently from the data file Key prefix, so manifests can live under their own prefix (for example, manifests/) while data files live elsewhere in the same bucket.

Sandbox environment

Sandbox destinations are configured the same way as production destinations but deliver data from your sandbox environment to a separate S3 bucket. This enables you to test your pipelines against anonymized mock data before deploying to production.
Production environments connected to real providers include more data variability and edge cases not present in sandbox data.

Setup and configuration

Create your S3 bucket

  1. Log in to the AWS Management Console
  2. Navigate to S3 under Storage services
  3. Create a new bucket:
    • Click “Create bucket”
    • Enter a unique bucket name (e.g., your-company-terminal-data)
    • Choose your region
    • Keep default security settings to block public access
    • Enable default encryption
    • Click “Create bucket”
  4. (Optional) Enable S3 versioning to protect against accidental deletions or overwrites:
    • Go to bucket Properties
    • Find and enable Versioning

Grant Terminal access

Terminal supports two authentication methods for accessing your S3 bucket:
  1. Bucket Policy: Grant permissions to Terminal’s IAM role through your bucket policy
  2. Customer IAM Role: Create an IAM role in your AWS account that Terminal assumes
The steps below cover the first method: granting Terminal’s IAM role direct access to your S3 bucket through a bucket policy.
  1. Go to your bucket’s Permissions tab
  2. Add the bucket policy:
    • Click “Edit” under Bucket Policy
    • Paste the relevant policy provided below depending on your environment
    • Save changes
    Production:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowTerminalObjectAccess",
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::616545015708:role/prod-terminal-s3-delivery"
          },
          "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
          "Resource": "arn:aws:s3:::your-company-terminal-data/*"
        },
        {
          "Sid": "AllowTerminalBucketList",
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::616545015708:role/prod-terminal-s3-delivery"
          },
          "Action": "s3:ListBucket",
          "Resource": "arn:aws:s3:::your-company-terminal-data"
        }
      ]
    }
    
    Sandbox:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowTerminalObjectAccess",
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::115722544334:role/sandbox-terminal-s3-delivery"
          },
          "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
          "Resource": "arn:aws:s3:::your-company-terminal-data/*"
        },
        {
          "Sid": "AllowTerminalBucketList",
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::115722544334:role/sandbox-terminal-s3-delivery"
          },
          "Action": "s3:ListBucket",
          "Resource": "arn:aws:s3:::your-company-terminal-data"
        }
      ]
    }
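The production and sandbox policies differ only in the delivery role ARN. As a convenience, a small sketch can render the policy above for your bucket and environment; the role ARNs are taken from the policies on this page, and the bucket name is yours to substitute.

```python
import json

# Delivery role ARNs from the bucket policies above.
DELIVERY_ROLES = {
    "production": "arn:aws:iam::616545015708:role/prod-terminal-s3-delivery",
    "sandbox": "arn:aws:iam::115722544334:role/sandbox-terminal-s3-delivery",
}

def bucket_policy(bucket: str, environment: str) -> dict:
    """Render the Terminal S3 delivery bucket policy for one environment."""
    role = DELIVERY_ROLES[environment]
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowTerminalObjectAccess",
                "Effect": "Allow",
                "Principal": {"AWS": role},
                "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
                "Resource": f"arn:aws:s3:::{bucket}/*",
            },
            {
                "Sid": "AllowTerminalBucketList",
                "Effect": "Allow",
                "Principal": {"AWS": role},
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket}",
            },
        ],
    }

print(json.dumps(bucket_policy("your-company-terminal-data", "production"), indent=2))
```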
    

Share your destination details

After configuring your bucket and access permissions, you’ll need to set up your S3 destination with Terminal. Contact our team with the following information:
  • Bucket name
  • AWS region
  • Environment (production or sandbox)
  • (Optional) Customer IAM role ARN, if using the Customer IAM Role authentication method
For the full list of configuration options and defaults, see Configuration options. Our team is here to help recommend the best settings for your needs.
Self-service destination management through the Terminal dashboard is coming soon.

Configuration options

  • Bucket name: Name of your S3 bucket.
  • AWS region: AWS region for your S3 bucket.
  • Role ARN: Optional. ARN of a customer-managed IAM role to assume for S3 access. When provided, Terminal assumes this role instead of the default delivery role. See Customer IAM Role setup for details.
  • Sources: Models and data sources to deliver to your S3 bucket.
  • Key format: Format pattern for S3 object keys. See Key formats above for available options.
  • Key prefix: Optional prefix for object keys (for example, terminal-data/).
  • Include raw: Optional. Include raw data with common models. Note: this increases data volume by 2-3x.
  • Compression: Optional. Enable GZIP compression for objects.
  • Schedule: Schedule for incremental deliveries to your S3 bucket (rate or cron expression).
  • Manifests: Optional. Enable per-delivery manifest files. If omitted, no manifests are written.
  • Manifest key prefix: Prefix under which manifests are delivered (for example, manifests/). Independent from the data Key prefix.
  • Size limit: Advanced. Maximum uncompressed bytes per uploaded object for unbounded sources and streaming splits. Most use cases can omit this.
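Pulling the options together, a destination request might look like the sketch below. The field names and values here are illustrative assumptions to show one coherent configuration; the exact names and accepted values are confirmed with our team during setup.

```python
# Illustrative destination configuration mirroring the options above.
# Field names are hypothetical; confirm exact names with the Terminal team.
destination = {
    "bucketName": "your-company-terminal-data",
    "awsRegion": "us-east-1",               # region your bucket lives in
    "environment": "production",
    "sources": ["Vehicle", "VehicleLocation", "Trip"],
    "keyFormat": "model/connection/delivery",  # one of the Key formats above
    "keyPrefix": "terminal-data/",
    "compression": "gzip",                  # delivers .jsonl.gz objects
    "schedule": "rate(1 day)",              # or a cron expression
    "manifests": True,
    "manifestKeyPrefix": "manifests/",      # independent of keyPrefix
}

print(sorted(destination))
```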