Overview
In addition to our unified API, Terminal can automatically replicate normalized data to an Amazon S3 bucket in your AWS account. This enables:
- Building downstream data pipelines using your existing S3-based infrastructure and tools
- Complete data archival to your AWS account to ensure data ownership and backup
- High-volume processing where hourly / daily deliveries are sufficient for your use case
How it Works
Terminal ingests data from TSPs on an ongoing basis and normalizes it into our Common Models. Once ingested, the data is made available in our APIs and pushed to your S3 bucket based on configured triggers.

Terminal uses AWS IAM roles to authorize access to your S3 bucket. Our S3 Delivery Service assumes an IAM role that you grant permissions to through your bucket policy, ensuring secure and controlled access to your data.

Delivery Triggers
Initial sync / backfill (single connection)
We automatically trigger a backfill to your S3 bucket in two scenarios:
- when a new connection completes its first sync
- when you manually request a sync to backfill additional data
Scheduled incremental (multiple connections)
In addition to the initial sync / backfill, we schedule an incremental delivery to your S3 bucket on your configured schedule (rate or cron expression), covering all connections that have new or updated data since the last run. Example: if configured to run daily at 12:00 AM, we trigger an incremental delivery for all rows added or updated since the last run, across all connections.

Manual (on-demand)
You can request manual deliveries for specific connections and time ranges by reaching out to our team. In the future, you’ll be able to trigger these deliveries yourself through the Terminal dashboard. This is useful for scenarios like requesting a backfill of connections that were ingested before your destination was configured.

Object format
Data models
Terminal delivers normalized datasets using our standard Common Models, which provide consistent schemas across all telematics providers. Each object contains rows that match the canonical fields and data types defined in the model reference.

In addition to the fields documented in the Common Models, each row includes a connectionId field that identifies the source connection. This field is not documented in the public model reference but is included in all S3 deliveries.
S3 delivery supports the following models:
File format
Files are delivered in JSON Lines format (one JSON object per line, UTF-8 encoded). GZIP compression is optional and can be enabled during configuration. File extensions indicate the format:
- `.jsonl` for uncompressed files
- `.jsonl.gz` for GZIP-compressed files
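As a minimal sketch of consuming a delivered object — assuming boto3 and placeholder bucket, model, and key names — both extensions can be handled by checking for the `.gz` suffix before parsing each line. The `connectionId` lookup reflects the extra field described under Data models:

```python
# Minimal sketch: fetch one delivered object and parse its rows.
# Bucket, model, and key below are placeholders; requires boto3 and read access.
import gzip
import json

import boto3

s3 = boto3.client("s3")
bucket = "your-company-terminal-data"  # placeholder bucket name
key = "vehicles/01ARZ3NDEKTSV4RRFFQ69G5FAV/01BX5ZZKBKACTAV9WEVGEMMVRZ/01BX5ZZKBKACTAV9WEVGEMMVS0.jsonl.gz"  # placeholder key

body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
if key.endswith(".gz"):
    body = gzip.decompress(body)

for line in body.decode("utf-8").splitlines():
    row = json.loads(line)
    # Every row carries the source connection alongside the model's documented fields.
    print(row["connectionId"])
```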
Key formats
We deliver data incrementally in an append-only manner. All IDs in S3 object keys (delivery_id, connection_id, and part_id) are ULIDs, which are time-sortable identifiers. This means parts can be ordered chronologically by ID generation.
Terminal supports several key format options to organize your data. The examples below show common patterns, but if you have a specific objective in mind, we encourage you to reach out to discuss options.
Common key format patterns:
- `{model}/{connection_id}/{delivery_id}/{part_id}`
- `{model}/{delivery_id}/{connection_id}/{part_id}`
- `{connection_id}/{model}/{delivery_id}/{part_id}`
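Because ULIDs sort lexicographically in generation order, a plain sort of the object keys under a prefix yields parts chronologically. A sketch, assuming boto3, placeholder names, and the `{model}/{connection_id}/{delivery_id}/{part_id}` layout:

```python
# Sketch: list all delivered parts for one model/connection and order them
# chronologically. Bucket and prefix are placeholders.
import boto3

s3 = boto3.client("s3")
bucket = "your-company-terminal-data"            # placeholder bucket name
prefix = "vehicles/01ARZ3NDEKTSV4RRFFQ69G5FAV/"  # placeholder {model}/{connection_id}/

keys = []
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    keys.extend(obj["Key"] for obj in page.get("Contents", []))

# ULIDs are time-sortable, so lexicographic order matches generation order.
for key in sorted(keys):
    print(key)
```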
Sandbox environment
Sandbox destinations are configured the same as production destinations but deliver to a separate S3 bucket from your sandbox environment. This enables you to test your pipelines against anonymized mock data before deploying to production.

Production environments connected to real providers include data variability and edge cases not present in sandbox data.
Setup and configuration
Create your S3 bucket
1. Log in to the AWS Management Console
2. Navigate to S3 under Storage services
3. Create a new bucket:
   - Click “Create bucket”
   - Enter a unique bucket name (e.g., `your-company-terminal-data`)
   - Choose your region
   - Keep default security settings to block public access
   - Enable default encryption
   - Click “Create bucket”
4. (Optional) Enable S3 versioning to protect against accidental deletions or overwrites:
   - Go to bucket Properties
   - Find and enable Versioning
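If you prefer to script this rather than use the console, a minimal boto3 sketch covering the same steps might look like the following (bucket name and region are placeholders):

```python
# Sketch: create and harden the delivery bucket with boto3.
import boto3

bucket = "your-company-terminal-data"  # placeholder name from the steps above
region = "us-east-1"                   # choose your region

s3 = boto3.client("s3", region_name=region)

# Create the bucket (us-east-1 rejects a LocationConstraint, hence the branch).
if region == "us-east-1":
    s3.create_bucket(Bucket=bucket)
else:
    s3.create_bucket(
        Bucket=bucket,
        CreateBucketConfiguration={"LocationConstraint": region},
    )

# Keep the default security posture: block all public access.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Enable default encryption (SSE-S3).
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# Optional: enable versioning to protect against accidental deletions.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)
```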
Grant Terminal access
Terminal uses AWS IAM roles to securely access your S3 bucket. To authorize our S3 Delivery Service, you’ll need to grant permissions to our IAM role through your bucket policy.

1. Go to your bucket’s Permissions tab
2. Add the bucket policy:
   - Click “Edit” under Bucket Policy
   - Paste the relevant policy provided below, depending on your environment
   - Save changes
Production Bucket Policy

Sandbox Bucket Policy
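The exact policy documents (including the ARN of Terminal’s delivery role) come from Terminal. Purely to illustrate the general shape of such a cross-account write grant, applied here with boto3, where the role ARN is hypothetical:

```python
# Illustrative only: the general shape of a bucket policy granting Terminal's
# delivery role write access. The role ARN is hypothetical, and the actions
# and conditions in the policy Terminal provides may differ.
import json

import boto3

bucket = "your-company-terminal-data"  # placeholder bucket name
terminal_role_arn = "arn:aws:iam::123456789012:role/terminal-s3-delivery"  # hypothetical ARN

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowTerminalDelivery",
            "Effect": "Allow",
            "Principal": {"AWS": terminal_role_arn},
            "Action": ["s3:PutObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```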
Share your destination details
After configuring your bucket and bucket policy, you’ll need to set up your S3 destination with Terminal. Contact our team with the following information:
- Bucket name
- Environment (production or sandbox)
Self-service destination management through the Terminal dashboard is coming
soon.
Configuration options
| Option | Description |
|---|---|
| Bucket name | Name of your S3 bucket. |
| AWS region | AWS region for your S3 bucket. |
| Sources | Models and data sources to deliver to your S3 bucket. |
| Key format | Format pattern for S3 object keys. See Key formats above for available options. |
| Key prefix | Optional prefix for object keys (for example, terminal-data/). |
| Include raw | Optional. Include raw data with common models. Note: this increases data volume by 2-3x. |
| Compression | Optional. Enable GZIP compression for objects. |
| Schedule | Schedule for incremental deliveries to your S3 bucket (rate or cron expression). |
| Size limit | Advanced. Maximum uncompressed bytes per uploaded object for unbounded sources and streaming splits. Most use cases can omit this. |
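There is currently no self-service configuration format — destinations are set up with our team, as described above. Purely as an illustration, the options in the table might be bundled into a request like this, where every value is a placeholder:

```python
# Illustrative only: a bundle of the configuration options above to include
# in a destination request. All values are placeholders, and "rate(1 day)"
# is just an example schedule expression.
destination = {
    "bucket_name": "your-company-terminal-data",
    "aws_region": "us-east-1",
    "sources": ["vehicles", "drivers"],  # hypothetical model names
    "key_format": "{model}/{connection_id}/{delivery_id}/{part_id}",
    "key_prefix": "terminal-data/",
    "include_raw": False,       # raw payloads can add 2-3x data volume
    "compression": "gzip",
    "schedule": "rate(1 day)",
}
print(destination)
```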