Documentation Index
Fetch the complete documentation index at: https://docs.withterminal.com/llms.txt
Use this file to discover all available pages before exploring further.
Overview
In addition to our unified API, Terminal can automatically replicate normalized data to an Amazon S3 bucket in your AWS account. This enables:

- Building downstream data pipelines using your existing S3-based infrastructure and tools
- Complete data archival to your AWS account to ensure data ownership and backup
- High-volume processing where hourly / daily deliveries are sufficient for your use case
How it Works
Terminal ingests data from TSPs on an ongoing basis and normalizes it into our Common Models. Once this data is ingested, we make it available in our APIs and push it to your S3 bucket based on configured triggers. Terminal uses AWS IAM roles to authorize access to your S3 bucket. Our S3 Delivery Service assumes an IAM role that you grant permissions to through your bucket policy, ensuring secure and controlled access to your data.

Delivery Triggers
Initial sync / backfill (single connection)
We automatically trigger a backfill to your S3 bucket in two scenarios:

- when a new connection completes its first sync
- when you manually request a sync to backfill additional data
Scheduled incremental (multiple connections)
In addition to the initial sync / backfill, we also schedule an incremental delivery to your S3 bucket on your configured schedule (rate/cron), covering all connections that have new or updated data since the last run. Example: if configured to run daily at 12:00 AM, we trigger an incremental delivery to your S3 bucket for all rows that have been added or updated since the last run, across all connections.

Manual (on-demand)
You can request manual deliveries for specific connections and time ranges by reaching out to our team. In the future, you’ll be able to trigger these deliveries yourself through the Terminal dashboard. This is useful for scenarios like requesting a backfill of all connections that were ingested prior to destination configuration.

Object format
Data models
Terminal delivers normalized datasets using our standard Common Models, which provide consistent schemas across all telematics providers. Each object contains rows that match the canonical fields and data types defined in the model reference. In addition to the fields documented in the Common Models, each row includes a connectionId field that identifies the source connection.
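For illustration, each delivered row is a plain JSON object whose keys are the model's canonical fields plus connectionId. The sketch below assumes hypothetical field names (id, vin) — it is not the real Vehicle schema:

```python
import json

# One line of a delivered JSON Lines file. The model fields shown (id, vin)
# are illustrative placeholders, not the actual schema; connectionId is the
# extra field Terminal appends to every row.
line = '{"id": "veh_123", "vin": "1HGCM82633A004352", "connectionId": "conn_abc"}'
row = json.loads(line)
print(row["connectionId"])  # conn_abc
```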
View supported models
| Model | Reference | JSON Schema |
|---|---|---|
| Connection | View model | Download |
| Vehicle | View model | Download |
| Driver | View model | Download |
| Device | View model | Download |
| Group | View model | Download |
| Trailer | View model | Download |
| VehicleLocation | View model | Download |
| VehicleStatLog | View model | Download |
| SafetyEvent | View model | Download |
| FaultCodeEvent | View model | Download |
| Trip | View model | Download |
| HOSDailyLog | View model | Download |
| HOSLog | View model | Download |
| IFTAVehicleMonth | View model | Download |
| VehicleUtilizationDay | View model | Download |
File format
Files are delivered in JSON Lines format (one JSON object per line, UTF-8 encoded). GZIP compression is optional and can be enabled during configuration. File extensions indicate the format:

- .jsonl for uncompressed files
- .jsonl.gz for GZIP-compressed files
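A minimal sketch of reading a compressed delivery object, using an in-memory buffer in place of a downloaded .jsonl.gz file (the row contents are illustrative):

```python
import gzip
import io
import json

# Build a tiny GZIP-compressed JSON Lines payload in memory, then read it back
# the way you would read a delivered .jsonl.gz object.
payload = b'{"id": "a", "connectionId": "conn_1"}\n{"id": "b", "connectionId": "conn_1"}\n'
compressed = gzip.compress(payload)

rows = []
with gzip.open(io.BytesIO(compressed), "rt", encoding="utf-8") as f:
    for line in f:
        rows.append(json.loads(line))

print(len(rows))  # 2
```

For uncompressed .jsonl objects the same loop works without the gzip wrapper.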
Key formats
We deliver data incrementally in an append-only manner. All IDs in S3 object keys (delivery_id, connection_id, and part_id) are ULIDs, which are time-sortable identifiers. This means parts can be ordered chronologically by ID generation.
Terminal supports several key format options to organize your data. The examples below show common patterns, but if you have a specific objective in mind, we encourage you to reach out to discuss options.
Common key format patterns:
{model}/{connection_id}/{delivery_id}/{part_id}
{model}/{delivery_id}/{connection_id}/{part_id}
{connection_id}/{model}/{delivery_id}/{part_id}
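As a sketch, an object key under the {model}/{connection_id}/{delivery_id}/{part_id} pattern splits cleanly on "/", and because the IDs are ULIDs, plain string sorting orders parts by generation time. The key and IDs below are made-up stand-ins, not real deliveries:

```python
# Split a delivered object key using the {model}/{connection_id}/{delivery_id}/{part_id}
# pattern. The key and ULID-like IDs here are illustrative placeholders.
key = "vehicle/01ARZ3NDEKTSV4RRFFQ69G5FAV/01BX5ZZKBKACTAV9WEVGEMMVRZ/01BX5ZZKBKACTAV9WEVGEMMVS0.jsonl"
model, connection_id, delivery_id, part_file = key.split("/")
part_id = part_file.removesuffix(".jsonl")

# ULIDs are lexicographically time-sortable, so sorting part IDs as plain
# strings orders them chronologically by ID generation.
parts = ["01BX5ZZKBKACTAV9WEVGEMMVS1", "01BX5ZZKBKACTAV9WEVGEMMVS0"]
print(sorted(parts)[0])  # the earlier part
```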
Manifest files
Manifest files are an optional feature that can be enabled during configuration. When enabled, Terminal writes a single JSON manifest object to your bucket after a delivery finishes uploading all of its data files. The manifest lists every file produced by that delivery, and can be used to:

- Signal that a delivery is complete and ready to consume downstream
- Drive manifest-based ETL tools (for example, Snowflake COPY with a FILES list, or Redshift COPY with a manifest)
- Audit which files belong to a given delivery without having to list the bucket
Each file listed in the manifest includes the following fields:

- key — the S3 object key of the data file (relative to your bucket, including any configured key prefix)
- model — the Common Model the file belongs to
- count — the number of rows written to the file
Manifests are written to a flat key named {deliveryId}.json. A manifest is written for every delivery — both successful and failed — so downstream consumers can use its presence as a reliable “delivery finished” signal regardless of outcome. On a failed delivery, the manifest reflects whatever files were successfully uploaded before the failure. In the rare case a delivery is re-driven, we write to the same key, which updates the manifest in place with the final set of files for that delivery.
Because this is a flat naming scheme, you should tell us what prefix you’d like manifests delivered under when you enable the feature — for example, manifests/ — so they land in a predictable location inside your bucket. The manifest key prefix is independent of the data Key prefix, so manifests can live under their own prefix (for example, manifests/) while data files live elsewhere in the same bucket.
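A downstream consumer might use a manifest to total delivered rows before processing. The top-level shape below (a files array) is an assumption for illustration; the per-file fields match the key / model / count fields documented above:

```python
import json

# Hypothetical manifest body; the exact top-level structure is an assumption
# here, but each file entry carries the documented key / model / count fields.
manifest = json.loads("""
{
  "files": [
    {"key": "vehicle/conn_a/dlv_1/part_1.jsonl", "model": "Vehicle", "count": 120},
    {"key": "driver/conn_a/dlv_1/part_1.jsonl", "model": "Driver", "count": 35}
  ]
}
""")

total_rows = sum(f["count"] for f in manifest["files"])
print(total_rows)  # 155
```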
Sandbox environment
Sandbox destinations are configured the same as production destinations but deliver to a separate S3 bucket from your sandbox environment. This enables you to test your pipelines against anonymized mock data before deploying to production.

Production environments connected to real providers include more data variability and edge cases not present in sandbox data.
Setup and configuration
Create your S3 bucket
- Log in to the AWS Management Console
- Navigate to S3 under Storage services
- Create a new bucket:
  - Click “Create bucket”
  - Enter a unique bucket name (e.g., your-company-terminal-data)
  - Choose your region
  - Keep default security settings to block public access
  - Enable default encryption
  - Click “Create bucket”
- (Optional) Enable S3 versioning to protect against accidental deletions or overwrites:
  - Go to bucket Properties
  - Find and enable Versioning
Grant Terminal access
Terminal supports two authentication methods for accessing your S3 bucket:

- Bucket Policy: Grant permissions to Terminal’s IAM role through your bucket policy
- Customer IAM Role: Create an IAM role in your AWS account that Terminal assumes
Grant Terminal’s IAM role direct access to your S3 bucket through a bucket policy.
- Go to your bucket’s Permissions tab
- Add the bucket policy:
  - Click “Edit” under Bucket Policy
  - Paste the relevant policy provided below depending on your environment
  - Save changes
Sandbox:
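The actual policy JSON for each environment is provided by Terminal during setup. Purely as an illustration of the shape of such a policy, a bucket policy granting a delivery role write access generally looks like the sketch below — the principal ARN and bucket name are placeholders, not Terminal's real values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowTerminalDelivery",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:role/terminal-s3-delivery" },
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::your-company-terminal-data/*"
    }
  ]
}
```

Use the exact policy Terminal provides for your environment rather than this sketch.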
Share your destination details
After configuring your bucket and access permissions, you’ll need to set up your S3 destination with Terminal. Contact our team with the following information:

- Bucket name
- AWS region
- Environment (production or sandbox)
- (Optional) Customer IAM role ARN, if using the Customer IAM Role authentication method
Self-service destination management through the Terminal dashboard is coming
soon.
Configuration options
| Option | Description |
|---|---|
| Bucket name | Name of your S3 bucket. |
| AWS region | AWS region for your S3 bucket. |
| Role ARN | Optional. ARN of a customer-managed IAM role to assume for S3 access. When provided, Terminal will assume this role instead of using the default delivery role. See Customer IAM Role setup for details. |
| Sources | Models and data sources to deliver to your S3 bucket. |
| Key format | Format pattern for S3 object keys. See Key formats above for available options. |
| Key prefix | Optional prefix for object keys (for example, terminal-data/). |
| Include raw | Optional. Include raw data with common models. Note: this increases data volume by 2-3x. |
| Compression | Optional. Enable GZIP compression for objects. |
| Schedule | Schedule for incremental deliveries to your S3 bucket (rate or cron expression). |
| Manifests | Optional. Enable per-delivery manifest files. If omitted, no manifests are written. |
| Manifest key prefix | Prefix under which manifests are delivered (for example, manifests/). Independent from the data Key prefix. |
| Size limit | Advanced. Maximum uncompressed bytes per uploaded object for unbounded sources and streaming splits. Most use cases can omit this. |