For real-time data queries (like current vehicle locations), see our
Real-Time Data Guide. This guide focuses
on syncing historical and record data.
Core Concepts
Sync Modes
Automatic Sync Mode (Default): Terminal automatically keeps your connection’s data up-to-date by running syncs in the background. Most connections use this mode, and it’s typically configured during connection setup.

Manual Sync Mode: You control when syncs happen by requesting them via the API. Useful when you need precise control over sync timing or costs.

Late-Arriving Data & Lookback Windows
Data doesn’t always become available in Terminal immediately when events occur. There can be hours or even days between when an event happens and when it is ingested into our systems, creating a gap between record time (when the event occurred) and ingestion time (when Terminal processed it). When using record time for incremental access, apply a 24-48 hour lookback window to catch late-arriving data. Some endpoints, such as HOS logs, may require a longer lookback window due to the nature of when records can be updated.

Ingestion time access eliminates lookback windows by using Terminal’s processing time.
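In practice, a lookback window just means padding your record-time query with a buffer before the last checkpoint. A minimal sketch, assuming a 48-hour buffer (tune this per endpoint):

```python
from datetime import datetime, timedelta, timezone

LOOKBACK = timedelta(hours=48)  # widen for endpoints like HOS logs that update later

def record_time_window(last_checkpoint: datetime) -> tuple[str, str]:
    """Return (startAt, endAt) for a record-time query, padded for late-arriving data."""
    start = last_checkpoint - LOOKBACK
    end = datetime.now(timezone.utc)
    return start.isoformat(), end.isoformat()
```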
Access Patterns
Ingestion Time Access: Query data based on when Terminal processed it from the provider.
- Parameters: `modifiedAfter`, `modifiedBefore`
- Best for: Incremental syncing by finding what data has changed since a point in time
- Data is ordered by when it was ingested into Terminal (not record time)
- Recommended when available since it eliminates the need for lookback windows

Do not pass record time parameters when using ingestion time access.
Record Time Access: Query data based on when events actually occurred.
- Example parameters: `startedAfter`, `startedBefore`, `startAt`, `endAt`
- Best for: Historical analysis and backfill scenarios
- Data is ordered by record time (not ingestion time)
- Required for: `/vehicles/{id}/locations`, `/vehicles/{id}/stats/historical`

Make sure to apply lookback windows when using record time access for incremental replication (see above).
Data Types
Most data in Terminal is time-related and follows these patterns. Here are the three main categories:

1. Entities - Vehicles, drivers, groups, trailers, devices
- Endpoints: `/vehicles`, `/drivers`, `/groups`, `/trailers`, `/devices`
- How to sync: Use `modifiedAfter` for incremental updates
- Notes: No lookback windows needed
2. Records - Safety events, trips, HOS logs, and other time-based data
- Endpoints: `/safety/events`, `/trips`, `/hos/logs`, `/hos/daily-logs`, and most other time-based data
- How to sync: Use `modifiedAfter` OR record time parameters
- Notes: Can use lookback windows with record time
3. Historical Telematics - Vehicle locations and stats
- Endpoints: `/vehicles/{id}/locations`, `/vehicles/{id}/stats/historical`
- How to sync: Use record time parameters (`startAt`/`endAt`)
- Notes: Requires lookback windows if used for incremental syncing
Use Cases
Continuous Data Replication
Goal: Keep your data warehouse continuously updated
Best for: Real-time dashboards, operational systems, ongoing analytics
Step 1: Sync Entities & Records
Start with the simple data types using `modifiedAfter`:
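A minimal sketch of one incremental pass, assuming a placeholder base URL, bearer-token auth, and response fields named `results` and `cursor` (all assumptions; check the API reference for the exact shapes):

```python
import requests

BASE_URL = "https://api.terminal.example"  # placeholder; use the real Terminal API base URL
HEADERS = {"Authorization": "Bearer <token>"}  # auth scheme assumed

def sync_endpoint(path: str, last_sync_time: str) -> list[dict]:
    """Fetch every record modified since last_sync_time, following pagination cursors."""
    records, cursor = [], None
    while True:
        params = {"modifiedAfter": last_sync_time}
        if cursor:
            params["cursor"] = cursor
        resp = requests.get(f"{BASE_URL}{path}", headers=HEADERS, params=params)
        resp.raise_for_status()
        body = resp.json()
        records.extend(body.get("results", []))   # result list field name assumed
        cursor = body.get("cursor")               # next-page cursor field name assumed
        if not cursor:
            break
    return records

# Example: one incremental pass over an entity endpoint
vehicles = sync_endpoint("/vehicles", last_sync_time="2024-01-01T00:00:00Z")
```

The same helper works for `/drivers`, `/trips`, `/safety/events`, and the other `modifiedAfter`-capable endpoints; keep a separate `last_sync_time` checkpoint per endpoint.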
Step 2: Sync Historical Telematics
Vehicle locations and stats require special handling with record time parameters (see the sketch after the key points below):

Key Points for Steps 1 & 2
- Process all pages using the `cursor` parameter (see Pagination Guide)
- Store the sync start time before beginning, and update it only after successful completion
- Maintain separate “last sync time” checkpoints for each endpoint
- Run endpoint syncs in parallel for optimal performance
- Use default page sizes by omitting the `limit` parameter for optimal performance
- Apply 24-48 hour lookback windows for vehicle locations and stats
- Expect data to be mutable - upsert records to your data store to handle updates
- Consider syncing data 1-2 days old to reduce update frequency (especially for locations/stats)
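A sketch of the record-time pattern for a single vehicle’s locations, using the same placeholder base URL, auth header, and assumed response fields as the Step 1 sketch:

```python
from datetime import datetime, timedelta, timezone
import requests

BASE_URL = "https://api.terminal.example"  # placeholder
HEADERS = {"Authorization": "Bearer <token>"}  # auth scheme assumed
LOOKBACK = timedelta(hours=48)  # lookback window for late-arriving data

def sync_vehicle_locations(vehicle_id: str, last_sync_time: datetime) -> list[dict]:
    """Query /vehicles/{id}/locations by record time, padded with a lookback window."""
    params = {
        "startAt": (last_sync_time - LOOKBACK).isoformat(),
        "endAt": datetime.now(timezone.utc).isoformat(),
    }
    locations, cursor = [], None
    while True:
        if cursor:
            params["cursor"] = cursor
        resp = requests.get(
            f"{BASE_URL}/vehicles/{vehicle_id}/locations",
            headers=HEADERS,
            params=params,
        )
        resp.raise_for_status()
        body = resp.json()
        locations.extend(body.get("results", []))  # field name assumed
        cursor = body.get("cursor")                # field name assumed
        if not cursor:
            break
    return locations
```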
Step 3: Webhook Integration (Optional)
Webhooks are optional but helpful for triggering ingestion when new data is available. Key events for sync triggers (a minimal handler sketch follows):
- `sync.completed` - Provider sync finished; trigger your incremental sync
- `vehicle.added`, `vehicle.modified` - Sync the vehicles endpoint or use the event details directly
- `safety.added`, `safety.modified` - Sync the safety events endpoint or use the event details directly
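A minimal receiver sketch using Flask; the `type` field and the payload shape are assumptions, so verify them against the webhooks reference:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/terminal/webhooks", methods=["POST"])
def handle_webhook():
    event = request.get_json(force=True)
    event_type = event.get("type")  # event name field assumed
    if event_type == "sync.completed":
        enqueue_incremental_sync()
    elif event_type in ("vehicle.added", "vehicle.modified"):
        enqueue_endpoint_sync("/vehicles")
    elif event_type in ("safety.added", "safety.modified"):
        enqueue_endpoint_sync("/safety/events")
    return "", 204

def enqueue_incremental_sync():
    """Placeholder: kick off your Step 1/2 sync in your own job runner."""

def enqueue_endpoint_sync(path: str):
    """Placeholder: sync a single endpoint."""
```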
Historical Data Backfills
Goal: Load historical data for analysis or reporting
Best for: Data science, compliance reporting, one-time analysis projects
Step 1: Request Historical Sync
For new connections: Configure historical days during the connection setup process.

For existing connections: Request a sync with historical data.

To know when the sync has finished, use one of the following (a polling sketch follows this list):
- Webhook: `connection.first_sync_completed` (for new connections only)
- Webhook: `sync.completed` (for any sync completion)
- Polling: `GET /syncs/{id}` or `GET /connections/current` endpoints
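If you poll instead of (or in addition to) using webhooks, a loop along these lines works; the `status` field name and its values are assumptions:

```python
import time
import requests

BASE_URL = "https://api.terminal.example"  # placeholder
HEADERS = {"Authorization": "Bearer <token>"}  # auth scheme assumed

def wait_for_sync(sync_id: str, poll_seconds: int = 60, timeout_seconds: int = 3600) -> dict:
    """Poll GET /syncs/{id} until the sync finishes or the timeout elapses."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        resp = requests.get(f"{BASE_URL}/syncs/{sync_id}", headers=HEADERS)
        resp.raise_for_status()
        sync = resp.json()
        if sync.get("status") in ("completed", "failed"):  # status values assumed
            return sync
        time.sleep(poll_seconds)
    raise TimeoutError(f"Sync {sync_id} did not finish within {timeout_seconds}s")
```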
Step 2: Replicate Historical Data
Once the sync completes, query and replicate historical data to your data store:
- Process all pages using the `cursor` parameter (see Pagination Guide)
- Split large time ranges into smaller chunks (weeks or months) for parallel processing (see the sketch below)
- Use default page sizes by omitting the `limit` parameter for optimal performance
- If you plan to use this for ongoing updates, apply lookback windows for late-arriving data
- Stream data directly to your data store for efficiency
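A sketch of splitting a backfill range into fixed-size chunks that can be handed to parallel workers as `startAt`/`endAt` windows:

```python
from datetime import datetime, timedelta

def chunk_range(start: datetime, end: datetime, days: int = 30):
    """Yield (chunk_start, chunk_end) windows covering [start, end)."""
    step = timedelta(days=days)
    cursor = start
    while cursor < end:
        yield cursor, min(cursor + step, end)
        cursor += step

# Each chunk becomes one worker's startAt/endAt pair
for chunk_start, chunk_end in chunk_range(datetime(2023, 1, 1), datetime(2024, 1, 1)):
    print(chunk_start.isoformat(), chunk_end.isoformat())
```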
We recommend using an orchestration tool like Airflow, Step Functions, or
Dagster for advanced parallel processing of large historical datasets.
Complete Data Pipeline (Historical + Continuous)
Goal: Build a complete data pipeline with historical data plus ongoing updates
Best for: Production data warehouses, comprehensive analytics platforms
Approach: Start with a historical backfill, then switch to continuous replication.
Step 1: Initial Historical Sync
For new connections: Configure historical days during the connection setup process. For existing connections: Request a sync with historical data.

Step 2: Replicate Historical Data
Use record time access patterns to replicate your historical data to your data store (same approach as Historical Data Backfills).

Step 3: Switch to Continuous Replication
Once historical data is processed, implement the ongoing sync pattern from Continuous Data Replication:
- Use `modifiedAfter` for most endpoints
- Use record time with lookback for vehicle locations/stats
- Listen for webhook triggers to enable real-time updates

Key points:
- Replicate historical data first using record time parameters
- Set your “last sync start time” to mark the transition point between historical and ongoing data (see the sketch below)
- Switch to ingestion time patterns for ongoing updates
- Consider orchestration tools for complex data pipelines
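One way to mark that transition point, sketched below: capture a timestamp before the backfill begins and seed the continuous pipeline’s first `modifiedAfter` checkpoint with it.

```python
from datetime import datetime, timezone

# 1. Before requesting the historical sync, record the transition point.
transition_point = datetime.now(timezone.utc).isoformat()

# 2. Replicate historical data with record time parameters (startAt/endAt).
#    run_backfill(...)  # your backfill routine

# 3. Seed the continuous pipeline's checkpoint with the transition point, so the
#    first incremental pass picks up anything ingested during the backfill.
last_sync_time = transition_point
# sync_endpoint("/trips", last_sync_time)  # from the continuous-replication sketch above
```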
Understanding the Parameters
Record Time Parameters
These query by when events actually occurred:
- `startAt`/`endAt`: Time range for vehicle locations and stats (ISO 8601)
- `startedAfter`/`startedBefore`: Time range for events with duration (ISO 8601)
- `startDay`/`endDay`: Events from a specific day (YYYY-MM-DD)
- `startMonth`/`endMonth`: Events from a specific month (YYYY-MM)
Ingestion Time Parameters
These query by when Terminal processed the data:
- `modifiedAfter`: Records modified after this time
- `modifiedBefore`: Records modified before this time
Best Practices
Error Handling & Rate Limits
- Implement exponential backoff for retries (start with 1s, double each attempt; see the sketch below)
- Handle HTTP 429 responses by respecting the `Retry-After` header
- Follow our Rate Limits Guide for proper throttling
- Log failed requests with context for debugging
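A sketch of a retry helper that backs off exponentially and honors `Retry-After` on 429s (it assumes `Retry-After` is expressed in seconds):

```python
import time
import requests

def get_with_retries(url: str, *, headers=None, params=None, max_attempts: int = 5) -> requests.Response:
    """GET with exponential backoff (1s, 2s, 4s, ...) and Retry-After support on 429."""
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        resp = requests.get(url, headers=headers, params=params)
        if resp.status_code == 429:
            wait = float(resp.headers.get("Retry-After", delay))  # assumes seconds
        elif resp.status_code >= 500:
            wait = delay
        else:
            resp.raise_for_status()  # surface non-retryable 4xx errors
            return resp
        if attempt == max_attempts:
            resp.raise_for_status()  # give up after the final attempt
        time.sleep(wait)
        delay *= 2
    raise RuntimeError("unreachable")
```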
Data Replication Efficiency
- Process data in batches using pagination cursors and time range chunking
- Use default page sizes by omitting the `limit` parameter for optimal performance
- Stream data directly to your data store to avoid memory accumulation
- Use bulk insert operations for optimal database write performance (a sketch follows this list)
- Use parallel processing for large historical datasets
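A sketch of a bulk upsert, using SQLite here purely for illustration; the `id` and `modifiedAt` field names are assumptions about the record shape, so adapt them to your warehouse schema:

```python
import json
import sqlite3

conn = sqlite3.connect("warehouse.db")
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS trips (
        id TEXT PRIMARY KEY,
        modified_at TEXT,
        payload TEXT
    )
    """
)

def bulk_upsert(records: list[dict]) -> None:
    """Write a batch in one statement; re-synced records overwrite the old row."""
    rows = [(r["id"], r.get("modifiedAt"), json.dumps(r)) for r in records]
    conn.executemany(
        """
        INSERT INTO trips (id, modified_at, payload) VALUES (?, ?, ?)
        ON CONFLICT(id) DO UPDATE SET
            modified_at = excluded.modified_at,
            payload = excluded.payload
        """,
        rows,
    )
    conn.commit()
```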
Data Consistency
- Store sync timestamps before starting, update only after successful completion
- Use database transactions when processing batches
- Track progress and resume from failures using pagination cursors or sub-jobs
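A sketch of the checkpoint pattern behind the last two points: persist each endpoint’s last sync time and in-flight cursor in one transaction so an interrupted run can resume where it left off (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect("warehouse.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS sync_state (endpoint TEXT PRIMARY KEY, last_sync_time TEXT, cursor TEXT)"
)

def save_checkpoint(endpoint: str, last_sync_time: str, cursor: str | None) -> None:
    """Persist progress so a restarted sync can resume from the stored cursor."""
    with conn:  # one transaction per checkpoint
        conn.execute(
            """
            INSERT INTO sync_state (endpoint, last_sync_time, cursor) VALUES (?, ?, ?)
            ON CONFLICT(endpoint) DO UPDATE SET
                last_sync_time = excluded.last_sync_time,
                cursor = excluded.cursor
            """,
            (endpoint, last_sync_time, cursor),
        )
```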