---
title: 'DynamoDB Data Connector'
sidebar_label: 'DynamoDB Data Connector'
description: 'DynamoDB Data Connector Documentation'
tags:
---
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. This connector enables using DynamoDB tables as data sources for federated SQL queries in Spice.
## `from`

The `from` field should specify the DynamoDB table name:
| `from` | Description |
| ------ | ----------- |
| `dynamodb:table` | Read data from a DynamoDB table named `table` |
:::note
If an expected table is not found, verify the `dynamodb_aws_region` parameter. DynamoDB tables are region-specific.
:::
## `name`

The dataset name. This will be used as the table name within Spice.
Example:
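A minimal sketch, assuming a DynamoDB table named `my_table` in `us-east-1` (both are placeholders):

```yaml
datasets:
  - from: dynamodb:my_table
    name: my_table
    params:
      dynamodb_aws_region: us-east-1
```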
The dataset name cannot be a [reserved keyword](../../reference/spicepod/keywords).
## `params`

The DynamoDB data connector supports the following configuration parameters:
| Parameter Name | Description |
| -------------- | ----------- |
| `dynamodb_aws_region` | Required. The AWS region containing the DynamoDB table. |
| `dynamodb_aws_access_key_id` | Optional. AWS access key ID for authentication. If not provided, credentials will be loaded from environment variables or IAM roles. |
| `dynamodb_aws_secret_access_key` | Optional. AWS secret access key for authentication. If not provided, credentials will be loaded from environment variables or IAM roles. |
| `dynamodb_aws_session_token` | Optional. AWS session token for authentication. |
| `unnest_depth` | Optional. Maximum nesting depth for unnesting embedded documents into a flattened structure. Higher values expand deeper nested fields. |
| `schema_infer_max_records` | Optional. The number of documents to use to infer the schema. Defaults to `10`. |
| `scan_segments` | Optional. Number of segments for the Scan request. Defaults to `auto`, which calculates the number of segments based on the number of records in the table. |
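Putting these together, a hedged example configuration (the table name, region, and `${env:...}` secret references are placeholders for illustration):

```yaml
datasets:
  - from: dynamodb:orders
    name: orders
    params:
      dynamodb_aws_region: us-east-1
      dynamodb_aws_access_key_id: ${env:AWS_ACCESS_KEY_ID}
      dynamodb_aws_secret_access_key: ${env:AWS_SECRET_ACCESS_KEY}
      schema_infer_max_records: 100
      scan_segments: auto
```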
If AWS credentials are not explicitly provided in the configuration, the connector automatically loads credentials from the following sources, in order:

1. **Environment Variables**:
   - `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`
   - `AWS_SESSION_TOKEN` (if using temporary credentials)
2. **Shared AWS Config/Credentials Files**:
   - Config file: `~/.aws/config` (Linux/Mac) or `%UserProfile%\.aws\config` (Windows)
   - Credentials file: `~/.aws/credentials` (Linux/Mac) or `%UserProfile%\.aws\credentials` (Windows)
   - The `AWS_PROFILE` environment variable can be used to specify a named profile; otherwise the `[default]` profile is used.
   - Supports both static credentials and SSO sessions.
Example credentials file:
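A sketch of a static-credentials file (the key values below are AWS's standard documentation placeholders, not real credentials):

```ini
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```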
:::tip
To set up SSO authentication:

1. Run `aws configure sso` to configure a new SSO profile
2. Set `AWS_PROFILE=sso-profile`
3. Run `aws sso login --profile sso-profile` to start a new SSO session
:::

3. **AWS STS Web Identity Token Credentials**
4. **ECS Container Credentials**: Uses `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` or `AWS_CONTAINER_CREDENTIALS_FULL_URI`, which are automatically injected by ECS.
5. **AWS EC2 Instance Metadata Service (IMDSv2)**
The connector will try each source in order until valid credentials are found. If no valid credentials are found, an authentication error will be returned.
:::note[IAM Permissions]
Regardless of the credential source, the IAM role or user must have appropriate DynamoDB permissions (e.g., `dynamodb:Scan`, `dynamodb:DescribeTable`) to access the tables. If the Spicepod connects to multiple different AWS services, the permissions should cover all of them.
:::
The IAM role or user needs the following permissions to access DynamoDB tables:
| Permission | Purpose |
| ---------- | ------- |
| `dynamodb:Scan` | Required. Allows reading all items from the table. |
| `dynamodb:Query` | Required. Allows reading items from the table using the partition key. |
| `dynamodb:DescribeTable` | Required. Allows fetching table metadata and schema information. |
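A minimal identity-based policy granting these permissions might look like the following (the account ID, region, and table name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:Scan",
        "dynamodb:Query",
        "dynamodb:DescribeTable"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/my_table"
    }
  ]
}
```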
:::warning[Security Considerations]
Avoid granting broad `dynamodb:*` permissions, as it grants more access than necessary.
:::
The table below shows the DynamoDB data types supported, along with the type mapping to Apache Arrow types in Spice.
| DynamoDB Type | Description | Arrow Type | Notes |
| ------------- | ----------- | ---------- | ----- |
| `Bool` | Boolean | `Boolean` | |
| `S` | String | `Utf8` | |
| `S` | String | `Timestamp(Millisecond)` | Naive timestamp if it matches `time_format` without timezone |
| `S` | String | `Timestamp(Millisecond, <timezone>)` | Timezone-aware timestamp if it matches `time_format` with timezone |
| `Ss` | String Set | `List<Utf8>` | |
| `N` | Number | `Int64` or `Float64` | |
| `Ns` | Number Set | `List<Int64 \| Float64>` | |
| `B` | Binary | `Binary` | |
| `Bs` | Binary Set | `List<Binary>` | |
| `L` | List | `List<Utf8>` | DynamoDB arrays can be heterogeneous, e.g. `[1, "foo", true]`; Arrow arrays must be homogeneous, so strings are used to preserve all data |
| `M` | Map | `Utf8` or unflattened | Depending on the `unnest_depth` value |
Since DynamoDB stores timestamps as strings, Spice supports parsing timestamps using a customizable format. By default, Spice will try to parse timestamps using ISO8601 format, but you can provide a custom format using the time_format parameter.
Once Spice is able to parse a timestamp, it will convert it to a Timestamp(Millisecond) Arrow type, and will use the same format to serialize it back to DynamoDB for filter pushdown.
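For example, to parse timestamps stored like `2024-03-15 14:30:00`, you could set `time_format` in the dataset params (the dataset name is hypothetical):

```yaml
datasets:
  - from: dynamodb:events
    name: events
    params:
      dynamodb_aws_region: us-east-1
      time_format: '2006-01-02 15:04:05'
```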
This parameter uses Go-style time formatting, which uses a reference time of Mon Jan 2 15:04:05 MST 2006.
| Format Pattern | Example Value | Description |
| -------------- | ------------- | ----------- |
| `2006-01-02T15:04:05Z07:00` | `2024-03-15T14:30:00Z` | ISO8601 / RFC3339 with timezone (default) |
| `2006-01-02T15:04:05.999Z07:00` | `2024-03-15T14:30:00.123-07:00` | ISO8601 with milliseconds and timezone |
| `2006-01-02T15:04:05` | `2024-03-15T14:30:00` | ISO8601 without timezone (naive timestamp) |
| `2006-01-02 15:04:05` | `2024-03-15 14:30:00` | Date and time with space separator |
| `01/02/2006 15:04:05` | `03/15/2024 14:30:00` | US-style date with time |
| `02/01/2006 15:04:05` | `15/03/2024 14:30:00` | European-style date with time |
| `Jan 2, 2006 3:04:05 PM` | `Mar 15, 2024 2:30:00 PM` | Human-readable with 12-hour clock |
| `20060102150405` | `20240315143000` | Compact format (no separators) |
Go's format uses specific reference values that must appear exactly as shown:
| Component | Reference Value | Alternatives |
| --------- | --------------- | ------------ |
| Year | `2006` | `06` (2-digit) |
| Month | `01` | `1`, `Jan`, `January` |
| Day | `02` | `2` |
| Hour (24h) | `15` | — |
| Hour (12h) | `03` | `3` |
| Minute | `04` | `4` |
| Second | `05` | `5` |
| AM/PM | `PM` | `pm` |
| Timezone | `Z07:00` | `-0700`, `MST` |
| Milliseconds | `.000` | `.999` (trailing zeros trimmed) |
| Microseconds | `.000000` | `.999999` (trailing zeros trimmed) |
| Nanoseconds | `.000000000` | `.999999999` (trailing zeros trimmed) |
Consider the following document:
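For illustration, a hypothetical item with nested maps:

```json
{
  "id": "user-1",
  "profile": {
    "name": "Alice",
    "address": {
      "city": "Seattle",
      "zip": "98101"
    }
  }
}
```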
Using `unnest_depth`, you can control the unnesting behavior.
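Conceptually, unnesting works like the following sketch. This is an illustration, not the connector's actual implementation; in particular, the dotted column-name scheme is an assumption:

```python
import json


def unnest(item: dict, depth: int, prefix: str = "") -> dict:
    """Flatten nested maps up to `depth` levels.

    Maps deeper than `depth` are kept as JSON strings, mirroring
    the M -> Utf8 mapping in the type table above.
    """
    flat = {}
    for key, value in item.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            if depth > 0:
                # Expand one level and recurse with the remaining budget
                flat.update(unnest(value, depth - 1, prefix=f"{name}."))
            else:
                # Depth exhausted: serialize the remaining map as JSON
                flat[name] = json.dumps(value)
        else:
            flat[name] = value
    return flat


item = {"id": "user-1", "profile": {"name": "Alice", "address": {"city": "Seattle"}}}
print(unnest(item, depth=1))
# {'id': 'user-1', 'profile.name': 'Alice', 'profile.address': '{"city": "Seattle"}'}
```

With `unnest_depth: 1`, the top-level keys of `profile` become columns, while the deeper `address` map stays serialized as a JSON string.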
DynamoDB supports complex nested JSON structures. These fields can be queried using SQL:
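A hedged example, assuming a `users` dataset with `unnest_depth: 0`, so a nested `profile` map is exposed as a single JSON string column:

```sql
-- `profile` arrives as a Utf8 (JSON) column when maps are not unnested
SELECT id, profile
FROM users
LIMIT 10;
```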
:::warning[Limitations]
:::
Example schema from a users table:
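A hypothetical schema (with `unnest_depth: 0`, so the `profile` map surfaces as a JSON string; column names and types are placeholders):

```
id          Utf8
profile     Utf8
created_at  Timestamp(Millisecond)
```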
The DynamoDB Data Connector integrates with DynamoDB Streams to enable real-time streaming of table changes. This feature supports both initial table bootstrapping and continuous change data capture (CDC), allowing Spice to automatically detect and stream inserts, updates, and deletes from DynamoDB tables.
:::warning
Using DynamoDB Streams requires [acceleration](../data-accelerators/index) with `refresh_mode: changes`.
:::
To enable streaming from DynamoDB, enable acceleration and set the refresh_mode to changes in your dataset configuration.
You also need to configure the `on_conflict` parameter to specify how the connector should handle updates to existing records. The keys defined in `on_conflict` must match your DynamoDB table's partition key and range key (if your table has one).
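A hedged sketch of a streaming dataset configuration (the dataset name is a placeholder, and `id` is assumed to be the table's partition key):

```yaml
datasets:
  - from: dynamodb:orders
    name: orders
    params:
      dynamodb_aws_region: us-east-1
    acceleration:
      enabled: true
      refresh_mode: changes
      on_conflict:
        id: upsert
```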
- `ready_lag` - Defines the maximum lag threshold before the dataset is reported as "Ready". Once the stream lag falls below this value, queries can be executed against the dataset. By default, the dataset is reported ready immediately after bootstrap completes.
- `scan_interval` - Controls the polling frequency for checking new records in the DynamoDB stream. Lower values provide more real-time updates but increase API calls; higher values reduce API usage but may introduce additional latency.
- `on_conflict` - Specifies the conflict resolution strategy when streaming changes that match existing records. The keys should correspond to your DynamoDB table's partition key and range key (if applicable). The `upsert` action inserts new records or updates existing ones based on these key columns. Examples: `id: upsert`, `(partition_key, sort_key): upsert`.
- `snapshots_trigger_threshold` - Determines how frequently snapshots are created during streaming. A value of `5` means a snapshot is created every 5 batch updates. Snapshots enable faster recovery and better query performance but consume additional storage.
The following Component Metrics are provided for monitoring streaming performance and health:
| Metric | Type | Description |
| ------ | ---- | ----------- |
| `shards_active` | Gauge | Current number of active shards in the stream |
| `records_consumed_total` | Counter | Total number of records consumed from the stream |
| `lag_ms` | Gauge | Current lag in milliseconds between the stream watermark and the current time |
| `errors_transient_total` | Counter | Total number of transient errors encountered while polling from the stream |
These metrics are not enabled by default; enable them by setting the `metrics` parameter:
You can find an example dashboard for DynamoDB Streams in `monitoring/grafana-dashboard.json`.
For production workloads requiring fine-tuned control over streaming behavior and performance characteristics:
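A heavily hedged sketch of such a configuration. The placement of the streaming parameters under `params` and the duration value formats shown are assumptions for illustration; verify them against the parameter descriptions above:

```yaml
datasets:
  - from: dynamodb:orders
    name: orders
    params:
      dynamodb_aws_region: us-east-1
      scan_interval: 5s              # assumption: duration format
      ready_lag: 30s                 # assumption: duration format
      snapshots_trigger_threshold: 5
    acceleration:
      enabled: true
      refresh_mode: changes
      on_conflict:
        id: upsert
```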