DuckDB is an in-process SQL OLAP (Online Analytical Processing) database management system designed for analytical query workloads. It is optimized for fast execution and can be embedded directly into applications, providing efficient data processing without the need for a separate database server.
This connector supports DuckDB persistent databases as a data source for federated SQL queries.
## from

The `from` field supports one of two forms:
| `from` | Description |
| --- | --- |
| `duckdb:database.schema.table` | Read data from a table named `database.schema.table` in the DuckDB file. |
| `duckdb:*` | Read data using any DuckDB function that produces a table, for example one of the data import functions such as `read_json`, `read_parquet`, or `read_csv`. |
## name

The dataset name. This will be used as the table name within Spice.
Example:
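A minimal sketch of a dataset definition; the dataset name `cool_dataset` is an assumption for illustration:

```yaml
datasets:
  - from: duckdb:database.schema.table
    name: cool_dataset
```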
The dataset name cannot be a reserved keyword.
## params

The DuckDB data connector can be configured by providing the following `params`:
| Parameter Name | Description |
|---|---|
| `duckdb_open` | The name of the DuckDB database file to open. |
Configuration params are provided either at the top level of the dataset (when DuckDB is the data source) or in the `acceleration` section (when DuckDB is the data store).
A generic example of DuckDB data connector configuration.
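A sketch of such a configuration; the dataset name `my_dataset` and the file path `path/to/database.db` are assumptions for illustration:

```yaml
datasets:
  - from: duckdb:database.schema.table
    name: my_dataset
    params:
      duckdb_open: path/to/database.db
```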
Common DuckDB data import functions can also define datasets. Instead of a fixed table reference (e.g. `database.schema.table`), a DuckDB function is provided in the `from:` key. For example:
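A sketch of a function-based dataset; the file name `test.csv` and the dataset name `csv_dataset` are assumptions for illustration:

```yaml
datasets:
  - from: duckdb:read_csv('test.csv', header = false)
    name: csv_dataset
```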
A dataset created from a DuckDB function behaves like a standard `SELECT` query over that function.
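For illustration (the CSV file name is an assumption), a dataset defined as:

```yaml
datasets:
  - from: duckdb:read_csv('test.csv', header = false)
    name: csv_dataset
```

is equivalent to the query:

```sql
SELECT * FROM read_csv('test.csv', header = false);
```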
Many DuckDB data imports can be rewritten as DuckDB functions, making them usable as Spice datasets. For example:
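As a sketch (the file name is an assumption), a direct file query in DuckDB such as:

```sql
SELECT * FROM 'data.parquet';
```

can be rewritten using the equivalent table function, `read_parquet`, and used in a Spice dataset:

```yaml
datasets:
  - from: duckdb:read_parquet('data.parquet')
    name: parquet_dataset
```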
:::warning[Limitations]

- The DuckDB data connector does not support map types, e.g. `SELECT MAP(['key1', 'key2', 'key3'], [10, 20, 30])`.
- The DuckDB data connector does not support `Decimal256` (76 digits), as it exceeds DuckDB's maximum Decimal width of 38 digits.

:::