The File Data Connector enables federated SQL queries on files stored on locally accessible filesystems. It supports querying individual files or entire directories; when a directory is specified, all child files within it are loaded and queried.
File formats are specified using the `file_format` parameter, as described in Object Store File Formats.
### Example spicepod.yml
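A minimal sketch of a `spicepod.yml` dataset using the File connector (the dataset name and file path are hypothetical):

```yaml
datasets:
  - from: file://path/to/customers.parquet # hypothetical path
    name: customers
```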
### from

The `from` field for the File connector takes the form `file://path`, where `path` is the path to the file to read from. See the examples below for relative and absolute paths.
### name

The dataset name. This will be used as the table name within Spice.
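For instance, a dataset declared with `name: cool_dataset` (a hypothetical name) would be queried as:

```sql
SELECT COUNT(*) FROM cool_dataset;
```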
The dataset name cannot be a reserved keyword.
### params

| Parameter name | Description |
|---|---|
| `file_format` | Specifies the data file format. Required if the format cannot be inferred from the `from` path. Refer to Object Store File Formats for details. |
| `hive_partitioning_enabled` | Enables hive-style partitioning inferred from the folder structure. Defaults to `false`. |
| `schema_source_path` | Specifies the path used to infer the dataset schema. Defaults to the most recently modified file. |
For additional CSV, JSON, and Parquet specific parameters, see File Formats.
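A sketch of how these parameters are set under a dataset's `params` key (the path, name, and values are illustrative):

```yaml
datasets:
  - from: file://path/to/data/ # hypothetical directory
    name: partitioned_data
    params:
      file_format: parquet
      hive_partitioning_enabled: true
```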
In addition to standard Data Refresh, a data refresh can also be triggered when the source file is modified. The File Data Connector uses a filesystem watcher to be notified when the file has changed. The file watcher is disabled by default and can be enabled by setting the `file_watcher` parameter to `enabled` in the acceleration parameters.
When the file is modified, the acceleration will be refreshed and will include the latest data.
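A sketch of enabling the file watcher in the acceleration parameters (the dataset name and path are hypothetical):

```yaml
datasets:
  - from: file://path/to/orders.csv # hypothetical path
    name: orders
    acceleration:
      enabled: true
      params:
        file_watcher: enabled
```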
Refer to Object Store Data Types for the mapping from object store file data types to Arrow data types.
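A sketch of an absolute-path configuration (the path and dataset name are hypothetical):

```yaml
datasets:
  - from: file:///home/user/data/report.csv # hypothetical absolute path
    name: report
```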
In this example, path is an absolute path to the file on the filesystem.
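A sketch of a relative-path configuration, resolved against the directory containing the `spicepod.yaml` (the path and dataset name are hypothetical):

```yaml
datasets:
  - from: file://data/report.csv # hypothetical relative path
    name: report
```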
In this example, the path is relative to the directory where the spicepod.yaml is located.
:::warning[Performance Considerations]
When using the File Data connector without acceleration, data is loaded into memory during query execution. Ensure sufficient memory is available, including overhead for queries and the runtime, especially with concurrent queries.
Memory limitations can be mitigated by storing acceleration data on disk, which is supported by the `duckdb` and `sqlite` accelerators by specifying `mode: file`.
:::
Refer to the File cookbook recipe to see an example of the File connector in use.