---
title: 'Spice Cayenne Data Accelerator'
sidebar_label: 'Spice Cayenne Data Accelerator'
description: 'Spice Cayenne Data Accelerator (Vortex) Documentation'
sidebar_position: 1
tags:
---
:::info Beta
The Spice Cayenne Data Accelerator is in Beta.
:::
Spice Cayenne is a data acceleration engine designed for high-performance, scalable queries over large-scale datasets. Built on Vortex, a high-performance columnar file format, Spice Cayenne combines columnar storage with in-process metadata management to deliver fast query performance that scales to datasets beyond 1 TB.
Spice Cayenne uses Vortex as its storage format, providing significant performance advantages.
Vortex is a Linux Foundation (LF AI & Data) project under Apache-2.0 license with neutral governance. For performance benchmarks, see bench.vortex.dev.
While DuckDB excels for datasets up to approximately 1TB, Spice Cayenne with Vortex is designed to scale beyond these limits.
Spice Cayenne follows a lakehouse architecture inspired by DuckLake, separating metadata management from data storage.

Key Design Principles:
- Data is stored as a `ListingTable` at a unique directory, enabling append operations and parallel reads

For optimal performance, store Cayenne data files on NVMe storage. NVMe provides the lowest latency and highest throughput for the random access patterns that Vortex files require.
Use S3 Express One Zone when persistence of accelerations across restarts is required. S3 Express One Zone adds network latency compared to local NVMe but provides durability. Sharing accelerated data across multiple Spice instances is planned for a future release.
To use Spice Cayenne as the data accelerator, specify `cayenne` as the `engine` for acceleration. Spice Cayenne supports `mode: file`, `mode: file_create`, and `mode: file_update`, and stores data on disk.
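A minimal spicepod sketch (the dataset source and name are illustrative; any supported connector works):

```yaml
datasets:
  - from: postgres:public.orders # illustrative source
    name: orders
    acceleration:
      enabled: true
      engine: cayenne # use Spice Cayenne as the accelerator
      mode: file # on-disk acceleration
```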
Parameters:

| Parameter | Description |
|---|---|
| `cayenne_compression_strategy` | Compression algorithm for accelerated data. Supports `btrblocks` or `zstd`. Defaults to `btrblocks`. |
| `cayenne_unsupported_type_action` | Action when an unsupported data type is encountered. See Data Type Support. |
| `cayenne_footer_cache_mb` | Size of the in-memory Vortex footer cache in megabytes. Larger values improve query performance for repeated scans. Defaults to `128`. |
| `cayenne_segment_cache_mb` | Size of the in-memory Vortex segment cache in megabytes, caching decompressed data segments for improved query performance. Defaults to `256`. |
| `cayenne_file_path` | Custom path for storing Cayenne data files. Supports local paths or S3 Express One Zone URLs (e.g., `s3://bucket--usw2-az1--x-s3/prefix/`). |
| `cayenne_target_file_size_mb` | Target size for individual Vortex files in MB. When writes exceed this size, a new Vortex file is created. Defaults to `128`. Smaller files enable better parallelism and predicate pushdown. |
| Parameter | Description |
|---|---|
| `cayenne_s3_zone_ids` | Comma-separated availability zone IDs (e.g., `usw2-az1,usw2-az2`). Auto-generates bucket names in the format `spice-{app}-{dataset}--{zone}--x-s3`. |
| `cayenne_s3_region` | AWS region (e.g., `us-west-2`). Auto-derived from the zone ID if not specified. |
| `cayenne_s3_auth` | Authentication method: `iam_role` (default) or `key`. |
| `cayenne_s3_key` | AWS access key ID (required when `cayenne_s3_auth: key`). |
| `cayenne_s3_secret` | AWS secret access key (required when `cayenne_s3_auth: key`). |
| `cayenne_s3_session_token` | AWS session token (optional, for temporary credentials). |
| `cayenne_s3_endpoint` | Custom S3 endpoint URL (optional, overrides the auto-generated endpoint). |
Spice Cayenne performance can be optimized through cache configuration, compression strategy selection, and resource allocation.
Spice Cayenne uses two in-memory caches to accelerate query performance:
Footer Cache (cayenne_footer_cache_mb):
The footer cache stores Vortex file metadata, including schemas, statistics, and encoding information. Larger cache sizes benefit workloads with many files.
Segment Cache (cayenne_segment_cache_mb):
The segment cache stores decompressed data segments. Larger cache sizes benefit workloads with repeated queries on the same data.
Example - High-throughput configuration:
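A high-throughput configuration might raise both cache sizes. The following is a sketch; the dataset and cache values are illustrative, not recommendations:

```yaml
datasets:
  - from: postgres:public.events # illustrative source
    name: events
    acceleration:
      enabled: true
      engine: cayenne
      mode: file
      params:
        cayenne_footer_cache_mb: 512 # keep metadata for more files hot
        cayenne_segment_cache_mb: 2048 # cache more decompressed segments
```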
Spice Cayenne supports two compression strategies, each with different performance characteristics. The BtrBlocks compression algorithm is designed for fast analytical queries, while `zstd` provides fast write performance. Additionally, `zstd` achieves better compression ratios when data contains large chunks of binary or text.
| Strategy | Compression Ratio | Read Speed | Write Speed | Best For |
|---|---|---|---|---|
| `btrblocks` | Higher | Faster | Moderate | Read-heavy analytics (default) |
| `zstd` | High | Moderate | Faster | Write-heavy workloads, large binary or text data |
Example - Write-optimized configuration:
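A write-optimized configuration might select `zstd` compression. The following is a sketch with an illustrative dataset:

```yaml
datasets:
  - from: postgres:public.logs # illustrative source
    name: logs
    acceleration:
      enabled: true
      engine: cayenne
      mode: file
      params:
        cayenne_compression_strategy: zstd # faster writes; suits large binary/text data
```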
The `cayenne_target_file_size_mb` parameter controls when new Vortex files are created during writes.
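For instance, lowering the target file size produces more, smaller Vortex files (a sketch; the dataset and value are illustrative):

```yaml
datasets:
  - from: postgres:public.metrics # illustrative source
    name: metrics
    acceleration:
      enabled: true
      engine: cayenne
      mode: file
      params:
        cayenne_target_file_size_mb: 64 # roll over to a new Vortex file after ~64 MB
```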
Spice Cayenne is DataFusion query-native, meaning all query execution uses Apache DataFusion and adheres to the `runtime.query.memory_limit` setting. DataFusion's `FairSpillPool` divides memory evenly among partitions, providing predictable memory usage under concurrent query load.
Spice Cayenne uses Vortex's advanced columnar format.
Vortex delivers 100x faster random access reads compared to Apache Parquet through several architectural features:
Segment Statistics (Zone-Map Equivalent):
Vortex's ChunkedLayout maintains per-segment statistics for each column, enabling segment pruning during query execution. Statistics include:
| Statistic | Description | Use Case |
|---|---|---|
| `min` | Minimum value in segment | Range predicate pruning |
| `max` | Maximum value in segment | Range predicate pruning |
| `null_count` | Count of null values | `IS NULL`/`IS NOT NULL` optimization |
| `is_sorted` | Whether the segment is sorted | Binary search for point lookups |
| `is_constant` | Whether all values are identical | Immediate value return |
When a query includes a WHERE clause, Spice Cayenne evaluates whether each segment could contain matching rows. Segments that cannot match based on min/max statistics are skipped entirely, similar to DuckDB's zone-maps without requiring explicit index creation.
Example - Segment Pruning:
For a table with segments covering the timestamp ranges `[2024-01-01, 2024-01-15]`, `[2024-01-16, 2024-01-31]`, and `[2024-02-01, 2024-02-15]`, a query that filters on timestamps at or after `2024-01-20` prunes the first segment (its `max` is less than `2024-01-20`) and reads only the second and third segments.
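The pruning above can be sketched in SQL; the table and column names here are hypothetical:

```sql
-- Segment 1: [2024-01-01, 2024-01-15] -> pruned (max < '2024-01-20')
-- Segment 2: [2024-01-16, 2024-01-31] -> scanned
-- Segment 3: [2024-02-01, 2024-02-15] -> scanned
SELECT *
FROM events
WHERE event_time >= '2024-01-20';
```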
Fast Random Access Encodings:
Vortex encodings support direct random access to compressed data.
Compute Push-Down:
Vortex supports executing filter and compute operations directly on compressed data, avoiding full decompression for predicate evaluation. This compute push-down reduces CPU and memory overhead by processing data in its compressed form:
| Encoding | Data Type | Operations |
|---|---|---|
| FSST | Strings | Equality, prefix matching on compressed symbols |
| FastLanes | Integers | SIMD-accelerated comparison on bit-packed data |
| ALP | Floats | Range comparisons with minimal decompression |
| Dictionary | Any | Lookup predicates evaluated on dictionary indices |
| RLE | Any | Constant runs evaluated once per run |
Array-level statistics (is_sorted, is_constant, min, max) enable additional optimizations beyond filtering. For example, is_sorted enables binary search for point lookups, and is_constant returns values immediately without scanning.
Performance Characteristics:
For point lookups and selective queries, Spice Cayenne with Vortex often matches or exceeds the performance of traditional B-tree indexes while consuming no additional memory for index structures. Performance scales with:
Spice Cayenne implements efficient deletes without rewriting data files using deletion vectors. Deletion vectors track which rows have been logically deleted, and the information is applied transparently during query execution.
Cayenne supports two deletion vector strategies based on your table configuration:
| Strategy | Use Case | Configuration | Memory per Delete |
|---|---|---|---|
| Position-based | Tables without primary key | No primary_key set | ~4 bytes (RoaringBitmap) |
| Key-based | Tables with primary key | primary_key configured | 8+ bytes per key |
Position-based deletion uses row position within the table. Cayenne uses RoaringBitmap for memory-efficient storage of deleted row IDs, providing 50-90% memory savings compared to HashSet for sparse deletions.
Key-based deletion uses the byte representation of primary key columns. This approach is position-independent and survives data reorganization, making it more reliable for tables with primary keys.
For tables with a single-column Int64 primary key, Cayenne uses an optimized direct lookup strategy that avoids serialization overhead:
When on_conflict is configured, Cayenne supports upsert semantics using sequence numbers (Iceberg-style ordering):
When a primary key is deleted and then re-inserted:
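A sketch of a dataset configured for key-based deletes and upserts; the exact `on_conflict` shape shown is an assumption, so consult the acceleration reference for the precise syntax:

```yaml
datasets:
  - from: postgres:public.users # illustrative source
    name: users
    acceleration:
      enabled: true
      engine: cayenne
      mode: file
      primary_key: id # enables key-based deletion vectors
      on_conflict: upsert # assumed shape; re-inserted keys replace prior rows
```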
Spice Cayenne supports storing data files in AWS S3 Express One Zone for single-digit millisecond latency, ideal for latency-sensitive query workloads that require persistence. Metadata remains on local disk for fast catalog operations while data files are stored in S3 Express One Zone.
S3 Express One Zone directory buckets provide:
Example 1 - Explicit bucket:
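A sketch pointing data files at an existing directory bucket; the bucket name and prefix are hypothetical:

```yaml
datasets:
  - from: postgres:public.orders # illustrative source
    name: orders
    acceleration:
      enabled: true
      engine: cayenne
      mode: file
      params:
        cayenne_file_path: s3://my-bucket--usw2-az1--x-s3/cayenne/ # hypothetical bucket
```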
Example 2 - Auto-generated bucket with IAM role:
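A sketch that lets Spice derive the bucket name from the zone ID and authenticate with the instance's IAM role:

```yaml
datasets:
  - from: postgres:public.orders # illustrative source
    name: orders
    acceleration:
      enabled: true
      engine: cayenne
      mode: file
      params:
        cayenne_s3_zone_ids: usw2-az1 # bucket name auto-generated from app/dataset
        cayenne_s3_auth: iam_role # the default, shown explicitly
```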
Example 3 - Explicit credentials:
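A sketch using static credentials; the `${secrets:...}` references assume the keys are stored in your configured secret store under those (hypothetical) names:

```yaml
datasets:
  - from: postgres:public.orders # illustrative source
    name: orders
    acceleration:
      enabled: true
      engine: cayenne
      mode: file
      params:
        cayenne_s3_zone_ids: usw2-az1
        cayenne_s3_auth: key
        cayenne_s3_key: ${secrets:aws_access_key_id} # hypothetical secret name
        cayenne_s3_secret: ${secrets:aws_secret_access_key} # hypothetical secret name
```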
S3 Express One Zone buckets use a specific naming format:
- Bucket names follow the format `{base-name}--{zone-id}--x-s3`
- Zone IDs use the format `{region-code}-az{number}` (e.g., `usw2-az1`, `use1-az4`)
- Auto-generated buckets are named `spice-{app-name}-{dataset-name}--{zone-id}--x-s3`

The zone ID is automatically extracted from the bucket name to configure the correct endpoint.
S3 Express One Zone is available in select regions. Spice automatically derives the region from zone IDs:
| Zone ID Prefix | Region |
|---|---|
| use1 | us-east-1 |
| use2 | us-east-2 |
| usw1 | us-west-1 |
| usw2 | us-west-2 |
| euw1 | eu-west-1 |
| euw2 | eu-west-2 |
| euw3 | eu-west-3 |
| euc1 | eu-central-1 |
| eun1 | eu-north-1 |
| eus1 | eu-south-1 |
| apne1 | ap-northeast-1 |
| apne2 | ap-northeast-2 |
| apse1 | ap-southeast-1 |
| apse2 | ap-southeast-2 |
See AWS documentation for the complete list of S3 Express One Zone availability zones.
When `cayenne_s3_zone_ids` is set, Spice automatically creates the S3 Express directory bucket if it doesn't exist (requires appropriate IAM permissions).

Cayenne (via Vortex) supports most Arrow data types with the following considerations:
Natively supported types include integer types (`Int8`, `Int16`, `Int32`, `Int64`, `UInt*`) and floating-point types (`Float32`, `Float64`).

Some types are automatically converted:

| Original Type | Converted To | Notes |
|---|---|---|
| `Float16` | `Float32` | Automatic conversion for Vortex compatibility |
| `Timestamp(Nanosecond/...)` | `Timestamp(Microsecond)` | Precision normalized |
The following types require the `unsupported_type_action` parameter:

- `Interval` types
- `Duration` types
- `Map` types
- `FixedSizeBinary`

`unsupported_type_action` options:
| Value | Behavior |
|---|---|
| `error` | Fail with an error (default) |
| `string` | Convert to a `Utf8` string |
| `warn` | Include as-is with a warning (may fail on insert) |
| `ignore` | Skip the column entirely |
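For example, to convert unsupported columns to strings rather than failing (a sketch with an illustrative dataset):

```yaml
datasets:
  - from: postgres:public.sessions # illustrative source
    name: sessions
    acceleration:
      enabled: true
      engine: cayenne
      mode: file
      params:
        cayenne_unsupported_type_action: string # e.g., Interval/Duration columns become Utf8
```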
Resource requirements for Spice Cayenne depend on dataset size, query patterns, and cache configuration.
Spice Cayenne manages memory efficiently through columnar storage and selective caching. Memory allocation should account for:
| Component | Default | Notes |
|---|---|---|
| Runtime overhead | ~500 MB | Fixed baseline for the Spice runtime |
| Footer cache | 128 MB | Increase for datasets with many files (1-10 KB per file) |
| Segment cache | 256 MB | Increase based on hot data volume |
| Query execution | Variable | Depends on query complexity and concurrency |
Example - Memory-constrained environment:
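A memory-constrained sketch reduces both caches below their defaults (values are illustrative):

```yaml
datasets:
  - from: postgres:public.orders # illustrative source
    name: orders
    acceleration:
      enabled: true
      engine: cayenne
      mode: file
      params:
        cayenne_footer_cache_mb: 32 # below the 128 MB default
        cayenne_segment_cache_mb: 64 # below the 256 MB default
```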
Spice Cayenne stores data in a columnar format optimized for analytical queries. Storage requirements include:
Query performance scales with available CPU cores. Vortex's columnar format supports parallel decompression and scanning across multiple threads. Allocate sufficient CPU for:
Consider the following limitations when using Spice Cayenne acceleration:
- Spice Cayenne requires file-based acceleration (`mode: file`) and does not support in-memory (`mode: memory`) acceleration.
- `Interval`, `Duration`, `Map`, and `FixedSizeBinary` types require `unsupported_type_action` configuration.
- Spice Cayenne does not support explicit `indexes` configuration. Vortex's segment statistics and fast random access encodings provide equivalent or better performance for most point lookup workloads.

:::warning BETA SOFTWARE
As a Beta feature, Spice Cayenne should be thoroughly tested in development environments before production deployment. Monitor release notes for updates, breaking changes, and new capabilities.
:::
Complete example configuration using Spice Cayenne with performance tuning:
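One possible shape, combining the tuning parameters described above (the source, names, and values are illustrative, not recommendations):

```yaml
datasets:
  - from: postgres:public.orders # illustrative source
    name: orders
    acceleration:
      enabled: true
      engine: cayenne
      mode: file
      params:
        cayenne_compression_strategy: btrblocks # read-optimized default
        cayenne_footer_cache_mb: 256 # metadata cache for repeated scans
        cayenne_segment_cache_mb: 1024 # decompressed-segment cache for hot data
        cayenne_target_file_size_mb: 128 # default file rollover size
        sort_columns: created_at # improves segment pruning on time filters
```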
Spice Documentation:
External References:
| Parameter | Description |
|---|---|
| `cayenne_metadata_dir` | Custom directory for storing Cayenne metadata (SQLite catalog). Defaults to `{spice_data_path}/metadata`. |
| `cayenne_metastore` | Metastore backend type. Supports `sqlite` (default) or `turso` (requires the `turso` feature flag). |
| `sort_columns` | Comma-separated list of columns to sort data by on refresh operations. Improves segment pruning for frequently filtered columns. |
| `unsupported_type_action` | Action when encountering unsupported data types. Options: `error` (default), `warn`, `ignore`, `string`. |
| `cayenne_s3_client_timeout` | Request timeout duration (e.g., `5m`). Defaults to 5 minutes for uploads. |
| `cayenne_s3_allow_http` | Set to `true` for testing with local S3-compatible storage. Defaults to `false`. |
| Zone ID Prefix | Region |
|---|---|
| aps1 | ap-south-1 |
| sae1 | sa-east-1 |
| cac1 | ca-central-1 |
| afs1 | af-south-1 |
| mes1 | me-south-1 |