Learn how to configure and run Spice in distributed mode to handle larger scale queries across multiple nodes.
:::info Preview
Multi-node distributed query execution based on Apache Ballista is available as a preview feature in Spice v1.9.0.
:::
Spice integrates Apache Ballista to schedule and coordinate distributed queries across multiple executor nodes. This integration is useful when querying large, partitioned datasets in data lake formats such as Parquet, Delta Lake, or Iceberg. For smaller workloads or non-partitioned data, a single Spice instance is typically sufficient.
A distributed Spice cluster consists of two components:

- **Scheduler**: holds the cluster-wide configuration for a Spicepod and coordinates distributed query planning.
- **Executor**: connects to the scheduler to receive and execute work.

A cluster can run with a single scheduler for simplicity, or multiple schedulers for high availability.
Spice separates public and internal cluster traffic across different ports:
| Port | Service | Description |
|---|---|---|
| 50051 | Flight SQL | Public query endpoint |
| 8090 | HTTP API | Public REST API |
| 9090 | Prometheus | Metrics endpoint |
| 50052 | Cluster Service | Internal scheduler/executor communication (mTLS enforced by default) |
Internal cluster services are isolated on port 50052 with mTLS enforced by default.
Distributed query cluster mode uses mutual TLS (mTLS) for secure communication between schedulers and executors. Internal cluster communication includes highly privileged RPC calls like fetching Spicepod configuration and expanding secrets. mTLS ensures only authenticated nodes can join the cluster and access sensitive data.
Each node in the cluster requires:
- A shared CA certificate (`ca.crt`) trusted by all nodes
- A node certificate and private key signed by that CA

Production deployments should use the organization's PKI infrastructure to generate certificates with proper SANs for each node.
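For example, a minimal `openssl` sketch for a development-style CA and one node certificate (hostnames and IPs are placeholders; a production PKI should manage keys and SANs):

```shell
# Create a CA key and self-signed certificate (example only)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=spice-cluster-ca" -keyout ca.key -out ca.crt

# Create a key and certificate signing request for one node
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=scheduler-0" -keyout scheduler-0.key -out scheduler-0.csr

# Sign the request, adding SANs for every name/IP the node is reached by
printf "subjectAltName=DNS:scheduler-0.internal,IP:10.0.0.10\n" > san.cnf
openssl x509 -req -in scheduler-0.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 90 -extfile san.cnf -out scheduler-0.crt
```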
For local development and testing, the Spice CLI provides commands to generate self-signed certificates:
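For example (the subcommand names below are assumptions for illustration, not confirmed CLI syntax; run `spice help` to find the actual certificate commands in your version):

```shell
# Hypothetical subcommands -- verify with `spice help`
spice cluster certs init                      # generate a development CA
spice cluster certs issue --name scheduler-0  # issue a node certificate
```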
Certificates are stored in `~/.spice/pki/` by default.
:::warning
CLI-generated certificates are not intended for production use. Production deployments should use certificates issued by the organization's PKI or a trusted certificate authority.
:::
For local development and testing, mTLS can be disabled using the `--allow-insecure-connections` flag:
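For example, a development-only launch (only the flag itself comes from this page; the rest of the invocation is illustrative):

```shell
# Development only: internal cluster traffic runs without authentication or encryption
spiced --allow-insecure-connections
```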
:::warning
Do not use `--allow-insecure-connections` in production environments. This flag disables authentication and encryption for internal cluster communication.
:::
Cluster deployment typically starts with a scheduler instance, followed by one or more executors that register with the scheduler.
The following examples use CLI-generated development certificates. For production, substitute certificates from your organization's PKI.
The scheduler is the only `spiced` process that needs to be configured, i.e. it must have a `spicepod.yaml` in the current directory. Override the Flight bind address when the endpoint must be reachable beyond localhost:
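A sketch of a scheduler launch (the bind-address flag name is an assumption; check `spiced --help` for the exact flag):

```shell
cd my-spicepod/                 # directory containing spicepod.yaml
spiced --flight 0.0.0.0:50051   # illustrative flag: expose Flight SQL beyond localhost
```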
Executors connect to the scheduler's internal cluster port (50052) to register and pull work. Executors do not require a spicepod.yaml; they fetch the configuration from the scheduler. Each executor automatically selects a free port if the default is unavailable:
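A sketch of an executor launch, using the flags mentioned in the note that follows (the hostname is a placeholder):

```shell
# No spicepod.yaml required; configuration is fetched from the scheduler
spiced --role executor --scheduler-address scheduler-0.internal:50052
```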
Specifying `--scheduler-address` implies `--role executor`.
Queries run against the scheduler endpoint. The `EXPLAIN` output confirms that distributed planning is active: Spice includes a `distributed_plan` section showing how stages are split across executors:
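For example (the table name and plan shape are illustrative):

```sql
-- Issued against the scheduler's Flight SQL endpoint (port 50051)
EXPLAIN SELECT count(*) FROM orders;
-- When distributed execution is active, the output contains a
-- distributed_plan section describing the stages assigned to executors.
```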
:::warning[Limitations]
:::
For production deployments, Spice supports running multiple active schedulers in an active/active configuration. This eliminates the scheduler as a single point of failure and enables graceful handling of node failures.
In an HA cluster:
```text
            ┌─────────────────────┐
            │    Load Balancer    │
            └─────────────────────┘
                       │
      ┌────────────────┼────────────────┐
      ▼                ▼                ▼
┌────────────┐  ┌────────────┐  ┌────────────┐
│ Scheduler  │  │ Scheduler  │  │ Scheduler  │◄──► Object Store
│            │  │            │  │            │        (S3)
└────────────┘  └────────────┘  └────────────┘
      ▲                ▲                ▲
      │     (executor-initiated)        │
┌────────────┐  ┌────────────┐  ┌────────────┐
│  Executor  │  │  Executor  │  │  Executor  │
└────────────┘  └────────────┘  └────────────┘
```
Enable HA by configuring `runtime.scheduler.state_location` in the Spicepod to point to an S3-compatible object store:
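A minimal sketch (the bucket name and prefix are placeholders):

```yaml
runtime:
  scheduler:
    state_location: s3://spice-cluster-state/scheduler
    params:
      s3_region: us-east-1
```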
The object store is used for scheduler registration and discovery. Job state persistence for query handoff between schedulers is planned for a future release.
The `runtime.scheduler.params` section supports the following S3 parameters:

| Parameter | Description | Default |
|---|---|---|
| `s3_region` | AWS region for the S3 bucket | - |
| `s3_endpoint` | Custom S3-compatible endpoint URL | - |
| `s3_auth` | Authentication method: `iam_role` or `key` | `iam_role` |
| `s3_key` | AWS access key ID (when `s3_auth: key`) | - |
| `s3_secret` | AWS secret access key (when `s3_auth: key`) | - |
| `s3_session_token` | AWS session token for temporary credentials | - |
| `client_timeout` | S3 client timeout | - |
| `allow_http` | Allow HTTP (non-TLS) connections to the S3 endpoint | `false` |
Example with explicit credentials:
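A sketch using `s3_auth: key` (the bucket name and credential values are placeholders; prefer a secret store over inline values):

```yaml
runtime:
  scheduler:
    state_location: s3://spice-cluster-state/scheduler
    params:
      s3_region: us-east-1
      s3_auth: key
      s3_key: AKIA...EXAMPLE      # placeholder access key ID
      s3_secret: wJal...EXAMPLE   # placeholder secret access key
```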
Configure shared state in `spicepod.yaml` by setting `runtime.scheduler.state_location` and the parameters above.
Start multiple schedulers, each with unique certificates:
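A sketch (each node presents its own certificate; the comment reflects the behavior described in this section):

```shell
# Run on every scheduler node, from a directory containing the shared spicepod.yaml;
# each node presents its own certificate and registers in the shared state store
spiced
```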
Start executors (they discover all schedulers automatically):
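A sketch (the hostname is a placeholder; the flags come from the single-scheduler setup above):

```shell
# Point each executor at any one scheduler; the rest are discovered via the state store
spiced --role executor --scheduler-address scheduler-0.internal:50052
```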
Configure a load balancer to distribute queries across scheduler Flight SQL endpoints (port 50051).
:::info Object Store Requirements
The object store must support conditional writes (S3 ETags). Most S3-compatible stores support this, including AWS S3, MinIO, and Google Cloud Storage (with S3 compatibility mode).
:::