The Spice open source project provides multiple distribution variants to support different use cases and deployment scenarios.
:::note
The Spice runtime is 64-bit only. 32-bit platforms are not supported.
:::
| Channel | Image | Description |
|---|---|---|
| DockerHub | spiceai/spiceai | Official release images |
| GitHub Container Registry | ghcr.io/spiceai/spiceai | Official release images |
| GitHub Container Registry (Nightly) | ghcr.io/spiceai/spiceai-nightly | Nightly builds with additional variants |
| AWS Marketplace | — | Enterprise image |
| Azure Marketplace | — | Enterprise image (coming soon) |
| Spice Cloud Platform | — | Uses Enterprise image |
| Spice.ai Enterprise | — | Uses Enterprise image |
:::note
Some variant distributions are only available in nightly images (data) or exclusively through the Spice Cloud Platform and Spice.ai Enterprise (NAS, CUDA, allocator variants).
:::
| Platform | Architecture | Minimum CPU Features | Build Prerequisites |
|---|---|---|---|
| Linux | x86_64 | AVX2, FMA, BMI1/2, LZCNT, POPCNT | — |
| Linux | aarch64 (arm64) | NEON, FP16 (FEAT_FP16), FHM (FEAT_FHM) | clang, lld |
| macOS | aarch64 (Apple Silicon) | Native (build host) | — |
| Windows | x86_64 (MSVC) | — | MSVC toolchain |
:::note
Windows support is CLI (`spice`) only. The runtime daemon (`spiced`) is not supported natively on Windows in the open source distribution; use WSL instead.
:::
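On Linux, you can verify that a host meets the x86_64 minimum feature set by inspecting `/proc/cpuinfo`. A minimal sketch (note that the kernel reports LZCNT as the `abm` flag):

```shell
# Print any CPU flags from the required set that are missing from a
# cpuinfo-style file. An empty result means the host qualifies.
has_cpu_flags() {
  file="$1"; shift
  missing=""
  for flag in "$@"; do
    grep -qw "$flag" "$file" || missing="$missing $flag"
  done
  echo "$missing"
}

# On a real host:
#   has_cpu_flags /proc/cpuinfo avx2 fma bmi1 bmi2 abm popcnt
```

An empty output means all required features are present; any printed flags are missing.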
| Distribution / Variant | Image Tag | Open Source | Spice Cloud | Enterprise |
|---|---|---|---|---|
| Default (Data + AI) | latest | ✅ | ✅ | ✅ |
| Data-only | latest-data | Nightly only | ✅ | ✅ |
| NAS (SMB + NFS) | — | Local build only | ❌ | ✅ |
| Metal (macOS) | — | Local build only | ✅ | ✅ |
| CUDA (Linux) | latest-cuda | Local build only | ✅ | ✅ |
| Allocator variants | latest-{jemalloc,mimalloc,sysalloc} | Local build only | ✅ | ✅ |
| ODBC connector | — | Local build only | ✅ | ✅ |
The default distribution includes the full feature set, including AI/ML model support, and is the recommended distribution for most users.
Included Features:
:::note
The PostgreSQL data accelerator is only available in nightly builds. The PostgreSQL data connector is included in all distributions.
:::
Installation:
Docker:
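For example, pulling and running the default image from DockerHub. The HTTP port and the spicepod mount path below are illustrative assumptions; check the runtime documentation for the ports your configuration uses:

```shell
docker pull spiceai/spiceai:latest

# Run the runtime, exposing its HTTP endpoint and mounting a local
# spicepod.yaml (the port number and container path are assumptions).
docker run --rm \
  -p 8090:8090 \
  -v "$(pwd)/spicepod.yaml:/app/spicepod.yaml" \
  spiceai/spiceai:latest
```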
The data distribution excludes AI/ML model support, resulting in a smaller binary size and reduced attack surface. Use this when data federation and acceleration capabilities are needed without AI features.
:::note
Open Source: Available in nightly builds only. Cloud Platform & Enterprise: Production-ready data distribution available.
:::
Included Features:
Excluded Features:
Docker (Nightly):
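The data-only nightly image follows the tag pattern from the table above:

```shell
# Pull the data-only variant from the nightly registry.
docker pull ghcr.io/spiceai/spiceai-nightly:latest-data
```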
Local Build:
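A hypothetical local-build sketch. The Cargo features that gate AI/ML support are defined in the project's Cargo.toml and Makefile, so treat the flags below as placeholders rather than exact invocations:

```shell
git clone https://github.com/spiceai/spiceai.git
cd spiceai
# Build without default features to exclude AI/ML model support
# (illustrative; see the project Makefile for the real targets).
cargo build --release --no-default-features
```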
For macOS systems with Apple Silicon, the Metal distribution enables GPU-accelerated AI/ML inference.
Included Features:
Local Build:
For Linux systems with NVIDIA GPUs, CUDA distributions enable GPU-accelerated AI/ML inference. Multiple CUDA compute capability versions are available.
:::note
CUDA distributions are available with the Spice Cloud Platform and Spice.ai Enterprise. Open source users can build locally for development and testing.
:::
Included Features:
Supported Compute Capabilities:
Local Build:
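A sketch of a local CUDA build. The `CUDA_COMPUTE_CAP` environment variable is how the Candle ML framework's CUDA kernels select a compute capability at build time; the `cuda` Cargo feature name is an assumption:

```shell
# Requires the NVIDIA CUDA toolkit (nvcc) on PATH.
# 80 targets compute capability 8.0 (e.g. A100); pick the value
# matching your GPU. The "cuda" feature name is illustrative.
CUDA_COMPUTE_CAP=80 cargo build --release --features cuda
```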
The NAS (Network Attached Storage) distribution adds support for SMB and NFS data connectors, enabling federated queries against data stored on network file shares.
:::note
The NAS distribution is available with Spice.ai Enterprise. Open source users can build locally for development and testing.
:::
Included Features:
Local Build:
Different memory allocators can significantly impact performance depending on workload characteristics.
:::note
Allocator variants are available with the Spice Cloud Platform and Spice.ai Enterprise. Open source users can build locally for development and testing.
:::
The default allocator, optimized for concurrent workloads.
Alternative allocator that may perform better for certain memory allocation patterns.
Microsoft's mimalloc allocator, designed for performance and security.
Uses the system's default allocator (glibc malloc on Linux).
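To compare allocators for a given workload, the same benchmark can be run against each variant image. The registry and tags below are assumptions following the `latest-{jemalloc,mimalloc,sysalloc}` pattern from the table above:

```shell
# Smoke-check each allocator variant image before benchmarking
# (image repository and tags assumed; adjust for your registry).
for alloc in jemalloc mimalloc sysalloc; do
  docker pull "spiceai/spiceai:latest-$alloc"
done
```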
| Platform | Default | Data | NAS | Metal | CUDA |
|---|---|---|---|---|---|
| Linux x86_64 | ✅ | Nightly | Enterprise only | ❌ | Cloud/Enterprise |
| Linux aarch64 | ✅ | Nightly | Enterprise only | ❌ | ❌ |
| macOS aarch64 (Apple Silicon) | ✅ | Nightly | Enterprise only | ✅ | ❌ |
| Windows (WSL) | ✅ | Nightly | Enterprise only | ❌ | Cloud/Enterprise |
| Windows (Native) | ❌ | Enterprise only | Enterprise only | ❌ | Enterprise only |
:::note
Native Windows support for the Spice runtime is available with the Spice Cloud Platform and Spice.ai Enterprise. Open source users on Windows should use Windows Subsystem for Linux (WSL).
:::
| Use Case | Recommended Distribution |
|---|---|
| General purpose with AI capabilities | Default |
| Data federation only, minimal footprint | Data (nightly) |
| Network attached storage (SMB/NFS) | NAS |
| macOS with GPU acceleration | Metal |
| Linux with NVIDIA GPU | CUDA |
| Memory allocation benchmarking | Allocator variants |
Some connectors require additional dependencies and are available with the Spice Cloud Platform and Spice.ai Enterprise:
These can be built locally for development and testing:
The `gemm` matrix multiplication library (used by the Candle ML framework) contains half-precision ARM inline assembly that requires the `fullfp16` CPU feature. This is supported on AWS Graviton2+, Ampere Altra, Apple M-series (via Linux VM), and most ARMv8.2-A+ processors.

lld is required as the linker on Linux aarch64: large binaries can produce out-of-range `R_AARCH64_CALL26` relocations, and lld automatically inserts range extension thunks to resolve them.

Install the build prerequisites on Debian/Ubuntu:

```shell
sudo apt-get install -y clang lld
```

Custom distributions with specific feature combinations can be built:
See the project Makefile for all available build targets and options.