---
date: 2025-09-23
title: 'Spice v1.7.0 (Sep 23, 2025)'
---
Announcing the release of Spice v1.7.0! ⚡
Spice v1.7.0 upgrades to DataFusion v49 for improved performance and query optimization. It introduces real-time full-text search indexing for CDC streams, EmbeddingGemma support for high-quality embeddings, new search table functions powering the /v1/search API, embedding request caching for faster and more cost-efficient search and indexing, and OpenAI Responses API tool calls with streaming. This release also includes numerous bug fixes across CDC streams, vector search, the Kafka Data Connector, and error reporting.
Source: DataFusion 49.0.0 Release Blog.
Performance Improvements 🚀
- ORDER BY and LIMIT now use dynamic filters and physical filter pushdown, skipping unnecessary data reads for much faster top-k queries.
- Support for ordered-set aggregate functions (e.g., percentile_disc) with WITHIN GROUP.

EmbeddingGemma Support: Spice now supports EmbeddingGemma, Google's state-of-the-art embedding model for text and documents. EmbeddingGemma provides high-quality, efficient embeddings for semantic search, retrieval, and recommendation tasks. You can use EmbeddingGemma via HuggingFace in your Spicepod configuration:
Example spicepod.yml snippet:
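A minimal sketch of an EmbeddingGemma embedding component, assuming the Hugging Face model id google/embeddinggemma-300m; the component name is illustrative:

```yaml
embeddings:
  - from: huggingface:huggingface.co/google/embeddinggemma-300m  # assumed Hugging Face model id
    name: embeddinggemma                                          # local name referenced by datasets
```

Dataset columns can then reference the model by name to power embedding-backed search.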
Learn more about EmbeddingGemma in the official documentation.
POST /v1/search API Uses Search Table Functions: The /v1/search API now uses the new text_search and vector_search table functions for improved performance.
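As a hedged illustration, a search request might look like the following; the dataset name, request fields, and default port 8090 are assumptions, so check the /v1/search API documentation for the exact schema:

```bash
curl -X POST http://localhost:8090/v1/search \
  -H "Content-Type: application/json" \
  -d '{
    "text": "running shoes",
    "datasets": ["products"],
    "limit": 5
  }'
```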
Embedding Request Caching: The runtime now supports caching embedding requests, reducing latency and cost for repeated content and search requests.
Example spicepod.yml snippet:
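A sketch of enabling embedding request caching in the runtime configuration; the key names and values below are assumptions, so see the Caching documentation for the exact schema:

```yaml
runtime:
  caching:
    embeddings:
      enabled: true
      max_size: 128mb   # illustrative cache size
      item_ttl: 5m      # illustrative time-to-live for cached embedding requests
```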
See the Caching documentation for details.
Real-Time Indexing for Full-Text Search: Full-text search indexing is now supported for connectors that provide real-time changes, such as Debezium CDC streams. Adding a full-text index on a column with refresh_mode: changes works as it does for full/append-mode refreshes, enabling instant search on new data.
Example spicepod.yml snippet:
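A sketch of a Debezium-backed dataset with change-based refresh and a full-text index on one column; the topic, connection parameters, and column names are illustrative:

```yaml
datasets:
  - from: debezium:cdc.public.orders            # illustrative CDC topic
    name: orders
    params:
      kafka_bootstrap_servers: localhost:9092   # assumed Kafka transport settings
    acceleration:
      enabled: true
      refresh_mode: changes                     # apply CDC changes in real time
    columns:
      - name: description
        full_text_search:                       # full-text index on this column; key layout is illustrative, see the FTS docs
          enabled: true
```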
OpenAI Responses API Tool Calls with Streaming: The OpenAI Responses API now supports tool calls with streaming, enabling advanced model interactions such as web_search and code_interpreter with real-time response streaming. This allows you to invoke OpenAI-hosted tools and receive results as they are generated.
Learn more in the OpenAI Model Provider documentation.
Runtime Output Level Configuration: You can now set the output_level parameter in the Spicepod runtime configuration to control logging verbosity in addition to the existing CLI and environment variable support. Supported values are info, verbose, and very_verbose. The value is applied in the following priority: CLI, environment variables, then YAML configuration.
Example spicepod.yml snippet:
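A minimal sketch, assuming output_level is set under the runtime params block; the exact placement is documented in the Troubleshooting guide:

```yaml
runtime:
  params:
    output_level: verbose   # one of: info, verbose, very_verbose (assumed placement)
```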
For more details on configuring output level, see the Troubleshooting documentation.
Several bugs and issues have been resolved in this release, including:
- Fixed an issue where refresh_mode: changes could prevent the Spice runtime from becoming Ready, and improved support for full-text indexing on CDC streams.
- Fixes for the vector_search UDTF.

There are no breaking changes in this release.
The Spice Cookbook includes 78 recipes to help you get started with Spice quickly and easily.
To upgrade to v1.7.0, use one of the following methods:
CLI:
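Assuming the Spice CLI is already installed:

```bash
spice upgrade
```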
Homebrew:
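Assuming Spice was installed from the spiceai/spiceai tap:

```bash
brew upgrade spiceai/spiceai/spice
```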
Docker:
Pull the spiceai/spiceai:1.7.0 image:
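```bash
docker pull spiceai/spiceai:1.7.0
```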
For available tags, see DockerHub.
Helm:
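A sketch assuming the chart is installed from the spiceai Helm repository; the repository URL and release name are assumptions:

```bash
helm repo add spiceai https://helm.spiceai.org   # skip if the repo is already added
helm repo update
helm upgrade spiceai spiceai/spiceai
```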
AWS Marketplace:
🎉 Spice is now available in the AWS Marketplace!
Changelog
- 0.14 by @phillipleblanc in #6977
- embed UDF by @mach-kernel in #6967
- ORDER BY: (BytesProcessedExec to avoid pruning ordered execs during physical optimization) by @mach-kernel in #7105
- max_timestamp_df during acceleration refresh (#7055)" by @phillipleblanc in #7156