:::info Spice is built on Apache DataFusion and uses the PostgreSQL dialect, even when querying datasources with different SQL dialects. Note that when using a data accelerator such as DuckDB, function support is specific to each acceleration engine; not all functions are supported by every engine. :::
Scalar functions help transform, compute, and manipulate data at the row level. These functions are evaluated for each row in a query result and return a single value per invocation. Spice.ai supports a broad set of scalar functions, including math, string, conditional, date/time, array, struct, map, regular expression, and hashing functions. The function set closely follows the PostgreSQL dialect.
Math functions in Spice.ai SQL help perform numeric calculations, transformations, and analysis. These functions operate on numeric expressions, which can be constants, columns, or results of other functions and operators. The following math functions are supported:
`abs`: Returns the absolute value of a numeric expression. If the input is negative, the result is its positive equivalent; if the input is positive or zero, the result is unchanged.
`acos`: Returns the arc cosine (inverse cosine) of a numeric expression. The input must be in the range [-1, 1]. The result is in radians.
`acosh`: Returns the inverse hyperbolic cosine of a numeric expression. The input must be greater than or equal to 1.
`asin`: Returns the arc sine (inverse sine) of a numeric expression. The input must be in the range [-1, 1]. The result is in radians.
`asinh`: Returns the inverse hyperbolic sine of a numeric expression.
`atan`: Returns the arc tangent (inverse tangent) of a numeric expression. The result is in radians.
`atan2`: Returns the arc tangent of the quotient of its arguments, that is, atan(expression_y / expression_x). The result is in radians and takes into account the signs of both arguments to determine the correct quadrant.
`atanh`: Returns the inverse hyperbolic tangent of a numeric expression. The input must be in the range (-1, 1).
`cbrt`: Returns the cube root of a numeric expression.
`ceil`: Returns the smallest integer greater than or equal to the input value.
`cos`: Returns the cosine of a numeric expression, where the input is in radians.
`cosh`: Returns the hyperbolic cosine of a numeric expression.
`cot`: Returns the cotangent of a numeric expression, where the input is in radians.
`degrees`: Converts radians to degrees.
`exp`: Returns the value of e (Euler's number) raised to the power of the input value.
`factorial`: Returns the factorial of a non-negative integer. For values less than 2, returns 1.
`floor`: Returns the largest integer less than or equal to the input value.
`gcd`: Returns the greatest common divisor of two integer expressions. If both inputs are zero, returns 0.
`isnan`: Returns true if the input is NaN (not a number), otherwise returns false.
`iszero`: Returns true if the input is +0.0 or -0.0, otherwise returns false.
`lcm`: Returns the least common multiple of two integer expressions. If either input is zero, returns 0.
`mod`: Returns the remainder after dividing the first argument by the second, matching the Spark SQL %/mod semantics for signed values.
Reference: Spark SQL mod.
`pmod`: Returns a positive remainder for integer or floating-point division. When the standard remainder is negative, the divisor is added to produce a non-negative result, mirroring Spark SQL behavior.
Reference: Spark SQL pmod.
`ln`: Returns the natural logarithm (base e) of a numeric expression.
`log`: Returns the logarithm of a numeric expression. If a base is provided, returns the logarithm to that base; otherwise, returns the base-10 logarithm.
`log10`: Returns the base-10 logarithm of a numeric expression.
`log2`: Returns the base-2 logarithm of a numeric expression.
`nanvl`: Returns the first argument if it is not NaN; otherwise, returns the second argument.
`pi`: Returns an approximate value of π (pi).
`pow` and `power`: Returns the value of the first argument raised to the power of the second argument. pow is an alias for power.
`radians`: Converts degrees to radians.
`random`: Returns a random floating-point value in the range [0, 1). The random seed is unique for each row.
`round`: Rounds a numeric expression to the nearest integer or to a specified number of decimal places.
`rint`: Rounds a double-precision value to the nearest integer using IEEE-754 "round to nearest, ties to even" rules and returns the rounded value as a floating-point number, matching Spark SQL semantics.
Reference: Spark SQL rint.
`signum`: Returns the sign of a numeric expression: -1 for negative numbers, 1 for zero and positive numbers.
`sin`: Returns the sine of a numeric expression, where the input is in radians.
`sinh`: Returns the hyperbolic sine of a numeric expression.
`sqrt`: Returns the square root of a numeric expression.
`tan`: Returns the tangent of a numeric expression, where the input is in radians.
`tanh`: Returns the hyperbolic tangent of a numeric expression.
`trunc`: Truncates a numeric expression to a whole number or to a specified number of decimal places. If decimal_places is positive, truncates digits to the right of the decimal point; if negative, truncates digits to the left.
`width_bucket`: Assigns a value to an equiwidth histogram bucket. Returns 0 when the value is below min_value, num_bucket + 1 when it is above max_value, and otherwise the 1-based bucket index, mirroring Spark SQL behavior.
Reference: Spark SQL width_bucket.
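As a quick illustration, several of the math functions above can be combined in a single projection (the column aliases are arbitrary):

```sql
-- Evaluate a handful of scalar math functions against literal inputs.
SELECT
  abs(-3.2)                    AS absolute_value,  -- 3.2
  ceil(3.2)                    AS ceiling,         -- 4
  floor(3.2)                   AS flr,             -- 3
  power(2, 10)                 AS two_to_ten,      -- 1024
  round(3.14159, 2)            AS pi_rounded,      -- 3.14
  gcd(12, 18)                  AS divisor,         -- 6
  width_bucket(5.5, 0, 10, 4)  AS bucket;          -- 3 (5.5 falls in the third of four buckets over [0, 10])
```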
Conditional functions help handle null values, select among alternatives, and compare multiple expressions. Functions such as coalesce, greatest, least, and nullif are supported. These are useful for data cleaning and conditional logic in queries.
`if`: Evaluates a boolean condition and returns one of two expressions, matching the Spark SQL if function semantics.
Reference: Spark SQL if.
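For example, the conditional functions named above follow familiar PostgreSQL semantics:

```sql
-- coalesce picks the first non-null argument; nullif returns NULL when both arguments match.
SELECT
  coalesce(NULL, NULL, 'fallback') AS first_non_null,  -- 'fallback'
  nullif('a', 'a')                 AS nulled,          -- NULL
  greatest(1, 5, 3)                AS biggest,         -- 5
  least(1, 5, 3)                   AS smallest;        -- 1
```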
String functions in Spice.ai SQL help manipulate, analyze, and transform text data. These functions operate on string expressions, which can be constants, columns, or results of other functions. The implementation closely follows the PostgreSQL dialect. The following string functions are supported:
`ascii`: Returns the Unicode code point of the first character in a string. If the string is empty, returns 0.
Related function: chr
`bit_length`: Returns the number of bits in the string. Each character is counted according to its byte representation (8 bits per byte).
Related functions: length, octet_length
`btrim`: Removes the longest string containing only characters in trim_str from the start and end of str. If trim_str is omitted, whitespace is removed.
Alternative syntax: trim(BOTH trim_str FROM str) or trim(trim_str FROM str)
Aliases: trim
Related functions: ltrim, rtrim
`char_length`: Alias of character_length.
`character_length`: Returns the number of characters in a string, not bytes. Handles Unicode correctly.
Aliases: length, char_length
Related functions: bit_length, octet_length
`chr`: Returns the character with the specified Unicode code point.
Related function: ascii
`concat`: Concatenates two or more strings into a single string.
Related function: concat_ws
`concat_ws`: Concatenates strings using a separator between each value.
Related function: concat
`contains`: Returns true if search_str is found within str. The search is case-sensitive.
`like`: Performs SQL pattern matching using % to match zero or more characters and _ to match a single character. The optional SQL ESCAPE clause can be used to treat a wildcard literally, matching Spark SQL behavior.
Reference: Spark SQL like.
`ilike`: Case-insensitive variant of like that matches str against pattern without regard to ASCII character case. The optional SQL ESCAPE clause may be used to treat % or _ literally.
Reference: Spark SQL ilike.
`ends_with`: Returns true if str ends with the substring substr.
`find_in_set`: Returns the position (1-based) of str in the comma-separated list strlist. Returns 0 if not found.
`initcap`: Capitalizes the first character of each word in the string. Words are delimited by non-alphanumeric characters.
Related functions: lower, upper
`instr`: Alias of strpos.
`left`: Returns the first n characters from the left side of the string.
Related function: right
`length`: Alias of character_length.
`levenshtein`: Returns the Levenshtein distance between two strings.
`lower`: Converts all characters in the string to lower case.
Related functions: initcap, upper
`luhn_check`: Validates that a string of digits satisfies the Luhn checksum, returning true for valid numbers and false otherwise. This matches the Spark SQL implementation and is useful for validating identifiers such as credit card numbers.
Reference: Spark SQL luhn_check.
`lpad`: Pads the left side of the string with another string until the result reaches the specified length. If the padding string is omitted, a space is used.
Related function: rpad
`ltrim`: Removes the longest string containing only characters in trim_str from the start of str. If trim_str is omitted, whitespace is removed.
Alternative syntax: trim(LEADING trim_str FROM str)
Related functions: btrim, rtrim
`octet_length`: Returns the number of bytes in the string.
Related functions: bit_length, length
`overlay`: Replaces a substring of str with substr, starting at position pos for count characters. If count is omitted, uses the length of substr.
`parse_url`: Extracts a component from a URL, or retrieves an individual query parameter when provided a key, following Spark SQL semantics. Supported parts include HOST, PATH, QUERY, REF, PROTOCOL, FILE, and AUTHORITY.
Reference: Spark SQL parse_url.
`position`: Alias of strpos.
`repeat`: Returns a string consisting of the input string repeated n times.
`replace`: Replaces all occurrences of substr in str with replacement.
`reverse`: Returns the string with the character order reversed.
`right`: Returns the last n characters from the right side of the string.
Related function: left
`rpad`: Pads the right side of the string with another string until the result reaches the specified length. If the padding string is omitted, a space is used.
Related function: lpad
`rtrim`: Removes the longest string containing only characters in trim_str from the end of str. If trim_str is omitted, whitespace is removed.
Alternative syntax: trim(TRAILING trim_str FROM str)
Related functions: btrim, ltrim
`split_part`: Splits the string on the specified delimiter and returns the substring at the given position (1-based).
`starts_with`: Returns true if str starts with the substring substr.
`strpos`: Returns the position (1-based) of the first occurrence of substr in str. Returns 0 if not found.
Alternative syntax: position(substr in origstr)
Aliases: instr, position
`substr`: Extracts a substring from str, starting at start_pos for length characters. If length is omitted, returns the rest of the string.
Alternative syntax: substring(str from start_pos for length)
Aliases: substring
`substr_index`: Returns the substring from str before or after a specified number of occurrences of the delimiter delim. If count is positive, returns everything to the left of the final delimiter (counting from the left). If count is negative, returns everything to the right of the final delimiter (counting from the right).
Aliases: substring_index
`substring`: Alias of substr.
`substring_index`: Alias of substr_index.
`to_hex`: Converts an integer to its hexadecimal string representation.
`translate`: Replaces each character in str that matches a character in chars with the corresponding character in translation. If translation is shorter than chars, extra characters are removed.
`trim`: Alias of btrim.
`upper`: Converts all characters in the string to upper case.
Related functions: initcap, lower
`uuid`: Returns a UUID v4 string value that is unique per row.
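A few of the string functions above in action, following the PostgreSQL semantics described:

```sql
SELECT
  concat_ws('-', '2024', '01', '15') AS joined,  -- '2024-01-15'
  split_part('a,b,c', ',', 2)        AS second,  -- 'b'
  lpad('7', 3, '0')                  AS padded,  -- '007'
  strpos('hello', 'll')              AS pos,     -- 3 (1-based)
  initcap('hello world')             AS title;   -- 'Hello World'
```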
Binary string functions help encode and decode binary data, such as base64 and hexadecimal conversions. These are useful for working with encoded data or binary blobs.
`bit_get`: Returns the bit (0 or 1) at the specified zero-based position when counting from the least-significant bit of an integral or binary expression, matching Spark SQL semantics.
Reference: Spark SQL bit_get.
`bit_count`: Counts the number of set bits in an integral or binary expression. Useful for quick popcount operations on bitmaps or packed flags, aligned with Spark SQL behavior.
Reference: Spark SQL bit_count.
`bitmap_count`: Returns the number of set bits in a binary bitmap produced by functions such as bitmap_construct_agg, mirroring the Spark SQL implementation.
Reference: Spark SQL bitmap_count.
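Given the Spark-compatible semantics described above, the bit functions behave as follows:

```sql
SELECT
  bit_count(7)  AS set_bits,  -- 3 (7 is 111 in binary)
  bit_get(5, 0) AS low_bit,   -- 1 (5 is 101; position 0 is the least-significant bit)
  bit_get(5, 1) AS next_bit;  -- 0
```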
Regular expression functions help match, extract, and replace patterns in strings. Spice.ai uses a PCRE-like regular expression syntax. The following regular expression functions are supported:
`regexp_like`: Returns true if a regular expression has at least one match in a string, false otherwise.
`regexp_match`: Returns the first match of a regular expression in a string.
`regexp_replace`: Replaces substrings in a string that match a regular expression.
`regexp_count`: Returns the number of matches that a regular expression has in a string.
`regexp_instr`: Returns the position in a string where the specified occurrence of a POSIX regular expression is located.
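For example (the `'g'` flag in regexp_replace requests global replacement, in the PostgreSQL style):

```sql
SELECT
  regexp_like('2024-01-15', '^\d{4}-\d{2}-\d{2}$') AS is_date,      -- true
  regexp_count('a1b2c3', '\d')                     AS digit_count,  -- 3
  regexp_replace('a1b2c3', '\d', '#', 'g')         AS masked;       -- 'a#b#c#'
```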
Time and date functions help extract, format, and manipulate temporal data. Functions include current_date, now, date_part, date_trunc, and various conversion functions. These are essential for time series analysis and working with timestamps.
`current_date`: Returns the current UTC date.
The current_date() return value is determined at query time and will return the same date, no matter when in the query plan the function executes.
`current_time`: Returns the current UTC time.
The current_time() return value is determined at query time and will return the same time, no matter when in the query plan the function executes.
`current_timestamp`: Alias of now.
`date_bin`: Calculates time intervals and returns the start of the interval nearest to the specified timestamp. Use date_bin to downsample time series data by grouping rows into time-based "bins" or "windows" and applying an aggregate or selector function to each window.
For example, if you "bin" or "window" data into 15 minute intervals, an input timestamp of 2023-01-01T18:18:18Z will be updated to the start time of the 15 minute bin it is in: 2023-01-01T18:15:00Z.
`date_add`: Adds a number of days to a DATE or TIMESTAMP expression, matching Spark SQL semantics. Negative offsets move backwards in time.
Reference: Spark SQL date_add.
`date_sub`: Subtracts a number of days from a DATE or TIMESTAMP expression using Spark-compatible behavior.
Reference: Spark SQL date_sub.
`last_day`: Returns the last day of the month that contains the input date or timestamp, matching Spark SQL semantics.
Reference: Spark SQL last_day.
`next_day`: Returns the first date after start_date that matches the requested day of week. Valid day names include full names (e.g., Monday) or abbreviations such as Mon, matching Spark SQL behavior.
Reference: Spark SQL next_day.
`date_format`: Alias of to_char.
`date_part`: Returns the specified part of the date as an integer.
part: Part of the date to return (e.g., year, quarter, month, week, day, hour, minute, second).
expression: Time expression to operate on. Can be a constant, column, or function.
`date_trunc`: Truncates a timestamp value to a specified precision.
precision: Time precision to truncate to (e.g., year, quarter, month, week, day, hour, minute, second).
expression: Time expression to operate on. Can be a constant, column, or function.
`datepart`: Alias of date_part.
`datetrunc`: Alias of date_trunc.
`from_unixtime`: Converts an integer to RFC3339 timestamp format (YYYY-MM-DDT00:00:00.000000000Z). Integers and unsigned integers are interpreted as seconds since the unix epoch (1970-01-01T00:00:00Z) and return the corresponding timestamp.
`make_date`: Makes a date from year/month/day component parts.
`now`: Returns the current UTC timestamp.
The now() return value is determined at query time and will return the same timestamp, no matter when in the query plan the function executes.
`to_char`: Returns a string representation of a date, time, timestamp, or duration based on a Chrono format. Unlike the PostgreSQL equivalent of this function, numerical formatting is not supported.
`to_date`: Converts a value to a date (YYYY-MM-DD).
Supports strings, integer, and double types as input.
Strings are parsed as YYYY-MM-DD (e.g. '2023-07-20') if no Chrono formats are provided.
Integers and doubles are interpreted as days since the unix epoch (1970-01-01T00:00:00Z).
Returns the corresponding date.
Note: to_date returns Date32, which represents its values as the number of days since the unix epoch (1970-01-01), stored as a signed 32-bit value. The largest supported date value is 9999-12-31.
`to_local_time`: Converts a timestamp with a timezone to a timestamp without a timezone (with no offset or timezone information). This function handles daylight saving time changes.
`to_timestamp`: Converts a value to a timestamp (YYYY-MM-DDT00:00:00Z). Supports strings, integer, unsigned integer, and double types as input. Strings are parsed as RFC3339 (e.g. '2023-07-20T05:44:00') if no Chrono formats are provided. Integers, unsigned integers, and doubles are interpreted as seconds since the unix epoch (1970-01-01T00:00:00Z). Returns the corresponding timestamp.
Note: to_timestamp returns Timestamp(Nanosecond). The supported range for integer input is between -9223372037 and 9223372036. The supported range for string input is between 1677-09-21T00:12:44.0 and 2262-04-11T23:47:16.0. Use to_timestamp_seconds for input outside of these bounds.
`to_timestamp_micros`: Converts a value to a timestamp (YYYY-MM-DDT00:00:00.000000Z). Supports strings, integer, and unsigned integer types as input. Strings are parsed as RFC3339 (e.g. '2023-07-20T05:44:00') if no Chrono formats are provided. Integers and unsigned integers are interpreted as microseconds since the unix epoch (1970-01-01T00:00:00Z). Returns the corresponding timestamp.
`to_timestamp_millis`: Converts a value to a timestamp (YYYY-MM-DDT00:00:00.000Z). Supports strings, integer, and unsigned integer types as input. Strings are parsed as RFC3339 (e.g. '2023-07-20T05:44:00') if no Chrono formats are provided. Integers and unsigned integers are interpreted as milliseconds since the unix epoch (1970-01-01T00:00:00Z). Returns the corresponding timestamp.
`to_timestamp_nanos`: Converts a value to a timestamp (YYYY-MM-DDT00:00:00.000000000Z). Supports strings, integer, and unsigned integer types as input. Strings are parsed as RFC3339 (e.g. '2023-07-20T05:44:00') if no Chrono formats are provided. Integers and unsigned integers are interpreted as nanoseconds since the unix epoch (1970-01-01T00:00:00Z). Returns the corresponding timestamp.
`to_timestamp_seconds`: Converts a value to a timestamp (YYYY-MM-DDT00:00:00Z). Supports strings, integer, and unsigned integer types as input. Strings are parsed as RFC3339 (e.g. '2023-07-20T05:44:00') if no Chrono formats are provided. Integers and unsigned integers are interpreted as seconds since the unix epoch (1970-01-01T00:00:00Z). Returns the corresponding timestamp.
`to_unixtime`: Converts a value to seconds since the unix epoch (1970-01-01T00:00:00Z). Supports strings, dates, timestamps, and double types as input. Strings are parsed as RFC3339 (e.g. '2023-07-20T05:44:00') if no Chrono formats are provided.
`today`: Alias of current_date.
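The date_bin example described above, alongside date_part and date_trunc:

```sql
SELECT
  date_part('year', DATE '2023-07-20')                 AS yr,           -- 2023
  date_trunc('month', TIMESTAMP '2023-07-20 05:44:00') AS month_start,  -- 2023-07-01T00:00:00
  date_bin(INTERVAL '15 minutes',
           TIMESTAMP '2023-01-01 18:18:18',
           TIMESTAMP '2023-01-01 00:00:00')            AS bin_start;    -- 2023-01-01T18:15:00
```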
Array functions in Spice.ai SQL help construct, transform, and query array data types. These functions operate on array expressions, which can be constants, columns, or results of other functions. The implementation closely follows the PostgreSQL dialect. The following array functions are supported:
`array`: Constructs an array from the provided expressions using Spark-compatible semantics. Inputs are evaluated left to right, cast to a common element type, and collected into a single Arrow list value without removing duplicates or nulls.
Reference: Spark SQL array.
`array_any_value`: Returns the first non-null element in the array. If all elements are null, returns null.
`array_append`: Appends an element to the end of an array and returns the new array.
`array_cat`: Alias of array_concat.
`array_concat`: Concatenates two or more arrays into a single array.
`array_contains`: Returns true if the array contains the specified element.
Note: For array-to-array containment operations, use the @> operator.
`array_dims`: Returns an array of the array's dimensions. For a 2D array, returns the number of rows and columns.
`array_distance`: Returns the Euclidean distance between two input arrays of equal length.
`array_distinct`: Returns a new array with duplicate elements removed, preserving the order of first occurrence.
`array_element`: Extracts the element at the specified index from the array. Indexing is 1-based.
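For example, combining the array construction and query functions described above:

```sql
SELECT
  array_contains(array(1, 2, 3), 2)   AS has_two,    -- true
  array_distinct(array(1, 2, 2, 3))   AS deduped,    -- [1, 2, 3]
  array_element(array(10, 20, 30), 1) AS first_el,   -- 10 (indexing is 1-based)
  array_append(array(1, 2), 3)        AS appended;   -- [1, 2, 3]
```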
Struct functions help construct and access structured data types (Arrow structs). Functions such as struct, named_struct, and get_field are supported. These are useful for working with nested or composite data.
Map functions help construct and query key-value data structures. Functions include map, map_extract, map_keys, and map_values. These are useful for semi-structured or JSON-like data.
Hashing functions help compute cryptographic hashes and checksums, such as md5, sha256, and digest. These are useful for data integrity, fingerprinting, and security applications.
Union functions help work with union (variant) data types, such as extracting the value or tag from a union. Functions include union_extract and union_tag.
Additional scalar functions include type casting, type inspection, and version reporting. Functions such as arrow_cast, arrow_typeof, and version are available.
`ai`: Invokes large language models (LLMs) directly within SQL queries for text generation tasks. This asynchronous function processes prompts through configured model providers and returns generated text responses.
Returns a string containing the generated text response. Returns NULL if an error occurs during processing (errors are logged).
Queries execute asynchronously, processing LLM calls in parallel across rows for improved performance. Each invocation queues an asynchronous call to the specified model provider.
The function honors DataFusion concurrency configuration for parallel requests. When multiple models with different providers are configured (e.g., OpenAI and Anthropic), each provider processes requests in parallel according to concurrency settings.
Limits: Maximum batch size of 100 rows per query; maximum input message size of 1 MB per message.
Models must be configured in spicepod.yaml under the models section. See Large Language Models for configuration details.
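A sketch of how this might be used, assuming a text-generation model is configured in spicepod.yaml; the `articles` table and its columns are hypothetical:

```sql
-- Pass each row's text to the configured model as a prompt.
SELECT id, ai('Summarize in one sentence: ' || body) AS summary
FROM articles
LIMIT 10;
```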
`embed`: Generates vector embeddings for text using specified embedding models. Supports both single text strings and arrays of text for batch processing.
Returns a list of floating-point values representing the embedding vector. For array inputs, returns embeddings for each element, preserving the input array length including NULL values.
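A sketch under stated assumptions: `documents` is a hypothetical table, and `'embeddings_model'` stands in for the name of an embedding model configured in spicepod.yaml:

```sql
-- Produce an embedding vector for each row's text content.
SELECT id, embed(content, 'embeddings_model') AS content_embedding
FROM documents;
```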
`bucket`: Assigns a deterministic bucket identifier to a value by hashing the input and projecting it into a fixed number of buckets. Helpful for partition_by expressions and for co-locating related rows during acceleration refreshes.
Returns an Int64 in the range [0, num_buckets - 1]. The same input value always maps to the same bucket for a given num_buckets.
In spicepod.yaml, use the function directly inside partition_by to build file-based accelerations:
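A minimal sketch of such a configuration; the dataset source, name, bucket count, and column are illustrative, and other fields follow the standard spicepod dataset layout:

```yaml
datasets:
  - from: s3://my-bucket/events/  # hypothetical source path
    name: events
    acceleration:
      enabled: true
      partition_by: bucket(16, user_id)  # hash user_id into 16 buckets
```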
`truncate`: Rounds numeric values down to the nearest multiple of the specified width. Useful when partitioning timestamps or numeric identifiers into wider ranges.
width: The width to round down to (e.g., 10, 900, or 3600).
Returns a numeric value of the same type as value, rounded down so that the result is evenly divisible by width.
Spice.ai aims for compatibility with PostgreSQL, but some functions or behaviors may differ depending on the underlying engine version.