We are excited to announce the release of Spice.ai v0.5-alpha! 🥇
Highlights include a new learning algorithm called "Soft Actor-Critic" (SAC), fixes to the behavior of `spice upgrade`, and a more consistent authoring experience for reward functions.
If you are new to Spice.ai, check out the getting started guide and star spiceai/spiceai on GitHub.
The addition of the Soft Actor-Critic (Discrete) (SAC) learning algorithm is a significant improvement to the power of the AI engine. It is not set as the default algorithm yet, so to start using it, pass the `--learning-algorithm sacd` parameter to `spice train`. We'd love to get your feedback on how it's working!
With the addition of the reward function files that allow you to edit your reward function in a Python file, the behavior of starting a new training session by editing the reward function code was lost. With this release, that behavior is restored.
In addition, there is a breaking change to the variables used to access the observation state and interpretations. This change was made to better reflect the purpose of the variables and to make them easier to work with in Python.
| Previous (Type) | New (Type) |
| --- | --- |
| `prev_state` (SimpleNamespace) | `current_state` (dict) |
| `prev_state.interpretations` (list) | `current_state_interpretations` (list) |
| `new_state` (SimpleNamespace) | `next_state` (dict) |
| `new_state.interpretations` (list) | `next_state_interpretations` (list) |
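To illustrate the renamed variables, here is a minimal sketch of reward logic written against the new names. The observation keys (`portfolio_value`) and the interpretation label (`"risky"`) are hypothetical, and the function wrapper is for illustration only; in an actual Spice.ai reward function file, the variables in the table above are provided in scope by the engine.

```python
# Hypothetical sketch: the states are now plain dicts, and interpretations
# are exposed as separate lists (current_state_interpretations and
# next_state_interpretations), matching the renames in the table above.

def compute_reward(current_state, next_state,
                   current_state_interpretations,
                   next_state_interpretations):
    # Reward the change in a hypothetical "portfolio_value" observation.
    gain = next_state["portfolio_value"] - current_state["portfolio_value"]
    # Penalize actions whose resulting state was interpreted as "risky".
    if "risky" in next_state_interpretations:
        gain -= 10.0
    return gain

reward = compute_reward(
    {"portfolio_value": 100.0},
    {"portfolio_value": 105.0},
    [],
    ["risky"],
)
print(reward)  # -5.0: a gain of 5.0 minus the 10.0 risk penalty
```

Because the states are plain dicts rather than `SimpleNamespace` objects, observations are read with subscript access (`next_state["portfolio_value"]`) instead of attribute access.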
The Spice.ai CLI will no longer recommend "upgrading" to an older version. An issue was also fixed where trying to upgrade the Spice.ai CLI using `spice upgrade` on Linux would return an error.
New in this release:

- Renamed `prev_state` and `new_state` to `current_state` and `next_state` to be consistent with the reward function files.
- Fixed the `spice upgrade` command.

Spice.ai started with the vision to make AI easy for developers. We are building Spice.ai in the open and with the community. Reach out on Discord or by email to get involved. We will also be starting a community call series soon!