---
title: 'Load and Serve Models Locally'
sidebar_label: 'Local Models'
description: 'Learn how to load and serve large language models.'
sidebar_position: 6
pagination_prev: null
pagination_next: null
tags:
---
Spice supports loading and serving LLMs from various sources for embeddings and inference, including local filesystems and Hugging Face.
Models can be hosted on a local filesystem and referenced directly in the configuration. For more details, see the Filesystem Model Component.
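As a sketch, a filesystem-hosted model can be referenced in `spicepod.yaml` with a `file:` path. The model name and path below are illustrative; consult the Filesystem Model Component documentation for the exact `from` syntax and supported file formats:

```yaml
models:
  - name: my-local-model          # illustrative name
    from: file:models/llama3.2-1b # hypothetical local path to model weights
```
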
Spice integrates with Hugging Face, enabling you to use a wide range of pre-trained models. For more information, see the Hugging Face Model Component.
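A Hugging Face model is referenced similarly, using a `huggingface` source prefix. The model identifier below is an example placeholder; see the Hugging Face Model Component documentation for the precise `from` format and any authentication parameters:

```yaml
models:
  - name: my-hf-model                                  # illustrative name
    from: huggingface:huggingface.co/example-org/example-model # hypothetical model repo
```
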