Getting Started 👋
Open-source models with Ollama
Connect with open-source models while maintaining full privacy.
By default, the Alice app stores all settings locally on your Mac or PC. This means that if your hardware is powerful enough to run local, open-source language models, you can maintain complete privacy over your conversations.
The easiest way to download and run an open-source language model locally is to use https://ollama.com. First, download the app and install it on your computer.
Note: After you open the app, nothing visible will happen; Ollama runs in the background.
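To confirm the server is up, you can send a quick request to its default local port (a minimal check, assuming the standard port 11434):
curl http://localhost:11434
If Ollama is running, it should reply with a short status message such as "Ollama is running".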

Next, you need to download a model. At the time of writing, two of the best open-source options are Gemma 3 and DeepSeek-R1. Let's download Gemma.
Open your terminal or PowerShell and run the following command:
ollama run gemma3
If you have at least 64 GB of RAM, you can run a larger version that responds more slowly but provides better answers:
ollama run gemma3:27b-it-q8_0
The model will be downloaded, which may take some time depending on your Internet connection.
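Once the download finishes, you can confirm the model is available locally with Ollama's list command:
ollama list
The output should include an entry such as gemma3:latest along with its size.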
Then open the Alice app, go to Settings -> Apps, and set up a connection with the settings shown below. Note: The context window and output limit values can be found on the model's page on the Ollama website.

Connection Name: Ollama
API Endpoint URL: http://localhost:11434/v1/chat/completions
Model Name: gemma3
Context Window: 128000
Output Limit: 8196
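Before saving the connection, you can verify the endpoint responds by sending a test request from the terminal (a minimal sketch of the OpenAI-compatible call; the exact payload Alice sends may differ):
curl http://localhost:11434/v1/chat/completions -H "Content-Type: application/json" -d '{"model": "gemma3", "messages": [{"role": "user", "content": "Say hello"}]}'
A JSON response containing a choices array means the endpoint URL and model name above are correct.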
Now, enable the connection by switching the toggle, and make sure to activate the "Enable Streaming" option as well.
From now on, the Gemma 3 model will be available in the models list.

Notes:
For the first query, the model must be loaded into memory, so it will take much longer than subsequent requests.
The longer the conversation, the longer it will take to generate a response, so it's a good idea to keep threads short.
The overall performance and capabilities of open-source language models are significantly lower than those of commercial models.
You can add any text and vision model supported by Ollama. For vision models, just make sure to enable the "Vision Support" toggle.
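For example, to try a vision model, you could pull one from the Ollama library (llava is one option here; any vision model Ollama supports works the same way):
ollama pull llava
Then add a connection for it in Alice exactly as above, using llava as the Model Name and with the "Vision Support" toggle switched on.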