If you’re looking to run the native Ollama app on Windows 10 and make it accessible to other computers on your local network, this guide walks you through the installation, configuration, and network setup. Whether you’re using Ollama for AI-powered tasks or exploring local hosting options, these steps ensure you can connect from multiple devices seamlessly.
Step 1: Download and Install the Ollama App
The first step is to get the native Windows version of Ollama and set it up:
- Download the Installer:
  - Head over to the Ollama website and download the Windows installer.
- Install the Ollama App:
  - Run the downloaded file and follow the installation instructions. It's a straightforward process; just follow the prompts until it's done. A quick post-install check follows this list.
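Once the installer finishes, you can confirm that the `ollama` command-line client landed on your PATH by asking it for its version from a fresh Command Prompt (the exact version reported will vary):

```bash
ollama --version
```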
Step 2: Run Ollama Locally
With the app installed, let’s verify that it’s running properly.
- Launch the Ollama App:
  - Find Ollama in your Start menu and launch it. Alternatively, you can navigate to the installation directory and open it from there.
- Verify the Installation:
  - Open Command Prompt and type:

    ```bash
    ollama list
    ```

  - If Ollama is set up correctly, the command prints a table of downloaded models rather than an error; on a fresh install the table is empty (see the example after this list for downloading a model). This confirms the app and its background server are running properly.
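On a brand-new install, `ollama list` comes back empty. You can pull a model and give it a one-off prompt straight from the command line; `llama3.1` here is only an example, and any model from the Ollama library works the same way:

```bash
ollama pull llama3.1
ollama run llama3.1 "Say hello in one sentence."
```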
Step 3: Configure Ollama to Listen on Your Network
Ollama defaults to listening on localhost only (127.0.0.1, port 11434), so it is reachable solely from the machine it runs on. To access it from other devices, you'll need to tell it to bind to all network interfaces.
- Set the OLLAMA_HOST Environment Variable:
  - The Windows build of Ollama reads its network settings from the OLLAMA_HOST environment variable rather than a configuration file. Setting it to 0.0.0.0 allows Ollama to accept connections from any IP address.
  - Open Command Prompt and run:

    ```bash
    setx OLLAMA_HOST "0.0.0.0"
    ```

  - This keeps Ollama's default port, 11434. To use a different port, include it in the value (for example, setx OLLAMA_HOST "0.0.0.0:11435"). If you prefer a GUI, you can set the same variable under Control Panel > System > Advanced system settings > Environment Variables.
- Restart Ollama:
  - Quit the app (right-click the Ollama icon in the system tray and choose Quit) and relaunch it so the new environment variable takes effect; a quick way to verify the change is shown after this list.
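Before moving on to the firewall, it's worth confirming that the server really is bound to all interfaces. A quick local check, assuming you kept the default port:

```bash
netstat -an | findstr 11434
curl http://localhost:11434
```

The netstat output should show the port listening on 0.0.0.0 rather than 127.0.0.1, and the curl request should return a short "Ollama is running" message.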
Step 4: Configure Windows Firewall to Allow Connections
To make sure your Windows firewall allows access through the designated port, follow these steps:
- Access Firewall Settings:
  - Press Windows + R, type control, and press Enter to open the Control Panel.
  - Go to System and Security > Windows Defender Firewall > Advanced Settings.
- Create Inbound Rule:
  - Click Inbound Rules on the left, then select New Rule on the right.
  - Choose Port and click Next.
  - Select TCP and enter the port number (11434, unless you changed it in Step 3).
  - Choose Allow the connection and apply the rule to the profiles you need. Private is usually enough on a home network; enabling Public exposes the port on untrusted networks.
  - Name the rule something like “Ollama Port Access” and finish. (A command-line shortcut for this rule is shown after this list.)
- Create Outbound Rule (Optional):
  - Windows allows outbound connections by default and handles replies to inbound connections automatically, so a matching outbound rule is normally unnecessary. Create one with the same settings only if your environment blocks outbound traffic by policy.
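If you'd rather skip the GUI, the same inbound rule can be created in one line from an elevated Command Prompt (the rule name is arbitrary; adjust the port if you changed it):

```bash
netsh advfirewall firewall add rule name="Ollama Port Access" dir=in action=allow protocol=TCP localport=11434
```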
Step 5: Access Ollama from Another Device on Your Network
Now that everything is set up, let’s test and access Ollama from another computer on your local network.
- Find the IP Address:
  - On the computer running Ollama, open Command Prompt and type:

    ```bash
    ipconfig
    ```

  - Note the IPv4 address of the active network adapter (e.g., 192.168.1.100).
- Test the Connection:
  - On another computer on the same network, open a terminal or command prompt and enter:

    ```bash
    curl http://192.168.1.100:11434/api/tags
    ```

  - You should get back a JSON response listing the models installed on the host, indicating the setup works.
- Make Requests:
  - To send prompts to Ollama, use the following command format:

    ```bash
    curl -X POST http://192.168.1.100:11434/api/generate -d '{"model": "llama3.1", "prompt": "Hello, Ollama!", "stream": false}'
    ```

  - Replace the IP address, port, and model name with your actual values. Setting "stream": false returns one complete JSON response instead of a stream of partial ones; if you run this from Windows Command Prompt, wrap the JSON in double quotes and escape the inner ones. You can now access Ollama from other devices on your network, allowing distributed use of your AI resources; a chat-style variant of this request is shown after this list.
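If you plan to hold multi-turn conversations rather than send one-off prompts, Ollama also exposes a chat endpoint that accepts a message history. A minimal sketch, assuming the same host, port, and model as above:

```bash
curl http://192.168.1.100:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [
    {"role": "user", "content": "Hello, Ollama!"}
  ],
  "stream": false
}'
```

The response contains a single message object with the model's reply; pass the accumulated messages array back in on each turn to preserve conversation context.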
Wrapping Up
By following these steps, you've set up Ollama to be accessible from any computer on your local network, expanding its use beyond a single device. This is especially useful for testing, collaborating, or sharing AI capabilities across multiple machines. Keep in mind that Ollama's API has no built-in authentication, so only expose it on networks you trust. With everything in place, you can make full use of Ollama's models and functionality wherever you need it on your network.