Mastering Ollama: A Step-by-Step Guide to Installing Ollama and the Open WebUI Frontend

Introduction

Are you tired of relying on cloud services for your language models? Want to maintain control over your data while saving costs? Look no further! Today, we're excited to introduce you to Ollama, an open-source project written in Go that enables you to run large language models right on your own devices.

Ollama's compatibility with the Open WebUI project offers a seamless user experience without compromising data privacy or security. And the best part? The Windows installer lets you easily harness the power of your Nvidia GPU for processing requests!

Section 1: Installing Ollama

The Windows installation process is relatively simple and efficient; with a stable internet connection, you can expect to be operational within just a few minutes.

Step 1: Download and Install Ollama

  • Visit the Ollama GitHub page and scroll down to the "Windows preview" section, where you will find the "Download" link.
  • Click the link to start the download. Once the installer has finished downloading, run it to begin the installation.

Ollama Windows Installer

  • To start the installation, click the Install button.

Ollama Installing to user directory

The installer installs Ollama in the C:\Users\technerd\AppData\Local\Programs\Ollama directory.
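
Once the installer finishes, a quick sanity check is to open a new terminal and ask the CLI for its version (the Windows installer normally adds ollama to your PATH):

ollama --version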

Step 2: Setup environment variables

  • Before launching Ollama and installing Open WebUI, you need to configure an environment variable so that Ollama listens on all interfaces rather than just localhost.
  • To set the environment variable, search Windows for "environment variables" and open the editor.

Search for environment variables editor

Add OLLAMA_HOST variable

  • Under "User variables", click "New…"; a dialog will pop up where you can create the variable.
  • For the Variable name, enter OLLAMA_HOST
  • For the Variable value, enter 0.0.0.0:8080 (if this port is already in use, pick a different port number).

Variable value

  • Once you have entered the information, click "OK."
  • If you prefer the CLI, you can also set the variable with the set command, as sketched after the screenshot.

CLI add environment variable
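
A minimal sketch of the CLI route: set only affects the current Command Prompt session, while setx persists the variable for the current user (newly opened terminals and apps pick it up; already-running ones do not).

:: Session-only (disappears when this terminal closes)
set OLLAMA_HOST=0.0.0.0:8080

:: Persistent for the current user (takes effect in new processes)
setx OLLAMA_HOST "0.0.0.0:8080"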

Step 3: Run Ollama

  • The easiest way to start Ollama on Windows is to search for it in the Start menu and run it.

Search for Ollama Windows App

  • You can confirm it is running by checking for the Ollama icon in the system tray.
  • To verify that Ollama is listening on the port you defined, open a web browser and go to http://<ip address>:8080; if it's running, you will see the message "Ollama is running" in the browser. The same check works from the command line, as sketched after the screenshot.

Ollama API check
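
A quick command-line version of the same check, assuming curl is available and <server-ip> is the address of the Ollama machine:

curl http://<server-ip>:8080

A healthy server responds with the text "Ollama is running".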

Section 2: Installing Open WebUI with Docker

Step 1: Creating and running the Docker container

  • Installation instructions for various Docker setups can be found at https://github.com/open-webui/open-webui
  • For our installation, we used the option for Ollama running on another server.
  • To begin the installation, use the following Docker command.
docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=http://<server-ip>:8080 \
  -v open-webui:/app/backend/data --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main

If there is a conflict with the port the Docker container exposes, you can change the first port number after the -p flag; by default it's set to 3000, but I changed it to 9090.
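
For example, here is the same command with the host port swapped to 9090 (the container-side 8080 stays the same):

docker run -d -p 9090:8080 -e OLLAMA_BASE_URL=http://<server-ip>:8080 \
  -v open-webui:/app/backend/data --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main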

Here is what the container will look like in the output of docker ps once it is installed and running:

CONTAINER ID   IMAGE                                COMMAND           CREATED       STATUS                 PORTS                                       NAMES
ae20f2d369e9   ghcr.io/open-webui/open-webui:main   "bash start.sh"   7 hours ago   Up 7 hours (healthy)   0.0.0.0:9090->8080/tcp, :::9090->8080/tcp   open-webui
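
If the container shows up as unhealthy or keeps restarting, checking its logs is usually the fastest way to find out why (this assumes the container name from the run command above):

docker logs -f open-webui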

Step 2: Logging into Open WebUI

  • Now that the Docker container is up and running, we can hop over to a web browser and log in.
  • The first time you access Open WebUI, you will need to sign up by clicking "Don’t have an account? Sign up"

Open WebUI create account

  • Enter your information and click Create Account

  • To sign in, enter your email address and password, then click "Sign in"

Open WebUI login

  • After you log in, you will see an interface that is quite similar to ChatGPT's.

Open WebUI main page

Section 3: Configuring Ollama LLMs

  • One of the first things you will need to do after logging in for the first time is to configure Open WebUI's settings by clicking on your username in the bottom left and then clicking Settings.

Open WebUI settings

  • In the "Settings" menu, you will find numerous adjustments, but the most crucial one involves incorporating models for use. To download new models, simply click on the "Models" option within the "Settings" menu.

Open WebUI models settings

  • If you don’t see "Pull a model from Ollama.com", it is likely that Open WebUI can't contact the Ollama server. To verify connectivity, go to the "Connections" option in the Settings menu.

Open WebUI connections settings

  • In the "Ollama Base URL" field, enter the IP address and port number you configured when installing Ollama. To check connectivity, click the arrows at the right side of the URL field.
  • If everything is working, you will get a "Server connection verified" message at the top of the window. If not, see the troubleshooting sketch after the screenshot.

Open WebUI connection verify
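
If the check fails, one way to isolate the problem is to request the Ollama URL from inside the container itself (a sketch, assuming curl is available in the image):

docker exec open-webui curl -s http://<server-ip>:8080

If that prints "Ollama is running" but Open WebUI still can't connect, double-check the Ollama Base URL; if the curl itself fails, look at the network path, including the Windows firewall on the Ollama machine, which may be blocking inbound connections on the port you chose.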

  • The initial model I chose was "llama3". To install it, return to the "Models" section and, under "Pull a model from Ollama.com", enter "llama3", then click the "Download" button. (The same pull can also be done from the command line, as sketched after the screenshot.)

Open WebUI pull llama3
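
If you prefer the command line, the same model can be pulled with the ollama CLI on the Windows machine, and ollama list will confirm it is installed:

ollama pull llama3
ollama list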

  • Once the download is complete, you will be able to use the LLM from the main Open WebUI page.
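
The downloaded model is also reachable directly over Ollama's REST API, independent of Open WebUI. A minimal sketch against the generate endpoint, using the host and port from Section 1 (quoting shown POSIX-style; Windows cmd requires escaped double quotes instead):

curl http://<server-ip>:8080/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'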

Conclusion

Congratulations! You’ve successfully installed Ollama and Open WebUI on your Windows system. We went through the steps to download and run the Ollama Windows installer, and covered how to make the server listen on all interfaces by configuring an environment variable. Then we walked through creating an Open WebUI Docker container, downloading the llama3 LLM, and troubleshooting connectivity issues between Open WebUI and Ollama. Now that you have Ollama and Open WebUI installed, you can start using the LLM to write documents, answer questions, and even write code.

Additional Resources

  • For more information on Ollama and its features, visit the official website at www.ollama.com
  • You can find the GitHub repository at github.com/ollama/ollama.
  • Open WebUI’s GitHub repository is github.com/open-webui/open-webui
  • Join the Ollama Discord server to connect with other users, share knowledge, and get help with any questions or issues you may have.