How to test your LLMs locally with GUI tools, Open WebUI and LM Studio

learninjava
Feb 08, 2025 - AI

AI Series - Table of Contents

Part 1 : 2 Ways to run Ollama with DeepSeek-R1 or Tulu-3 locally - Using Docker and Directly on Local Machine

Part 2 : How to test your LLMs locally with GUI tools, Open WebUI and LM Studio [This article]

Part 3 : How to build and run your own DeepSeek-R1 RAG locally with LangChain [Upcoming]

 

Introduction 

In the last article, we saw how to run Ollama with DeepSeek locally, both with and without Docker. Click here to quickly read that article. Alright, welcome back. Now that you have Ollama installed and running locally, let's see how to test your LLMs with GUI tools: Open WebUI and LM Studio.
 

Why do you need a GUI tool to run an LLM? 

Although running a prompt from the command line works, a graphical interface is generally more convenient. A GUI tool offers a more user-friendly experience and provides options such as choosing the model, editing the prompt, or adjusting temperature settings.
Additionally, you can save the conversation history and easily share the results with others.
 

Option 1: Install and run Open WebUI and Ollama, both as Docker containers 

Step 1: Navigate and clone the repository: Open WebUI
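
If you prefer the terminal, cloning usually looks like this (assuming the main Open WebUI repository):
git clone https://github.com/open-webui/open-webui.git
cd open-webui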

The cloned repo already has a docker-compose.yaml file. We will modify this file slightly to suit our needs; the key points are listed below, followed by a rough sketch of the file.
  • The file has two services, one for Open WebUI and another for Ollama
  • The Ollama service is configured to pull the latest image and expose it on port 11434
  • Additionally, there is a health check and a command to start the Ollama service
  • The Open WebUI service depends on the Ollama service and starts the UI
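
The exact file in your clone may differ, but a minimal docker-compose.yaml matching the points above could look something like this (a sketch, not the repository's file; the image tags, volume names, and health check command are assumptions):

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"   # pick a different host port (e.g. 11435) if Ollama already runs natively on 11434
    volumes:
      - ollama:/root/.ollama
    healthcheck:
      # assumes the ollama CLI is available inside the image
      test: ["CMD", "ollama", "list"]
      interval: 30s
      retries: 5
    restart: unless-stopped

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    depends_on:
      - ollama
    ports:
      - "3000:8080"     # Open WebUI listens on 8080 inside the container
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - open-webui:/app/backend/data
    restart: unless-stopped

volumes:
  ollama:
  open-webui: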

Step 2: Start both services with Docker Compose

docker compose up
Or, if you are running the Ollama service separately (for example, natively on the host), start only the Open WebUI container with the command below
docker run -d --network=host -v /user-home/open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main

Step 3: Pull a model - by default, the Ollama container has no models:

docker exec -it ollama ollama pull deepseek-r1:1.5b
Or, if you want to run it via Docker Compose,
docker compose exec ollama ollama pull deepseek-r1:1.5b
Or, just using a cURL command,
curl http://localhost:11435/api/pull -d '{"name": "deepseek-r1:1.5b"}'
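To confirm the model was pulled (assuming the container is named ollama, as above), list the models inside the container:
docker exec -it ollama ollama list
Or, via Docker Compose,
docker compose exec ollama ollama list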

Step 4: Open the GUI at http://localhost:3000/auth

https://github.com/learninjavagithub/assets/raw/master/articles/ai/open-webui.jpg
Log in and you should see the screen below. As you can see, the deepseek-r1 model is already available.
https://github.com/learninjavagithub/assets/raw/master/articles/ai/open-webui-home.jpg

Step 5: Check container logs

docker compose logs -f ollama

Step 6: Stop all containers

docker compose down
 

Option 2: Run any model directly in LM Studio - No code or commands needed 

Step 1: Navigate to and download LM Studio: LM Studio

This is probably the easiest way to run an LLM. You don't need to run any commands or write any code. Go ahead and run the downloaded executable. You should see something like the screen below:
https://github.com/learninjavagithub/assets/raw/master/articles/ai/lm-studio-home.png

Step 2: Click on the "Discover" button in the left menu and download a model

https://github.com/learninjavagithub/assets/raw/master/articles/ai/lm-studio-download-model.png

Step 3: Load the model

https://github.com/learninjavagithub/assets/raw/master/articles/ai/lm-studio-load-model.png

Step 4: Open a new chat window and run a prompt

https://github.com/learninjavagithub/assets/raw/master/articles/ai/lm-studio-chat-prompt.png

Step 5: Click on the "Developer" button in the left menu

This option has some developer-friendly features. LM Studio can run a local server that exposes an API for interacting with the LLM. This can be used to run a prompt via an API call.
https://github.com/learninjavagithub/assets/raw/master/articles/ai/lm-studio-developer.png
Copy the cURL command and run it in any API client like Postman or Bruno. You can also run it directly in a terminal - see the sketch below.
https://github.com/learninjavagithub/assets/raw/master/articles/ai/lm-studio-local-server-curl.png
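For reference, a chat request against LM Studio's local server typically looks like this (a sketch: port 1234 is the default and configurable in the Developer tab, and the model identifier below is only an example - use the identifier shown for the model you loaded):

curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "deepseek-r1-distill-qwen-1.5b",
        "messages": [
          {"role": "user", "content": "Explain Docker volumes in one sentence."}
        ],
        "temperature": 0.7
      }'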

Step 6: Click on the "My Models" button in the left menu

All the downloaded models are listed here. You can edit the parameters or other configuration for each model.
https://github.com/learninjavagithub/assets/raw/master/articles/ai/lm-studio-models-list.png
That's all folks!! Happy coding. If you feel this helped you, keep supporting us below or by sharing the articles on social media.
 