Goobla

Get up and running with large language models.

macOS

Download

Windows

Download

Homebrew (macOS & Linux)

brew install goobla

Linux

curl -fsSL https://goobla.com/install.sh | sh

[!WARNING] Inspect the script or verify its checksum before running it. You can download install.sh and review it first. To check the checksum locally:

curl -fsSL https://goobla.com/install.sh -o install.sh
sha256sum install.sh

Compare the output against the value published on the releases page before running sh install.sh.

Manual install instructions

Docker

The official Goobla Docker image goobla/goobla is available on Docker Hub.
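
A typical way to start the container, as a sketch: the volume name and mount path below follow common Docker Hub conventions and are assumptions, not confirmed defaults.

# Persist downloaded models in a named volume and expose the API port (assumed mount path)
docker run -d -v goobla:/root/.goobla -p 11434:11434 --name goobla goobla/goobla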

Libraries

Community

Quickstart

To run and chat with Gemma 3:

goobla run gemma3

Model library

The list of models Goobla supports is available at goobla.com/library.

Here are some example models that can be downloaded:

| Model | Parameters | Size | Download |
| ----- | ---------- | ---- | -------- |
| Gemma 3 | 1B | 815MB | goobla run gemma3:1b |
| Gemma 3 | 4B | 3.3GB | goobla run gemma3 |
| Gemma 3 | 12B | 8.1GB | goobla run gemma3:12b |
| Gemma 3 | 27B | 17GB | goobla run gemma3:27b |
| QwQ | 32B | 20GB | goobla run qwq |
| DeepSeek-R1 | 7B | 4.7GB | goobla run deepseek-r1 |
| DeepSeek-R1 | 671B | 404GB | goobla run deepseek-r1:671b |
| Llama 4 | 109B | 67GB | goobla run llama4:scout |
| Llama 4 | 400B | 245GB | goobla run llama4:maverick |
| Llama 3.3 | 70B | 43GB | goobla run llama3.3 |
| Llama 3.2 | 3B | 2.0GB | goobla run llama3.2 |
| Llama 3.2 | 1B | 1.3GB | goobla run llama3.2:1b |
| Llama 3.2 Vision | 11B | 7.9GB | goobla run llama3.2-vision |
| Llama 3.2 Vision | 90B | 55GB | goobla run llama3.2-vision:90b |
| Llama 3.1 | 8B | 4.7GB | goobla run llama3.1 |
| Llama 3.1 | 405B | 231GB | goobla run llama3.1:405b |
| Phi 4 | 14B | 9.1GB | goobla run phi4 |
| Phi 4 Mini | 3.8B | 2.5GB | goobla run phi4-mini |
| Mistral | 7B | 4.1GB | goobla run mistral |
| Moondream 2 | 1.4B | 829MB | goobla run moondream |
| Neural Chat | 7B | 4.1GB | goobla run neural-chat |
| Starling | 7B | 4.1GB | goobla run starling-lm |
| Code Llama | 7B | 3.8GB | goobla run codellama |
| Llama 2 Uncensored | 7B | 3.8GB | goobla run llama2-uncensored |
| LLaVA | 7B | 4.5GB | goobla run llava |
| Granite-3.3 | 8B | 4.9GB | goobla run granite3.3 |

[!NOTE] You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.

Customize a model

Import from GGUF

Goobla supports importing GGUF models via a Modelfile:

  1. Create a file named Modelfile with a FROM instruction pointing to the local filepath of the model you want to import.

    FROM ./vicuna-33b.Q4_0.gguf
    
  2. Create the model in Goobla

    goobla create example -f Modelfile
    
  3. Run the model

    goobla run example
    

Import from Safetensors

See the guide on importing models for more information.
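
As a sketch, importing Safetensors weights likely follows the same Modelfile pattern as GGUF, with FROM pointing at a local directory of weight files; the path below is a hypothetical example.

FROM /path/to/safetensors/directory

After that, goobla create and goobla run work as in the GGUF steps above.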

Customize a prompt

Models from the Goobla library can be customized with a prompt. For example, to customize the llama3.2 model:

goobla pull llama3.2

Create a Modelfile:

FROM llama3.2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system message
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""

Next, create and run the model:

goobla create mario -f ./Modelfile
goobla run mario
>>> hi
Hello! It's your friend Mario.

For more information on working with a Modelfile, see the Modelfile documentation.

CLI Reference

Create a model

goobla create is used to create a model from a Modelfile.

goobla create mymodel -f ./Modelfile

Pull a model

goobla pull llama3.2

This command can also be used to update a local model. Only the diff will be pulled.

Remove a model

goobla rm llama3.2

Copy a model

goobla cp llama3.2 my-model

Multiline input

For multiline input, you can wrap text with """:

>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.

Multimodal models

goobla run llava "What's in this image? /Users/jmorgan/Desktop/smile.png"

Output: The image features a yellow smiley face, which is likely the central focus of the picture.

Pass the prompt as an argument

goobla run llama3.2 "Summarize this file: $(cat README.md)"

Output: Goobla is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.

Show model information

goobla show llama3.2

List models on your computer

goobla list

List which models are currently loaded

goobla ps

Stop a model which is currently running

goobla stop llama3.2

Start Goobla

goobla serve is used when you want to start Goobla without running the desktop application.

Change the bind address

Goobla binds to 127.0.0.1:11434 by default. Set the GOOBLA_HOST environment variable to change the bind address:

GOOBLA_HOST=0.0.0.0:11434 goobla serve

See the FAQ for more details.
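
To confirm the server is reachable from another machine, you can call the REST API (see below) at the new address. 192.0.2.10 is a placeholder address, not a real host:

curl http://192.0.2.10:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Hello"
}'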

Building

See the developer guide

Running local builds

After building Goobla (see the developer guide above), start the server:

./goobla serve

Finally, in a separate shell, run a model:

./goobla run llama3.2

REST API

Goobla has a REST API for running and managing models.

Generate a response

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt":"Why is the sky blue?"
}'
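
Responses from this endpoint typically stream as a series of JSON objects. If Goobla follows that convention, a stream flag should return a single JSON object instead; the stream parameter here is an assumption, so check the API documentation:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'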

Chat with a model

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
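
The messages array can also carry conversation history. A sketch of a multi-turn request, assuming the endpoint accepts the usual system, user, and assistant roles:

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    { "role": "system", "content": "You are a concise assistant." },
    { "role": "user", "content": "Why is the sky blue?" },
    { "role": "assistant", "content": "Sunlight scatters off air molecules, and blue light scatters most." },
    { "role": "user", "content": "Why does it turn red at sunset?" }
  ]
}'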

See the API documentation for all endpoints.

Community Integrations

Web & Desktop

Cloud

Terminal

Apple Vision Pro

Database

Package managers

Libraries

Mobile

Extensions & Plugins

Supported backends

Observability