
Everything you need to know about Ollama in 2024

What is Ollama?

Ollama is an open-source tool that lets you run open-source large language models (LLMs) locally on your machine.

You only need an internet connection to download Ollama and the language models; after that, both work fully offline.

Your chats with the language models are stored locally on your machine, meaning no one can access them except you.

Benefits of using Ollama

Privacy & Security

Your chat messages, and any other sensitive information you give the model, stay on your local machine, so you don't have to worry about others accessing your data.

Offline Access

You don't need internet access to use the language models, so you can use them anytime and anywhere you want.

Cost Savings

You don't have to pay anything to use the open source language models.

Flexibility on choosing LLM Models

There are hundreds of open-source models available, so you can try many different models. If you don't like a model, you can simply delete it and download a new one.


How to install Ollama

The installation process is very straightforward: visit the Ollama download page, choose your operating system, download the installation file, and install it like you would any other software.


Ollama Download
ī¸đŸ’Ą

Tip

Ollama is supported on macOS, Windows and Linux
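On Linux, the download page also publishes a one-line install script. Here is a guarded sketch: the curl line is left commented so nothing installs by accident, and the check below simply reports whether the binary is already on your PATH.

```shell
# On macOS and Windows, run the downloaded installer like any other app.
# On Linux, the download page offers a one-line install script:
#
#   curl -fsSL https://ollama.com/install.sh | sh
#
# After installing, confirm the binary is available:
if command -v ollama >/dev/null 2>&1; then
  ollama --version
else
  echo "ollama is not installed yet"
fi
```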


System Requirements to run Open Source Language Models

You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.

Here are some models grouped by parameter count; lower-parameter models can run on less powerful machines.

Model Name      Parameters   Model Size   Minimum RAM Required
Qwen            0.5B         395 MB       1 GB
Gemma           2B           1.4 GB       2 GB
Dolphin Phi     2.7B         1.6 GB       4 GB
Gemma           7B           4.8 GB       8 GB
Llama 2 13B     13B          7.3 GB       12 GB
Star Coder 2    15B          9.1 GB       16 GB
Llama 2 70B     70B          39 GB        64 GB

If you have a less powerful machine, like a laptop with 4 GB of RAM, you can try models like Qwen 0.5B or Gemma 2B.

ī¸âš ī¸

Warning

The RAM figures above are only approximate; you may be able to run higher-parameter models with less RAM.


How to install Open Source Language Models

Installing open-source language models is very easy; all you have to do is run a single command in your terminal (or command-line interface, CLI).

Step 1. Open your Terminal

Step 2. Run the command ollama run model-name


Installing Language Model from macOS Terminal

That's it! Just one command.

So if you want to install Google's Gemma model, run the command ollama run gemma; Ollama will download the latest version of the Gemma model and install it for you.

Once the installation is finished, you can start to interact with the model directly from your terminal.
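As a concrete sketch for Gemma (the model tag gemma is the one used in the Ollama model library; the first run downloads the weights, roughly 4.8 GB for the 7B build). The snippet is guarded so it is safe to paste on a machine without Ollama:

```shell
# Download (on first use) and chat with Google's Gemma model.
if command -v ollama >/dev/null 2>&1; then
  ollama run gemma     # opens an interactive chat; type /bye to exit
else
  echo "ollama is not installed"
fi
```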

ī¸â„šī¸

Note

By default you can only interact with the language models from the terminal, but there are plenty of plugins that connect Ollama to a GUI (graphical user interface); you can find them on the Ollama GitHub.


How to use Open Source Language Models on Ollama

Using the models on ollama is pretty simple.

Step 1. Open your Terminal

Step 2. Run the command ollama run model-name


Installing Language Model from macOS Terminal

Yes, this is the same command you used to install the model.

The command ollama run model-name first checks your machine for the model. If Ollama finds the model already installed, it runs it in the terminal; otherwise, Ollama downloads the model and then runs it.
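This check-then-run behaviour can also be spelled out with separate subcommands. A sketch (the pull and run lines are left commented because they download multi-gigabyte weights; gemma is just an example tag):

```shell
if command -v ollama >/dev/null 2>&1; then
  # Models already downloaded to this machine:
  ollama list 2>/dev/null || echo "ollama server is not running"
  # ollama pull gemma   # download a model without starting a chat
  # ollama run gemma    # run it; pulls the model first if it is missing
else
  echo "ollama is not installed"
fi
```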



Summary

Ollama is an open-source tool that allows you to run open-source large language models (LLMs) locally on your machine. With Ollama, you can download and use many different open-source models, with no internet connection needed after the initial download.

Using language models locally can give you privacy and security as all interactions with the models are stored locally on your machine. Some of the benefits of using Ollama are enhanced privacy, offline access, cost savings and flexibility in choosing from a variety of available models.


Frequently Asked Questions

What is the use of ollama?

Ollama allows you to run open-source large language models (LLMs) locally on your machine.

Is Ollama open-source?

Yes, Ollama is open source, and you can view the source code on their GitHub.

How does ollama work?

Ollama works by downloading language models to your local machine and letting you access them at any time without an internet connection.

What are the requirements for ollama?

To use language models with Ollama, you should have at least 8 GB of RAM available to run the 7B models, 16 GB for the 13B models, and 32 GB for the 33B models.

Is Ollama available on Windows?

Yes, Ollama is available on Windows.

How do I delete models on Ollama?

To remove a model, run the command ollama rm model_name in your terminal, replacing model_name with your model's actual name.
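For example, a guarded removal sketch (gemma is only an example tag; pick a real name from the output of ollama list):

```shell
# Free the disk space used by a model you no longer want.
if command -v ollama >/dev/null 2>&1; then
  ollama rm gemma 2>/dev/null || echo "no model named gemma found"
else
  echo "ollama is not installed"
fi
```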

How do I know if Ollama is installed?

Open your terminal and run the command ollama help. If Ollama is not installed, your terminal will report an error such as "command not found".
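The same check, made safe to paste anywhere: it prints the start of the usage text if Ollama is installed, and a clear message otherwise.

```shell
if command -v ollama >/dev/null 2>&1; then
  ollama help | head -n 5     # first few lines of the usage text
else
  echo "ollama: command not found"
fi
```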

Does Ollama use GPU?

Yes, Ollama will attempt to use all the GPUs it can find.
