Run your own LLM locally with Ollama



When ChatGPT launched in November 2022, it made waves as a ready-to-use large language model (LLM). As time went by, OpenAI released more variations and even made the chat usable without logging in. Interestingly, this wave of LLM development paved the way for many home-grown applications. As a hobbyist, or even a software engineer at a large technology company, one might prefer to have their own local LLM running.

Local LLM

In order to run a local LLM, one essentially needs a model running locally. This is where Ollama comes in. Ollama is an easy-to-use tool that makes it simple to install models on your own machine. These models can then be used to build local LLM applications via other libraries, such as the popular LangChain. The list of models that Ollama supports can be found in their library.
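As a rough sketch of what talking to a local model looks like: once a model has been pulled (e.g. with `ollama pull llama3`), the Ollama server exposes a REST endpoint on port 11434 by default, and its `/api/generate` route accepts a JSON body with the model name and prompt. The snippet below uses only the Python standard library; the model name `llama3` and the helper names are illustrative choices, not part of Ollama itself.

```python
import json
import urllib.request

# Default address of the local Ollama server's generate endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    `stream=False` asks the server for one complete response
    instead of a stream of partial tokens.
    """
    return {"model": model, "prompt": prompt, "stream": False}


def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the reply text.

    Requires the Ollama server to be running and the model to be
    pulled already (e.g. `ollama pull llama3`).
    """
    body = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the server running, `ask("llama3", "Why is the sky blue?")` returns the model's answer as a plain string; libraries like LangChain wrap essentially this same local endpoint behind a higher-level interface.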

Open Source Models

There are many large language models available these days, from Meta's Llama to OpenAI's GPT-4o. Some of these models, like the former, are free to use; others, like the latter, sit behind a paywalled API. Almost all models' training data is closed source, though, so the idea of "open source" here is limited to the models' availability and use. Another thing one may observe…


An aspiring robotics researcher, currently in my 4th year of undergraduate studies, working on optimising the ROS navigation packages.