Working with Ollama, Part 2
In the first part of our article on Ollama, we demonstrated how to install Ollama and download local models. In this second part, we cover more advanced usage: customizing models with modelfiles and integrating Ollama with the AnythingLLM frontend. We show how these tools make managing and using local AI models more efficient.
Modelfiles
A modelfile is a configuration file for defining and managing models in Ollama. With modelfiles, Ollama allows you to “create new models” or tweak existing ones.
It’s important to note that this is not fine-tuning existing models (which can be a complex task beyond Ollama’s scope) but rather an adjustment of model configuration parameters. A modelfile can be used to embed a custom prompt into the model or to adjust context length, temperature, templates, and more, all of which can also be controlled via Ollama’s REST API or the various Ollama libraries.
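For example, the same options can be passed per request through the REST API. A minimal sketch, assuming Ollama is listening on its default port 11434:
curl http://localhost:11434/api/generate -d '{
  "model": "gemma2:2b",
  "prompt": "Why is the sky blue?",
  "stream": false,
  "options": { "temperature": 0.8, "num_ctx": 4096 }
}'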
When a model is installed under Ollama for the first time, a default modelfile is created and can be viewed with:
ollama show gemma2:2b --modelfile
This produces output like:
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this, replace FROM with:
# FROM gemma2:2b
FROM /usr/share/ollama/.ollama/models/blobs/sha256-7462734796d67c40ecec2ca98eddf970e171dbb6b370e43fd633ee75b69abe1b
TEMPLATE """{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if or (eq .Role "user") (eq .Role "system") }}<start_of_turn>user
{{ .Content }}<end_of_turn>
{{ if $last }}<start_of_turn>model
{{ end }}
{{- else if eq .Role "assistant" }}<start_of_turn>model
{{ .Content }}{{ if not $last }}<end_of_turn>
{{ end }}
{{- end }}
{{- end }}"""
PARAMETER stop <start_of_turn>
PARAMETER stop <end_of_turn>
LICENSE """Gemma Terms of Use
[…]
You can modify this modelfile to generate a “new” model based on the original:
FROM gemma2:2b
TEMPLATE """{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if or (eq .Role "user") (eq .Role "system") }}<start_of_turn>user
{{ .Content }}<end_of_turn>
{{ if $last }}<start_of_turn>model
{{ end }}
{{- else if eq .Role "assistant" }}<start_of_turn>model
{{ .Content }}{{ if not $last }}<end_of_turn>
{{ end }}
{{- end }}
{{- end }}"""
PARAMETER stop <start_of_turn>
PARAMETER stop <end_of_turn>
SYSTEM """
You are a helpful assistant, but you seem to have a dark past riddled with many unfortunate events, to which you often make dark and unsettling references that tend to make people uneasy around you.
"""
This modelfile can then be used with ollama create to produce a new model:
ollama create jared --file jared.modelfile
Ollama will derive a model from gemma2:2b that you can run normally with ollama run.
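For example (the prompt is just an illustration):
ollama run jared "Tell me about your weekend."
Thanks to the embedded SYSTEM prompt, the responses should now carry the model’s gloomy persona.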
AnythingLLM and Ollama
AnythingLLM is a popular all-in-one AI application for working with LLMs. Although there is overlap between Ollama and AnythingLLM, the desktop version of AnythingLLM provides native integration with Ollama.
AnythingLLM can detect a running Ollama instance, which simplifies the setup of an Ollama-powered AnythingLLM workspace.
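If the automatic detection fails, you can check whether Ollama is reachable at its default address, for instance with:
curl http://localhost:11434/api/tags   # lists the locally installed models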
Once set up, the locally installed Ollama models are available directly within AnythingLLM.
Outlook
This two-part article has provided an initial overview of Ollama. There are many exciting use cases we haven’t covered, especially around the Ollama API, which enables powerful automation for evaluating LLMs, creating templates, and assessing the impact of parameter adjustments.
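As a small taste, here is a sketch of such a parameter sweep against a local Ollama instance (model, prompt, and temperature values are arbitrary examples; jq is only used to extract the response text):
for t in 0.2 0.8 1.4; do
  echo "--- temperature $t ---"
  # "stream": false makes the API return a single JSON object
  curl -s http://localhost:11434/api/generate -d "{
    \"model\": \"gemma2:2b\",
    \"prompt\": \"Why is the sky blue?\",
    \"stream\": false,
    \"options\": {\"temperature\": $t}
  }" | jq -r .response
done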