It’s been a bit since I’ve had a chance to write a blog post about something technical. Like most people, I’ve been trying to dive deeper into AI and make it make me more efficient. I’ve spent quite a bit of time in the last week or so with Google’s Gemini, which is great, but I’m really interested in using AI to summarize or get information about content that I’ve stored in Obsidian, which I’ve discussed before (see Productivity improvements with Obsidian, OmniFocus and Johnny Decimal). Because my Obsidian vault contains a lot of private information (I take a lot of meeting notes, for example), I didn’t want to connect it to any public services.

I couldn’t find an easy guide for doing this, so this is that guide. It was pretty straightforward and simple, so hopefully it will help others. I’ll likely follow up in the future with more things that I discover and learn.

We’ll be doing two things here. The first is installing Private GPT, because my initial reading made me think I needed it. You don’t really, although it looks useful and might be a nice tie-in to some other applications. We’ll also be installing Ollama and some Obsidian plugins that talk directly to it, without Private GPT as an intermediary. The instructions assume a macOS system, although it shouldn’t be too hard to do this on Linux as well.

First, we install Ollama:

$ brew install --cask ollama
$ brew install ollama

You can use the GUI application, but we want to run Ollama on the command line and make sure it works:

$ ollama serve
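As an aside, Ollama listens on 127.0.0.1:11434 by default, which is the address we’ll point the Obsidian plugins at later. If you ever need it to bind elsewhere, you can set OLLAMA_HOST when starting the server (the default is fine for this guide):

$ OLLAMA_HOST=127.0.0.1:11434 ollama serve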

In another window, download the models you want to use:

$ ollama pull llama3.1
$ ollama pull nomic-embed-text
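Before wiring anything else up, it’s worth a quick smoke test to confirm that the model actually responds; any short prompt will do:

$ ollama run llama3.1 "Explain what Obsidian is in one sentence."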

When this is done, install Private GPT by first cloning the git repository:

$ cd ~/git
$ git clone https://github.com/zylon-ai/private-gpt
$ cd private-gpt

Create a Python virtual environment. If you don’t have Python 3.11, you’ll need to install it. My Homebrew setup has 3.13, so I also needed to install 3.11 (Private GPT will not work with 3.13):

$ brew install python@3.11
$ python3.11 -m venv .venv
$ source .venv/bin/activate
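Private GPT uses Poetry to manage its dependencies, so if you don’t already have it, install it first (Homebrew works; pipx is another option):

$ brew install poetry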

Install the dependencies that Private GPT needs:

$ poetry install --extras "ui llms-ollama embeddings-ollama vector-stores-qdrant"

Now you can run Private GPT (PGPT_PROFILES=ollama tells it to load the Ollama-backed settings profile):

$ PGPT_PROFILES=ollama make run

It will take a bit to initialize and start. Once you see something like this, it’s ready:

15:16:59.694 [INFO    ]             uvicorn.error - Application startup complete.
15:16:59.695 [INFO    ]             uvicorn.error - Uvicorn running on http://0.0.0.0:8001 (Press CTRL+C to quit)

Now you can connect to http://localhost:8001 and use Private GPT through its web interface.

But you don’t need Private GPT to work with Obsidian; you just need Ollama. There are two plugins to install, both community plugins that can be installed from within Obsidian:

Obsidian AI Providers

This is where you configure the AI provider for Local GPT to use. It’s where I expected to configure Private GPT, but I found that it will talk to Ollama directly. The configuration is pretty straightforward: there are a number of providers you can choose from, and Ollama is one of them. Give it a name, and the URL should be http://127.0.0.1:11434. Click the refresh button next to the Model dropdown and you can select whichever model you like that you loaded into Ollama (in my case, llama3.1:latest). If you followed the instructions for Private GPT and Ollama above, you can add a second provider here for the nomic-embed-text:latest model as well.
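If the plugin can’t connect, it’s worth confirming that Ollama is actually answering on that URL. A quick check from the terminal (the /api/tags endpoint lists the models you’ve pulled):

$ curl http://127.0.0.1:11434/api/tags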

Local GPT

There’s almost nothing to configure here at the outset; you can find some different prompts, but the defaults are fine to start with. All you really need to set here is the Main AI Provider: point it to what you configured in Obsidian AI Providers for llama3.1:latest. You can also add the Embedding AI Provider and point it to the nomic-embed-text:latest provider.

Out of the box there’s no context menu, so you need to use the command palette or set a hotkey. I went to Obsidian’s Hotkeys configuration, filtered on “GPT”, and bound OPT+CMD+G to Local GPT: Show context menu, which brings up all of the available prompts.

I don’t believe the models are trained on any of the data in my Obsidian vault, so in the context of Obsidian alone I don’t really need Private GPT. For anything public outside of my vault I can use ChatGPT or Google’s Gemini, so there doesn’t appear to be a reason to run Private GPT unless I can train it on my vault data, which is something I’ll experiment with in the future.

To have Ollama run persistently in the background, start it with brew services:

$ brew services start ollama
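You can confirm it’s registered and running (brew services will also restart it at login) with:

$ brew services list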

At this point, if you like, you can explore other models in Ollama, such as the DeepSeek models. The full list is available in the Ollama model library at https://ollama.com/library. You can interact with your local model directly from Obsidian using the plugins discussed above, or through Private GPT.
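For example, to try one of the DeepSeek reasoning models (deepseek-r1 here; smaller tags are available if you’re tight on memory):

$ ollama pull deepseek-r1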

I’ve experimented with it a bit and it’s pretty fast. Of course, mileage will vary, but I’m using this on a Mac Studio M1 and it’s pretty zippy with what I’ve tried. I plan to fiddle with it more, and ideally train the local model in some way on my data to get better targeted information across my whole Obsidian vault rather than just the current document I’m working on. Maybe this weekend ;)
