I’ve been using Logseq for a year now and it’s become the backbone of my workflow. I have pages dedicated to specific topics, concepts, projects, meetings… all sorts of things.
During my day, when I want to note something down or write something out to think about it, the daily Logseq journal is the obvious place for it to go. It has been an invaluable habit to build. But there’s a catch: the journal can easily become a black hole. It ends up as a chaotic mix of meeting notes, fleeting thoughts, random ideas, task lists and the occasional moment of genuine insight.
Most of the time, I try to link journal items to the relevant pages. Sometimes I remember to update those pages in light of new information. But other times I forget, and those insights get buried in the timeline, only resurfacing if I explicitly search for them.
All of those things belong in the journal, but some of them also belong in permanent pages. I wanted a way to separate the signal from the noise and capture the things worth integrating into my pages, in a way that keeps them traceable back to the journal, without leaving the keyboard.
Enter Logsqueak: a proof-of-concept experiment to see if a local AI model can act as an automated gardener for a Personal Knowledge Management (PKM) system.

How Logsqueak Works
It’s a Python-based terminal UI built with Textual, using RAG (Retrieval-Augmented Generation) via Ollama. Because PKM data is highly personal, my aim was to build a tool that can run entirely on a local GPU, meaning your private journal entries never have to leave your machine. (Though you can certainly connect it to much larger cloud models if you prefer.)
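To make the "fully local" point concrete: Ollama serves an HTTP API on localhost, so a tool like this only ever talks to your own machine. Here's a minimal sketch of such a call (the model name and helper are illustrative, not Logsqueak's actual code):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(prompt, model="llama3"):
    """Build a non-streaming generate request body for the local Ollama API."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return json.dumps(payload).encode("utf-8")


def generate(prompt, model="llama3"):
    """Send the prompt to the local Ollama server; the text never leaves the machine."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(prompt, model),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Nothing here touches the network beyond `localhost`, which is the whole privacy argument in one place.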
The workflow is broken down into 3 phases:
1: Extraction (Signal vs. Noise)
In this phase, Logsqueak reads your Logseq journal and helps identify which items are ephemeral daily noise (e.g., “Morning standup at 9am”) and which are actual knowledge or insight worth keeping.
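The core of this phase is a classification prompt per journal bullet. A hedged sketch of the idea (the prompt wording and `llm` callable are hypothetical, not Logsqueak's real prompt):

```python
def classify_block(block_text, llm):
    """Ask the model whether a journal bullet is lasting knowledge or daily noise.

    `llm` is any callable taking a prompt string and returning the model's
    reply text (e.g. a local Ollama call).
    """
    prompt = (
        "Classify this Logseq journal bullet as KNOWLEDGE (a lasting insight "
        "worth keeping) or NOISE (ephemeral daily logistics).\n"
        "Reply with exactly one word.\n\n"
        f"Bullet: {block_text}"
    )
    reply = llm(prompt).strip().upper()
    return "KNOWLEDGE" if reply.startswith("KNOWLEDGE") else "NOISE"
```

Defaulting to NOISE on an unexpected reply is a deliberately conservative choice: a missed insight is recoverable from the journal, a polluted page less so.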
2: Refinement (Making it Evergreen)
Temporal context is stripped away, and additional context from parent bullet points is added in.
- Original Journal Entry:
  - Working on the new analytics dashboard
    - Finally figured out why the main chart was double-fetching data on load. The useEffect hook was missing the empty dependency array.
- Logsqueak Refinement:
  - To prevent double-fetching data on load in the analytics dashboard, ensure the useEffect hook for the main chart includes an empty dependency array.
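In practice this means the refinement prompt carries the parent bullets along with the insight itself, so the model can fold "the analytics dashboard" into the rewritten note. A hypothetical sketch of that prompt assembly:

```python
def build_refinement_prompt(insight, parents):
    """Combine a journal bullet with its parent bullets so the model can
    rewrite it as a standalone, evergreen note.

    `parents` is the chain of ancestor bullets, outermost first. The prompt
    wording is illustrative, not Logsqueak's actual prompt.
    """
    context = "\n".join(f"- {p}" for p in parents)
    return (
        "Rewrite the insight below as a timeless, self-contained note. "
        "Remove temporal wording ('today', 'finally', ...) and fold in any "
        "context needed from the parent bullets.\n\n"
        f"Parent bullets:\n{context}\n\n"
        f"Insight:\n- {insight}"
    )
```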
3: Integration (Filing it Away)
In this final phase, the most semantically relevant pages in your Logseq graph are tracked down, and the best insertion point is identified. Logsqueak will suggest exactly which page and heading the new insight belongs under.
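"Semantically relevant" here means comparing embeddings. As a sketch under the assumption that each page has been embedded (e.g. via a local embeddings model), ranking candidate pages is just cosine similarity:

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def rank_pages(insight_vec, page_vecs):
    """Return page names sorted by similarity to the refined insight.

    `page_vecs` maps page name -> embedding vector; how Logsqueak actually
    indexes pages may differ, this just shows the ranking step.
    """
    return sorted(
        page_vecs,
        key=lambda name: cosine(insight_vec, page_vecs[name]),
        reverse=True,
    )
```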
Crucially, this is where the traceability happens, and it rests on Logseq’s block properties. When an insight is integrated, Logsqueak adds an extracted-to:: property to the original journal block, linking it directly to the new block. The new block on the target page gets an id:: property linking back. This means you can always jump from your polished knowledge base straight back to the original journal entry to see the full context of what you were doing that day.
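The mechanics are simple: Logseq block properties are `key:: value` lines under a bullet, and `((uuid))` is its block-reference syntax. A sketch of the two-way link (this is the idea only, not the real write engine):

```python
import uuid


def link_blocks(journal_block_lines, page_block_lines):
    """Create the two-way link between a journal block and its refined copy.

    Gives the new page block an id:: property, then points the original
    journal block at it with extracted-to:: using Logseq's ((uuid))
    block-reference syntax. Both arguments are lists of Markdown lines.
    """
    block_id = str(uuid.uuid4())
    page_block_lines.append(f"  id:: {block_id}")
    journal_block_lines.append(f"  extracted-to:: (({block_id}))")
    return block_id
```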
All writes are performed using a custom engine specifically built for Logseq’s Markdown format, ensuring your notes stay safe. Because this is a proof-of-concept, all writes are guarded by explicit user approval—Logsqueak won’t change your files without you saying “yes.”
Try it out!
Logsqueak requires Python 3.11+ and an LLM to talk to. You can use Ollama to run everything locally.
If you’re on Fedora, getting the prerequisites running is incredibly straightforward. Since Fedora Workstation ships with recent Python versions out of the box, you’re already halfway there. You just need to grab Ollama to run the models locally, set up a virtual environment, and you’re good to go:
# Assuming you've installed Ollama
git clone https://github.com/twaugh/logsqueak.git
cd logsqueak
./setup-dev.sh
source venv/bin/activate
logsqueak init
Taming the Knowledge Graph
This tool can help you turn a pile of daily logs into a structured, searchable knowledge base. Although it can’t yet create new pages from scratch or be given custom instructions about how best to integrate things into the graph, it’s already useful enough for me to use in my daily routine.
It’s very much a proof-of-concept though, and I’d love to get some feedback from other developers and knowledge management enthusiasts. You can check out the code on GitHub.
Building Logsqueak made me realise just how much time I spend thinking about note-taking friction. While Logsqueak handles my fast, keyboard-driven daily logging, I actually do a lot of my deep thinking away from the screen on a Ratta Supernote e-ink tablet.
I recently found myself trying to solve a similar “black hole” problem over there. The result is Slipstream: a Zettelkasten framework to let you build infinitely nested idea networks by hand.
If you happen to be an e-ink user who prefers a stylus to a keyboard when you need to disconnect and focus, you might find it an interesting contrast. As a bonus, because Slipstream has a structured convention, exporting those handwritten notes to plain text makes them perfectly readable for the exact kind of LLM processing Logsqueak relies on. It’s analogue thinking, ready for the AI age.
