Building Your (Local) LLM Second Brain
2025-05-01, Ballroom (track sponsored by Checkmarx)

LLMs are hotter than ever, but most LLM-based solutions available to us require you to use models trained on data of unknown provenance, send your most important data off to corporate-controlled servers, and consume prodigious amounts of energy every time you write an email.

What if you could design a “second brain” assistant, built with OSS technologies, that lives entirely on your own laptop?

We’ll take a whirlwind tour through our second brain implementation, combining Ollama, LangChain, OpenWebUI, AutoGen, and Granite models to build a fully local LLM assistant.


LLM Second Brain project: https://github.com/obuzek/llm-second-brain

Olivia has been building machine learning and natural language processing models since before it was cool. She's spent several years at IBM working on opening up Watson tech, both around the country and around the world.