🦀 v0.1.0 — Open Source & MIT License

Your Private, Local
AI Assistant

Always on. Always yours. Never in the cloud.
Connect your favorite messaging apps to a local AI — zero subscriptions, zero telemetry.

🚀 Get Started ⭐ Star on GitHub
Node.js ≥ 20 Ollama Telegram WhatsApp Discord Slack MIT License

// Why CrustAI

Everything you need.
Nothing you don't.

Built for developers and privacy enthusiasts who want a real AI assistant without giving up control of their data.

🔒

100% Private

All conversations stay on your machine. No data sent to external AI services. No telemetry. No tracking. Ever.

🧠

Local LLM Power

Powered by Ollama — supports llama3.2, tinyllama, phi3, mistral and any other compatible model.

📱

Multi-Platform

One assistant, four platforms. Telegram, WhatsApp, Discord and Slack — all managed from a single config file.

🧬

Long-term Memory

CrustAI remembers facts about you across conversations using the /remember command and local SQLite storage.
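As an illustration of how a /remember-style fact store can work, here is a minimal sketch. The class and method names are hypothetical, and an in-memory Map stands in for CrustAI's real SQLite database (./data/memory.db) so the example is self-contained.

```javascript
// Illustrative sketch of a /remember-style fact store.
// A Map replaces the real SQLite store for this example.
class MemoryStore {
  constructor() {
    this.facts = new Map(); // userId -> array of remembered facts
  }
  remember(userId, fact) {
    const list = this.facts.get(userId) ?? [];
    list.push(fact);
    this.facts.set(userId, list);
  }
  recall(userId) {
    return this.facts.get(userId) ?? [];
  }
  forget(userId) {
    this.facts.delete(userId); // what a /forget command would trigger
  }
}
```

In this sketch, a /remember message would call remember() with the sender's ID, and /forget would clear that user's entries.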

🗣️

Offline Voice

Speech-to-text and text-to-speech support in Portuguese (pt-BR) — works completely offline after setup.

🌐

REST API

Built-in Fastify REST API for custom integrations. Connect any tool or service to your local AI assistant.
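For example, a script could talk to the local API using Node's global fetch (Node 18+). The /api/chat route and the { message } payload shape below are assumptions, not the documented CrustAI API; check the project docs for the real endpoints.

```javascript
// Builds fetch options for a chat request. The /api/chat route and the
// { message } payload are assumptions for illustration only.
function buildChatRequest(message) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message }),
  };
}

// Usage with Node's built-in global fetch (Node 18+):
// const res = await fetch('http://localhost:3000/api/chat', buildChatRequest('Hello!'));
// console.log(await res.json());
```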


// Installation

Up and running
in 5 steps.

No Docker required. No cloud accounts. Just Node.js and Ollama on your own machine.

1

Clone the repository

Download CrustAI to your machine using Git.

terminal
git clone https://github.com/DaveSimoes/CrustAI.git
cd CrustAI
2

Install dependencies

Install all required Node.js packages.

terminal
npm install
3

Start Ollama and pull a model

Start the Ollama server and download your preferred AI model. tinyllama works great on modest hardware.

terminal
# Start Ollama server (keep this terminal open)
ollama serve

# In a new terminal — pull a model
ollama pull tinyllama  # lightweight (600MB)
# or
ollama pull llama3.2   # powerful (2GB, needs 8GB RAM)
4

Configure the project

Copy the example config and add your Telegram bot token from @BotFather.

config/config.yml
model: tinyllama
ollama_url: http://localhost:11434
language: pt-BR

telegram:
  enabled: true
  token: YOUR_BOT_TOKEN_HERE
  allowed_user_ids: []
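To lock the bot to specific accounts, you could list numeric Telegram user IDs under allowed_user_ids. The ID below is a placeholder, and treating an empty list as "allow everyone" is an assumption; check the project docs for the exact semantics.

```yaml
telegram:
  enabled: true
  token: YOUR_BOT_TOKEN_HERE
  allowed_user_ids:
    - 123456789   # placeholder: your numeric Telegram user ID
```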
5

Launch CrustAI!

Start the assistant and open your Telegram bot to begin chatting.

terminal
npm start

# Expected output:
✓ Ollama connected    (tinyllama)
✓ Memory store ready  (./data/memory.db)
✓ Telegram ready
✓ REST API ready      (http://localhost:3000)

🦀 CrustAI is ready. Your shell awaits.

// Integrations

One assistant.
Every platform.

Enable only the platforms you use. Each adapter is independently configurable in config.yml.
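A config.yml enabling two adapters at once might look like the sketch below. Only the telegram block's shape appears earlier in this page; the discord and whatsapp key names are assumptions that mirror it.

```yaml
telegram:
  enabled: true
  token: YOUR_BOT_TOKEN_HERE

discord:            # assumed key name, mirroring the telegram block
  enabled: true
  token: YOUR_DISCORD_BOT_TOKEN

whatsapp:
  enabled: false    # adapters you don't use stay disabled
```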

✈️

Telegram

Full bot integration via @BotFather token

💬

WhatsApp

Connect via QR code scan — no business account needed

🎮

Discord

Full Discord bot with server and channel support

💼

Slack

Slack app integration for teams and workspaces

🌐

REST API

HTTP API always running on localhost:3000

🎙️

Voice

WebSocket voice server with offline STT/TTS
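As a sketch, a voice client might frame a text-to-speech request like this. The port, type tag, and message shape are all assumptions rather than the documented protocol; consult the project docs before relying on them.

```javascript
// Builds a hypothetical JSON frame for the voice WebSocket server.
// The { type, text } shape is an assumption for illustration only.
function buildVoiceFrame(text) {
  return JSON.stringify({ type: 'tts', text });
}

// Usage (Node 22+ and browsers ship a global WebSocket):
// const ws = new WebSocket('ws://localhost:3001'); // port is a placeholder
// ws.onopen = () => ws.send(buildVoiceFrame('Olá, CrustAI!'));
// ws.onmessage = (ev) => console.log('received audio data');
```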


// Usage

Built-in commands.

Type these commands directly in any connected messaging platform.

/ping Check if the bot is alive and responding
/help Show all available commands
/model Show which AI model is currently running
/remember Store a fact in long-term memory
/forget Erase all stored facts about you
/clear Clear the current conversation history

// Privacy First

Your data.
Your rules.

CrustAI was built with privacy as its core principle — not an afterthought.

Stays on your machine

All conversations processed locally. Nothing leaves your hardware.

No external AI APIs

Zero calls to OpenAI, Anthropic, Google, or any other cloud AI service.

No telemetry

No usage tracking, no analytics, no crash reports sent anywhere.

Open source

Every line of code is auditable. MIT license — use it however you want.