Configuration

Memex environment variables and options

Memex supports configuration via environment variables or a config file. All settings have sensible defaults.

Priority: Environment variables > Config file > Defaults
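
For example, a port set in the environment overrides the same setting in the config file. A minimal sketch, assuming a hypothetical memex start command (these docs do not show the actual launch command):

# config.json may set "port": 10013, but the environment wins
PORT=10014 memex

# verify against the documented health endpoint
curl http://localhost:10014/health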


Config File

Optional JSON config at ~/.vimo/memex/config.json:

{
  "server": {
    "port": 10013
  },
  "ollama": {
    "api": "http://localhost:11434",
    "embedding_model": "bge-m3",
    "chat_model": "qwen3:0.6b"
  },
  "features": {
    "enable_ai_chat": false
  },
  "compact": {
    "enabled": false,
    "fts_tokenizer": "trigram"
  }
}
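
To bootstrap the file from a shell, write only the keys you want to override; per the priority order above, anything omitted should fall back to its default:

mkdir -p ~/.vimo/memex
cat > ~/.vimo/memex/config.json <<'EOF'
{
  "server": { "port": 10013 },
  "ollama": { "api": "http://localhost:11434" }
}
EOF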

Environment Variables

Variable             | Default                 | Description
---------------------|-------------------------|----------------------------
PORT                 | 10013                   | HTTP server port
VIMO_HOME            | ~/.vimo                 | Base directory for all data
CLAUDE_PROJECTS_PATH | ~/.claude/projects      | Claude Code sessions
CODEX_PATH           | ~/.codex                | Codex CLI sessions
OPENCODE_PATH        | ~/.local/share/opencode | OpenCode sessions
GEMINI_TMP_PATH      | ~/.gemini/tmp           | Gemini CLI sessions
OLLAMA_API           | http://localhost:11434  | Ollama API endpoint
EMBEDDING_MODEL      | bge-m3                  | Model for embeddings
CHAT_MODEL           | qwen3:0.6b              | Model for AI Q&A
ENABLE_AI_CHAT       | false                   | Enable AI Q&A feature
COMPACT_ENABLED      | false                   | Enable LLM Compact feature
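
To apply several variables for one shell session, export them before starting Memex (the start command itself is not covered here):

export VIMO_HOME=~/.vimo
export OLLAMA_API=http://localhost:11434
export EMBEDDING_MODEL=bge-m3
export ENABLE_AI_CHAT=true
# start Memex from this shell so the process inherits the variables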

FTS Tokenizer

Controls how full-text search indexes text. Configure it via compact.fts_tokenizer in the config file.

Tokenizer         | Language      | Matching      | Index Size
------------------|---------------|---------------|-----------
trigram (default) | CJK + English | Substring     | Larger
unicode61         | English only  | Word boundary | Smaller

Use trigram if you search Chinese, Japanese, or Korean text. Use unicode61 for English-only projects where a smaller index matters.
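
The tokenizer names match SQLite's FTS5 tokenizers, so the matching difference can be sketched directly with the sqlite3 CLI (assuming Memex's index is SQLite FTS5; trigram requires SQLite 3.34+):

sqlite3 :memory: <<'SQL'
CREATE VIRTUAL TABLE tri USING fts5(body, tokenize='trigram');
CREATE VIRTUAL TABLE uni USING fts5(body, tokenize='unicode61');
INSERT INTO tri VALUES ('memex configuration');
INSERT INTO uni VALUES ('memex configuration');
-- substring query: only the trigram table matches 'figur'
SELECT 'trigram matched: ' || body FROM tri WHERE tri MATCH 'figur';
SELECT 'unicode61 matched: ' || body FROM uni WHERE uni MATCH 'figur';
SQL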


Compact (Experimental)

LLM-powered session summarization. Inspired by claude-mem. Disabled by default.

{
  "compact": {
    "enabled": true,
    "l2_talk_summary": true,
    "l3_session_summary": true
  }
}

Compact requires a chat model (from Ollama or a cloud provider). When enabled, Memex generates summaries for MCP search results.
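
Before enabling Compact against local Ollama, make sure the configured chat model is actually installed (standard Ollama CLI commands):

ollama pull qwen3:0.6b
ollama list | grep qwen3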


Models

  • Embedding: bge-m3 (default). For a faster, smaller option: nomic-embed-text
  • Chat: qwen3:0.6b (default). For a larger model with better answers: llama3.2:3b
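
To switch models, pull the alternatives and point the documented variables at them. Note that changing the embedding model invalidates previously stored vectors (a general property of embeddings, not Memex-specific documentation), so expect re-embedding:

ollama pull nomic-embed-text
export EMBEDDING_MODEL=nomic-embed-text

ollama pull llama3.2:3b
export CHAT_MODEL=llama3.2:3b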

Verify

# Health check
curl http://localhost:10013/health

# Stats
curl http://localhost:10013/api/stats

# Embedding status
curl http://localhost:10013/api/embedding/status
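
For scripts, a small wait loop against the health endpoint avoids racing a server that is still starting (a sketch; adjust the timeout as needed):

# wait up to 30 seconds for the server to come up
for i in $(seq 1 30); do
  curl -sf http://localhost:10013/health >/dev/null && break
  sleep 1
done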

Troubleshooting

Semantic search not working

# Check Ollama
curl http://localhost:11434/api/tags

# Check model
ollama list
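
If Ollama responds but semantic search still fails, exercise the embedding endpoint directly with the default model (standard Ollama API):

curl http://localhost:11434/api/embeddings \
  -d '{"model": "bge-m3", "prompt": "hello"}'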

No sessions found

# Check data paths exist
ls ~/.claude/projects/
ls ~/.codex/
ls ~/.local/share/opencode/
ls ~/.gemini/tmp/

# Trigger manual collection
curl -X POST http://localhost:10013/api/collect
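
If the directories exist but nothing is collected, confirm they contain recent session files. For Claude Code these are JSONL transcripts; the other tools' on-disk formats may differ:

# list Claude Code session files modified in the last 7 days
find ~/.claude/projects -name '*.jsonl' -mtime -7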