Highlight discussion — inline LLM chat + summarize-to-annotation

Highlight text → discuss it with Claude in an inline chat panel → summarize the discussion into the highlight's comment with one click. The full conversation is stored for later review; the summary is rendered in a distinct AI-annotation style.

Overview

Inline LLM discussion anchored to a highlight, with one-click summarization into the highlight’s comment field.

Interaction flow

  1. User selects text → highlight is created
  2. A chat panel opens (inline popover or sidebar), anchored to the highlight
  3. System prompt includes: highlighted text + surrounding paragraph/chapter context
  4. User discusses freely — “what does this mean?”, “how does this relate to X?”, “compare with Y”
  5. User clicks “Summarize & Save”
  6. Final API call asks Claude to distill the conversation into a concise annotation
  7. Summary is stored as the highlight’s comment with comment_type: ai_synthesized
  8. Full conversation history stored separately (MongoDB highlight_conversations or embedded in highlight doc)
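Step 3 of the flow can be sketched as a small prompt builder. This is an illustrative sketch, not existing code — `buildSystemPrompt` and `HighlightContext` are assumed names, and the exact wording of the instruction is a placeholder:

```typescript
// Illustrative sketch: assemble the system prompt from the highlighted text
// plus its surrounding context (all names here are hypothetical).
interface HighlightContext {
  highlightedText: string;
  surroundingParagraph: string;
  chapterTitle?: string;
}

function buildSystemPrompt(ctx: HighlightContext): string {
  const chapterLine = ctx.chapterTitle ? `Chapter: ${ctx.chapterTitle}\n` : "";
  return (
    "You are discussing a passage the user highlighted while reading. " +
    "Be analytical and concise.\n\n" +
    chapterLine +
    `Highlighted text:\n"""${ctx.highlightedText}"""\n\n` +
    `Surrounding paragraph:\n"""${ctx.surroundingParagraph}"""`
  );
}
```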

Architecture

Frontend (akita-web)

  • Chat panel component (React or Astro island)
  • Direct Anthropic API calls (claude-sonnet-4-20250514, /v1/messages)
  • System prompt template: highlight text + source context + instruction to be analytical
  • Conversation state managed client-side (messages array)
  • “Summarize & Save” button triggers final summarization call + POST to highlights endpoint
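The direct call can be sketched as a request builder against the public `/v1/messages` API. The request shape (headers, `system`, `messages`) follows Anthropic's documented API; `buildMessagesRequest` and `ChatMessage` are hypothetical names, and the key placeholder stands in for whatever key-handling approach is chosen:

```typescript
// Sketch of the direct Anthropic /v1/messages call from the chat panel.
// buildMessagesRequest / ChatMessage are illustrative, not existing code.
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

function buildMessagesRequest(system: string, messages: ChatMessage[]) {
  return {
    url: "https://api.anthropic.com/v1/messages",
    method: "POST",
    headers: {
      "x-api-key": "<API_KEY>", // or route through an Akita server proxy
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    // system prompt rides alongside the client-managed messages array
    body: JSON.stringify({
      model: "claude-sonnet-4-20250514",
      max_tokens: 1024,
      system,
      messages,
    }),
  };
}
```

The returned object would be passed to `fetch(url, { method, headers, body })`, with the growing `messages` array re-sent on each turn since the API is stateless.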

Summarization prompt

  • Input: full conversation history
  • Output: concise analytical annotation (2-5 sentences)
  • Style: insight-focused, not conversational — suitable for re-reading months later
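One way to frame the summarization call is to flatten the conversation into a transcript and prepend the distillation instruction. A hedged sketch — `buildSummarizationPrompt` and `Turn` are assumed names, and the instruction text is a draft, not a finalized prompt:

```typescript
// Illustrative sketch: turn the full conversation history into a single
// summarization prompt (names and wording are assumptions).
interface Turn {
  role: "user" | "assistant";
  content: string;
}

function buildSummarizationPrompt(conversation: Turn[]): string {
  const transcript = conversation
    .map((t) => `${t.role.toUpperCase()}: ${t.content}`)
    .join("\n");
  return (
    "Distill the following discussion of a highlighted passage into a " +
    "concise analytical annotation (2-5 sentences). Capture the insights, " +
    "not the dialogue, so it reads well months later.\n\n" +
    transcript
  );
}
```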

Storage

  • comment field on the highlight document (the visible annotation)
  • conversation field or subcollection (the full discussion, for revisiting)
  • comment_type: ai_synthesized to distinguish from manual notes
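One possible document shape, assuming the conversation is embedded in the highlight doc (the alternative named above is a separate `highlight_conversations` collection keyed by highlight id). Field names beyond `comment`, `conversation`, and `comment_type` are assumptions:

```typescript
// Hypothetical highlight document shape with an embedded conversation.
interface HighlightDoc {
  _id: string;
  text: string; // the highlighted passage
  comment: string; // the visible annotation (manual note or AI summary)
  comment_type: "manual" | "ai_synthesized";
  conversation?: { role: "user" | "assistant"; content: string }[];
  created_at: string; // ISO timestamp (assumed convention)
}

const example: HighlightDoc = {
  _id: "hl_123",
  text: "the highlighted passage",
  comment: "Distilled analytical annotation goes here.",
  comment_type: "ai_synthesized",
  conversation: [{ role: "user", content: "What does this mean?" }],
  created_at: new Date().toISOString(),
};
```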

Visual design

  • AI-synthesized comments rendered in a distinct style (different background, small icon)
  • Expandable: click to reveal the full underlying conversation

Future: corpus-aware mode

  • Instead of direct API call, route through Akita MCP server
  • Claude can pull in related notes, dossiers, other highlights
  • Turns annotation into a research dialogue
  • Adds latency — keep as opt-in “deep mode”
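The opt-in routing could be as simple as a mode switch between the direct API and a server-side MCP route. A sketch under stated assumptions — the `/api/akita-mcp/chat` endpoint is hypothetical and does not yet exist:

```typescript
// Hedged sketch of opt-in "deep mode" routing. The MCP endpoint path is an
// assumption; only the direct Anthropic URL is a real, public endpoint.
type ChatMode = "direct" | "deep";

function resolveEndpoint(mode: ChatMode): string {
  return mode === "deep"
    ? "/api/akita-mcp/chat" // hypothetical Akita MCP server route
    : "https://api.anthropic.com/v1/messages";
}
```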

Dependencies

  • Requires: highlights system (akita-backlog-highlights)
  • Requires: Anthropic API key — exposing it client-side is a security risk, so proxying through the Akita server is the safer option