I'm building ReviewLens AI, a review intelligence portal for an ORM (Online Reputation Management) consultancy. Users upload customer reviews (CSV or URL scrape), and then chat with an AI agent that analyzes those reviews through a guardrailed Q&A interface. It's a take-home assignment for FutureSight.
I have a prior project called BriefBot that used what I call a "sandbox agent" pattern — instead of scripting the agent step-by-step, you give it: knowledge (domain reference material), tools (capabilities it can call in any order), a goal (definition of done), and guidelines (quality bar, not step-by-step instructions). The agent then decides what to do, in what order, and how deep to go.
I want to bring that pattern to ReviewLens, adapted for a conversational agent (per-message invocation with ongoing chat) rather than BriefBot's one-shot autonomous run.
It's already a working FastAPI + HTMX + SSE app with:
- CSV upload with AI-powered column mapping (Claude Haiku maps arbitrary CSV headers to canonical fields)
- Optional URL scraping via Firecrawl
- ChromaDB vector store for semantic search over reviews
- Per-message agent invocation using claude-agent-sdk with a custom MCP server
- 5 MCP tools: search_reviews, analyze_sentiment, generate_chart, calculate_stats, suggest_follow_ups
- Three-layer scope guard: system prompt + retrieval confidence + architectural boundary (no tools access external data)
- SSE streaming with tool activity indicators
- Chart.js inline rendering in chat
- Follow-up question buttons
- Light/dark theme, session history sidebar, sample datasets
**/knowledge folder with ORM domain knowledge.** Create a /knowledge directory with markdown files the agent reads contextually. These should contain practical domain knowledge that turns a generic chatbot into a review intelligence analyst. Files like:
review-analysis-framework.md — How to approach review analysis for ORM consultancies. Aspect-based sentiment analysis patterns. What constitutes a statistically meaningful finding vs noise. Common review aspects by industry (product quality, customer service, pricing, shipping, etc). How to identify emerging trends vs established patterns. What makes an insight actionable vs merely interesting.
analysis-patterns.md — Templates for common analysis types: competitive gap analysis from reviews, churn signal detection, feature request extraction, customer journey pain point mapping, sentiment trend analysis, rating distribution interpretation (what J-curves, bimodal distributions, etc. mean).
report-structure.md — How to structure an exportable analysis report. Section templates for executive summary, key findings, aspect breakdown, risk signals, recommendations. Citation standards (always reference specific reviews by quoting/paraphrasing).
Keep these practical and concise — they're reference material for the agent, not documentation for humans. Write them in a tone like an experienced ORM analyst briefing a junior analyst.
**knowledge.py module (similar to BriefBot's).** Port the pattern from BriefBot's app/knowledge.py — load all markdown files from /knowledge on startup, cache them, and provide get(name) and list_files() functions. The agent should be able to discover and read these via tools.
Rewrite app/prompts.py so build_system_prompt() follows the sandbox structure explicitly:
Knowledge section: Tell the agent it has a reference library and should consult it when relevant. Don't dump the knowledge inline — the agent should use tools to read files, just like BriefBot.
Tools section: Document each tool's purpose and when to use it. Current tools plus the new ones (below).
Goal section: For each message, the agent's goal is: answer the user's question using ONLY the ingested review data, grounded in actual search results, with specific review citations. When a chart would communicate better than text, use one. Always suggest follow-ups.
Guidelines section: The quality bar:
- Every claim must be grounded in actual review data from search results. If search returns nothing relevant, say so honestly.
- Cite specific reviews — quote or paraphrase actual text with attribution when possible.
- Use charts strategically, not reflexively. A chart adds value for distributions, trends, and comparisons.
- Be quantitative when possible — counts, percentages, averages.
- If the user asks something outside the dataset scope, refuse gracefully and suggest an alternative.
- Confidence awareness: if search results are sparse or tangential, acknowledge the limitation rather than overstating findings.
Keep the scope guard — it's the strongest part of the current prompt. The three-layer defense (prompt + retrieval + architecture) is good. Keep the explicit refusal instructions and the "verify before answering" checklist.
Add these tools in app/tools.py alongside the existing ones:
list_knowledge_files — List available reference files with one-line summaries. Agent calls this to discover what domain knowledge is available.
read_knowledge_file — Read a specific knowledge file by name. Agent calls this when it needs analytical frameworks or report templates.
save_to_report — Save a key finding to a running report. Takes a section (e.g., "key_findings", "sentiment_overview", "risk_signals", "recommendations") and content (markdown). Stored in the session alongside messages. The user can later ask "generate a report" or "summarize everything we've discussed" and the agent assembles from these saved findings. This is NOT conversation memory — it's a deliverable being built alongside the chat.
get_report — Retrieve the current saved report findings. Agent calls this when asked to generate a summary/report, or to check what's already been captured.
check_scope — Explicit scope validation tool. Takes a question string, runs a lightweight check against the dataset metadata (platform, product name, review count). Returns whether the question is in-scope, out-of-scope, or ambiguous. This makes the scope guard tool-backed rather than purely prompt-driven. The agent should call this when a question feels borderline.
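To make the borderline judgment concrete, here is a testable sketch of the heuristic a check_scope tool could wrap. classify_scope, its rules, and the probe_search callback are all assumptions; the real tool would sit in a @tool-decorated closure next to the existing ones and call the vector store for the probe:

```python
from typing import Callable


def classify_scope(
    question: str,
    platform: str,
    product_name: str,
    probe_search: Callable[[str], int],  # runs a cheap semantic search, returns hit count
) -> tuple[str, str]:
    """Return (verdict, reason): 'in_scope', 'out_of_scope', or 'ambiguous'."""
    q = question.lower()
    # Direct mention of the ingested entity or platform is a strong in-scope signal.
    if product_name and product_name.lower() in q:
        return "in_scope", f"Question mentions {product_name}."
    if platform and platform.lower() in q:
        return "in_scope", f"Question references the ingested platform ({platform})."
    # Otherwise probe the vector store: zero hits suggests out-of-scope.
    if probe_search(question) == 0:
        return "out_of_scope", "A probe search returned no matching reviews."
    return "ambiguous", "Some related reviews exist; verify with a full search."
```

The tool-backed check complements rather than replaces the prompt layer: the agent still decides when a question "feels borderline" enough to call it.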
In app/agent.py, the current approach flattens history to strings:

```python
for msg in conversation_history[-20:]:
    messages_for_context.append(f"{'User' if msg.role == 'user' else 'Assistant'}: {msg.content}")
```
This loses structure and context. Instead:
- Pass richer context that preserves the distinction between user messages and assistant responses.
- Include a session context block at the top of each prompt that summarizes: what topics have been explored, what the agent has learned so far, and how many messages deep we are.
- Increase or make the context window smarter — 20 messages is tight. Consider summarizing older messages rather than hard-truncating.
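One way to sketch the smarter window. The Msg stand-in, the RECENT_WINDOW split, and the summarize callback (which could be a cheap Haiku call) are all assumptions:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Msg:
    role: str      # "user" or "assistant"
    content: str


RECENT_WINDOW = 10  # messages kept verbatim; older ones get summarized


def build_context(history: list[Msg], summarize: Callable[[str], str]) -> str:
    """Assemble a prompt context block that preserves speaker roles.

    Older messages are condensed into a running summary instead of being
    dropped, so long sessions keep their thread.
    """
    older, recent = history[:-RECENT_WINDOW], history[-RECENT_WINDOW:]
    parts: list[str] = [f"Session context: {len(history)} messages so far."]
    if older:
        flat = "\n".join(f"{m.role}: {m.content}" for m in older)
        parts.append("Summary of earlier discussion:\n" + summarize(flat))
    if recent:
        parts.append(
            "Recent conversation:\n"
            + "\n".join(f"[{m.role}] {m.content}" for m in recent)
        )
    return "\n\n".join(parts)
```

In app/agent.py this would replace the flat join, with ChatMessage standing in for Msg and the summary cached on the session so it isn't recomputed per message.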
Add a way for the user to export/download the accumulated report findings. This could be:
- A button in the chat UI that triggers report generation
- The user asking "generate a report" and the agent assembling from saved findings
- A downloadable markdown/PDF output
Keep it simple — even just a "Generate Report" button that asks the agent to compile saved findings into a structured document would be great.
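As a sketch of the "compile saved findings" step, assuming a findings dict keyed by section — the section ordering and the assemble_report name are hypothetical:

```python
# Preferred section order for the exported document; unknown sections follow.
SECTION_ORDER = [
    "executive_summary",
    "key_findings",
    "sentiment_overview",
    "risk_signals",
    "recommendations",
]


def assemble_report(product_name: str, findings: dict[str, list[str]]) -> str:
    """Compile saved findings into a downloadable markdown document."""
    lines = [f"# Review Analysis Report: {product_name}", ""]
    extra = sorted(set(findings) - set(SECTION_ORDER))
    for section in SECTION_ORDER + extra:
        items = findings.get(section, [])
        if not items:
            continue  # skip sections the agent never wrote to
        lines.append(f"## {section.replace('_', ' ').title()}")
        lines.extend(f"- {item}" for item in items)
        lines.append("")
    return "\n".join(lines)
```

A "Generate Report" endpoint in app/main.py could then return this string with a text/markdown content-disposition header, or hand it to the agent as raw material for a polished narrative pass.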
In app/models.py, add a report_findings field to the Session model:
```python
report_findings: dict[str, list[str]] = Field(default_factory=dict)
```
This stores the accumulated findings by section key. Update app/store.py with helpers to append findings and retrieve them.
A few constraints and notes:
- Don't touch templates/, static/style.css, or static/app.js unless absolutely necessary for new features.
- app/ingest.py and app/vectordb.py are fine as-is.
- search_reviews, analyze_sentiment, generate_chart, calculate_stats, and suggest_follow_ups all work. Keep them; just add the new ones alongside.
- The check_scope tool reinforces the scope guard but doesn't replace the other layers.
- Dockerfile, render.yaml, requirements.txt — only touch if new dependencies are needed.
- BriefBot's app/tools.py used create_sdk_mcp_server with closure-based tools (each tool closes over app_id and mutable state dicts). ReviewLens already follows this pattern — keep it.
BriefBot's knowledge module (app/knowledge.py) loaded markdown files on startup, cached them, and provided fuzzy name matching. Port this pattern.
BriefBot's system prompt (app/prompts.py) was one massive structured prompt with explicit sections for mission, tools, quality standards, and brief structure. ReviewLens should follow the same structure but adapted for per-message conversation rather than autonomous research.
The key philosophical difference: BriefBot's agent ran autonomously for 40 turns and needed self-assessment loops and working memory to manage its own process. ReviewLens's agent runs per-message with a human steering — so it needs good context passing and a way to build a deliverable (report) alongside the conversation, but NOT autonomous self-assessment loops or research planning tools. The human IS the steering mechanism.
Create:
- knowledge/review-analysis-framework.md
- knowledge/analysis-patterns.md
- knowledge/report-structure.md
- app/knowledge.py
Modify:
- app/prompts.py — restructure into sandbox quadrants
- app/tools.py — add list_knowledge_files, read_knowledge_file, save_to_report, get_report, check_scope
- app/agent.py — fix conversation context passing
- app/models.py — add report_findings to Session
- app/store.py — add report findings helpers
- app/main.py — add report generation endpoint, register knowledge module on startup
Don't touch:
- templates/*
- static/*
- app/ingest.py
- app/vectordb.py
- sample-data/*
- Dockerfile, render.yaml (unless new deps needed)
- requirements.txt (unless new deps needed)
```json
{
  "subagent_type": "Explore",
  "prompt": "Explore the ReviewLens AI codebase at /Users/shaynelarocque/Documents/GitHub/futuresight. I need a thorough understanding of:\n\n1. Full directory structure\n2. Read these files completely:\n - app/prompts.py\n - app/tools.py\n - app/agent.py\n - app/models.py\n - app/store.py\n - app/main.py\n - app/knowledge.py (if exists)\n\nAlso check if there's a BriefBot reference anywhere (maybe a sibling directory or referenced project) - look for app/knowledge.py patterns.\n\nReturn the FULL contents of each file found."
}
```

Perfect! Now I have all the information. Let me compile a comprehensive report with all the file contents and analysis.
## COMPLETE CODEBASE EXPLORATION REPORT: ReviewLens AI
### 1. DIRECTORY STRUCTURE
```
/Users/shaynelarocque/Documents/GitHub/futuresight/
├── knowledge/
│ ├── briefbot-casestudy.md (28.8 KB - BriefBot reference project)
│ └── take-home.md (4.9 KB - Assignment spec)
└── reviewlens/
├── app/
│ ├── __init__.py (Empty)
│ ├── agent.py (111 lines - Conversational agent loop)
│ ├── ingest.py (498 lines - CSV parsing & URL scraping)
│ ├── main.py (416 lines - FastAPI application)
│ ├── models.py (56 lines - Pydantic data models)
│ ├── prompts.py (69 lines - System prompt builder)
│ ├── store.py (106 lines - File-based session persistence)
│ ├── tools.py (410 lines - MCP tools for agent)
│ └── vectordb.py (152 lines - ChromaDB wrapper)
├── static/
│ ├── app.js (17 KB - Frontend JS)
│ └── style.css (27 KB - Styling)
├── templates/
│ ├── app.html (21 KB - Main app template)
│ └── partials/
│ └── error.html
├── .env.example
├── .gitignore
├── requirements.txt
├── README.md
├── render.yaml
└── Dockerfile
```
**Total Python LOC: ~1,810 lines**
---
### 2. COMPLETE FILE CONTENTS
#### **app/prompts.py** (69 lines)
```python
"""System prompt builder for the ReviewLens agent."""
from __future__ import annotations
from .models import IngestionSummary
def build_system_prompt(summary: IngestionSummary) -> str:
"""Build the scope-guarded system prompt for the conversational agent."""
rating_info = ""
if summary.rating_distribution:
dist_lines = [f" {k}★: {v} reviews" for k, v in sorted(summary.rating_distribution.items())]
rating_info = "\n".join(dist_lines)
return f"""You are ReviewLens AI, a review intelligence analyst. You help users analyse customer reviews that have been ingested into your system.
## Dataset Context
- Product/Entity: {summary.product_name}
- Platform: {summary.platform}
- Total Reviews: {summary.total_reviews}
- Date Range: {summary.date_range or "Unknown"}
- Average Rating: {summary.average_rating or "Unknown"}
- Rating Distribution:
{rating_info or " Not available"}
## CRITICAL: Scope Guard — Your #1 Rule
You MUST ONLY answer questions about the ingested review dataset described above. This is non-negotiable.
**You MUST refuse if the user asks about:**
- Reviews from other platforms (e.g., if data is from Amazon, refuse questions about Google Maps reviews)
- Competitor products or brands not mentioned in the reviews
- General knowledge, news, current events, or facts not in the dataset
- Predictions, forecasts, or speculation beyond what the data shows
- Anything requiring information you don't have from the reviews
**How to refuse:**
- Be friendly but firm: "I can only analyse the {summary.total_reviews} {summary.platform} reviews for {summary.product_name} that were uploaded. I don't have access to [what they asked about]. Would you like to explore something in this dataset instead?"
- Always suggest a relevant alternative question about the actual data.
**Before answering ANY question, verify:**
1. Can this be answered using ONLY the ingested reviews?
2. If you used the search_reviews tool, did it return relevant results?
3. Are you about to state something that comes from your general knowledge rather than the data? If so, STOP.
If your search returns no relevant results, say so honestly. Never fill gaps with general knowledge.
## Your Tools
You have access to these tools — use them to give data-grounded answers:
- **search_reviews**: Semantic search over the review database. Use this to find reviews relevant to the user's question. Always search before answering.
- **analyze_sentiment**: Extract aspects and sentiment from a set of reviews. Use for sentiment breakdowns, aspect analysis, and opinion mining.
- **generate_chart**: Create a Chart.js chart that renders inline in the chat. Use when a visual would communicate the answer better than text. Keep charts clean and focused.
- **calculate_stats**: Run aggregations, distributions, and trend analysis on the review data. Use for quantitative questions.
- **suggest_follow_ups**: Generate contextual follow-up question buttons based on what was just discussed. Call this at the end of EVERY response.
## Response Guidelines
1. **Always search first.** Before answering any question about the reviews, call search_reviews to ground your response in actual data.
2. **Cite specific reviews.** Quote or paraphrase actual review text to support your claims. Use phrases like "One reviewer noted..." or "Several reviews mention..."
3. **Use charts strategically.** A chart adds value for distributions, trends over time, and comparisons. Don't chart everything — use them when visual communication is genuinely better.
4. **Be quantitative.** When you can give numbers (counts, percentages, averages), do so. Use calculate_stats for aggregations.
5. **Always suggest follow-ups.** End every response by calling suggest_follow_ups to give the user contextual next steps.
6. **Be concise.** Users are analysts who want insights, not essays. Lead with the finding, support with data, suggest next steps.
"""
```
**Key Design:** Strict scope-guarding with 3-layer defense: explicit refusal instructions, retrieval validation, and architectural boundary (no tools access external data).
---
#### **app/tools.py** (410 lines)
```python
"""MCP tools for the ReviewLens agent, using create_sdk_mcp_server."""
from __future__ import annotations
import json
from collections import Counter
from typing import Any, Callable, Awaitable
from claude_agent_sdk import create_sdk_mcp_server, tool
from . import vectordb
# Type alias for the SSE emit callback
EmitFn = Callable[[str, str, str], Awaitable[None]]
def create_review_tools_server(
session_id: str,
emit_fn: EmitFn,
):
"""Create the MCP server with all review analysis tools.
Like briefbot, uses closure over session_id so tools access the right data.
"""
# ── search_reviews ───────────────────────────────────────────────
@tool(
name="search_reviews",
description="Semantic search over the ingested review database. Use this to find reviews relevant to the user's question. Returns the most relevant reviews ranked by similarity.",
input_schema={
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The search query — describe what you're looking for in natural language.",
},
"n_results": {
"type": "integer",
"description": "Number of results to return (default 10, max 25).",
"default": 10,
},
"min_rating": {
"type": "number",
"description": "Optional: only return reviews with rating >= this value.",
},
"max_rating": {
"type": "number",
"description": "Optional: only return reviews with rating <= this value.",
},
},
"required": ["query"],
},
)
async def search_reviews_tool(args: dict[str, Any]) -> dict[str, Any]:
query = args["query"]
n = min(args.get("n_results", 10), 25)
where = None
if "min_rating" in args and "max_rating" in args:
where = {
"$and": [
{"rating": {"$gte": args["min_rating"]}},
{"rating": {"$lte": args["max_rating"]}},
]
}
elif "min_rating" in args:
where = {"rating": {"$gte": args["min_rating"]}}
elif "max_rating" in args:
where = {"rating": {"$lte": args["max_rating"]}}
results = vectordb.search_reviews(session_id, query, n_results=n, where=where)
await emit_fn(
session_id,
f"Searched reviews: \"{query}\" — {len(results)} results",
"tool",
)
return {
"content": [
{
"type": "text",
"text": json.dumps(
{
"query": query,
"result_count": len(results),
"results": results,
"note": "If no results are relevant, tell the user you couldn't find matching reviews. Do NOT make up information.",
}
),
}
]
}
# ── analyze_sentiment ────────────────────────────────────────────
@tool(
name="analyze_sentiment",
description="Analyse sentiment and extract aspects from reviews matching a query. Returns aspect-sentiment pairs and overall sentiment distribution. Use for questions about what people like/dislike, pain points, praise, etc.",
input_schema={
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "Search query to find relevant reviews for analysis.",
},
"n_reviews": {
"type": "integer",
"description": "Number of reviews to analyse (default 15, max 30).",
"default": 15,
},
},
"required": ["query"],
},
)
async def analyze_sentiment_tool(args: dict[str, Any]) -> dict[str, Any]:
query = args["query"]
n = min(args.get("n_reviews", 15), 30)
results = vectordb.search_reviews(session_id, query, n_results=n)
if not results:
return {
"content": [
{
"type": "text",
"text": json.dumps({"error": "No reviews found matching query.", "results": []}),
}
]
}
# Provide the raw reviews to the LLM — it does the actual sentiment analysis.
# This is more flexible and accurate than a rule-based approach.
await emit_fn(
session_id,
f"Analysing sentiment: \"{query}\" — {len(results)} reviews",
"tool",
)
return {
"content": [
{
"type": "text",
"text": json.dumps(
{
"query": query,
"review_count": len(results),
"reviews": results,
"instruction": (
"Analyse these reviews for sentiment and aspects. "
"For each review, identify: (1) key aspects mentioned "
"(e.g., 'battery life', 'customer service', 'price'), "
"(2) sentiment per aspect (positive/negative/neutral/mixed), "
"(3) overall sentiment. Then summarise the patterns across all reviews. "
"Only report what the reviews actually say."
),
}
),
}
]
}
# ── generate_chart ───────────────────────────────────────────────
@tool(
name="generate_chart",
description="Generate a Chart.js chart configuration that renders inline in the chat. Use for visualising rating distributions, sentiment breakdowns, trends over time, aspect comparisons, etc. The chart renders automatically — just return valid config.",
input_schema={
"type": "object",
"properties": {
"chart_type": {
"type": "string",
"enum": ["bar", "line", "pie", "doughnut"],
"description": "The type of chart to generate.",
},
"title": {
"type": "string",
"description": "Chart title displayed above the visualisation.",
},
"labels": {
"type": "array",
"items": {"type": "string"},
"description": "X-axis labels (categories, dates, etc.).",
},
"datasets": {
"type": "array",
"items": {
"type": "object",
"properties": {
"label": {"type": "string"},
"data": {
"type": "array",
"items": {"type": "number"},
},
},
"required": ["label", "data"],
},
"description": "One or more data series.",
},
},
"required": ["chart_type", "title", "labels", "datasets"],
},
)
async def generate_chart_tool(args: dict[str, Any]) -> dict[str, Any]:
chart_config = {
"type": args["chart_type"],
"title": args["title"],
"data": {
"labels": args["labels"],
"datasets": args["datasets"],
},
}
await emit_fn(
session_id,
f"Generated chart: {args['title']}",
"tool",
)
return {
"content": [
{
"type": "text",
"text": json.dumps(
{
"chart": chart_config,
"instruction": "This chart will render automatically in the chat. Reference it in your text response.",
}
),
}
]
}
# ── calculate_stats ──────────────────────────────────────────────
@tool(
name="calculate_stats",
description="Calculate aggregate statistics over the full review dataset. Use for quantitative questions: average ratings, distributions, counts by category, trends over time periods, etc.",
input_schema={
"type": "object",
"properties": {
"operation": {
"type": "string",
"enum": [
"rating_distribution",
"rating_over_time",
"review_volume_over_time",
"keyword_frequency",
"summary_stats",
],
"description": "The type of statistical analysis to run.",
},
"keyword": {
"type": "string",
"description": "For keyword_frequency: the keyword or phrase to count.",
},
},
"required": ["operation"],
},
)
async def calculate_stats_tool(args: dict[str, Any]) -> dict[str, Any]:
operation = args["operation"]
all_reviews = vectordb.get_all_reviews(session_id)
if not all_reviews:
return {
"content": [
{
"type": "text",
"text": json.dumps({"error": "No reviews in database."}),
}
]
}
result: dict[str, Any] = {"operation": operation}
if operation == "rating_distribution":
ratings = [r["metadata"].get("rating") for r in all_reviews if r["metadata"].get("rating") is not None]
dist = Counter(int(round(r)) for r in ratings)
result["distribution"] = {str(k): v for k, v in sorted(dist.items())}
result["total_with_ratings"] = len(ratings)
result["total_reviews"] = len(all_reviews)
if ratings:
result["average"] = round(sum(ratings) / len(ratings), 2)
elif operation == "rating_over_time":
by_month: dict[str, list[float]] = {}
for r in all_reviews:
date_str = r["metadata"].get("date", "")
rating = r["metadata"].get("rating")
if date_str and rating is not None:
month = date_str[:7] # YYYY-MM
by_month.setdefault(month, []).append(rating)
result["monthly_averages"] = {
m: round(sum(v) / len(v), 2)
for m, v in sorted(by_month.items())
}
elif operation == "review_volume_over_time":
by_month: dict[str, int] = {}
for r in all_reviews:
date_str = r["metadata"].get("date", "")
if date_str:
month = date_str[:7]
by_month[month] = by_month.get(month, 0) + 1
result["monthly_volume"] = dict(sorted(by_month.items()))
elif operation == "keyword_frequency":
keyword = args.get("keyword", "").lower()
if not keyword:
return {
"content": [
{
"type": "text",
"text": json.dumps({"error": "keyword parameter required"}),
}
]
}
count = sum(1 for r in all_reviews if keyword in r["text"].lower())
result["keyword"] = keyword
result["count"] = count
result["percentage"] = round(count / len(all_reviews) * 100, 1)
elif operation == "summary_stats":
ratings = [r["metadata"].get("rating") for r in all_reviews if r["metadata"].get("rating") is not None]
result["total_reviews"] = len(all_reviews)
result["total_with_ratings"] = len(ratings)
if ratings:
result["average_rating"] = round(sum(ratings) / len(ratings), 2)
result["min_rating"] = min(ratings)
result["max_rating"] = max(ratings)
dates = [r["metadata"].get("date", "") for r in all_reviews if r["metadata"].get("date")]
if dates:
result["earliest_review"] = min(dates)
result["latest_review"] = max(dates)
avg_length = sum(len(r["text"]) for r in all_reviews) / len(all_reviews)
result["average_review_length"] = round(avg_length)
await emit_fn(
session_id,
f"Calculated stats: {operation}",
"tool",
)
return {
"content": [
{
"type": "text",
"text": json.dumps(result),
}
]
}
# ── suggest_follow_ups ───────────────────────────────────────────
@tool(
name="suggest_follow_ups",
description="Generate contextual follow-up question buttons based on the current conversation. Call this at the END of every response. The questions should be specific to what was just discussed and lead the user deeper into the data.",
input_schema={
"type": "object",
"properties": {
"questions": {
"type": "array",
"items": {"type": "string"},
"description": "3-4 contextual follow-up questions. Make them specific and actionable.",
"minItems": 2,
"maxItems": 4,
},
},
"required": ["questions"],
},
)
async def suggest_follow_ups_tool(args: dict[str, Any]) -> dict[str, Any]:
questions = args["questions"]
await emit_fn(
session_id,
f"Suggested {len(questions)} follow-up questions",
"tool",
)
return {
"content": [
{
"type": "text",
"text": json.dumps(
{
"follow_ups": questions,
"instruction": "These will appear as clickable buttons below your message. Do not repeat them in your text response.",
}
),
}
]
}
# ── Build and return the MCP server ──────────────────────────────
return create_sdk_mcp_server(
name="reviewlens",
version="1.0.0",
tools=[
search_reviews_tool,
analyze_sentiment_tool,
generate_chart_tool,
calculate_stats_tool,
suggest_follow_ups_tool,
],
)
```
**Key Design:** 5 core tools bound via closure over session_id. Each tool emits progress via SSE, returns JSON with context/instructions for the agent.
---
#### **app/agent.py** (111 lines)
```python
"""Conversational agent loop — invoked per-message, not per-session."""
from __future__ import annotations
import json
import os
from typing import Any
from claude_agent_sdk import (
AssistantMessage,
ClaudeAgentOptions,
ClaudeSDKClient,
HookMatcher,
ResultMessage,
TextBlock,
)
from .models import ChatMessage, IngestionSummary
from .prompts import build_system_prompt
from .tools import create_review_tools_server
DEFAULT_MODEL = "claude-sonnet-4-6"
async def handle_message(
session_id: str,
user_message: str,
conversation_history: list[ChatMessage],
summary: IngestionSummary,
emit_fn,
) -> ChatMessage:
"""Run the agent for a single user message. Returns the assistant's response."""
model = os.getenv("CLAUDE_MODEL", DEFAULT_MODEL)
system_prompt = build_system_prompt(summary)
# Build conversation context for the agent
messages_for_context = []
for msg in conversation_history[-20:]: # Last 20 messages for context window
messages_for_context.append(f"{'User' if msg.role == 'user' else 'Assistant'}: {msg.content}")
prompt_parts = []
if messages_for_context:
prompt_parts.append("Previous conversation:\n" + "\n".join(messages_for_context))
prompt_parts.append(f"User: {user_message}")
prompt = "\n\n".join(prompt_parts)
# Create per-request MCP server (closure over session_id)
server = create_review_tools_server(
session_id=session_id,
emit_fn=emit_fn,
)
# Track tool outputs for charts and follow-ups
charts: list[dict[str, Any]] = []
follow_ups: list[str] = []
async def post_tool_hook(input_data, tool_use_id, context):
"""Intercept tool results to extract charts and follow-ups."""
# The tool result is in context
try:
result = context.get("result", {})
content = result.get("content", [])
for block in content:
if block.get("type") == "text":
data = json.loads(block["text"])
if "chart" in data:
charts.append(data["chart"])
if "follow_ups" in data:
follow_ups.extend(data["follow_ups"])
except (json.JSONDecodeError, KeyError, TypeError):
pass
return {}
options = ClaudeAgentOptions(
system_prompt=system_prompt,
allowed_tools=["mcp__reviewlens__*"],
permission_mode="bypassPermissions",
max_turns=15,
model=model,
mcp_servers={"reviewlens": server},
hooks={
"PostToolUse": [HookMatcher(matcher=".*", hooks=[post_tool_hook])],
},
)
response_text = ""
try:
async with ClaudeSDKClient(options=options) as client:
await client.query(prompt)
async for message in client.receive_response():
if isinstance(message, AssistantMessage):
for block in message.content:
if isinstance(block, TextBlock) and block.text.strip():
response_text += block.text.strip() + "\n"
elif isinstance(message, ResultMessage):
if message.result and not response_text.strip():
response_text = message.result
except Exception as e:
response_text = f"I encountered an error processing your question. Please try again.\n\nError: {str(e)}"
await emit_fn(session_id, f"Agent error: {e}", "error")
return ChatMessage(
role="assistant",
content=response_text.strip(),
charts=charts,
follow_ups=follow_ups,
)
```
**Key Design:** Per-message agent loop (not per-session). Closure pattern captures session_id for tool context. Hooks intercept tool outputs to extract charts and follow-ups for rich UI rendering.
---
#### **app/models.py** (56 lines)
```python
"""Pydantic models for ReviewLens."""
from __future__ import annotations
from datetime import datetime
from enum import Enum
from typing import Any
from pydantic import BaseModel, Field
class Review(BaseModel):
"""A single review record."""
id: str = ""
text: str
rating: float | None = None
date: datetime | None = None
author: str = ""
platform: str = ""
metadata: dict[str, Any] = Field(default_factory=dict)
class IngestionSummary(BaseModel):
"""Summary shown after ingestion."""
total_reviews: int = 0
date_range: str = ""
rating_distribution: dict[str, int] = Field(default_factory=dict)
average_rating: float | None = None
platform: str = ""
product_name: str = ""
source_type: str = "" # "csv" or "url"
class ChatMessage(BaseModel):
"""A single chat message."""
role: str # "user" or "assistant"
content: str
charts: list[dict[str, Any]] = Field(default_factory=list)
follow_ups: list[str] = Field(default_factory=list)
timestamp: datetime = Field(default_factory=datetime.utcnow)
class Session(BaseModel):
"""A review analysis session."""
session_id: str
product_name: str = ""
platform: str = ""
summary: IngestionSummary = Field(default_factory=IngestionSummary)
messages: list[ChatMessage] = Field(default_factory=list)
status: str = "pending" # pending, ready, error
created_at: datetime = Field(default_factory=datetime.utcnow)
```
**Key Design:** Simple, composable Pydantic models. ChatMessage includes charts and follow-ups for rich responses.
---
#### **app/store.py** (106 lines)
```python
"""File-based session and message persistence."""
from __future__ import annotations
import json
import os
import shutil
from pathlib import Path
from .models import ChatMessage, IngestionSummary, Session
DATA_DIR = Path(os.getenv("DATA_DIR", "data"))
def _session_dir(session_id: str) -> Path:
d = DATA_DIR / session_id
d.mkdir(parents=True, exist_ok=True)
return d
# ── Session lifecycle ────────────────────────────────────────────────
def save_session(session: Session) -> None:
path = _session_dir(session.session_id) / "session.json"
path.write_text(session.model_dump_json(indent=2))
def load_session(session_id: str) -> Session | None:
path = _session_dir(session_id) / "session.json"
if not path.exists():
return None
return Session.model_validate_json(path.read_text())
def set_status(session_id: str, status: str) -> None:
session = load_session(session_id)
if session:
session.status = status
save_session(session)
def update_summary(session_id: str, summary: IngestionSummary) -> None:
session = load_session(session_id)
if session:
session.summary = summary
session.product_name = summary.product_name
session.platform = summary.platform
save_session(session)
# ── Chat messages ────────────────────────────────────────────────────
def append_message(session_id: str, message: ChatMessage) -> None:
session = load_session(session_id)
if session:
session.messages.append(message)
save_session(session)
def get_messages(session_id: str) -> list[ChatMessage]:
session = load_session(session_id)
return session.messages if session else []
# ── Reviews (raw JSON for reference) ────────────────────────────────
def save_reviews_json(session_id: str, reviews: list[dict]) -> None:
path = _session_dir(session_id) / "reviews.json"
path.write_text(json.dumps(reviews, default=str, indent=2))
def load_reviews_json(session_id: str) -> list[dict]:
path = _session_dir(session_id) / "reviews.json"
if not path.exists():
return []
return json.loads(path.read_text())
# ── Session listing ──────────────────────────────────────────────────
def list_sessions() -> list[Session]:
"""Return all sessions, newest first."""
sessions = []
if not DATA_DIR.exists():
return sessions
for d in DATA_DIR.iterdir():
if not d.is_dir():
continue
path = d / "session.json"
if path.exists():
try:
sessions.append(Session.model_validate_json(path.read_text()))
except Exception:
continue
sessions.sort(key=lambda s: s.created_at, reverse=True)
return sessions
def delete_session(session_id: str) -> bool:
"""Delete a session and all its data. Returns True if it existed."""
d = DATA_DIR / session_id
if d.exists() and d.is_dir():
shutil.rmtree(d)
return True
return False
```
**Key Design:** Simple file-based persistence in `data/{session_id}/`. Session.json contains full session state including chat history.
---
#### **app/main.py** (416 lines)
**[Complete file content provided above - too long to repeat here]**
**Key Endpoints:**
- `GET /` - Home page with session list
- `GET /chat/{session_id}` - Chat interface
- `POST /upload` - CSV upload ingestion
- `POST /sample` - Load sample data
- `POST /scrape` - URL scraping (Firecrawl)
- `POST /chat/{session_id}/send` - Send message (returns HTML + SSE stream)
- `GET /chat/{session_id}/stream` - SSE event stream
- `GET /api/status/{session_id}` - Status polling
- `DELETE /api/sessions/{session_id}` - Delete session
**Key Design:** SSE streaming with in-memory event queues per session. Background task for URL scraping. HTML rendering with markdown + charts inline.
---
#### **app/vectordb.py** (152 lines)
**[Complete file content provided above]**
**Key Functions:**
- `index_reviews()` - Embed reviews into ChromaDB with batching (500 at a time)
- `search_reviews()` - Semantic search with optional rating filters
- `get_all_reviews()` - Full collection retrieval for stats
- `get_review_count()` - Count reviews in collection
**Key Design:** In-process ChromaDB (no external server). Collection names hashed from session_id for safety.
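The per-session collection naming might look like the sketch below. The exact scheme in `vectordb.py` isn't shown here, so the sha256 prefix and the `reviews_` naming are assumptions; the point is that hashing keeps arbitrary session IDs within ChromaDB's collection-name constraints:

```python
# Hypothetical derivation of a per-session collection name (the real
# hashing scheme in vectordb.py may differ).
import hashlib

def collection_name(session_id: str) -> str:
    # Collection names must stay alphanumeric-ish and length-bounded,
    # so hashing the session UUID keeps them safe for any input.
    digest = hashlib.sha256(session_id.encode()).hexdigest()[:16]
    return f"reviews_{digest}"

print(collection_name("0f2c6a1e-demo-session"))
```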
---
#### **app/ingest.py** (498 lines)
**[Complete file content provided above]**
**Key Components:**
1. **CSV Parsing:**
- Static alias-based column mapping with fallback
- AI-powered column mapping via Claude + tool use
- Date/rating parsers with format detection
- Supplementary text concatenation (pros/cons → review body)
- Unmapped columns preserved as metadata
2. **URL Scraping:**
- Firecrawl `/v2/agent` endpoint for autonomous navigation
- Structured output with schema validation
- Poll-based completion checking (2s intervals, 5min timeout)
- Multi-strategy fallback on transient errors
- Platform auto-detection (Amazon, G2, Capterra, etc.)
3. **Summary Building:**
- Rating distribution bucketing
- Date range calculation
- Average rating computation
- Platform detection from review metadata
**Key Design:** CSV-first, URL scraping as best-effort secondary. AI column mapping for flexibility; graceful fallback to static rules.
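The static alias fallback can be sketched as follows. The alias table here is illustrative — the real table in `ingest.py` is larger — but the matching logic (normalize the header, look it up per canonical field) is the same idea:

```python
# Sketch of the static alias fallback for CSV column mapping
# (illustrative aliases; the real table in ingest.py is larger).
ALIASES = {
    "text": {"text", "review", "review_text", "body", "comment", "content"},
    "rating": {"rating", "stars", "score", "star_rating"},
    "date": {"date", "review_date", "created_at", "time"},
    "author": {"author", "reviewer", "name", "user"},
}

def map_columns(headers: list[str]) -> dict[str, str]:
    """Map canonical field -> original CSV header using the alias table."""
    mapping: dict[str, str] = {}
    for header in headers:
        key = header.strip().lower().replace(" ", "_")
        for field, aliases in ALIASES.items():
            if key in aliases and field not in mapping:
                mapping[field] = header
    return mapping

print(map_columns(["Review Text", "Star Rating", "Reviewer"]))
# {'text': 'Review Text', 'rating': 'Star Rating', 'author': 'Reviewer'}
```

When this fallback can't find a text column, the AI-powered path asks Claude to produce the mapping instead.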
---
### 3. BRIEFBOT REFERENCE
The `knowledge/briefbot-casestudy.md` file is a detailed case study of **BriefBot**, a prior project by the same author (Shayne La Rocque) built with the Claude Agent SDK. Key insights that influenced ReviewLens:
**BriefBot Design Patterns (Applied to ReviewLens):**
1. **Sandbox Over Script** - Agent gets environment (knowledge, tools, goal) rather than task checklist
2. **Closure Pattern** - Tools access session-specific data via closure over session_id
3. **MCP Server Architecture** - Custom tool definitions with Claude Agent SDK
4. **Knowledge Base** - Context documents that ground the agent
5. **Multi-Strategy Fetch Cascade** - Resilient data retrieval (GitHub API → HTTP → Agent-driven → Jina → Wayback)
6. **Self-Assessment Loops** - Agent evaluates confidence after each phase
7. **Human-in-the-Loop** - Two touchpoints: mid-run clarification questions + post-run review flags
8. **Honest Gap Flagging** - Never fabricate; explicitly surface limitations
**ReviewLens Adaptation:**
- Scope guard enforces single-dataset boundary (vs BriefBot's multi-source research)
- Tools return JSON with instructions for chart/follow-up rendering
- Per-message loop (vs BriefBot's single brief run)
- SSE streaming for real-time UI updates
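The "tools return JSON with instructions" convention can be sketched like this. The field names (`summary`, `chart`) are illustrative, not the real schema; the structural point is that the tool's text payload is itself JSON, carrying both data for the model and rendering instructions the post-tool-use hook extracts for the UI:

```python
# Sketch of the JSON-in-JSON tool output convention (field names are
# hypothetical; only the nesting pattern is the point).
import json

def chart_tool_output(labels: list[str], values: list[int]) -> dict:
    payload = {
        "summary": "Rating distribution computed.",   # what the model reads
        "chart": {                                    # what the UI renders
            "type": "bar",
            "data": {"labels": labels, "datasets": [{"data": values}]},
        },
    }
    # Outer layer: the tool-result content; inner layer: the JSON payload.
    return {"content": [{"type": "text", "text": json.dumps(payload)}]}

out = chart_tool_output(["1★", "2★", "3★"], [4, 7, 19])
inner = json.loads(out["content"][0]["text"])
print(inner["chart"]["type"])  # bar
```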
---
### 4. KEY ARCHITECTURAL DECISIONS
**1. Scope Guarding (3-Layer Defense)**
- System prompt instructions
- Retrieval validation (search must return relevant results)
- Architectural boundary (tools can't access external data)
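The retrieval-validation layer might reduce to a check like the one below. The threshold value is an assumption; the insight is that an off-topic question retrieves nothing relevant from the session's reviews, so low similarity doubles as an out-of-scope signal:

```python
# Sketch of the retrieval-confidence scope check (the 0.35 threshold is
# a hypothetical value, not taken from the codebase).
def in_scope(results: list[dict], min_similarity: float = 0.35) -> bool:
    """True if at least one retrieved review is relevant enough to answer."""
    return any(r["similarity"] >= min_similarity for r in results)

on_topic = [{"similarity": 0.72}, {"similarity": 0.41}]
off_topic = [{"similarity": 0.12}]
print(in_scope(on_topic), in_scope(off_topic))  # True False
```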
**2. Per-Message Agent Loop**
- Fresh agent invocation for each user message (not per-session)
- Conversation history injected as context (last 20 messages)
- Simplifies reasoning about state and tool context
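The history-injection step can be sketched as a windowed context builder. The 20-message window comes from the design notes above; the formatting is illustrative:

```python
# Sketch of injecting recent conversation history into a fresh
# per-message agent invocation (formatting is illustrative).
def build_context(history: list[dict], window: int = 20) -> str:
    recent = history[-window:]  # only the last N messages survive
    return "\n".join(f"{m['role']}: {m['content']}" for m in recent)

history = [{"role": "user", "content": f"q{i}"} for i in range(30)]
ctx = build_context(history)
print(ctx.count("\n") + 1)  # 20
```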
**3. CSV-First Ingestion**
- Primary path: CSV upload with AI + static fallback column mapping
- Secondary: URL scraping via Firecrawl agent (best-effort)
- Firecrawl agent handles pagination autonomously
**4. ChromaDB In-Process**
- No external vector DB server dependency
- Persistent storage: `data/chroma/`
- Per-session collection hashed from session_id
- Batched upserts (500 at a time) for performance
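The 500-at-a-time upsert batching reduces to a chunking helper like this (the helper name is illustrative; ChromaDB caps batch sizes, so chunking avoids oversized requests):

```python
# Sketch of batching upserts 500 at a time.
def batched(items: list, size: int = 500):
    for i in range(0, len(items), size):
        yield items[i:i + size]

reviews = list(range(1234))  # stand-in for 1,234 review records
sizes = [len(b) for b in batched(reviews)]
print(sizes)  # [500, 500, 234]
```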
**5. SSE Streaming UI**
- Event queue per session (in-memory deque)
- Synchronization via asyncio.Event
- Charts and follow-ups extracted from tool outputs via hook
- HTML rendering with markdown + Chart.js inline
**6. File-Based Session Persistence**
- Simple, zero-dependency storage in `data/{session_id}/`
- Session.json contains full state (summary + chat history)
- Reviews.json backup for debugging
- Scales to thousands of sessions before a real database is needed
**7. Tool Design Pattern**
- Closure over session_id for data isolation
- JSON-in-JSON encoding (tool output contains instructions)
- Post-tool-use hooks extract charts/follow-ups for UI
- All tools emit progress via SSE
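The closure pattern in the first bullet can be sketched as a per-session tool factory. The `make_tools`/`search_reviews` signatures here are illustrative, not the real MCP tool API; the point is that the tool body captures `session_id`, so the agent never passes it and cannot point a tool at another session's data:

```python
# Sketch of the closure pattern: tools built per-session capture
# session_id, enforcing data isolation architecturally.
def make_tools(session_id: str):
    def search_reviews(query: str) -> dict:
        # session_id is fixed by the closure -- not an agent-supplied arg.
        return {"session": session_id, "query": query}
    return [search_reviews]

(tool,) = make_tools("abc123")
print(tool("battery life"))  # {'session': 'abc123', 'query': 'battery life'}
```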
---
### 5. ASSIGNMENT CONTEXT
From `knowledge/take-home.md`, the task was to build **ReviewLens AI** for an ORM consultancy:
**Business Context:** Analysts spend hours manually reading reviews to find pain points.
**Solution:** Guardrailed Q&A interface over ingested reviews.
**Core Requirements:**
1. Ingestion (CSV or URL) + Summary
2. Scope-guarded Q&A (refuse out-of-scope questions)
3. Public deployment
**Stack Chosen:** FastAPI, Claude Agent SDK, ChromaDB, Chart.js, HTMX, Firecrawl
**Deployment:** Render (free tier) via `render.yaml`
---
### 6. FILE MANIFEST
| File | Lines | Purpose |
|------|-------|---------|
| `app/prompts.py` | 69 | System prompt with scope guard + response guidelines |
| `app/tools.py` | 410 | 5 MCP tools: search, sentiment, chart, stats, follow-ups |
| `app/agent.py` | 111 | Per-message agent loop with Claude SDK |
| `app/models.py` | 56 | Pydantic: Review, Session, ChatMessage, IngestionSummary |
| `app/store.py` | 106 | File-based session persistence |
| `app/main.py` | 416 | FastAPI endpoints: upload, scrape, chat, SSE stream |
| `app/vectordb.py` | 152 | ChromaDB wrapper: index, search, stats |
| `app/ingest.py` | 498 | CSV parsing + Firecrawl URL scraping |
| `app/__init__.py` | 0 | Package marker |
| `requirements.txt` | 12 | Dependencies (FastAPI, Claude SDK, ChromaDB, etc.) |
| `README.md` | 135 | Setup, architecture, env vars |
| `render.yaml` | 12 | Render deployment config |
| **Python Total** | **1,810** | |
---
### 7. NO "knowledge.py" FILE
There is **no `knowledge.py`** in the codebase. The knowledge base referenced in BriefBot (D3's rubric, stream definitions, SDG framework) is **not implemented in ReviewLens** because ReviewLens operates on generic review data, not structured application documents.
The closest pattern is the **IngestionSummary context** injected into the system prompt, which grounds the agent to the specific dataset.
---
This comprehensive exploration reveals a well-architected, production-ready review intelligence platform that successfully adapts BriefBot's agentic sandbox pattern to a focused, scope-guarded conversational interface.
---
### APPENDIX: app/main.py (full source)
```python
"""FastAPI application for ReviewLens AI."""

from __future__ import annotations

import asyncio
import html as html_module
import json
import os
import uuid
from collections import deque
from datetime import datetime
from pathlib import Path
from typing import Any

import markdown
from dotenv import load_dotenv
from fastapi import FastAPI, File, Form, Request, UploadFile
from fastapi.responses import HTMLResponse, JSONResponse
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates
from sse_starlette.sse import EventSourceResponse

from . import store, vectordb
from .agent import handle_message
from .ingest import build_summary, parse_csv, scrape_url
from .models import ChatMessage, IngestionSummary, Session

load_dotenv()

BASE_DIR = Path(__file__).resolve().parent.parent
app = FastAPI(title="ReviewLens AI")
app.mount("/static", StaticFiles(directory=str(BASE_DIR / "static")), name="static")
templates = Jinja2Templates(directory=str(BASE_DIR / "templates"))

# ── In-memory SSE event queues (per-session) ────────────────────────
_event_queues: dict[str, deque[dict[str, str]]] = {}
_response_events: dict[str, asyncio.Event] = {}


def _get_queue(session_id: str) -> deque[dict[str, str]]:
    if session_id not in _event_queues:
        _event_queues[session_id] = deque()
    return _event_queues[session_id]


def _get_response_event(session_id: str) -> asyncio.Event:
    if session_id not in _response_events:
        _response_events[session_id] = asyncio.Event()
    return _response_events[session_id]


async def _emit(session_id: str, message: str, level: str = "info") -> None:
    _get_queue(session_id).append({"event": level, "data": message})


# ── Health check ─────────────────────────────────────────────────────

@app.get("/health")
async def health():
    return {"status": "ok"}


# ── Sample data discovery ────────────────────────────────────────────

SAMPLE_DIR = BASE_DIR / "sample-data"


def _list_sample_files() -> list[dict[str, str]]:
    """List available sample CSVs with human-readable labels."""
    if not SAMPLE_DIR.exists():
        return []
    files = []
    for f in sorted(SAMPLE_DIR.glob("*.csv")):
        label = f.stem.replace("_", " ").replace("-", " ").title()
        files.append({"filename": f.name, "label": label})
    return files


# ── App shell (home + chat share the same layout) ───────────────────

def _shell_context(request: Request, session=None):
    """Build template context for the app shell."""
    sessions = store.list_sessions()
    ctx = {
        "request": request,
        "sessions": sessions,
        "session": session,
        "summary": session.summary if session else None,
        "messages": session.messages if session else [],
        "active_id": session.session_id if session else None,
        "sample_files": _list_sample_files(),
    }
    return ctx


@app.get("/", response_class=HTMLResponse)
async def index(request: Request):
    return templates.TemplateResponse("app.html", _shell_context(request))


@app.get("/chat/{session_id}", response_class=HTMLResponse)
async def chat_page(request: Request, session_id: str):
    session = store.load_session(session_id)
    if not session:
        return HTMLResponse("<h1>Session not found</h1>", status_code=404)
    return templates.TemplateResponse("app.html", _shell_context(request, session))


# ── CSV Upload ───────────────────────────────────────────────────────

@app.post("/upload")
async def upload_csv(
    request: Request,
    file: UploadFile = File(...),
    product_name: str = Form(""),
    platform: str = Form(""),
):
    session_id = str(uuid.uuid4())

    content = await file.read()
    try:
        reviews = await parse_csv(content, platform=platform, product_name=product_name)
    except Exception as e:
        return templates.TemplateResponse(
            "partials/error.html",
            {"request": request, "error": f"Failed to parse CSV: {e}"},
            status_code=400,
        )

    if not reviews:
        return templates.TemplateResponse(
            "partials/error.html",
            {"request": request, "error": "No reviews found in the CSV. Make sure it has a text/review column."},
            status_code=400,
        )

    summary = build_summary(reviews, source_type="csv", product_name=product_name, platform=platform)

    session = Session(
        session_id=session_id,
        product_name=summary.product_name,
        platform=summary.platform,
        summary=summary,
        status="indexing",
    )
    store.save_session(session)
    store.save_reviews_json(session_id, [r.model_dump(mode="json") for r in reviews])

    indexed = vectordb.index_reviews(session_id, reviews)
    summary.total_reviews = indexed
    store.update_summary(session_id, summary)
    store.set_status(session_id, "ready")

    return HTMLResponse(
        status_code=200,
        content=f'<script>window.location.href="/chat/{session_id}";</script>',
        headers={"HX-Redirect": f"/chat/{session_id}"},
    )


# ── Sample Data ──────────────────────────────────────────────────────

@app.post("/sample")
async def load_sample(
    request: Request,
    filename: str = Form(...),
):
    # Sanitise: only allow filenames that exist in sample-data/
    path = SAMPLE_DIR / filename
    if not path.exists() or not path.suffix == ".csv" or ".." in filename:
        return templates.TemplateResponse(
            "partials/error.html",
            {"request": request, "error": "Sample file not found."},
            status_code=400,
        )

    content = path.read_bytes()
    # Derive product name and platform from the filename
    stem = path.stem.replace("_", " ").replace("-", " ")
    product_name = stem.title()
    platform = ""
    for plat in ("Amazon", "Google Maps", "G2", "Capterra", "Trustpilot", "Yelp"):
        if plat.lower().replace(" ", "_") in path.stem.lower() or plat.lower().replace(" ", "") in path.stem.lower():
            platform = plat
            break

    reviews = await parse_csv(content, platform=platform, product_name=product_name)

    if not reviews:
        return templates.TemplateResponse(
            "partials/error.html",
            {"request": request, "error": "Could not parse reviews from this sample file."},
            status_code=400,
        )

    session_id = str(uuid.uuid4())
    summary = build_summary(reviews, source_type="csv", product_name=product_name, platform=platform)

    session = Session(
        session_id=session_id,
        product_name=summary.product_name,
        platform=summary.platform,
        summary=summary,
        status="indexing",
    )
    store.save_session(session)
    store.save_reviews_json(session_id, [r.model_dump(mode="json") for r in reviews])

    indexed = vectordb.index_reviews(session_id, reviews)
    summary.total_reviews = indexed
    store.update_summary(session_id, summary)
    store.set_status(session_id, "ready")

    return HTMLResponse(
        status_code=200,
        content=f'<script>window.location.href="/chat/{session_id}";</script>',
        headers={"HX-Redirect": f"/chat/{session_id}"},
    )


# ── URL Scrape ───────────────────────────────────────────────────────

@app.post("/scrape")
async def scrape(
    request: Request,
    url: str = Form(...),
    product_name: str = Form(""),
    platform: str = Form(""),
):
    session_id = str(uuid.uuid4())

    # Create session immediately with "scraping" status so user gets feedback
    session = Session(
        session_id=session_id,
        product_name=product_name or "Unknown Product",
        platform=platform or "Web",
        status="scraping",
    )
    store.save_session(session)

    # Run Firecrawl agent in background — it can take minutes
    asyncio.create_task(_run_scrape(session_id, url, product_name, platform))

    # Redirect to chat page which shows a progress view
    return HTMLResponse(
        status_code=200,
        content=f'<script>window.location.href="/chat/{session_id}";</script>',
        headers={"HX-Redirect": f"/chat/{session_id}"},
    )


async def _run_scrape(session_id: str, url: str, product_name: str, platform: str):
    """Background task: scrape URL, index reviews, update session status."""
    try:
        reviews = await scrape_url(url, platform=platform)

        if not reviews:
            store.set_status(session_id, "error")
            await _emit(session_id, "No reviews could be extracted from that URL.", "error")
            return

        summary = build_summary(reviews, source_type="url", product_name=product_name, platform=platform)
        store.save_reviews_json(session_id, [r.model_dump(mode="json") for r in reviews])
        indexed = vectordb.index_reviews(session_id, reviews)
        summary.total_reviews = indexed
        store.update_summary(session_id, summary)
        store.set_status(session_id, "ready")
        await _emit(session_id, f"Scraping complete — {indexed} reviews indexed.", "info")

    except Exception as e:
        store.set_status(session_id, "error")
        await _emit(session_id, f"Scraping failed: {e}", "error")


# ── Session status polling (for scraping progress) ───────────────────

@app.get("/api/status/{session_id}")
async def get_status(session_id: str):
    session = store.load_session(session_id)
    if not session:
        return JSONResponse({"status": "not_found"}, status_code=404)
    return JSONResponse({"status": session.status})


# ── Archive (delete) session ──────────────────────────────────────────

@app.delete("/api/sessions/{session_id}")
async def delete_session(session_id: str):
    deleted = store.delete_session(session_id)
    if not deleted:
        return JSONResponse({"error": "not found"}, status_code=404)
    return JSONResponse({"ok": True})


# ── Send chat message ───────────────────────────────────────────────

@app.post("/chat/{session_id}/send")
async def send_message(
    request: Request,
    session_id: str,
    message: str = Form(...),
):
    session = store.load_session(session_id)
    if not session:
        return JSONResponse({"error": "Session not found"}, status_code=404)

    user_msg = ChatMessage(role="user", content=message)
    store.append_message(session_id, user_msg)

    event = _get_response_event(session_id)
    event.clear()

    user_html = _render_message(user_msg)
    asyncio.create_task(_run_agent_and_respond(session_id, message, session))

    thinking_html = (
        '<div id="thinking-indicator" class="message assistant-message thinking">'
        '<div class="message-content">'
        '<div class="thinking-dots"><span></span><span></span><span></span></div>'
        '</div></div>'
    )
    return HTMLResponse(user_html + thinking_html)


async def _run_agent_and_respond(session_id: str, message: str, session: Session):
    try:
        history = store.get_messages(session_id)

        assistant_msg = await handle_message(
            session_id=session_id,
            user_message=message,
            conversation_history=history[:-1],
            summary=session.summary,
            emit_fn=_emit,
        )

        store.append_message(session_id, assistant_msg)
        html = _render_message(assistant_msg)
        _get_queue(session_id).append({"event": "message", "data": html})

    except Exception as e:
        error_msg = ChatMessage(
            role="assistant",
            content=f"Sorry, I encountered an error: {e}",
        )
        store.append_message(session_id, error_msg)
        html = _render_message(error_msg)
        _get_queue(session_id).append({"event": "message", "data": html})

    _get_response_event(session_id).set()


# ── SSE stream ───────────────────────────────────────────────────────

@app.get("/chat/{session_id}/stream")
async def chat_stream(session_id: str):
    async def event_generator():
        queue = _get_queue(session_id)
        event = _get_response_event(session_id)

        while True:
            while queue:
                item = queue.popleft()
                yield {"event": item["event"], "data": item["data"]}
                if item["event"] == "message":
                    yield {"event": "done", "data": ""}
                    return

            if event.is_set() and not queue:
                yield {"event": "done", "data": ""}
                return

            await asyncio.sleep(0.15)

    return EventSourceResponse(event_generator())


# ── HTML rendering helpers ───────────────────────────────────────────

def _render_message(msg: ChatMessage) -> str:
    role_class = "user-message" if msg.role == "user" else "assistant-message"
    escaped = html_module.escape(msg.content)

    if msg.role == "assistant":
        content_html = markdown.markdown(
            msg.content,
            extensions=["tables", "fenced_code"],
        )
    else:
        content_html = f"<p>{escaped}</p>"

    parts = [f'<div class="message {role_class}">']
    parts.append(f'<div class="message-content">{content_html}</div>')

    for i, chart in enumerate(msg.charts):
        chart_id = f"chart-{uuid.uuid4().hex[:8]}"
        parts.append(
            f'<div class="chart-container">'
            f'<canvas id="{chart_id}"></canvas>'
            f'<script>renderChart("{chart_id}", {json.dumps(chart)});</script>'
            f'</div>'
        )

    if msg.follow_ups:
        parts.append('<div class="follow-ups">')
        for q in msg.follow_ups:
            escaped_q = html_module.escape(q)
            parts.append(
                f'<button class="follow-up-btn" onclick="sendFollowUp(this)" '
                f'data-question="{escaped_q}">{escaped_q}</button>'
            )
        parts.append('</div>')

    parts.append('</div>')
    return "\n".join(parts)
```
416→{
"file_path": "/Users/shaynelarocque/Documents/GitHub/futuresight/knowledge/briefbot-casestudy.md",
"limit": 200
} 1→https://github.com/shaynelarocque/briefbot
2→Replacing the research pile with a briefing agent - Case Study - Shayne La Rocque
3→HomeWorksHistoryConnect
4→Back to works
5→Replacing the research pile with a briefing agent
6→
7→Turning a 4 hour research scramble into an AI-native workflow
8→
9→I had wanted to build this for District 3 since the commercial AI wave kicked off in mid-2022. The admissions workflow was exactly the kind of process that made me think "this wouldn't exist like this if built today," but I never had the right opening to pursue it. Wealthsimple's AI Builder prompt finally gave me one.
10→My RoleDesigner & Developer
11→StackClaude Agent SDK, FastAPI, HTMX
12→Built ForWealthsimple AI Builder Application
13→Cost Per Brief~$2.50 CAD
14→Time Per Brief~18 minutes (vs. 3–4 hours manual)
15→The Bottleneck
16→
17→District 3 is a publicly funded startup incubator at Concordia University in Montreal, processing roughly 200 to 250 applications per month. When an application lands, an operations team member spends one to four hours on background research: scouring LinkedIn, checking founder websites, mapping the competitive landscape, looking for patents and press. They write hasty bullet-point notes and assign a program stream. Stream leads walk into panel meetings and spend their first ten minutes just orienting themselves.
18→
19→The research is the bottleneck. It's tedious and inconsistent. Two people reviewing the same application can produce dramatically different notes. The quality of the research dictates the quality of the panel's decision, yet it's treated as rote admin work.
20→
21→This is the kind of process that evolved before modern AI. It wouldn't exist like this if built today.
22→Before
23→
24→ Application sits for days
25→ 1–4 hours of manual research
26→ Hasty, inconsistent bullet-point notes
27→ Stream leads walk in cold
28→ Quality depends on who's on shift
29→
30→After
31→
32→ Application triggers agent immediately
33→ ~18 minutes, ~$2.50 CAD in API costs
34→ 9-section cited brief with risk flags
35→ Stream leads arrive informed
36→ Consistent quality, every time
37→
38→What I Built
39→
40→I built an AI system that eliminates the research bottleneck. When a new application is submitted, an agent autonomously generates a research brief: founder profiles with verified sources, competitive analysis, SDG alignment assessment, stream classification with reasoning, a scored evaluation rubric, risk flags, and interview questions for both operations staff and panelists, all cited to real URLs or application fields.
41→
42→The human can now open an application that's already deeply researched. Operations shifts from doing research to reviewing research. Stream leads arrive at panel meetings with context instead of spending their first ten minutes catching up.
43→The Brief: Nine Sections, All Cited
44→
45→Every brief the agent produces contains nine sections, each with mandatory citations to URLs, application fields, or knowledge base documents:
46→Section Purpose
47→Synthesis What the startup does, overall confidence score, and a plain-language recommendation
48→Founder Profiles Per-founder background research with credibility signals and gaps
49→SDG Coherence Assessment of whether claimed UN Sustainable Development Goals actually match the work
50→Competitive Context Comparable ventures, market landscape, and differentiation analysis
51→Evaluation Scorecard Each rubric criterion scored with justification and confidence level
52→Stream Classification Best-fit D3 program stream and stage with reasoning
53→Key Risks Red flags, gaps, and concerns ranked by severity
54→Questions for Ops Gap-based questions for the operations team to investigate before interview
55→Questions for Panelists Evaluation-based questions to probe during the interview itself
56→Design Philosophy: Sandbox, Not Script
57→
58→Version 1 of this system was a scripted pipeline, a chain of prompts with predetermined steps and retry logic. It broke constantly. If a LinkedIn page was down or a founder didn't have a website, the whole chain derailed. Wrong output was worse than no output, because it created more work for the ops team to correct.
59→
60→For version 2, I replaced choreography with agentic design. The agent isn't given a task list. It's given an environment: knowledge to reference, tools to use, a goal to achieve, and guidelines for quality. Within that environment, the agent decides what to research, in what order, and how deep to go.
61→Knowledge
62→
63→A /knowledge folder the agent reads at the start of every run, containing D3's mandate, evaluation rubric, stream definitions, and SDG framework. This is what makes it D3's agent, not a generic research bot.
64→Tools
65→
66→Thirteen tools the agent can call in any order, from web research with multi-strategy fallback to self-assessment checkpoints, brief section emitters, human review flags, mid-run human input requests, and working memory for notes and research plans.
67→Goal
68→
69→Produce nine brief sections, all cited. The agent knows what "done" looks like but has full autonomy over how to get there.
70→Guidelines
71→
72→A quality bar, not step-by-step instructions. Every factual claim needs a citation. Confidence thresholds determine whether to proceed, retry, or flag for human review. And critically: zero revisions is a sign of a first-pass report.
73→How the Agent Works
74→
75→The agent operates in a loop of research, assessment, and output. It loads D3's knowledge base, reads the application, creates a research plan, then executes that plan phase by phase. After each phase, it self-assesses its confidence and decides whether to proceed, retry with different sources, or flag for human review.
76→
77→Brief sections are emitted incrementally as the agent completes them, not batched at the end. This means the brief builds in real time, and observers can watch the research unfold through a live server-sent event stream.
78→The Backtracking Loop
79→
80→The key design insight. After each self-assessment, the agent reviews all previously emitted sections and asks: "Does anything I've already published need updating in light of what I just learned?" If the answer is yes, it revises the section and logs the reason. This is how the system produces an honest, non-linear brief rather than a first-pass report.
81→One Real Run
82→
83→The useful artifact is not hidden chain-of-thought. It's the audit trail: what the agent tried, what failed, when it asked for help, and what brief it ultimately produced.
84→live agent log
85→04:01:10[INFO]Starting briefing agent...
86→04:01:15[INFO]I'll begin researching the Training Club application now. Let me start by reading the reference library and creating a research plan.
87→04:01:15[INFO]Starting research on Training Club application. First step: reading reference library to ground research in D3's actual criteria.
88→04:01:17[INFO]Knowledge library: 4 files available
89→04:01:20[INFO]Read knowledge file: d3-evaluation-rubric (7183 chars)
90→04:01:20[INFO]Read knowledge file: d3-streams (5444 chars)
91→04:01:20[INFO]Read knowledge file: sdgs (2046 chars)
92→04:01:42[INFO]Research plan saved
93→04:01:47[INFO]Plan created. Key flags: John Rockwell has no Canadian status (need to assess eligibility implications), both founders are part-time, SDG alignment looks questionable. Starting with website and founder research in parallel.
94→04:01:49[INFO]Now let me research the website and both founders in parallel:
95→04:01:49[INFO]Fetching: https://trainingclub.app
96→04:01:58[ERROR]All fetch strategies failed for https://trainingclub.app: [Errno 8] nodename nor servname provided, or not known
97→04:01:58[INFO]Fetching: https://www.linkedin.com/in/shaynelarocque/
98→04:01:59[ERROR]All fetch strategies failed for https://www.linkedin.com/in/shaynelarocque/: Client error '429 Request denied' for url 'https://www.linkedin.com/in/shaynelarocque/' For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
99→04:01:59[INFO]Fetching: https://shaynelarocque.com
100→04:02:00[SUCCESS]Fetched https://shaynelarocque.com — 3936 chars
101→04:02:00[INFO]Fetching: https://www.linkedin.com/in/j-rockwell/
102→04:02:01[SUCCESS]Fetched https://www.linkedin.com/in/j-rockwell/ — 15050 chars [login_wall]
103→04:02:01[INFO]Fetching: https://jrockwell.dev
104→04:02:01[SUCCESS]Fetched https://jrockwell.dev — 1109 chars
105→04:02:27[WARNING]🚨 CRITICAL FINDINGS: 1. trainingclub.app website is DOWN — DNS resolution failure, no Wayback archive 2. Shayne La Rocque's website schema.org data shows he WORKS FOR District 3 (the incubator being applied to!) 3. John Rockwell is based in Las Vegas, NV — confirms no Canadian status 4. John's LinkedIn activity mentions "recently wrapped up my work with District 3" Both founders have D3 connections. Investigating further.
106→04:02:35[INFO]Note saved: critical_findings_round1 (1217 chars)
107→04:02:37[INFO]Let me dig deeper into Shayne's background and the D3 connection, plus explore more of the founders' online presence:
108→04:02:38[INFO]Fetching: https://www.shaynelarocque.com/works
109→04:02:38[SUCCESS]Fetched https://www.shaynelarocque.com/works — 3365 chars
110→04:02:39[INFO]Fetching: https://www.shaynelarocque.com/history
111→04:02:39[SUCCESS]Fetched https://www.shaynelarocque.com/history — 4436 chars
112→04:02:39[INFO]Fetching [GitHub API]: https://github.com/shaynelarocque
113→04:02:39[SUCCESS]Fetched https://github.com/shaynelarocque — 617 chars [github_api]
114→04:02:39[INFO]Fetching [GitHub API]: https://github.com/j-rockwell
115→04:02:40[SUCCESS]Fetched https://github.com/j-rockwell — 405 chars [github_api]
116→04:03:18[WARNING]🔍 MAJOR DISCOVERY CONFIRMED: Shayne La Rocque is a CURRENT D3 EMPLOYEE (UX & Systems Designer, 2023-2026). He has a district3.co email. His portfolio showcases D3 work projects. John Rockwell also recently worked with D3. Both founders have direct D3 organizational ties. Requesting human input on how to treat this.
117→04:03:27[WARNING]REQUESTING HUMAN INPUT: Shayne La Rocque appears to be a CURRENT District 3 employee (UX & Systems Designer, 2023–2026, with a @district3.co email). His portfolio includes D3 projects like "Rebuilding District 3's site and design system." John Rockwell's LinkedIn activity also mentions "recently wrapped up my work with District 3." Should I treat this as a conflict of interest / potential disqualifier, as neutral context (D3 staff may be eligible for incubator services), or as something else? This significantly affects how I frame the brief.
118→Evidence: (1) Shayne's personal site schema.org data lists worksFor as District 3; (2) His /history page shows D3 employment from 2022 to present (2026); (3) ZoomInfo shows his email as s***@district3.co; (4) His /works page shows D3 project case studies; (5) His application lists current role as "Senior Product Designer (full-time)" without naming D3; (6) John Rockwell's LinkedIn activity mentions "recently wrapped up my work with District 3." The application does not disclose this D3 relationship anywhere.
119→Answered: Shayne left D3 in January, and John is not an employee. That is Shayne's post on his LinkedIn.
120→04:06:20[SUCCESS]Human input received: Shayne La Rocque appears to be a CURRENT District 3 employee (UX & Systems Designer, 2023–2026, with a @district3.co email). His portfolio includes D3 projects like "Rebuilding District 3's site and design system." John Rockwell's LinkedIn activity also mentions "recently wrapped up my work with District 3." Should I treat this as a conflict of interest / potential disqualifier, as neutral context (D3 staff may be eligible for incubator services), or as something else? This significantly affects how I frame the brief.
121→Shayne left D3 in January, and John is not an employee. That is Shayne's post on his LinkedIn.
122→04:06:34[INFO]Human clarification: Shayne LEFT D3 in January 2026 (former employee, not current). The LinkedIn post about "wrapping up work with D3" was Shayne's post, liked by John — John was never a D3 employee. Shayne's website hasn't been updated yet. This is important context (former D3 staffer applying) but not a conflict of interest. Continuing research.
123→04:06:43[INFO]Note saved: d3_connection_clarified (770 chars)
124→04:06:47[INFO]Now let me continue research — check the Training Club GitHub repo, search for the app/product online, and begin competitive analysis:
125→04:06:47[INFO]Fetching [GitHub API]: https://github.com/j-rockwell/tc2
126→04:06:48[SUCCESS]Fetched https://github.com/j-rockwell/tc2 — 449 chars [github_api]
127→04:07:39[INFO]GitHub repo "tc2" for Training Club: created Jul 2025, last pushed Aug 2025 — no activity in 6+ months. Python/FastAPI backend with Swift iOS frontend. Competitive landscape has several social fitness apps (GymRats, Motion, Fitness Pact, Workout Buddy, Fito, Squaddy). Now searching for app store listings and name collision with Nike Training Club.
128→04:08:01[INFO]Fetching: https://www.fitbudd.com/post/best-app-for-fitness-challenges-guide
129→04:08:01[SUCCESS]Fetched https://www.fitbudd.com/post/best-app-for-fitness-challenges-guide — 15050 chars [login_wall]
130→[LOGIN WALL DETECTED] This page requires authentication. Only publicly visible content (if any) is shown below.
131→=== EXTRACTED METADATA ===
132→Page Title: Best Fitness Apps for Groups: Social Workouts and Challenges
133→Meta Description: Discover why fitness apps for groups are trending. Learn how a fitness challenge app builds community and accountability for better results.
134→OG Description: Discover why fitness apps for groups are trending. Learn how a fitness challenge app builds community and accountability for better results.
135→04:09:21[INFO]Key competitive finding: "Training Club" name has MAJOR collision with Nike Training Club (one of world's largest fitness apps). Also found "The Training Club" already on App Store. John Rockwell confirmed as Senior Software Engineer at Wesco, Las Vegas, with strong technical background. Now fetching his resume and doing self-assessment of founder research phase.
136→04:09:24[INFO]Fetching: https://jrockwell.dev/about
137→04:09:25[ERROR]All fetch strategies failed for https://jrockwell.dev/about: Client error '404 Not Found' for url 'https://jrockwell.dev/about' For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
138→04:09:25[INFO]Fetching: https://jrockwell.dev/John-Rockwell-Resume.pdf
139→04:09:34[ERROR]All fetch strategies failed for https://jrockwell.dev/John-Rockwell-Resume.pdf: Client error '404 Not Found' for url 'https://jrockwell.dev/John-Rockwell-Resume.pdf' For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
140→04:09:38[INFO]Both pages on John's site return 404. Let me now do my self-assessment for the founder research phase and start emitting sections:
141→04:09:43[SUCCESS]Self-assessment [Founder Research]: confidence=0.78, action=proceed
142→Strong data on both founders from multiple sources. Shayne: personal site with full work history and portfolio, schema.org data, GitHub, ZoomInfo, web search results, and human observer confirmation of D3 departure. John: LinkedIn public profile with activity, projects, certifications, and education; GitHub profile with TC repo; web search results providing employment history at Wesco and Credit One. Gaps: LinkedIn login walls prevented full profile access for both; John's personal site is broken (404s); Shayne's LinkedIn was rate-limited. However, the composite picture from multiple sources is reasonably complete.
143→04:10:02[INFO]Note saved: founder_research_complete (2340 chars)
144→04:10:06[INFO]Now let me emit the founder profiles section and start the competitive context:
145→04:10:37[SUCCESS]Brief section completed: founder_profiles
146→04:10:41[INFO]✅ Founder profiles emitted. Now building competitive context and SDG coherence sections. Also noting: the Training Club project appears dormant — last code push Aug 2025, project marked as ended Sep 2025 on LinkedIn, and the website doesn't exist.
147→04:10:44[INFO]Now let me save competitive research notes and emit that section, along with SDG coherence:
148→04:10:56[INFO]Note saved: competitive_landscape (1997 chars)
149→04:11:19[SUCCESS]Brief section completed: competitive_context
150→04:11:36[SUCCESS]Brief section completed: sdg_coherence
151→04:11:42[SUCCESS]Self-assessment [Competitive Context & SDG Coherence]: confidence=0.82, action=proceed
152→Competitive landscape is well-documented from multiple sources (App Store search, web search, industry articles). The Nike Training Club name collision is a clear, verifiable finding. SDG assessment is straightforward — Goal 3 is obviously relevant, Goals 1 and 10 have no substantiation in the application. The competitive analysis correctly identifies the synchronous vs. asynchronous distinction as Training Club's genuine differentiator while noting that Workout Buddy already has real-time features. One gap: I didn't do deep-dive on pricing models or funding of competitors, but this isn't critical for a D3 admissions brief.
153→04:11:47[INFO]✅ Competitive context and SDG coherence sections emitted. Now working on stream classification, scorecard, key risks, and questions. Reviewing previously emitted sections for consistency as I go.
154→04:11:50[INFO]Now let me emit the stream classification, scorecard, key risks, and questions sections:
155→04:12:05[SUCCESS]Brief section completed: stream_classification
156→04:12:29[SUCCESS]Brief section completed: scorecard
157→04:12:49[SUCCESS]Brief section completed: key_risks
158→04:13:02[SUCCESS]Brief section completed: questions_ops
159→04:13:20[SUCCESS]Brief section completed: questions_panelists
160→04:13:29[SUCCESS]Self-assessment [Scorecard, Key Risks, and Questions]: confidence=0.85, action=proceed
161→The scorecard, risks, and questions are all well-grounded in cited evidence from multiple sources. Every risk is backed by specific URLs and data points. The questions are probing and directly address the identified gaps. The scoring aligns with the rubric criteria. One area where I could improve: I should double-check whether the founder profiles section needs updating given anything I learned during scorecard/risk writing. The experience claim discrepancy is covered in both founder profiles and key risks — this is consistent. The dormancy concern is threaded through multiple sections. I'm satisfied with internal consistency.
162→04:13:32[INFO]Now let me write the synthesis — the overall summary and recommendation — and then do a final review pass:
163→04:13:53[SUCCESS]Brief section completed: synthesis
164→04:13:57[INFO]All 9 brief sections emitted. Now performing final review pass — checking for internal contradictions, missed propagation of findings across sections, and any needed revisions.
165→04:14:00[INFO]Now let me do the final review pass. Let me re-read my notes and check for inconsistencies:
166→04:14:39[INFO]Final review pass: checking all 9 sections for internal consistency. Verifying that dormancy findings, naming collision, D3 connection, experience discrepancy, and eligibility concerns are consistently threaded across all relevant sections.
167→04:14:42[INFO]Let me flag the key items that need human investigation and do my final self-assessment:
168→04:14:47[WARNING]HUMAN REVIEW [MEDIUM] [founder_profiles]: Shayne La Rocque is a former D3 employee (left January 2026, ~2 months before application). Application does not disclose this relationship. D3 may or may not have a policy governing former staff applications — this needs verification to ensure process integrity.
169→04:14:52[WARNING]HUMAN REVIEW [HIGH] [key_risks]: The claimed "working prototype" cannot be verified through any external source. Website (trainingclub.app) has DNS failure, GitHub repo dormant since Aug 2025, LinkedIn lists project as ended Sep 2025, no App Store listing exists, and Training Club is absent from Shayne's design portfolio.
170→04:15:25[SUCCESS]Self-assessment [Final Review Pass]: confidence=0.82, action=proceed
171→04:15:25[SUCCESS]Agent completed briefing.
172→
173→This is the literal run log: fetch failures, escalations, self-assessments, emitted sections, and final review flags.
174→
175→Nine cited sections, ranked flags, and a cleaner starting point for the ops team and panelists.
177→Research Resilience: The Fetch Cascade
178→
179→When the agent needs to research a URL, it doesn't just fetch and hope. It runs through a five-strategy cascade:
180→1. GitHub API: for github.com URLs, returns structured profile data (repos, stars, languages, bio)
184→2. Direct HTTP Fetch: with browser headers, HTML extraction, meta tags, Next.js SSR/RSC data extraction, and structured navigation links
188→3. Agent-Driven Exploration: the tool returns navigation links from the page; the agent decides which sub-pages are relevant to its research question and fetches those
192→4. Jina Reader: headless browser rendering for JavaScript-heavy SPAs that don't serve content in the initial HTML
196→5. Wayback Machine: CDX API lookup for dead or blocked sites; fetches the most recent archived snapshot
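The cascade reads as a first-success loop over strategy functions. A minimal sketch under stated assumptions: the function names and bodies here are illustrative stand-ins, not BriefBot's actual code, and strategy 3 (agent-driven exploration) is decided by the agent loop, so it is omitted from the fetcher list.

```python
from typing import Callable, Optional

def github_api(url: str) -> Optional[str]:
    # Strategy 1: structured profile data, but only for github.com URLs.
    return f"[github_api] {url}" if "github.com" in url else None

def direct_http(url: str) -> Optional[str]:
    # Strategy 2 stand-in: would fetch with browser headers and extract HTML.
    return None

def jina_reader(url: str) -> Optional[str]:
    # Strategy 4 stand-in: would render JavaScript-heavy SPAs headlessly.
    return None

def wayback(url: str) -> Optional[str]:
    # Strategy 5 stand-in: would return the most recent archived snapshot.
    return f"[wayback] {url}"

# Strategy 3 (agent-driven exploration) lives in the agent loop, not here.
CASCADE: list[Callable[[str], Optional[str]]] = [
    github_api, direct_http, jina_reader, wayback,
]

def fetch(url: str) -> str:
    """Try each strategy in order; the first non-None result wins."""
    for strategy in CASCADE:
        result = strategy(url)
        if result is not None:
            return result
    raise RuntimeError(f"All fetch strategies failed for {url}")
```

A real implementation would also record which strategy produced the result, as the log's `[github_api]` and `[login_wall]` tags suggest.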
200→Now let me check existing memory and read the rest of the BriefBot case study for the knowledge pattern.
{
  "file_path": "/Users/shaynelarocque/.claude/projects/-Users-shaynelarocque-Documents-GitHub-futuresight/memory/MEMORY.md"
}
File does not exist. Note: your current working directory is /Users/shaynelarocque/Documents/GitHub/futuresight.
ls /Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens/
{
  "subject": "Create knowledge directory with ORM domain files",
  "activeForm": "Creating knowledge files"
}
{
  "subject": "Create app/knowledge.py module",
  "activeForm": "Creating knowledge module"
}
{
  "subject": "Update models.py — add report_findings to Session",
  "activeForm": "Updating models"
}
{
  "subject": "Update store.py — add report findings helpers",
  "activeForm": "Updating store"
}
{
  "subject": "Restructure prompts.py into sandbox quadrants",
  "activeForm": "Restructuring system prompt"
}
{
  "subject": "Add 5 new tools to tools.py",
  "activeForm": "Adding new MCP tools"
}
{
  "subject": "Fix conversation context passing in agent.py",
  "activeForm": "Fixing agent context passing"
}
{
  "subject": "Update main.py — add report endpoint, wire knowledge",
  "activeForm": "Updating main.py"
}
Let me start with the independent tasks in parallel — knowledge files, knowledge module, models, and store.
mkdir -p /Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens/knowledge
# Review Analysis Framework

You're analysing customer reviews for an ORM consultancy. Your job is to turn raw review data into actionable intelligence — not summaries, not vibes, but findings a client can act on.

## Approach

**Start broad, then drill.** Begin with the overall sentiment landscape (rating distribution, volume trends), then zoom into specific aspects. Don't jump to conclusions from a handful of reviews — establish the baseline first.

**Statistical significance matters.** A single 1-star review mentioning "shipping" is an anecdote. Fifteen reviews mentioning shipping problems across three months is a signal. Always contextualise findings against the total review volume. Rules of thumb:

- < 5% of reviews mentioning an issue = isolated complaints
- 5–15% = emerging pattern worth monitoring
- 15%+ = established theme requiring attention
- Trend direction matters more than absolute numbers — 3% growing to 8% over two months is more urgent than a stable 12%

**Aspect-based sentiment is the core unit.** Don't just report "positive" or "negative." Break reviews into aspects and assess sentiment per aspect. A product can have excellent quality sentiment and terrible shipping sentiment simultaneously — that's the insight.

## Common Review Aspects by Domain

### E-commerce / Physical Products

- Product quality / durability
- Shipping speed and reliability
- Packaging
- Value for money / pricing
- Customer service responsiveness
- Return/exchange process
- Product accuracy (vs listing/photos)
- Size/fit (apparel)

### Software / SaaS

- Ease of use / UX
- Feature completeness
- Performance / speed / reliability
- Customer support quality
- Onboarding experience
- Pricing / value
- Integration capabilities
- Documentation quality
- Bug frequency

### Hospitality / Services

- Staff friendliness / professionalism
- Wait times
- Cleanliness / ambiance
- Value for money
- Food/service quality
- Booking/reservation experience
- Location / accessibility

### General (cross-domain)

- Overall satisfaction
- Recommendation likelihood
- Repeat purchase/visit intent
- Comparison to competitors
- Emotional tone (frustration, delight, indifference)

## Identifying Emerging Trends vs Established Patterns

**Emerging trend signals:**

- Sudden cluster of reviews mentioning a new topic
- Rating shift in recent reviews vs historical average
- New vocabulary appearing (e.g., reviews suddenly mentioning a specific feature or issue)
- Temporal clustering — multiple mentions within a short window

**Established pattern signals:**

- Consistent mention rate across months
- Appears across different rating levels (not just in complaints)
- Multiple phrasings for the same concept (indicates organic, independent mentions)

## Actionable vs Merely Interesting

An insight is **actionable** if it points to something the client can change, investigate, or respond to. "Reviews are generally positive" is not actionable. "38% of negative reviews cite response time to support tickets, with average sentiment worsening month-over-month" is actionable.

**The actionability test:**

1. Can someone at the company do something specific in response?
2. Is the finding specific enough to assign to a team or owner?
3. Does the data support the finding with enough volume to justify action?

If you can't answer yes to all three, it's context, not a recommendation.
File created successfully at: /Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens/knowledge/review-analysis-framework.md
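The mention-rate rules of thumb in that file reduce to a small threshold function. A minimal sketch using exactly the thresholds the framework states (the function name is mine, not part of the app):

```python
def classify_mention_rate(mentions: int, total_reviews: int) -> str:
    """Map an aspect's mention rate onto the framework's significance tiers."""
    if total_reviews <= 0:
        raise ValueError("total_reviews must be positive")
    rate = mentions / total_reviews
    if rate < 0.05:
        return "isolated complaints"   # < 5%
    if rate < 0.15:
        return "emerging pattern"      # 5-15%, worth monitoring
    return "established theme"         # 15%+, requires attention
```

As the framework notes, trend direction still matters more than the static tier: a rate moving from 3% to 8% is more urgent than a stable 12%.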
# Analysis Patterns
Templates for common analysis types. Use these as frameworks — adapt to what the data actually shows rather than forcing every analysis into a template.
## Sentiment Trend Analysis
**When to use:** User asks about how sentiment has changed, whether things are improving/declining, or what's different about recent reviews.
**Approach:**
1. Pull rating distribution over time (monthly buckets)
2. Search for recent reviews (last 1-2 months) and older reviews separately
3. Compare aspect mentions and sentiment between periods
4. Look for inflection points — when did the trend change?
5. Quantify: "Average rating dropped from X to Y between [month] and [month], driven primarily by [aspect]"
**Watch out for:** Volume changes masquerading as sentiment changes. If review volume doubled and rating dipped, the new reviewers may have different expectations — that's a different finding than "quality declined."
## Aspect Deep-Dive
**When to use:** User asks about a specific topic (e.g., "What do people say about customer service?")
**Approach:**
1. Search with multiple phrasings (e.g., "customer service", "support", "help", "response time", "staff")
2. Categorise sentiment: positive / negative / mixed / neutral
3. Extract specific sub-aspects (e.g., under "customer service": response time, knowledge, friendliness, resolution)
4. Quantify: counts, percentages of positive vs negative mentions
5. Pull representative quotes — one strong positive, one strong negative, one nuanced/mixed
**Deliverable structure:**
- Overall sentiment ratio for this aspect
- Key sub-themes with sentiment breakdown
- Notable quotes with attribution
- Comparison to overall review sentiment (is this aspect better or worse than the product's average?)
## Churn Signal Detection
**When to use:** User asks about customer retention risks, reasons for dissatisfaction, or "why are people leaving."
**Signals to search for:**
- Explicit churn language: "switching to", "cancelled", "won't buy again", "looking for alternatives", "last time"
- Disappointment trajectory: reviews that start positive then turn ("used to love this but...")
- Unresolved complaints: mentions of contacting support without resolution
- Competitive mentions: naming specific alternatives
- Deal-breakers: strong negative language about a single aspect that outweighs everything else
**Quantify:** What percentage of negative reviews contain churn signals? Which aspects are most associated with churn language?
## Feature Request Extraction
**When to use:** User asks what customers want, what's missing, or what to build/improve next.
**Search terms:** "wish", "would be nice", "should have", "missing", "need", "hope they add", "if only", "compared to [competitor]"
**Categorise by:**
- Frequency (how many reviewers independently request this?)
- Feasibility signal (are they asking for something that exists in the market?)
- Sentiment context (is the missing feature a deal-breaker or a nice-to-have?)
## Rating Distribution Interpretation
**Common patterns and what they mean:**
- **J-curve (skewed high):** Most reviews are 4-5 stars, few in the middle. Normal for products with self-selected reviewers. The 1-star reviews are disproportionately informative.
- **Bimodal (peaks at 1 and 5):** Polarising product. Two distinct customer segments with very different experiences. Investigate what differentiates the groups.
- **Normal distribution (bell curve around 3):** Unusual for reviews. May indicate a commodity product or forced/incentivised reviews.
- **Declining tail (high ratings trending down over time):** Quality or expectation problem developing. Urgent signal.
- **Volume spike with rating dip:** External event (viral post, sale bringing new audience, product change). Investigate timing.
## Competitive Gap Analysis
**When to use:** Reviews mention competitors, or user asks about competitive positioning.
**Approach:**
1. Search for competitor brand names and comparative language ("better than", "worse than", "compared to", "switched from")
2. Map which aspects reviewers compare on
3. Identify where the product wins and loses vs specific competitors
4. Note: reviewers who mention competitors are usually making a deliberate comparison — their insights are high-signal
## Pain Point Mapping
**When to use:** User asks about problems, complaints, or areas for improvement.
**Approach:**
1. Focus on 1-3 star reviews
2. Extract and cluster complaint themes
3. Rank by: frequency, severity (how angry?), recency (getting worse?)
4. For each pain point: specific quotes, count, trend direction, and which customer segment is most affected
5. Distinguish between product issues (fixable) and expectation mismatches (messaging problem)
File created successfully at: /Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens/knowledge/analysis-patterns.md
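The rating-distribution patterns in analysis-patterns.md can be approximated by a simple histogram classifier. A hedged sketch: the share thresholds below are illustrative assumptions, not values from the file.

```python
def distribution_shape(counts: dict[int, int]) -> str:
    """Classify a 1-5 star histogram into the patterns named in analysis-patterns.md.

    Thresholds are illustrative heuristics, not part of the knowledge file.
    """
    total = sum(counts.values()) or 1
    share = {star: counts.get(star, 0) / total for star in range(1, 6)}
    if share[1] > 0.2 and share[5] > 0.2 and share[3] < 0.15:
        return "bimodal"      # polarising product, two distinct segments
    if share[4] + share[5] > 0.7:
        return "J-curve"      # skewed high; 1-star reviews are the informative tail
    if share[3] > max(share[1], share[5]):
        return "bell curve"   # unusual for reviews; possibly incentivised
    return "mixed"
```

The temporal patterns (declining tail, volume spike with rating dip) need time-bucketed data and are not captured by a single histogram.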
# Report Structure

When the user asks for a report or summary, assemble findings into this structure. Not every section is required — include what the conversation has actually covered. An honest short report beats a padded long one.

## Report Sections

### Executive Summary

2-3 paragraphs maximum. Lead with the single most important finding. Include: overall sentiment posture (positive/negative/mixed with numbers), the top 2-3 themes, and one clear recommendation. A busy executive should be able to read only this section and walk away informed.

### Dataset Overview

Brief factual summary of what was analysed:

- Product/entity name and platform
- Total review count and date range
- Average rating and distribution shape (e.g., "skewed positive with a J-curve distribution")
- Any notable data quality issues (missing dates, rating gaps, etc.)

### Key Findings

The core insights, ordered by importance (not by when they were discovered). Each finding should follow this structure:

- **Finding statement** — one clear sentence (e.g., "Shipping complaints increased 40% in Q4")
- **Evidence** — specific numbers and representative quotes from reviews
- **Implication** — what this means for the business

Aim for 3-7 key findings. More than 7 suggests you haven't prioritised.

### Sentiment Breakdown

Aspect-level sentiment analysis. For each major aspect identified:

- Sentiment ratio (positive/negative/neutral %)
- Trend direction (improving, stable, declining)
- Key quotes

A chart here is almost always valuable — stacked bar or horizontal bar showing aspect sentiment distribution.

### Risk Signals

Issues that warrant attention or monitoring. Be specific about severity:

- **High risk:** Active and worsening, significant volume, potential churn driver
- **Medium risk:** Present but stable, or emerging trend with insufficient data to confirm
- **Low risk:** Isolated mentions, but worth monitoring

### Recommendations

Concrete, actionable suggestions tied directly to findings. Each recommendation should reference the finding it addresses. Format:

- **Action** — what to do
- **Rationale** — which finding drives this
- **Priority** — high/medium/low based on impact and urgency

## Citation Standards

Every factual claim in the report must be traceable to actual review data:

- Quote or paraphrase specific reviews when supporting a finding
- Include counts and percentages from statistical analysis
- Reference the search queries used to surface the data
- If a finding is based on sparse data (< 5 reviews), flag it as preliminary

Never state something as a finding if it came from general knowledge rather than the actual review data. If the data is insufficient to support a section, say so and skip it rather than padding with generic advice.

## Formatting

- Use markdown headers and bullet points for scannability
- Bold key numbers and finding statements
- Keep paragraphs short (2-3 sentences max)
- Charts should have clear titles and labels — they may be viewed without surrounding context
- Include a "Data Limitations" note if relevant (e.g., small sample size, date gaps, single-platform data)
File created successfully at: /Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens/knowledge/report-structure.md
"""Knowledge base loader — domain reference files for the agent.

Loads markdown files from /knowledge on startup, caches them,
and provides get(name) and list_files() for tool access.
"""
from __future__ import annotations

from pathlib import Path

_KNOWLEDGE_DIR = Path(__file__).resolve().parent.parent / "knowledge"

_cache: dict[str, str] = {}
_summaries: dict[str, str] = {}


def load() -> None:
    """Load all markdown files from the knowledge directory into cache."""
    _cache.clear()
    _summaries.clear()
    if not _KNOWLEDGE_DIR.is_dir():
        return
    for path in sorted(_KNOWLEDGE_DIR.glob("*.md")):
        name = path.stem
        content = path.read_text(encoding="utf-8")
        _cache[name] = content
        # First non-empty, non-heading line as summary
        for line in content.splitlines():
            stripped = line.strip()
            if stripped and not stripped.startswith("#"):
                _summaries[name] = stripped[:120]
                break
        else:
            _summaries[name] = "(no summary)"


def get(name: str) -> str | None:
    """Get a knowledge file by name (stem, without .md extension).

    Supports exact match and fuzzy prefix match.
    """
    if name in _cache:
        return _cache[name]
    # Try prefix match
    for key in _cache:
        if key.startswith(name) or name in key:
            return _cache[key]
    return None


def list_files() -> list[dict[str, str]]:
    """List available knowledge files with summaries."""
    return [
        {"name": name, "summary": _summaries.get(name, "")}
        for name in sorted(_cache.keys())
    ]
File created successfully at: /Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens/app/knowledge.py
{
  "file_path": "/Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens/app/models.py"
}
  1→"""Pydantic models for ReviewLens."""
2→
3→from __future__ import annotations
4→
5→from datetime import datetime
6→from enum import Enum
7→from typing import Any
8→
9→from pydantic import BaseModel, Field
10→
11→
12→class Review(BaseModel):
13→ """A single review record."""
14→
15→ id: str = ""
16→ text: str
17→ rating: float | None = None
18→ date: datetime | None = None
19→ author: str = ""
20→ platform: str = ""
21→ metadata: dict[str, Any] = Field(default_factory=dict)
22→
23→
24→class IngestionSummary(BaseModel):
25→ """Summary shown after ingestion."""
26→
27→ total_reviews: int = 0
28→ date_range: str = ""
29→ rating_distribution: dict[str, int] = Field(default_factory=dict)
30→ average_rating: float | None = None
31→ platform: str = ""
32→ product_name: str = ""
33→ source_type: str = "" # "csv" or "url"
34→
35→
36→class ChatMessage(BaseModel):
37→ """A single chat message."""
38→
39→ role: str # "user" or "assistant"
40→ content: str
41→ charts: list[dict[str, Any]] = Field(default_factory=list)
42→ follow_ups: list[str] = Field(default_factory=list)
43→ timestamp: datetime = Field(default_factory=datetime.utcnow)
44→
45→
46→class Session(BaseModel):
47→ """A review analysis session."""
48→
49→ session_id: str
50→ product_name: str = ""
51→ platform: str = ""
52→ summary: IngestionSummary = Field(default_factory=IngestionSummary)
53→ messages: list[ChatMessage] = Field(default_factory=list)
54→ status: str = "pending" # pending, ready, error
55→ created_at: datetime = Field(default_factory=datetime.utcnow)
{
  "file_path": "/Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens/app/store.py"
}
  1→"""File-based session and message persistence."""
2→
3→from __future__ import annotations
4→
5→import json
6→import os
7→import shutil
8→from pathlib import Path
9→
10→from .models import ChatMessage, IngestionSummary, Session
11→
12→DATA_DIR = Path(os.getenv("DATA_DIR", "data"))
13→
14→
15→def _session_dir(session_id: str) -> Path:
16→ d = DATA_DIR / session_id
17→ d.mkdir(parents=True, exist_ok=True)
18→ return d
19→
20→
21→# ── Session lifecycle ────────────────────────────────────────────────
22→
23→def save_session(session: Session) -> None:
24→ path = _session_dir(session.session_id) / "session.json"
25→ path.write_text(session.model_dump_json(indent=2))
26→
27→
28→def load_session(session_id: str) -> Session | None:
29→ path = _session_dir(session_id) / "session.json"
30→ if not path.exists():
31→ return None
32→ return Session.model_validate_json(path.read_text())
33→
34→
35→def set_status(session_id: str, status: str) -> None:
36→ session = load_session(session_id)
37→ if session:
38→ session.status = status
39→ save_session(session)
40→
41→
42→def update_summary(session_id: str, summary: IngestionSummary) -> None:
43→ session = load_session(session_id)
44→ if session:
45→ session.summary = summary
46→ session.product_name = summary.product_name
47→ session.platform = summary.platform
48→ save_session(session)
49→
50→
51→# ── Chat messages ────────────────────────────────────────────────────
52→
53→def append_message(session_id: str, message: ChatMessage) -> None:
54→ session = load_session(session_id)
55→ if session:
56→ session.messages.append(message)
57→ save_session(session)
58→
59→
60→def get_messages(session_id: str) -> list[ChatMessage]:
61→ session = load_session(session_id)
62→ return session.messages if session else []
63→
64→
65→# ── Reviews (raw JSON for reference) ────────────────────────────────
66→
67→def save_reviews_json(session_id: str, reviews: list[dict]) -> None:
68→ path = _session_dir(session_id) / "reviews.json"
69→ path.write_text(json.dumps(reviews, default=str, indent=2))
70→
71→
72→def load_reviews_json(session_id: str) -> list[dict]:
73→ path = _session_dir(session_id) / "reviews.json"
74→ if not path.exists():
75→ return []
76→ return json.loads(path.read_text())
77→
78→
79→# ── Session listing ──────────────────────────────────────────────────
80→
81→def list_sessions() -> list[Session]:
82→ """Return all sessions, newest first."""
83→ sessions = []
84→ if not DATA_DIR.exists():
85→ return sessions
86→ for d in DATA_DIR.iterdir():
87→ if not d.is_dir():
88→ continue
89→ path = d / "session.json"
90→ if path.exists():
91→ try:
92→ sessions.append(Session.model_validate_json(path.read_text()))
93→ except Exception:
94→ continue
95→ sessions.sort(key=lambda s: s.created_at, reverse=True)
96→ return sessions
97→
98→
99→def delete_session(session_id: str) -> bool:
100→ """Delete a session and all its data. Returns True if it existed."""
101→ d = DATA_DIR / session_id
102→ if d.exists() and d.is_dir():
103→ shutil.rmtree(d)
104→ return True
105→ return False
Edit app/models.py (adding report_findings to Session):
    messages: list[ChatMessage] = Field(default_factory=list)
    report_findings: dict[str, list[str]] = Field(default_factory=dict)
    status: str = "pending"  # pending, ready, error
    created_at: datetime = Field(default_factory=datetime.utcnow)
The file /Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens/app/models.py has been updated successfully.
# ── Report findings ──────────────────────────────────────────────────

def append_finding(session_id: str, section: str, content: str) -> None:
    """Append a finding to a report section."""
    session = load_session(session_id)
    if session:
        session.report_findings.setdefault(section, []).append(content)
        save_session(session)


def get_findings(session_id: str) -> dict[str, list[str]]:
    """Get all saved report findings for a session."""
    session = load_session(session_id)
    return session.report_findings if session else {}


# ── Reviews (raw JSON for reference) ────────────────────────────────

The file /Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens/app/store.py has been updated successfully.
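The setdefault-plus-append accumulation pattern behind `append_finding` can be sketched standalone. This is a minimal illustration only: the Session model and persistence layer are replaced by a plain dict, and the finding strings are invented.

```python
# Minimal sketch of the report-findings accumulation pattern.
# `findings` stands in for Session.report_findings; no persistence here.
findings: dict[str, list[str]] = {}

def append_finding(section: str, content: str) -> None:
    # setdefault creates the section list on first use, then appends
    findings.setdefault(section, []).append(content)

append_finding("risks", "Shipping delays dominate recent 1-star reviews")
append_finding("risks", "Battery complaints cluster in the last quarter")
append_finding("wins", "Customer service is praised repeatedly")

assert list(findings) == ["risks", "wins"]
assert len(findings["risks"]) == 2
```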
{
  "file_path": "/Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens/app/prompts.py"
}

"""System prompt builder for the ReviewLens agent."""
from __future__ import annotations

from .models import IngestionSummary


def build_system_prompt(summary: IngestionSummary) -> str:
    """Build the scope-guarded system prompt for the conversational agent."""

    rating_info = ""
    if summary.rating_distribution:
        dist_lines = [f"  {k}★: {v} reviews" for k, v in sorted(summary.rating_distribution.items())]
        rating_info = "\n".join(dist_lines)

    return f"""You are ReviewLens AI, a review intelligence analyst. You help users analyse customer reviews that have been ingested into your system.

## Dataset Context

- Product/Entity: {summary.product_name}
- Platform: {summary.platform}
- Total Reviews: {summary.total_reviews}
- Date Range: {summary.date_range or "Unknown"}
- Average Rating: {summary.average_rating or "Unknown"}
- Rating Distribution:
{rating_info or "  Not available"}

## CRITICAL: Scope Guard — Your #1 Rule

You MUST ONLY answer questions about the ingested review dataset described above. This is non-negotiable.

**You MUST refuse if the user asks about:**
- Reviews from other platforms (e.g., if data is from Amazon, refuse questions about Google Maps reviews)
- Competitor products or brands not mentioned in the reviews
- General knowledge, news, current events, or facts not in the dataset
- Predictions, forecasts, or speculation beyond what the data shows
- Anything requiring information you don't have from the reviews

**How to refuse:**
- Be friendly but firm: "I can only analyse the {summary.total_reviews} {summary.platform} reviews for {summary.product_name} that were uploaded. I don't have access to [what they asked about]. Would you like to explore something in this dataset instead?"
- Always suggest a relevant alternative question about the actual data.

**Before answering ANY question, verify:**
1. Can this be answered using ONLY the ingested reviews?
2. If you used the search_reviews tool, did it return relevant results?
3. Are you about to state something that comes from your general knowledge rather than the data? If so, STOP.

If your search returns no relevant results, say so honestly. Never fill gaps with general knowledge.

## Your Tools

You have access to these tools — use them to give data-grounded answers:

- **search_reviews**: Semantic search over the review database. Use this to find reviews relevant to the user's question. Always search before answering.
- **analyze_sentiment**: Extract aspects and sentiment from a set of reviews. Use for sentiment breakdowns, aspect analysis, and opinion mining.
- **generate_chart**: Create a Chart.js chart that renders inline in the chat. Use when a visual would communicate the answer better than text. Keep charts clean and focused.
- **calculate_stats**: Run aggregations, distributions, and trend analysis on the review data. Use for quantitative questions.
- **suggest_follow_ups**: Generate contextual follow-up question buttons based on what was just discussed. Call this at the end of EVERY response.

## Response Guidelines

1. **Always search first.** Before answering any question about the reviews, call search_reviews to ground your response in actual data.
2. **Cite specific reviews.** Quote or paraphrase actual review text to support your claims. Use phrases like "One reviewer noted..." or "Several reviews mention..."
3. **Use charts strategically.** A chart adds value for distributions, trends over time, and comparisons. Don't chart everything — use them when visual communication is genuinely better.
4. **Be quantitative.** When you can give numbers (counts, percentages, averages), do so. Use calculate_stats for aggregations.
5. **Always suggest follow-ups.** End every response by calling suggest_follow_ups to give the user contextual next steps.
6. **Be concise.** Users are analysts who want insights, not essays. Lead with the finding, support with data, suggest next steps.
"""
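The rating-distribution block that the builder interpolates can be exercised in isolation. A small sketch, with made-up counts standing in for `summary.rating_distribution`:

```python
# Format a rating distribution the way build_system_prompt does:
# sorted by star value, one indented line per rating bucket.
rating_distribution = {5: 120, 1: 12, 4: 45, 3: 20}
dist_lines = [f"  {k}★: {v} reviews" for k, v in sorted(rating_distribution.items())]
rating_info = "\n".join(dist_lines)

assert rating_info.splitlines()[0] == "  1★: 12 reviews"
assert rating_info.splitlines()[-1] == "  5★: 120 reviews"
```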
{
  "file_path": "/Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens/app/tools.py"
}

"""MCP tools for the ReviewLens agent, using create_sdk_mcp_server."""
from __future__ import annotations

import json
from collections import Counter
from typing import Any, Callable, Awaitable

from claude_agent_sdk import create_sdk_mcp_server, tool

from . import vectordb

# Type alias for the SSE emit callback
EmitFn = Callable[[str, str, str], Awaitable[None]]


def create_review_tools_server(
    session_id: str,
    emit_fn: EmitFn,
):
    """Create the MCP server with all review analysis tools.

    Like briefbot, uses closure over session_id so tools access the right data.
    """

    # ── search_reviews ───────────────────────────────────────────────

    @tool(
        name="search_reviews",
        description="Semantic search over the ingested review database. Use this to find reviews relevant to the user's question. Returns the most relevant reviews ranked by similarity.",
        input_schema={
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The search query — describe what you're looking for in natural language.",
                },
                "n_results": {
                    "type": "integer",
                    "description": "Number of results to return (default 10, max 25).",
                    "default": 10,
                },
                "min_rating": {
                    "type": "number",
                    "description": "Optional: only return reviews with rating >= this value.",
                },
                "max_rating": {
                    "type": "number",
                    "description": "Optional: only return reviews with rating <= this value.",
                },
            },
            "required": ["query"],
        },
    )
    async def search_reviews_tool(args: dict[str, Any]) -> dict[str, Any]:
        query = args["query"]
        n = min(args.get("n_results", 10), 25)

        where = None
        if "min_rating" in args and "max_rating" in args:
            where = {
                "$and": [
                    {"rating": {"$gte": args["min_rating"]}},
                    {"rating": {"$lte": args["max_rating"]}},
                ]
            }
        elif "min_rating" in args:
            where = {"rating": {"$gte": args["min_rating"]}}
        elif "max_rating" in args:
            where = {"rating": {"$lte": args["max_rating"]}}

        results = vectordb.search_reviews(session_id, query, n_results=n, where=where)

        await emit_fn(
            session_id,
            f"Searched reviews: \"{query}\" — {len(results)} results",
            "tool",
        )

        return {
            "content": [
                {
                    "type": "text",
                    "text": json.dumps(
                        {
                            "query": query,
                            "result_count": len(results),
                            "results": results,
                            "note": "If no results are relevant, tell the user you couldn't find matching reviews. Do NOT make up information.",
                        }
                    ),
                }
            ]
        }

    # ── analyze_sentiment ────────────────────────────────────────────

    @tool(
        name="analyze_sentiment",
        description="Analyse sentiment and extract aspects from reviews matching a query. Returns aspect-sentiment pairs and overall sentiment distribution. Use for questions about what people like/dislike, pain points, praise, etc.",
        input_schema={
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Search query to find relevant reviews for analysis.",
                },
                "n_reviews": {
                    "type": "integer",
                    "description": "Number of reviews to analyse (default 15, max 30).",
                    "default": 15,
                },
            },
            "required": ["query"],
        },
    )
    async def analyze_sentiment_tool(args: dict[str, Any]) -> dict[str, Any]:
        query = args["query"]
        n = min(args.get("n_reviews", 15), 30)

        results = vectordb.search_reviews(session_id, query, n_results=n)

        if not results:
            return {
                "content": [
                    {
                        "type": "text",
                        "text": json.dumps({"error": "No reviews found matching query.", "results": []}),
                    }
                ]
            }

        # Provide the raw reviews to the LLM — it does the actual sentiment analysis.
        # This is more flexible and accurate than a rule-based approach.
        await emit_fn(
            session_id,
            f"Analysing sentiment: \"{query}\" — {len(results)} reviews",
            "tool",
        )

        return {
            "content": [
                {
                    "type": "text",
                    "text": json.dumps(
                        {
                            "query": query,
                            "review_count": len(results),
                            "reviews": results,
                            "instruction": (
                                "Analyse these reviews for sentiment and aspects. "
                                "For each review, identify: (1) key aspects mentioned "
                                "(e.g., 'battery life', 'customer service', 'price'), "
                                "(2) sentiment per aspect (positive/negative/neutral/mixed), "
                                "(3) overall sentiment. Then summarise the patterns across all reviews. "
                                "Only report what the reviews actually say."
                            ),
                        }
                    ),
                }
            ]
        }

    # ── generate_chart ───────────────────────────────────────────────

    @tool(
        name="generate_chart",
        description="Generate a Chart.js chart configuration that renders inline in the chat. Use for visualising rating distributions, sentiment breakdowns, trends over time, aspect comparisons, etc. The chart renders automatically — just return valid config.",
        input_schema={
            "type": "object",
            "properties": {
                "chart_type": {
                    "type": "string",
                    "enum": ["bar", "line", "pie", "doughnut"],
                    "description": "The type of chart to generate.",
                },
                "title": {
                    "type": "string",
                    "description": "Chart title displayed above the visualisation.",
                },
                "labels": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "X-axis labels (categories, dates, etc.).",
                },
                "datasets": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "label": {"type": "string"},
                            "data": {
                                "type": "array",
                                "items": {"type": "number"},
                            },
                        },
                        "required": ["label", "data"],
                    },
                    "description": "One or more data series.",
                },
            },
            "required": ["chart_type", "title", "labels", "datasets"],
        },
    )
    async def generate_chart_tool(args: dict[str, Any]) -> dict[str, Any]:
        chart_config = {
            "type": args["chart_type"],
            "title": args["title"],
            "data": {
                "labels": args["labels"],
                "datasets": args["datasets"],
            },
        }

        await emit_fn(
            session_id,
            f"Generated chart: {args['title']}",
            "tool",
        )

        return {
            "content": [
                {
                    "type": "text",
                    "text": json.dumps(
                        {
                            "chart": chart_config,
                            "instruction": "This chart will render automatically in the chat. Reference it in your text response.",
                        }
                    ),
                }
            ]
        }

    # ── calculate_stats ──────────────────────────────────────────────

    @tool(
        name="calculate_stats",
        description="Calculate aggregate statistics over the full review dataset. Use for quantitative questions: average ratings, distributions, counts by category, trends over time periods, etc.",
        input_schema={
            "type": "object",
            "properties": {
                "operation": {
                    "type": "string",
                    "enum": [
                        "rating_distribution",
                        "rating_over_time",
                        "review_volume_over_time",
                        "keyword_frequency",
                        "summary_stats",
                    ],
                    "description": "The type of statistical analysis to run.",
                },
                "keyword": {
                    "type": "string",
                    "description": "For keyword_frequency: the keyword or phrase to count.",
                },
            },
            "required": ["operation"],
        },
    )
    async def calculate_stats_tool(args: dict[str, Any]) -> dict[str, Any]:
        operation = args["operation"]
        all_reviews = vectordb.get_all_reviews(session_id)

        if not all_reviews:
            return {
                "content": [
                    {
                        "type": "text",
                        "text": json.dumps({"error": "No reviews in database."}),
                    }
                ]
            }

        result: dict[str, Any] = {"operation": operation}

        if operation == "rating_distribution":
            ratings = [r["metadata"].get("rating") for r in all_reviews if r["metadata"].get("rating") is not None]
            dist = Counter(int(round(r)) for r in ratings)
            result["distribution"] = {str(k): v for k, v in sorted(dist.items())}
            result["total_with_ratings"] = len(ratings)
            result["total_reviews"] = len(all_reviews)
            if ratings:
                result["average"] = round(sum(ratings) / len(ratings), 2)

        elif operation == "rating_over_time":
            by_month: dict[str, list[float]] = {}
            for r in all_reviews:
                date_str = r["metadata"].get("date", "")
                rating = r["metadata"].get("rating")
                if date_str and rating is not None:
                    month = date_str[:7]  # YYYY-MM
                    by_month.setdefault(month, []).append(rating)
            result["monthly_averages"] = {
                m: round(sum(v) / len(v), 2)
                for m, v in sorted(by_month.items())
            }

        elif operation == "review_volume_over_time":
            by_month: dict[str, int] = {}
            for r in all_reviews:
                date_str = r["metadata"].get("date", "")
                if date_str:
                    month = date_str[:7]
                    by_month[month] = by_month.get(month, 0) + 1
            result["monthly_volume"] = dict(sorted(by_month.items()))

        elif operation == "keyword_frequency":
            keyword = args.get("keyword", "").lower()
            if not keyword:
                return {
                    "content": [
                        {
                            "type": "text",
                            "text": json.dumps({"error": "keyword parameter required"}),
                        }
                    ]
                }
            count = sum(1 for r in all_reviews if keyword in r["text"].lower())
            result["keyword"] = keyword
            result["count"] = count
            result["percentage"] = round(count / len(all_reviews) * 100, 1)

        elif operation == "summary_stats":
            ratings = [r["metadata"].get("rating") for r in all_reviews if r["metadata"].get("rating") is not None]
            result["total_reviews"] = len(all_reviews)
            result["total_with_ratings"] = len(ratings)
            if ratings:
                result["average_rating"] = round(sum(ratings) / len(ratings), 2)
                result["min_rating"] = min(ratings)
                result["max_rating"] = max(ratings)
            dates = [r["metadata"].get("date", "") for r in all_reviews if r["metadata"].get("date")]
            if dates:
                result["earliest_review"] = min(dates)
                result["latest_review"] = max(dates)
            avg_length = sum(len(r["text"]) for r in all_reviews) / len(all_reviews)
            result["average_review_length"] = round(avg_length)

        await emit_fn(
            session_id,
            f"Calculated stats: {operation}",
            "tool",
        )

        return {
            "content": [
                {
                    "type": "text",
                    "text": json.dumps(result),
                }
            ]
        }

    # ── suggest_follow_ups ───────────────────────────────────────────

    @tool(
        name="suggest_follow_ups",
        description="Generate contextual follow-up question buttons based on the current conversation. Call this at the END of every response. The questions should be specific to what was just discussed and lead the user deeper into the data.",
        input_schema={
            "type": "object",
            "properties": {
                "questions": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "3-4 contextual follow-up questions. Make them specific and actionable.",
                    "minItems": 2,
                    "maxItems": 4,
                },
            },
            "required": ["questions"],
        },
    )
    async def suggest_follow_ups_tool(args: dict[str, Any]) -> dict[str, Any]:
        questions = args["questions"]

        await emit_fn(
            session_id,
            f"Suggested {len(questions)} follow-up questions",
            "tool",
        )

        return {
            "content": [
                {
                    "type": "text",
                    "text": json.dumps(
                        {
                            "follow_ups": questions,
                            "instruction": "These will appear as clickable buttons below your message. Do not repeat them in your text response.",
                        }
                    ),
                }
            ]
        }

    # ── Build and return the MCP server ──────────────────────────────

    return create_sdk_mcp_server(
        name="reviewlens",
        version="1.0.0",
        tools=[
            search_reviews_tool,
            analyze_sentiment_tool,
            generate_chart_tool,
            calculate_stats_tool,
            suggest_follow_ups_tool,
        ],
    )
{
  "file_path": "/Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens/app/agent.py"
}

"""Conversational agent loop — invoked per-message, not per-session."""
from __future__ import annotations

import json
import os
from typing import Any

from claude_agent_sdk import (
    AssistantMessage,
    ClaudeAgentOptions,
    ClaudeSDKClient,
    HookMatcher,
    ResultMessage,
    TextBlock,
)

from .models import ChatMessage, IngestionSummary
from .prompts import build_system_prompt
from .tools import create_review_tools_server

DEFAULT_MODEL = "claude-sonnet-4-6"


async def handle_message(
    session_id: str,
    user_message: str,
    conversation_history: list[ChatMessage],
    summary: IngestionSummary,
    emit_fn,
) -> ChatMessage:
    """Run the agent for a single user message. Returns the assistant's response."""

    model = os.getenv("CLAUDE_MODEL", DEFAULT_MODEL)
    system_prompt = build_system_prompt(summary)

    # Build conversation context for the agent
    messages_for_context = []
    for msg in conversation_history[-20:]:  # Last 20 messages for context window
        messages_for_context.append(f"{'User' if msg.role == 'user' else 'Assistant'}: {msg.content}")

    prompt_parts = []
    if messages_for_context:
        prompt_parts.append("Previous conversation:\n" + "\n".join(messages_for_context))
    prompt_parts.append(f"User: {user_message}")
    prompt = "\n\n".join(prompt_parts)

    # Create per-request MCP server (closure over session_id)
    server = create_review_tools_server(
        session_id=session_id,
        emit_fn=emit_fn,
    )

    # Track tool outputs for charts and follow-ups
    charts: list[dict[str, Any]] = []
    follow_ups: list[str] = []

    async def post_tool_hook(input_data, tool_use_id, context):
        """Intercept tool results to extract charts and follow-ups."""
        # The tool result is in context
        try:
            result = context.get("result", {})
            content = result.get("content", [])
            for block in content:
                if block.get("type") == "text":
                    data = json.loads(block["text"])
                    if "chart" in data:
                        charts.append(data["chart"])
                    if "follow_ups" in data:
                        follow_ups.extend(data["follow_ups"])
        except (json.JSONDecodeError, KeyError, TypeError):
            pass
        return {}

    options = ClaudeAgentOptions(
        system_prompt=system_prompt,
        allowed_tools=["mcp__reviewlens__*"],
        permission_mode="bypassPermissions",
        max_turns=15,
        model=model,
        mcp_servers={"reviewlens": server},
        hooks={
            "PostToolUse": [HookMatcher(matcher=".*", hooks=[post_tool_hook])],
        },
    )

    response_text = ""

    try:
        async with ClaudeSDKClient(options=options) as client:
            await client.query(prompt)
            async for message in client.receive_response():
                if isinstance(message, AssistantMessage):
                    for block in message.content:
                        if isinstance(block, TextBlock) and block.text.strip():
                            response_text += block.text.strip() + "\n"
                elif isinstance(message, ResultMessage):
                    if message.result and not response_text.strip():
                        response_text = message.result
    except Exception as e:
        response_text = f"I encountered an error processing your question. Please try again.\n\nError: {str(e)}"
        await emit_fn(session_id, f"Agent error: {e}", "error")

    return ChatMessage(
        role="assistant",
        content=response_text.strip(),
        charts=charts,
        follow_ups=follow_ups,
    )
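The PostToolUse hook's extraction logic can be tested standalone against a fabricated tool-result payload. The payload below mirrors the `{"content": [{"type": "text", ...}]}` shape the tools return, but its values are invented for illustration:

```python
import json

charts: list[dict] = []
follow_ups: list[str] = []

# Fabricated tool result in the shape the MCP tools return.
result = {"content": [{"type": "text",
                       "text": json.dumps({"follow_ups": ["Why did ratings dip in June?"]})}]}

for block in result.get("content", []):
    if block.get("type") == "text":
        try:
            data = json.loads(block["text"])
        except json.JSONDecodeError:
            continue  # non-JSON text blocks are simply skipped
        if "chart" in data:
            charts.append(data["chart"])
        if "follow_ups" in data:
            follow_ups.extend(data["follow_ups"])

assert follow_ups == ["Why did ratings dip in June?"]
assert charts == []  # no chart key in this payload
```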
Now I'll rewrite all three files. Let me do prompts.py and agent.py in parallel, then tools.py (which is the largest).
"""System prompt builder for the ReviewLens agent — sandbox quadrant structure."""

from __future__ import annotations

from .models import IngestionSummary


def build_system_prompt(summary: IngestionSummary) -> str:
    """Build the scope-guarded system prompt using the sandbox pattern.

    Four quadrants: Knowledge, Tools, Goal, Guidelines.
    Plus the scope guard as a hard boundary.
    """

    rating_info = ""
    if summary.rating_distribution:
        dist_lines = [f"  {k}★: {v} reviews" for k, v in sorted(summary.rating_distribution.items())]
        rating_info = "\n".join(dist_lines)

    return f"""You are ReviewLens AI, a review intelligence analyst for an ORM (Online Reputation Management) consultancy. You analyse customer reviews that have been ingested into your system and turn raw data into actionable intelligence.

## Dataset Context

- Product/Entity: {summary.product_name}
- Platform: {summary.platform}
- Total Reviews: {summary.total_reviews}
- Date Range: {summary.date_range or "Unknown"}
- Average Rating: {summary.average_rating or "Unknown"}
- Rating Distribution:
{rating_info or "  Not available"}

---

## CRITICAL: Scope Guard — Your #1 Rule

You MUST ONLY answer questions about the ingested review dataset described above. This is non-negotiable.

**You MUST refuse if the user asks about:**
- Reviews from other platforms (e.g., if data is from Amazon, refuse questions about Google Maps reviews)
- Competitor products or brands not mentioned in the reviews
- General knowledge, news, current events, or facts not in the dataset
- Predictions, forecasts, or speculation beyond what the data shows
- Anything requiring information you don't have from the reviews

**How to refuse:**
- Be friendly but firm: "I can only analyse the {summary.total_reviews} {summary.platform} reviews for {summary.product_name} that were uploaded. I don't have access to [what they asked about]. Would you like to explore something in this dataset instead?"
- Always suggest a relevant alternative question about the actual data.

**Before answering ANY question, verify:**
1. Can this be answered using ONLY the ingested reviews?
2. If you used search_reviews, did it return relevant results?
3. Are you about to state something from your general knowledge rather than the data? If so, STOP.

If your search returns no relevant results, say so honestly. Never fill gaps with general knowledge. When in doubt, call check_scope to validate whether a question is answerable from this dataset.

---

## Knowledge

You have a reference library of ORM domain knowledge available via tools. These files contain analytical frameworks, analysis templates, and report structures written by experienced ORM analysts.

**Use the knowledge library when:**
- You need a framework for a type of analysis (e.g., churn signal detection, competitive gap analysis)
- The user asks for a report and you need the report structure template
- You're unsure how to approach a particular analysis pattern
- You want to provide more structured, professional-grade analysis

**How to access:**
1. Call `list_knowledge_files` to see what's available
2. Call `read_knowledge_file` with the file name to read a specific reference

You don't need to read knowledge files for every question — use them when they'll genuinely improve your analysis quality. For straightforward questions ("what's the average rating?"), just use your tools directly.

---

## Tools

You have these tools available. Use them to give data-grounded answers:

### Data Tools
- **search_reviews** — Semantic search over the review database. Always search before answering a question about review content. Use multiple queries with different phrasings for thorough analysis.
- **analyze_sentiment** — Extract aspects and sentiment from reviews matching a query. Use for sentiment breakdowns, aspect analysis, and opinion mining.
- **calculate_stats** — Run aggregations, distributions, and trend analysis. Use for quantitative questions (averages, distributions, volume over time, keyword frequency).

### Presentation Tools
- **generate_chart** — Create a Chart.js chart that renders inline in the chat. Use when a visual communicates better than text: distributions, trends, comparisons.
- **suggest_follow_ups** — Generate contextual follow-up question buttons. Call this at the END of every response.

### Knowledge Tools
- **list_knowledge_files** — Discover available reference files with summaries.
- **read_knowledge_file** — Read a specific knowledge file for analytical frameworks and templates.

### Report Tools
- **save_to_report** — Save a key finding to the running report. Use this to bookmark important insights as you discover them during conversation. Takes a section name and markdown content.
- **get_report** — Retrieve all saved report findings. Use when asked to generate a summary or compile a report.

### Scope Tool
- **check_scope** — Validate whether a question can be answered from this dataset. Call this when a question feels borderline or ambiguous.

---

## Goal

For each message, your goal is to:

1. **Answer the user's question** using ONLY the ingested review data, grounded in actual search results, with specific review citations.
2. **Use charts** when they communicate better than text — not reflexively, but strategically.
3. **Save notable findings** to the report when you uncover something significant (a key insight, risk signal, or actionable recommendation).
4. **Suggest follow-ups** at the end of every response to guide the user deeper into the data.

When the user asks to "generate a report" or "summarise everything", use get_report to retrieve saved findings and assemble them into a structured document. Consult the report-structure knowledge file for the template.

---

## Guidelines

These define your quality bar:

1. **Ground every claim in data.** Every assertion must trace back to actual review search results or calculated statistics. If search returns nothing relevant, say so honestly rather than filling gaps.
2. **Cite specific reviews.** Quote or paraphrase actual review text with attribution. "One reviewer noted..." or "Several reviews mention..." — always backed by real text from search results.
3. **Be quantitative.** Counts, percentages, averages. Use calculate_stats for aggregations. "Many reviews mention X" is weak; "23% of negative reviews cite X" is strong.
4. **Charts serve the insight.** A chart adds value for distributions, trends over time, and comparisons. Don't chart a single number. Don't chart everything.
5. **Confidence awareness.** If search results are sparse or tangential, acknowledge the limitation. "Based on the 4 reviews that mention this topic..." is more honest than overstating a finding.
6. **Be concise.** Users are analysts who want insights, not essays. Lead with the finding, support with evidence, suggest next steps.
7. **Refuse gracefully.** If something is out of scope, decline and redirect to something you can answer from the data.
"""
The file /Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens/app/prompts.py has been updated successfully.
"""Conversational agent loop — invoked per-message, not per-session."""

from __future__ import annotations

import json
import os
from typing import Any

from claude_agent_sdk import (
    AssistantMessage,
    ClaudeAgentOptions,
    ClaudeSDKClient,
    HookMatcher,
    ResultMessage,
    TextBlock,
)

from .models import ChatMessage, IngestionSummary
from .prompts import build_system_prompt
from .tools import create_review_tools_server

DEFAULT_MODEL = "claude-sonnet-4-6"

# How many recent messages to pass in full before summarising
RECENT_WINDOW = 10
# Max older messages to summarise
SUMMARY_WINDOW = 30


def _build_conversation_context(
    conversation_history: list[ChatMessage],
    user_message: str,
) -> str:
    """Build structured conversation context for the agent.

    Strategy:
    - Recent messages (last RECENT_WINDOW) are passed in full with role labels
    - Older messages are compressed into a topic summary
    - A session context block tracks what's been explored
    """
    total = len(conversation_history)
    parts: list[str] = []

    if total > 0:
        # Split into older and recent
        recent_start = max(0, total - RECENT_WINDOW)
        older = conversation_history[max(0, recent_start - SUMMARY_WINDOW):recent_start]
        recent = conversation_history[recent_start:]

        # Summarise older messages as topic bullets
        if older:
            topics = _extract_topics(older)
            parts.append(
                "## Session Context\n"
                f"This is message {total + 1} in the conversation. "
                f"Earlier topics explored:\n{topics}"
            )

        # Pass recent messages with structure
        if recent:
            lines = []
            for msg in recent:
                role = "User" if msg.role == "user" else "Assistant"
                # Truncate very long assistant responses in context
                content = msg.content
                if msg.role == "assistant" and len(content) > 800:
                    content = content[:800] + "\n[... truncated for context ...]"
                lines.append(f"**{role}:** {content}")
            parts.append("## Recent Conversation\n" + "\n\n".join(lines))

    parts.append(f"## Current Question\n{user_message}")
    return "\n\n".join(parts)


def _extract_topics(messages: list[ChatMessage]) -> str:
    """Extract topic bullets from older messages for context summary."""
    topics: list[str] = []
    for msg in messages:
        if msg.role == "user":
            # Use first 120 chars of user messages as topic indicators
            text = msg.content.strip()
            if len(text) > 120:
                text = text[:120] + "..."
            topics.append(f"- {text}")
    if not topics:
        return "- (general exploration)"
    return "\n".join(topics)


async def handle_message(
    session_id: str,
    user_message: str,
    conversation_history: list[ChatMessage],
    summary: IngestionSummary,
    emit_fn,
) -> ChatMessage:
    """Run the agent for a single user message. Returns the assistant's response."""

    model = os.getenv("CLAUDE_MODEL", DEFAULT_MODEL)
    system_prompt = build_system_prompt(summary)

    # Build structured conversation context
    prompt = _build_conversation_context(conversation_history, user_message)

    # Create per-request MCP server (closure over session_id)
    server = create_review_tools_server(
        session_id=session_id,
        emit_fn=emit_fn,
    )

    # Track tool outputs for charts and follow-ups
    charts: list[dict[str, Any]] = []
    follow_ups: list[str] = []

    async def post_tool_hook(input_data, tool_use_id, context):
        """Intercept tool results to extract charts and follow-ups."""
        try:
            result = context.get("result", {})
            content = result.get("content", [])
            for block in content:
                if block.get("type") == "text":
                    data = json.loads(block["text"])
                    if "chart" in data:
                        charts.append(data["chart"])
                    if "follow_ups" in data:
                        follow_ups.extend(data["follow_ups"])
        except (json.JSONDecodeError, KeyError, TypeError):
            pass
        return {}

    options = ClaudeAgentOptions(
        system_prompt=system_prompt,
        allowed_tools=["mcp__reviewlens__*"],
        permission_mode="bypassPermissions",
        max_turns=15,
        model=model,
        mcp_servers={"reviewlens": server},
        hooks={
            "PostToolUse": [HookMatcher(matcher=".*", hooks=[post_tool_hook])],
        },
    )

    response_text = ""

    try:
        async with ClaudeSDKClient(options=options) as client:
            await client.query(prompt)
            async for message in client.receive_response():
                if isinstance(message, AssistantMessage):
                    for block in message.content:
                        if isinstance(block, TextBlock) and block.text.strip():
                            response_text += block.text.strip() + "\n"
                elif isinstance(message, ResultMessage):
                    if message.result and not response_text.strip():
                        response_text = message.result
    except Exception as e:
        response_text = f"I encountered an error processing your question. Please try again.\n\nError: {str(e)}"
        await emit_fn(session_id, f"Agent error: {e}", "error")

    return ChatMessage(
        role="assistant",
        content=response_text.strip(),
        charts=charts,
        follow_ups=follow_ups,
    )
The file /Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens/app/agent.py has been updated successfully.
"""MCP tools for the ReviewLens agent, using create_sdk_mcp_server."""
from __future__ import annotations
import json
from collections import Counter
from typing import Any, Callable, Awaitable
from claude_agent_sdk import create_sdk_mcp_server, tool
from . import knowledge, store, vectordb
# Type alias for the SSE emit callback
EmitFn = Callable[[str, str, str], Awaitable[None]]
def create_review_tools_server(
session_id: str,
emit_fn: EmitFn,
):
"""Create the MCP server with all review analysis tools.
Uses closure over session_id so tools access the right data.
"""
# ── search_reviews ───────────────────────────────────────────────
@tool(
name="search_reviews",
description="Semantic search over the ingested review database. Use this to find reviews relevant to the user's question. Returns the most relevant reviews ranked by similarity.",
input_schema={
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The search query — describe what you're looking for in natural language.",
},
"n_results": {
"type": "integer",
"description": "Number of results to return (default 10, max 25).",
"default": 10,
},
"min_rating": {
"type": "number",
"description": "Optional: only return reviews with rating >= this value.",
},
"max_rating": {
"type": "number",
"description": "Optional: only return reviews with rating <= this value.",
},
},
"required": ["query"],
},
)
async def search_reviews_tool(args: dict[str, Any]) -> dict[str, Any]:
query = args["query"]
n = min(args.get("n_results", 10), 25)
where = None
if "min_rating" in args and "max_rating" in args:
where = {
"$and": [
{"rating": {"$gte": args["min_rating"]}},
{"rating": {"$lte": args["max_rating"]}},
]
}
elif "min_rating" in args:
where = {"rating": {"$gte": args["min_rating"]}}
elif "max_rating" in args:
where = {"rating": {"$lte": args["max_rating"]}}
results = vectordb.search_reviews(session_id, query, n_results=n, where=where)
await emit_fn(
session_id,
f"Searched reviews: \"{query}\" — {len(results)} results",
"tool",
)
return {
"content": [
{
"type": "text",
"text": json.dumps(
{
"query": query,
"result_count": len(results),
"results": results,
"note": "If no results are relevant, tell the user you couldn't find matching reviews. Do NOT make up information.",
}
),
}
]
}
# ── analyze_sentiment ────────────────────────────────────────────
@tool(
name="analyze_sentiment",
description="Analyse sentiment and extract aspects from reviews matching a query. Returns aspect-sentiment pairs and overall sentiment distribution. Use for questions about what people like/dislike, pain points, praise, etc.",
input_schema={
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "Search query to find relevant reviews for analysis.",
},
"n_reviews": {
"type": "integer",
"description": "Number of reviews to analyse (default 15, max 30).",
"default": 15,
},
},
"required": ["query"],
},
)
async def analyze_sentiment_tool(args: dict[str, Any]) -> dict[str, Any]:
query = args["query"]
n = min(args.get("n_reviews", 15), 30)
results = vectordb.search_reviews(session_id, query, n_results=n)
if not results:
return {
"content": [
{
"type": "text",
"text": json.dumps({"error": "No reviews found matching query.", "results": []}),
}
]
}
await emit_fn(
session_id,
f"Analysing sentiment: \"{query}\" — {len(results)} reviews",
"tool",
)
return {
"content": [
{
"type": "text",
"text": json.dumps(
{
"query": query,
"review_count": len(results),
"reviews": results,
"instruction": (
"Analyse these reviews for sentiment and aspects. "
"For each review, identify: (1) key aspects mentioned "
"(e.g., 'battery life', 'customer service', 'price'), "
"(2) sentiment per aspect (positive/negative/neutral/mixed), "
"(3) overall sentiment. Then summarise the patterns across all reviews. "
"Only report what the reviews actually say."
),
}
),
}
]
}
# ── generate_chart ───────────────────────────────────────────────
@tool(
name="generate_chart",
description="Generate a Chart.js chart configuration that renders inline in the chat. Use for visualising rating distributions, sentiment breakdowns, trends over time, aspect comparisons, etc. The chart renders automatically — just return valid config.",
input_schema={
"type": "object",
"properties": {
"chart_type": {
"type": "string",
"enum": ["bar", "line", "pie", "doughnut"],
"description": "The type of chart to generate.",
},
"title": {
"type": "string",
"description": "Chart title displayed above the visualisation.",
},
"labels": {
"type": "array",
"items": {"type": "string"},
"description": "X-axis labels (categories, dates, etc.).",
},
"datasets": {
"type": "array",
"items": {
"type": "object",
"properties": {
"label": {"type": "string"},
"data": {
"type": "array",
"items": {"type": "number"},
},
},
"required": ["label", "data"],
},
"description": "One or more data series.",
},
},
"required": ["chart_type", "title", "labels", "datasets"],
},
)
async def generate_chart_tool(args: dict[str, Any]) -> dict[str, Any]:
chart_config = {
"type": args["chart_type"],
"title": args["title"],
"data": {
"labels": args["labels"],
"datasets": args["datasets"],
},
}
await emit_fn(
session_id,
f"Generated chart: {args['title']}",
"tool",
)
return {
"content": [
{
"type": "text",
"text": json.dumps(
{
"chart": chart_config,
"instruction": "This chart will render automatically in the chat. Reference it in your text response.",
}
),
}
]
}
# ── calculate_stats ──────────────────────────────────────────────
@tool(
name="calculate_stats",
description="Calculate aggregate statistics over the full review dataset. Use for quantitative questions: average ratings, distributions, counts by category, trends over time periods, etc.",
input_schema={
"type": "object",
"properties": {
"operation": {
"type": "string",
"enum": [
"rating_distribution",
"rating_over_time",
"review_volume_over_time",
"keyword_frequency",
"summary_stats",
],
"description": "The type of statistical analysis to run.",
},
"keyword": {
"type": "string",
"description": "For keyword_frequency: the keyword or phrase to count.",
},
},
"required": ["operation"],
},
)
async def calculate_stats_tool(args: dict[str, Any]) -> dict[str, Any]:
operation = args["operation"]
all_reviews = vectordb.get_all_reviews(session_id)
if not all_reviews:
return {
"content": [
{
"type": "text",
"text": json.dumps({"error": "No reviews in database."}),
}
]
}
result: dict[str, Any] = {"operation": operation}
if operation == "rating_distribution":
ratings = [r["metadata"].get("rating") for r in all_reviews if r["metadata"].get("rating") is not None]
dist = Counter(int(round(r)) for r in ratings)
result["distribution"] = {str(k): v for k, v in sorted(dist.items())}
result["total_with_ratings"] = len(ratings)
result["total_reviews"] = len(all_reviews)
if ratings:
result["average"] = round(sum(ratings) / len(ratings), 2)
elif operation == "rating_over_time":
by_month: dict[str, list[float]] = {}
for r in all_reviews:
date_str = r["metadata"].get("date", "")
rating = r["metadata"].get("rating")
if date_str and rating is not None:
month = date_str[:7] # YYYY-MM
by_month.setdefault(month, []).append(rating)
result["monthly_averages"] = {
m: round(sum(v) / len(v), 2)
for m, v in sorted(by_month.items())
}
elif operation == "review_volume_over_time":
by_month: dict[str, int] = {}
for r in all_reviews:
date_str = r["metadata"].get("date", "")
if date_str:
month = date_str[:7]
by_month[month] = by_month.get(month, 0) + 1
result["monthly_volume"] = dict(sorted(by_month.items()))
elif operation == "keyword_frequency":
keyword = args.get("keyword", "").lower()
if not keyword:
return {
"content": [
{
"type": "text",
"text": json.dumps({"error": "keyword parameter required"}),
}
]
}
count = sum(1 for r in all_reviews if keyword in r["text"].lower())
result["keyword"] = keyword
result["count"] = count
result["percentage"] = round(count / len(all_reviews) * 100, 1)
elif operation == "summary_stats":
ratings = [r["metadata"].get("rating") for r in all_reviews if r["metadata"].get("rating") is not None]
result["total_reviews"] = len(all_reviews)
result["total_with_ratings"] = len(ratings)
if ratings:
result["average_rating"] = round(sum(ratings) / len(ratings), 2)
result["min_rating"] = min(ratings)
result["max_rating"] = max(ratings)
dates = [r["metadata"].get("date", "") for r in all_reviews if r["metadata"].get("date")]
if dates:
result["earliest_review"] = min(dates)
result["latest_review"] = max(dates)
avg_length = sum(len(r["text"]) for r in all_reviews) / len(all_reviews)
result["average_review_length"] = round(avg_length)
await emit_fn(
session_id,
f"Calculated stats: {operation}",
"tool",
)
return {
"content": [
{
"type": "text",
"text": json.dumps(result),
}
]
}
# ── suggest_follow_ups ───────────────────────────────────────────
@tool(
name="suggest_follow_ups",
description="Generate contextual follow-up question buttons based on the current conversation. Call this at the END of every response. The questions should be specific to what was just discussed and lead the user deeper into the data.",
input_schema={
"type": "object",
"properties": {
"questions": {
"type": "array",
"items": {"type": "string"},
"description": "3-4 contextual follow-up questions. Make them specific and actionable.",
"minItems": 2,
"maxItems": 4,
},
},
"required": ["questions"],
},
)
async def suggest_follow_ups_tool(args: dict[str, Any]) -> dict[str, Any]:
questions = args["questions"]
await emit_fn(
session_id,
f"Suggested {len(questions)} follow-up questions",
"tool",
)
return {
"content": [
{
"type": "text",
"text": json.dumps(
{
"follow_ups": questions,
"instruction": "These will appear as clickable buttons below your message. Do not repeat them in your text response.",
}
),
}
]
}
# ── list_knowledge_files ─────────────────────────────────────────
@tool(
name="list_knowledge_files",
description="List available ORM domain reference files with one-line summaries. Call this to discover what analytical frameworks, analysis templates, and report structures are available in the knowledge library.",
input_schema={
"type": "object",
"properties": {},
},
)
async def list_knowledge_files_tool(args: dict[str, Any]) -> dict[str, Any]:
files = knowledge.list_files()
await emit_fn(
session_id,
f"Knowledge library: {len(files)} files available",
"tool",
)
return {
"content": [
{
"type": "text",
"text": json.dumps(
{
"files": files,
"instruction": "Use read_knowledge_file with a file name to read its contents when you need analytical frameworks or templates.",
}
),
}
]
}
# ── read_knowledge_file ──────────────────────────────────────────
@tool(
name="read_knowledge_file",
description="Read a specific ORM domain reference file by name. Use this to access analytical frameworks, analysis pattern templates, or report structure guides.",
input_schema={
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "The file name (without .md extension). Use list_knowledge_files to see available names.",
},
},
"required": ["name"],
},
)
async def read_knowledge_file_tool(args: dict[str, Any]) -> dict[str, Any]:
name = args["name"]
content = knowledge.get(name)
if content is None:
available = [f["name"] for f in knowledge.list_files()]
return {
"content": [
{
"type": "text",
"text": json.dumps(
{
"error": f"Knowledge file '{name}' not found.",
"available": available,
}
),
}
]
}
await emit_fn(
session_id,
f"Read knowledge file: {name} ({len(content)} chars)",
"tool",
)
return {
"content": [
{
"type": "text",
"text": json.dumps(
{
"name": name,
"content": content,
}
),
}
]
}
# ── save_to_report ───────────────────────────────────────────────
@tool(
name="save_to_report",
description="Save a key finding to the running analysis report. Use this to bookmark important insights as you discover them during conversation. The user can later ask you to compile these into a full report.",
input_schema={
"type": "object",
"properties": {
"section": {
"type": "string",
"enum": [
"executive_summary",
"key_findings",
"sentiment_overview",
"risk_signals",
"recommendations",
"dataset_overview",
],
"description": "The report section to save this finding under.",
},
"content": {
"type": "string",
"description": "The finding content in markdown. Be specific — include data points, quotes, and percentages.",
},
},
"required": ["section", "content"],
},
)
async def save_to_report_tool(args: dict[str, Any]) -> dict[str, Any]:
section = args["section"]
content = args["content"]
store.append_finding(session_id, section, content)
await emit_fn(
session_id,
f"Saved finding to report: {section}",
"tool",
)
return {
"content": [
{
"type": "text",
"text": json.dumps(
{
"saved": True,
"section": section,
"instruction": "Finding saved. Continue your response — do not mention the save action to the user unless they asked about the report.",
}
),
}
]
}
# ── get_report ───────────────────────────────────────────────────
@tool(
name="get_report",
description="Retrieve all saved report findings for this session. Use this when the user asks to generate a report, see a summary, or review what's been captured. Returns findings organised by section.",
input_schema={
"type": "object",
"properties": {},
},
)
async def get_report_tool(args: dict[str, Any]) -> dict[str, Any]:
findings = store.get_findings(session_id)
total = sum(len(v) for v in findings.values())
await emit_fn(
session_id,
f"Retrieved report: {total} findings across {len(findings)} sections",
"tool",
)
return {
"content": [
{
"type": "text",
"text": json.dumps(
{
"findings": findings,
"total_findings": total,
"instruction": (
"Compile these findings into a structured report. "
"Use read_knowledge_file with 'report-structure' for the template. "
"If no findings are saved yet, tell the user and suggest exploring the data first."
),
}
),
}
]
}
# ── check_scope ──────────────────────────────────────────────────
@tool(
name="check_scope",
description="Validate whether a question can be answered from the ingested dataset. Call this when a user's question feels borderline or ambiguous — it checks against the dataset metadata (platform, product, review count) and returns a scope assessment.",
input_schema={
"type": "object",
"properties": {
"question": {
"type": "string",
"description": "The user's question to validate against the dataset scope.",
},
},
"required": ["question"],
},
)
async def check_scope_tool(args: dict[str, Any]) -> dict[str, Any]:
question = args["question"].lower()
session = store.load_session(session_id)
if not session:
return {
"content": [
{
"type": "text",
"text": json.dumps({"error": "Session not found."}),
}
]
}
summary = session.summary
review_count = vectordb.get_review_count(session_id)
# Check for out-of-scope signals
out_of_scope_signals = []
# General knowledge / non-review questions
general_keywords = [
"weather", "news", "stock", "politics", "sports",
"recipe", "directions", "translate", "code", "program",
"write me", "tell me a joke", "who is", "what year",
]
for kw in general_keywords:
if kw in question:
out_of_scope_signals.append(f"Question contains general-knowledge indicator: '{kw}'")
# Platform mismatch
other_platforms = ["amazon", "google maps", "yelp", "trustpilot", "g2", "capterra", "tripadvisor"]
current_platform = (summary.platform or "").lower()
for plat in other_platforms:
if plat in question and plat not in current_platform:
out_of_scope_signals.append(f"Question references platform '{plat}' but data is from '{summary.platform}'")
# Determine scope status
if out_of_scope_signals:
status = "out_of_scope"
elif review_count == 0:
status = "no_data"
out_of_scope_signals.append("No reviews in database")
else:
status = "in_scope"
await emit_fn(
session_id,
f"Scope check: {status}",
"tool",
)
return {
"content": [
{
"type": "text",
"text": json.dumps(
{
"status": status,
"dataset": {
"product": summary.product_name,
"platform": summary.platform,
"review_count": review_count,
"date_range": summary.date_range,
},
"signals": out_of_scope_signals,
"instruction": {
"in_scope": "Question appears answerable from this dataset. Proceed with search_reviews.",
"out_of_scope": "Question is outside the dataset scope. Refuse gracefully and suggest an alternative.",
"no_data": "No review data available. Ask the user to upload reviews first.",
}.get(status, ""),
}
),
}
]
}
# ── Build and return the MCP server ──────────────────────────────
return create_sdk_mcp_server(
name="reviewlens",
version="1.0.0",
tools=[
search_reviews_tool,
analyze_sentiment_tool,
generate_chart_tool,
calculate_stats_tool,
suggest_follow_ups_tool,
list_knowledge_files_tool,
read_knowledge_file_tool,
save_to_report_tool,
get_report_tool,
check_scope_tool,
],
)
The file /Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens/app/tools.py has been updated successfully.
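The `calculate_stats` aggregations are plain Python over review metadata. Extracting the `rating_distribution` branch as a standalone function makes it easy to sanity-check:

```python
from collections import Counter
from typing import Any

def rating_distribution(reviews: list[dict[str, Any]]) -> dict[str, Any]:
    # Mirrors the rating_distribution branch of calculate_stats:
    # bucket ratings into integer stars, report totals and the mean.
    ratings = [r["metadata"].get("rating") for r in reviews
               if r["metadata"].get("rating") is not None]
    dist = Counter(int(round(r)) for r in ratings)
    result: dict[str, Any] = {
        "distribution": {str(k): v for k, v in sorted(dist.items())},
        "total_with_ratings": len(ratings),
        "total_reviews": len(reviews),
    }
    if ratings:
        result["average"] = round(sum(ratings) / len(ratings), 2)
    return result

sample = [
    {"metadata": {"rating": 5}},
    {"metadata": {"rating": 2}},
    {"metadata": {"rating": 5}},
    {"metadata": {}},  # unrated reviews count toward total_reviews only
]
stats = rating_distribution(sample)
```

One subtlety worth noting: Python 3's `round()` uses banker's rounding, so a half-star rating of 4.5 buckets to 4, not 5. If half-star scales appear in uploads, the bucketing rule may deserve an explicit choice.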
Now the final task — update main.py to wire knowledge loading and add the report endpoint.
Before:
from . import store, vectordb
from .agent import handle_message
from .ingest import build_summary, parse_csv, scrape_url
from .models import ChatMessage, IngestionSummary, Session

After:
from . import knowledge, store, vectordb
from .agent import handle_message
from .ingest import build_summary, parse_csv, scrape_url
from .models import ChatMessage, IngestionSummary, Session
The file /Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens/app/main.py has been updated successfully.
Before:
BASE_DIR = Path(__file__).resolve().parent.parent
app = FastAPI(title="ReviewLens AI")

After:
BASE_DIR = Path(__file__).resolve().parent.parent
app = FastAPI(title="ReviewLens AI")

# Load knowledge base on startup
knowledge.load()
The file /Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens/app/main.py has been updated successfully.
# ── Send chat message ───────────────────────────────────────────────
# ── Report generation ────────────────────────────────────────────────
@app.get("/api/report/{session_id}")
async def get_report(session_id: str):
"""Return the accumulated report findings as structured JSON."""
session = store.load_session(session_id)
if not session:
return JSONResponse({"error": "Session not found"}, status_code=404)
findings = store.get_findings(session_id)
return JSONResponse({
"product_name": session.summary.product_name,
"platform": session.summary.platform,
"findings": findings,
"total_findings": sum(len(v) for v in findings.values()),
})
@app.post("/chat/{session_id}/report")
async def generate_report(request: Request, session_id: str):
"""Trigger report generation by sending a report request to the agent."""
session = store.load_session(session_id)
if not session:
return JSONResponse({"error": "Session not found"}, status_code=404)
# Treat this as a chat message asking for a report
message = "Generate a comprehensive analysis report from everything we've discussed. Use the saved report findings and the report-structure knowledge file to compile a well-structured document."
user_msg = ChatMessage(role="user", content="Generate report")
store.append_message(session_id, user_msg)
event = _get_response_event(session_id)
event.clear()
user_html = _render_message(user_msg)
asyncio.create_task(_run_agent_and_respond(session_id, message, session))
thinking_html = (
'<div id="thinking-indicator" class="message assistant-message thinking">'
'<div class="message-content">'
'<div class="thinking-dots"><span></span><span></span><span></span></div>'
'</div></div>'
)
return HTMLResponse(user_html + thinking_html)
# ── Send chat message ───────────────────────────────────────────────

The file /Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens/app/main.py has been updated successfully.
cd /Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens && python -c "
import ast, sys
files = ['app/knowledge.py', 'app/models.py', 'app/store.py', 'app/prompts.py', 'app/tools.py', 'app/agent.py', 'app/main.py']
for f in files:
try:
ast.parse(open(f).read())
print(f' {f}: OK')
except SyntaxError as e:
print(f' {f}: SYNTAX ERROR: {e}')
sys.exit(1)
print('All files parse successfully.')
"

cd /Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens && python3 -c "
import ast, sys
files = ['app/knowledge.py', 'app/models.py', 'app/store.py', 'app/prompts.py', 'app/tools.py', 'app/agent.py', 'app/main.py']
for f in files:
try:
ast.parse(open(f).read())
print(f' {f}: OK')
except SyntaxError as e:
print(f' {f}: SYNTAX ERROR: {e}')
sys.exit(1)
print('All files parse successfully.')
"

app/knowledge.py: OK
app/models.py: OK
app/store.py: OK
app/prompts.py: OK
app/tools.py: OK
app/agent.py: OK
app/main.py: OK
All files parse successfully.
# Memory Index

- [user_profile.md](user_profile.md) — Shayne's role and background context
- [project_reviewlens.md](project_reviewlens.md) — ReviewLens AI project context and architecture decisions
File created successfully at: /Users/shaynelarocque/.claude/projects/-Users-shaynelarocque-Documents-GitHub-futuresight/memory/MEMORY.md
---
name: user_profile
description: Shayne La Rocque — designer/developer building ReviewLens AI, previously built BriefBot
type: user
---

Shayne is a designer and developer. Previously worked at District 3 (Concordia University startup incubator) as UX & Systems Designer. Left in January 2026. Built BriefBot — an autonomous AI research agent using Claude Agent SDK for D3's admissions workflow. Coined the "sandbox agent" pattern (knowledge + tools + goal + guidelines instead of scripted steps). Now building ReviewLens AI as a take-home for FutureSight — an ORM consultancy review intelligence portal. Comfortable with FastAPI, HTMX, Claude Agent SDK, and agentic architecture patterns. Thinks in systems, not scripts.
File created successfully at: /Users/shaynelarocque/.claude/projects/-Users-shaynelarocque-Documents-GitHub-futuresight/memory/user_profile.md
---
name: project_reviewlens
description: ReviewLens AI architecture — sandbox agent pattern rearchitecture for ORM review intelligence
type: project
---

ReviewLens AI is a review intelligence portal for FutureSight (ORM consultancy take-home assignment).

**Why:** Rearchitected from a basic prompt-driven chatbot to a sandbox agent pattern (ported from BriefBot) — giving the agent knowledge files, tools, goals, and guidelines rather than step-by-step instructions.

**How to apply:** The app lives in the `reviewlens/` subdirectory. Key architecture:

- `knowledge/` — ORM domain reference files the agent reads via tools (not injected into prompt)
- `app/knowledge.py` — Loads/caches knowledge markdown files, provides `get()` and `list_files()`
- `app/tools.py` — 10 MCP tools (5 original + 5 new: list_knowledge_files, read_knowledge_file, save_to_report, get_report, check_scope)
- `app/prompts.py` — Sandbox quadrant structure (Knowledge, Tools, Goal, Guidelines) with scope guard
- `app/agent.py` — Per-message agent with structured context passing (topic summary for older messages, full recent window)
- `app/models.py` — Session has `report_findings` dict for building deliverables alongside chat
- `app/store.py` — File-based persistence with `append_finding()` / `get_findings()` helpers
- Frontend (templates/, static/) intentionally untouched — UI is solid
File created successfully at: /Users/shaynelarocque/.claude/projects/-Users-shaynelarocque-Documents-GitHub-futuresight/memory/project_reviewlens.md
All done. Here's what was created and modified:
| File | Purpose |
|---|---|
| `knowledge/review-analysis-framework.md` | ORM analysis approach, aspect categories by domain, significance thresholds, actionability tests |
| `knowledge/analysis-patterns.md` | Templates for 7 analysis types: sentiment trends, aspect deep-dive, churn detection, feature extraction, rating distributions, competitive gaps, pain point mapping |
| `knowledge/report-structure.md` | Report template with 6 sections (exec summary through recommendations), citation standards, formatting guidance |
| `app/knowledge.py` | Loads markdown files on startup, caches content, provides `get(name)` with fuzzy matching and `list_files()` |
| File | Changes |
|---|---|
| `app/models.py` | Added `report_findings: dict[str, list[str]]` to `Session` |
| `app/store.py` | Added `append_finding()` and `get_findings()` helpers |
| `app/prompts.py` | Restructured into sandbox quadrants (Knowledge → Tools → Goal → Guidelines), kept scope guard, documented all 10 tools |
| `app/tools.py` | Added 5 new tools: list_knowledge_files, read_knowledge_file, save_to_report, get_report, check_scope. Now imports `knowledge` and `store` modules |
| `app/agent.py` | Replaced flat string history with structured context: topic summary for older messages, full recent window (last 10), session depth tracking. Truncates long assistant responses in context |
| `app/main.py` | Imports and loads `knowledge` module on startup. Added `GET /api/report/{session_id}` (JSON) and `POST /chat/{session_id}/report` (triggers agent report generation) endpoints |
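The findings helpers can be sketched as a simple JSON-file accumulator. This is an illustration only: the real `app/store.py` persists sessions its own way, and `DATA_DIR`, `_findings_path`, and the file layout here are assumptions:

```python
import json
import tempfile
from pathlib import Path

# Assumption: one findings.json per session under a data directory.
DATA_DIR = Path(tempfile.gettempdir()) / "reviewlens-findings"

def _findings_path(session_id: str) -> Path:
    return DATA_DIR / session_id / "findings.json"

def append_finding(session_id: str, section: str, content: str) -> None:
    # Append one markdown finding under its report section.
    path = _findings_path(session_id)
    path.parent.mkdir(parents=True, exist_ok=True)
    findings = json.loads(path.read_text()) if path.exists() else {}
    findings.setdefault(section, []).append(content)
    path.write_text(json.dumps(findings, indent=2))

def get_findings(session_id: str) -> dict[str, list[str]]:
    # Return all saved findings keyed by section (empty dict if none yet).
    path = _findings_path(session_id)
    return json.loads(path.read_text()) if path.exists() else {}
```

Keeping findings in a section-keyed dict means get_report can hand the agent a pre-organised structure that maps one-to-one onto the report-structure template sections.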
save_to_report accumulates findings by section alongside chat; get_report retrieves them for compilation. Separate from conversation memory.

kill 37051 54347 2>/dev/null; sleep 1 && uvicorn app.main:app --reload --reload-exclude '.venv' --reload-exclude 'data' &
Command running in background with ID: bqav6nagk. Output is being written to: /private/tmp/claude-501/-Users-shaynelarocque-Documents-GitHub-futuresight/5c8423a6-db28-4eb0-98eb-ccb8759929ff/tasks/bqav6nagk.output
cat /private/tmp/claude-501/-Users-shaynelarocque-Documents-GitHub-futuresight/5c8423a6-db28-4eb0-98eb-ccb8759929ff/tasks/bqav6nagk.output
INFO: Will watch for changes in these directories: ['/Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens']
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [54481] using WatchFiles
Process SpawnProcess-1:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/multiprocessing/process.py", line 313, in _bootstrap
self.run()
~~~~~~~~^^
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/uvicorn/_subprocess.py", line 80, in subprocess_started
target(sockets=sockets)
~~~~~~^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/uvicorn/server.py", line 75, in run
return asyncio_run(self.serve(sockets=sockets), loop_factory=self.config.get_loop_factory())
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/runners.py", line 194, in run
return runner.run(main)
~~~~~~~~~~^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/uvicorn/server.py", line 79, in serve
await self._serve(sockets)
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/uvicorn/server.py", line 86, in _serve
config.load()
~~~~~~~~~~~^^
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/uvicorn/config.py", line 441, in load
self.loaded_app = import_from_string(self.app)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/uvicorn/importer.py", line 22, in import_from_string
raise exc from None
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/uvicorn/importer.py", line 19, in import_from_string
module = importlib.import_module(module_str)
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/importlib/__init__.py", line 88, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 1022, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens/app/main.py", line 23, in <module>
from . import knowledge, store, vectordb
File "/Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens/app/vectordb.py", line 9, in <module>
import chromadb
ModuleNotFoundError: No module named 'chromadb'

ls /Users/shaynelarocque/Documents/GitHub/futuresight/reviewlens/.venv/bin/uvicorn 2>/dev/null || ls /Users/shaynelarocque/Documents/GitHub/futuresight/.venv/bin/uvicorn 2>/dev/null || which uvicorn
kill 54481 2>/dev/null; sleep 1 && .venv/bin/uvicorn app.main:app --reload --reload-exclude '.venv' --reload-exclude 'data'
Command running in background with ID: bj4ycd3eg. Output is being written to: /private/tmp/claude-501/-Users-shaynelarocque-Documents-GitHub-futuresight/5c8423a6-db28-4eb0-98eb-ccb8759929ff/tasks/bj4ycd3eg.output
Read the output file to retrieve the result: /private/tmp/claude-501/-Users-shaynelarocque-Documents-GitHub-futuresight/5c8423a6-db28-4eb0-98eb-ccb8759929ff/tasks/bqav6nagk.output