# Bookly

A conversational customer support agent for a fictional online bookstore. The agent handles two depth use cases (order status and returns) and one breadth use case (policy questions) over a vanilla HTML chat UI, backed by Anthropic Claude with a four-layer guardrail strategy. The full agent design rationale lives in `DESIGN.md`.

## Stack

- Python 3.11+, FastAPI, Uvicorn
- Anthropic Claude (Sonnet) with prompt caching
- Vanilla HTML / CSS / JS chat frontend (no build step)
- Pytest

## Setup

```
python3 -m venv venv
./venv/bin/pip install -r requirements.txt
cp .env.example .env   # edit .env and set ANTHROPIC_API_KEY
```

## Run locally

```
./venv/bin/uvicorn server:app --host 127.0.0.1 --port 8014
```

Then open `http://127.0.0.1:8014` in a browser.

## Tests

```
./venv/bin/python -m pytest tests/ -v
```

Tests mock the Anthropic client, so no API key or network access is required.

## Project layout

```
agent.py      System prompt, guardrails (layers 1, 2, 4), agentic loop
tools.py      Tool schemas, handlers, SessionGuardState (layer 3)
mock_data.py  Orders, return policy, FAQ policies
server.py     FastAPI app: /api/chat, /health, static mount
config.py     pydantic-settings config loaded from .env
static/       index.html, style.css, chat.js
tests/        test_tools.py, test_agent.py
deploy/       systemd unit + nginx site config for the production droplet
```

## Design

See `DESIGN.md` for the architecture, conversation design, hallucination and safety controls, and production-readiness tradeoffs.
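
## Smoke test

Once the server is running, the `/api/chat` route can be exercised directly without the browser UI. A minimal sketch using only the standard library, assuming the endpoint accepts a JSON body with `message` and `session_id` fields — the field names here are assumptions, so check `server.py` for the actual request schema:

```python
import json
import urllib.request

# Matches the host/port from the local run command above.
BASE_URL = "http://127.0.0.1:8014"


def build_chat_request(message: str, session_id: str) -> urllib.request.Request:
    # NOTE: "message" and "session_id" are assumed field names;
    # the real schema is defined in server.py.
    payload = json.dumps({"message": message, "session_id": session_id}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )


def send_chat(message: str, session_id: str) -> dict:
    # POST one chat turn to the running server and return the parsed JSON reply.
    with urllib.request.urlopen(build_chat_request(message, session_id)) as resp:
        return json.loads(resp.read())
```

For example, `send_chat("Where is my order?", "demo-session")` would return the agent's JSON reply; a plain `curl` POST to the same URL works just as well.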