Tool #2

ArchitectureDecomposition

Describe your AI project once. Get a complete system architecture — data flows, component specs, effort estimates, and JIRA tickets — ready to hand off to your development team. AI generates everything, you make every decision.

Start Building
5-Step Guided Process
From idea to architecture in minutes
Step 1: Describe

Project Overview

I want to build an AI-powered customer support triage system that classifies tickets and generates draft responses grounded in historical resolutions.

The Gap Between Idea and Engineering Plan

Whether you're drafting a client proposal or handing off to developers, the same problems slow every AI project down.

Proposals Take Weeks

Software houses spend days analyzing a client's AI idea before they can quote it. By the time the proposal is ready, the client has moved on to a competitor.

Knowledge Stays in Heads

The architect understands the system. The PM wrote the brief. But the developers get a Slack message and a vague diagram. Critical context is lost in translation.

Estimation Is Guesswork

"How long will this take?" gets answered with gut feelings. Stakeholders get unreliable timelines, and teams inherit unrealistic deadlines.

No Single Source of Truth

Architecture lives in scattered documents, whiteboard photos, and chat threads. When someone new joins the project, they spend the first week just getting context.

Process

Five Steps to a Complete Architecture

AI drives the process. You make every decision. Describe your project, refine the architecture interactively, and export a document your team can execute from.

1

Project Overview

Paste your idea or client brief. AI synthesizes it into a structured project overview, then asks smart follow-up questions to fill in the gaps. You refine through conversation — no blank-page problem.

Software houses: paste a client's email and get a structured brief in seconds.

2

Input & Output Interfaces

Define what goes into your system and what comes out. AI suggests data sources, formats, and interface options based on your project context. You pick, adjust, and finalize.

Clear I/O contracts mean every developer knows exactly what to build against.

3

Data Flow Architecture

AI generates a complete multi-stage data flow with components, transformations, and sample data for each step. You can validate, iterate, and regenerate individual stages interactively.

The AI proposes the architecture. You review each stage and make every decision.

4

Project Planning

Get effort estimates, demo milestone identification, LLM usage analysis, and auto-generated JIRA tickets with acceptance criteria and subtasks — all tied to your architecture stages.

Hand this to your team and they can start sprinting immediately.

5

Export & Documentation

Export your complete architecture as a print-ready PDF document. Includes every detail — flow diagrams, component specs, effort estimates, and implementation tickets.

One document that serves as proposal, technical spec, and sprint backlog.

Capabilities

What You Get

Every feature is designed to bridge the gap between AI idea and engineering execution.

01

AI-Generated Data Flow

Complete multi-stage pipeline with components, transformations, and real sample data. Your developers see exactly how data moves through the system.

02

Smart Component Discovery

AI identifies required components from a library of 20+ AI building blocks — knowledge bases, guardrails, evaluation pipelines — and asks questions to uncover hidden needs.

03

Effort Estimation

T-shirt size estimates for each pipeline stage with AI reasoning. Override with your own values — story points, hours, or any label your team uses.

04

JIRA Ticket Generation

Implementation-ready tickets with descriptions, acceptance criteria, subtasks, and technical notes. Import directly into your project management tool.

05

Human-in-the-Loop Throughout

AI generates every artifact. You review, edit, and approve at each step. Nothing is final until you decide it is — the AI is your co-pilot, not your autopilot.

06

Client-Ready Export

One-click PDF export produces a document that works as a client proposal, technical spec, and sprint backlog — all in one.

Output Preview

Sample Architecture Export

Here's a real example of the comprehensive architecture document generated by the tool — complete with data flow, components, effort estimates, and JIRA tickets.

Export Architecture

Review your complete architecture design below. When ready, export to a print-optimized PDF document.

Project Overview

An intelligent customer support system that automatically triages incoming tickets, classifies urgency and topic, and generates draft responses based on historical resolution data. The system integrates with Zendesk via API, uses structured output for classification, retrieves similar past resolutions from a knowledge base, and presents agents with AI-suggested responses ranked by confidence. A human-in-the-loop review step ensures quality before any response is sent to customers.

Input

Customer support tickets arriving via Zendesk API webhook. Each ticket includes a subject line, the message body (may contain HTML), the customer's account tier (free, pro, or enterprise), account age, a history of recent interactions, and any file attachments.

Format: JSON API (Webhook)

Output

A triage result containing the urgency level, topic classification, and one or more AI-generated draft responses with quality scores and source citations. The approved response is pushed back to Zendesk as an internal note that the agent can send to the customer.

Format: JSON API Response + Zendesk Internal Note

System Components

Ticket Ingestion Service

Webhook endpoint that receives, validates, and normalizes ticket data.

Topic & Urgency Classifier

LLM-based classifier for ticket categorization and urgency scoring.

Knowledge Base (Vector Store)

Vector database for semantic search of historical resolutions.

Embedding Model

Generates embeddings for semantic similarity matching.

Large Language Model (LLM)

Generates draft responses grounded in retrieved historical data.

LLM-as-Judge

Scores draft quality on multiple dimensions and flags low-quality outputs.

Custom UI

Agent dashboard for reviewing, editing, and approving AI drafts.

Guardrails

Validates responses for policy compliance and content safety.

Data Flow Architecture

Scenario: An enterprise customer submits a ticket: "Our SSO integration broke after the latest update. Users are getting 403 errors on login. This is blocking our entire team from accessing the platform."

1

Ticket Ingestion & Validation

Raw Zendesk Webhook → Normalized Ticket

The Zendesk webhook fires when a new ticket is created. The ingestion service receives the payload, validates required fields, enriches with customer metadata from the CRM, and queues the ticket for processing.

Ticket Ingestion Service

Receives webhook POST from Zendesk, validates the schema against expected fields, fetches customer tier and account age from the CRM, then publishes a normalized ticket to the processing queue.

Input:

A raw webhook payload arrives from Zendesk containing the ticket subject ("SSO broken after update"), the HTML message body, and customer metadata showing an enterprise-tier account that has been active for over two years.

Output:

A clean, normalized ticket record with the HTML stripped from the body, the customer tier set to enterprise, a priority hint of "high" based on account value, and the detected language set to English.
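The normalization described in this stage can be sketched in a few lines of Python. The field names and the tier-based priority rule below are illustrative assumptions, not the tool's actual schema:

```python
import re
from dataclasses import dataclass

@dataclass
class NormalizedTicket:
    subject: str
    body: str           # plain text, HTML stripped
    tier: str           # free | pro | enterprise
    priority_hint: str  # derived from account value

def normalize(payload: dict) -> NormalizedTicket:
    """Strip HTML from the body and derive a priority hint from tier."""
    text = re.sub(r"<[^>]+>", " ", payload["body"])  # crude tag removal
    text = re.sub(r"\s+", " ", text).strip()         # collapse whitespace
    tier = payload.get("tier", "free")
    hint = "high" if tier == "enterprise" else "normal"
    return NormalizedTicket(payload["subject"], text, tier, hint)

ticket = normalize({
    "subject": "SSO broken after update",
    "body": "<p>Users are getting <b>403 errors</b> on login.</p>",
    "tier": "enterprise",
})
# ticket.body == "Users are getting 403 errors on login."
# ticket.priority_hint == "high"
```

A production service would validate the full webhook schema (e.g. with Pydantic) before this step, but the strip-and-enrich shape is the same.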

2

Topic Classification & Urgency Assessment

Normalized Ticket → Classification Result

The normalized ticket text is sent to the LLM with a structured output schema that enforces a consistent classification format. The model assigns a topic category and urgency level. Enterprise-tier tickets mentioning blocking keywords receive an automatic urgency boost.

Topic & Urgency Classifier

LLM with structured output schema returns: category = technical/authentication, urgency = critical. Urgency was boosted from high to critical because the ticket is from an enterprise customer and contains the word "blocking".

Input:

The clean ticket text about an SSO integration failure, along with customer context indicating enterprise tier and a two-year account history.

Output:

A structured classification result: category "technical/authentication", sub-category "sso_integration", urgency "critical" (boosted due to enterprise tier and "blocking" keyword), with a list of the urgency signals that contributed to the decision.

3

Historical Resolution Retrieval

Ticket Text + Classification → Retrieved Resolution Patterns

The ticket text is embedded and used to search the knowledge base for historically resolved tickets with similar topics and symptoms. The top matches are retrieved along with their resolution steps and customer satisfaction scores.

Embedding Model

Generates a vector embedding of the ticket text for semantic similarity search against the knowledge base.

Knowledge Base (Vector Store)

Pinecone vector search filtered by the "technical/authentication" category. Returns 8 results above the 0.75 similarity threshold. Top match is ticket ZD-31024 about SSO certificate rotation.

Input:

The ticket text "SSO 403 error after update" combined with the classification result (technical/authentication) to scope the search.

Output:

Eight similar historically resolved tickets found. The top match (similarity 0.92) describes an SSO certificate rotation fix after a platform update with a 4.8/5 customer satisfaction score. The second match (0.87) covers a SAML assertion URL mismatch with 4.5/5 satisfaction.
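The category-filtered search in this stage can be sketched with plain cosine similarity. The in-memory index below stands in for the Pinecone store, and the IDs, vectors, and 0.75 threshold are taken from or made up for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(index, query_vec, category, threshold=0.75, top_k=8):
    """Category-filtered similarity search, mirroring the query described above."""
    hits = [
        (cosine(query_vec, item["vector"]), item)
        for item in index
        if item["category"] == category
    ]
    hits = [(score, item) for score, item in hits if score >= threshold]
    return sorted(hits, key=lambda h: h[0], reverse=True)[:top_k]

index = [
    {"id": "ZD-31024", "category": "technical/authentication", "vector": [0.9, 0.1, 0.4]},
    {"id": "ZD-55555", "category": "billing", "vector": [0.9, 0.1, 0.4]},
]
results = search(index, [0.85, 0.15, 0.45], "technical/authentication")
# The billing ticket is filtered out before scoring; only ZD-31024 is returned.
```

In the real pipeline the vector database applies the metadata filter server-side, which is what makes category scoping cheap at scale.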

4

Draft Response Generation

Retrieved Patterns + Ticket Context → Draft Responses

Using the retrieved resolution patterns as context, the foundation model generates one or more draft responses tailored to the specific ticket. Responses are grounded in historical resolutions to minimize hallucination. The guardrails layer checks each draft for policy compliance before proceeding.

Large Language Model (LLM)

Generates 2 draft responses using GPT-4o with the retrieved resolutions as RAG context. Draft 1 is based on tickets ZD-31024 and ZD-38471. Both drafts use a professional tone appropriate for enterprise customers.

Guardrails

Scanned both drafts for policy violations. No issues found — no unsupported promises, no disclosure of internal processes, tone is appropriate for enterprise tier.

Input:

The top three resolution patterns from the knowledge base, the original ticket text, and the customer tier (enterprise) for tone calibration.

Output:

Two draft responses generated. Draft 1 walks the customer through SSO certificate rotation with step-by-step instructions and offers to schedule a screen-share if the issue persists. Draft 2 suggests checking the SAML assertion URL configuration. Both drafts include citations to the historical tickets they drew from.
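One way the grounded prompt for this stage might be assembled is sketched below. The prompt wording, citation convention, and field names are assumptions for illustration, not the tool's actual template:

```python
def build_prompt(ticket_text: str, tier: str, resolutions: list) -> str:
    """Assemble a RAG prompt: each retrieved resolution is tagged with its
    source ticket ID so the model can cite it, and the customer tier drives
    tone calibration (an assumed convention)."""
    context = "\n\n".join(
        f"[{r['id']}] (satisfaction {r['satisfaction']}/5)\n{r['resolution']}"
        for r in resolutions
    )
    tone = "formal and detailed" if tier == "enterprise" else "friendly and concise"
    return (
        "You are a support agent. Draft a response grounded ONLY in the "
        "resolutions below, citing their IDs in square brackets.\n"
        f"Tone: {tone}.\n\n"
        f"Historical resolutions:\n{context}\n\n"
        f"Ticket:\n{ticket_text}"
    )

prompt = build_prompt(
    "SSO 403 errors after update",
    "enterprise",
    [{"id": "ZD-31024", "satisfaction": 4.8,
      "resolution": "Rotate the SSO signing certificate after platform updates."}],
)
```

Keeping citations as inline ID tags makes it straightforward to extract them from the model's output and display them next to each draft.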

5

Quality Scoring

Draft Responses → Scored & Ranked Responses

Each draft response is evaluated by the quality evaluator using an LLM-as-Judge approach. It scores factual grounding, tone appropriateness, completeness, and actionability. Low-scoring drafts are flagged for mandatory human review.

LLM-as-Judge

LLM-as-Judge evaluation complete. Draft 1: recommended for approval (0.89 overall). Draft 2: flagged for review (0.72 overall, missing resolution steps). Both scores include a breakdown across four quality dimensions.

Input:

Two draft responses along with their source citations and the original ticket for context.

Output:

Draft 1 scores 0.89 overall (factual grounding: 0.95, tone: 0.88, completeness: 0.85, actionability: 0.90) and is recommended for approval. Draft 2 scores 0.72 overall and is flagged for review because it is missing detailed resolution steps.
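The overall score can be reproduced, approximately, as a weighted average of the four dimensions. The equal weights and the 0.8 approval cutoff below are illustrative assumptions, not the judge's actual configuration:

```python
WEIGHTS = {  # equal weights assumed; a real judge may weight dimensions differently
    "factual_grounding": 0.25,
    "tone": 0.25,
    "completeness": 0.25,
    "actionability": 0.25,
}
APPROVAL_THRESHOLD = 0.8  # hypothetical cutoff for "recommended for approval"

def judge_verdict(scores: dict) -> dict:
    """Aggregate per-dimension judge scores into an overall verdict."""
    overall = sum(WEIGHTS[dim] * score for dim, score in scores.items())
    return {"overall": round(overall, 2),
            "recommended": overall >= APPROVAL_THRESHOLD}

draft1 = judge_verdict({"factual_grounding": 0.95, "tone": 0.88,
                        "completeness": 0.85, "actionability": 0.90})
# draft1["recommended"] is True; the overall lands near the 0.89 reported above
```

Drafts below the threshold, like the 0.72 score for Draft 2, would carry `recommended: False` and be routed to mandatory human review.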

6

Agent Review & Approval

Scored Drafts → Agent-Approved Response

The support agent receives the ticket in their review dashboard alongside the AI-generated draft responses, quality scores, and source citations. They can approve a draft as-is, edit it, or reject it entirely. Their decision and any edits are captured as feedback for continuous model improvement.

Custom UI

Agent reviewed both drafts in the dashboard. Approved Draft 1 with one edit (added documentation link). Feedback captured: quality rating "good", suggestion to auto-include doc links. Approved response pushed to Zendesk.

Input:

The ticket details, two scored draft responses with quality breakdowns, and source citations — all displayed in the agent review interface.

Output:

The agent approves Draft 1 with a minor edit (added a direct link to the SSO documentation page). Approval took 45 seconds. The agent rated the draft quality as "good" and suggested including documentation links by default in future drafts.

Demo Milestones

Topic Classification & Urgency Assessment

Display the structured classification output with confidence indicators for category and urgency. Show how the urgency boost triggers for enterprise customers.

Historical Resolution Retrieval

Visualize semantic search results showing the top similar historical tickets with similarity scores and resolution previews.

Draft Response Generation

Side-by-side comparison of the retrieved context and the generated draft response. Highlight which sections are grounded in source data.

Quality Scoring

Quality scorecard showing the breakdown across factual grounding, tone, completeness, and actionability with approve/review recommendations.

Agent Review & Approval

Full agent review dashboard mockup with draft responses, confidence scores, inline editing, and one-click approve.

Implementation Tasks
5/6 Steps Defined

Ticket Ingestion & Validation (Priority: High)

Implement Zendesk Webhook Ingestion Service

Effort: S
Description:

Build a FastAPI service that receives Zendesk webhook payloads for new ticket creation events. The service must validate the payload schema, extract relevant fields (subject, body, customer metadata), enrich with CRM data via internal API, and publish a normalized ticket record to the processing queue.

Integrations:
  • Zendesk Webhook API (inbound)
  • Internal CRM API (customer enrichment)
  • Redis queue (outbound)
Acceptance Criteria:
  • Webhook endpoint correctly receives and validates Zendesk ticket payloads
  • Invalid payloads return 400 with descriptive error messages
  • Customer metadata (tier, account age) is enriched from the CRM
  • Normalized ticket is published to the processing queue with delivery guarantee
  • Health check endpoint returns service status and queue depth
Subtasks:
1. Set up FastAPI project structure: Initialize project with FastAPI, Pydantic models for payload validation, and a health check endpoint.
2. Implement webhook payload validation: Create a Pydantic model matching the Zendesk webhook schema. Handle optional fields gracefully.
3. Build CRM enrichment client: Async client to fetch customer tier and account data. Implement caching to avoid redundant lookups.
4. Set up Redis queue publisher: Implement reliable queue publishing with retry logic and a dead letter queue for failed messages.
5. Add structured logging: Log ingestion metrics (tickets received, enrichment results, and queue publish outcomes).
Technical Notes:

Use Zendesk webhook signature verification for security. Consider rate limiting to handle ticket creation bursts during incidents. Redis Streams is a good fit for guaranteed, ordered delivery.
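A minimal sketch of the signature check the note recommends. It follows the general HMAC-SHA256-over-timestamp-plus-body shape, but the header names and exact signed payload should be confirmed against Zendesk's webhook documentation:

```python
import base64
import hashlib
import hmac

def verify_signature(secret: str, timestamp: str, body: bytes, signature: str) -> bool:
    """Recompute HMAC-SHA256 over timestamp + raw body, base64-encode it,
    and compare to the header value in constant time."""
    digest = hmac.new(secret.encode(), timestamp.encode() + body,
                      hashlib.sha256).digest()
    expected = base64.b64encode(digest).decode()
    return hmac.compare_digest(expected, signature)

# Usage sketch: the signature and timestamp arrive as webhook request headers.
# verify_signature(SIGNING_SECRET, headers["x-zendesk-webhook-signature-timestamp"],
#                  raw_body, headers["x-zendesk-webhook-signature"])
```

The constant-time comparison via `hmac.compare_digest` matters here: a naive `==` leaks timing information an attacker could exploit.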

Topic Classification & Urgency AssessmentHigh

Build Topic & Urgency Classification with Structured Output

Effort: M
Description:

Implement the classification stage using an LLM with structured output to assign topic categories and urgency levels to incoming tickets. The structured output schema ensures consistent, parseable results. Include urgency boost rules for enterprise-tier customers and blocking keywords.

Output Schema:
  • category (string) — e.g. "technical/authentication"
  • sub_category (string) — e.g. "sso_integration"
  • urgency (enum) — critical, high, normal, low
  • urgency_signals (string[]) — keywords or factors that influenced the urgency level
Acceptance Criteria:
  • LLM returns classification results matching the defined structured output schema
  • Category taxonomy covers all major support topics identified by the team
  • Enterprise-tier tickets with blocking keywords are correctly boosted to critical urgency
  • Few-shot examples in the prompt produce consistent results across similar tickets
  • Classification results include the reasoning signals that influenced the decision
Subtasks:
1. Define category taxonomy: Work with the support team to define the full list of categories and sub-categories based on historical ticket data.
2. Design structured output schema: Define the JSON schema for classification output. Include category, urgency, and signal fields.
3. Write few-shot prompt examples: Create 8-10 representative examples covering each major category and urgency level.
4. Implement urgency boost rules: Post-processing step that checks customer tier and blocking keywords to adjust urgency levels.
5. Build evaluation test set: Label 200+ historical tickets with expected classifications. Run the classifier and measure accuracy per category.
Technical Notes:

Use OpenAI structured output (response_format with JSON schema) for reliable parsing. Start with a broad category taxonomy and refine based on classification accuracy. The urgency boost logic should be configurable — not hardcoded — so the support team can adjust thresholds.
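The configurable boost logic the note calls for might look like this sketch. The keyword list, tier list, and one-level bump are illustrative choices the support team would own:

```python
from enum import Enum

class Urgency(str, Enum):
    LOW = "low"
    NORMAL = "normal"
    HIGH = "high"
    CRITICAL = "critical"

ORDER = [Urgency.LOW, Urgency.NORMAL, Urgency.HIGH, Urgency.CRITICAL]

# Configurable rules, kept out of code paths so thresholds can be adjusted
# without a deploy (values here are illustrative).
BOOST_RULES = {
    "blocking_keywords": ["blocking", "outage", "down", "cannot access"],
    "boost_tiers": ["enterprise"],
}

def apply_urgency_boost(urgency: Urgency, tier: str, text: str, rules=BOOST_RULES):
    """Bump urgency one level when an eligible tier's ticket contains a
    blocking keyword; return the signals that drove the decision."""
    signals = [kw for kw in rules["blocking_keywords"] if kw in text.lower()]
    if tier in rules["boost_tiers"] and signals:
        idx = min(ORDER.index(urgency) + 1, len(ORDER) - 1)
        return ORDER[idx], signals + [f"tier:{tier}"]
    return urgency, signals

boosted, signals = apply_urgency_boost(
    Urgency.HIGH, "enterprise", "This is blocking our entire team.")
# boosted == Urgency.CRITICAL; signals include "blocking" and "tier:enterprise"
```

Returning the signals alongside the boosted level is what lets the classification result explain itself, as the acceptance criteria require.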

Historical Resolution Retrieval (Priority: High)

Set Up Resolution Knowledge Base & Retrieval Pipeline

Effort: L
Description:

Build the vector database infrastructure for storing and retrieving historical ticket resolutions. This includes embedding historical tickets using the embedding model, setting up Pinecone indexes with category-based filtering, implementing semantic search, and building the retrieval API.

Pipeline Modes:
  • Backfill: Embed and index historical resolved tickets
  • Live: Auto-embed new resolutions as tickets are closed
  • Search: Semantic search with optional category filtering
Acceptance Criteria:
  • Historical resolved tickets are embedded and indexed in the vector database
  • Semantic search returns relevant resolutions for test queries
  • Category-filtered search narrows results to the relevant topic area
  • New resolutions are automatically indexed when tickets are marked resolved
  • Retrieval API returns the top matches with similarity scores and resolution text
Subtasks:
1. Design vector index schema: Define metadata fields (category, resolution date, satisfaction score), namespace strategy, and embedding dimensions.
2. Build historical data embedding pipeline: Batch job to process historical tickets. Include chunking for long tickets and progress tracking.
3. Implement retrieval API: FastAPI endpoint accepting query text and optional category filter. Return top-k results with similarity scores.
4. Build live indexing pipeline: Event-driven pipeline that embeds and indexes new resolutions when tickets are closed in Zendesk.
5. Tune similarity thresholds: Evaluate retrieval quality at different thresholds. Find the balance between returning enough results and keeping them relevant.
Technical Notes:

Use the same embedding model for both indexing and querying to ensure consistency. Consider a re-ranking step for the top results to improve precision. Start the backfill early — it may take time to process the full ticket history.

Draft Response Generation (Priority: Medium)

Implement RAG-Based Response Generation with Guardrails

Effort: M
Description:

Build the response generation service that uses retrieved historical resolutions as context to generate tailored draft responses. Include a guardrails layer that checks each draft for policy compliance before it reaches human review.

Key Requirements:
  • Responses should be grounded in retrieved historical data
  • Each draft should cite which source tickets it drew from
  • Fallback behavior when no good historical matches exist
Acceptance Criteria:
  • Generated responses are grounded in retrieved historical resolutions with source citations
  • System generates 1-3 draft variants per ticket
  • Guardrails layer blocks drafts containing unsupported promises or policy violations
  • Graceful fallback when no relevant historical data exists — drafts acknowledge the novel issue
  • Response tone adapts to customer tier (more formal for enterprise)
Subtasks:
1. Design system prompt and RAG template: Create the prompt template that incorporates retrieved context, ticket details, and customer tier for tone calibration.
2. Implement generation service: Azure OpenAI GPT-4o integration. Handle token limits and assemble the context window from retrieved results.
3. Build source citation tracking: Track which retrieved tickets influenced each draft. Include citation references in the output.
4. Implement guardrails checks: Post-generation validation for policy compliance, unsupported promises, and tone appropriateness.
5. Implement fallback strategy: When the best retrieval score is low, generate a response that acknowledges the novel issue and offers to escalate.
Technical Notes:

Use structured output with GPT-4o for consistent response formatting. The guardrails check should be a separate step (not part of the generation prompt) so it can be updated independently. Log all blocked drafts for review — false positives in guardrails are frustrating for agents.
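A rule-based sketch of that separate guardrails step. The patterns below are placeholders for whatever policy rules the support team maintains; keeping them in data rather than in the generation prompt is what allows independent updates:

```python
import re

# Illustrative policy rules; real guardrails would be maintained by the
# support/policy team and versioned separately from the generation prompt.
UNSUPPORTED_PROMISES = [r"\bguarantee\b", r"\brefund\b", r"\bfree upgrade\b"]
INTERNAL_TERMS = [r"\bon-call\b", r"\binternal runbook\b"]

def check_draft(draft: str) -> dict:
    """Flag drafts containing unsupported promises or internal-process
    disclosure. Blocked drafts should also be logged for false-positive review."""
    violations = []
    for pattern in UNSUPPORTED_PROMISES:
        if re.search(pattern, draft, re.IGNORECASE):
            violations.append(("unsupported_promise", pattern))
    for pattern in INTERNAL_TERMS:
        if re.search(pattern, draft, re.IGNORECASE):
            violations.append(("internal_disclosure", pattern))
    return {"blocked": bool(violations), "violations": violations}

ok = check_draft("Please rotate your SSO certificate and retry login.")
bad = check_draft("We guarantee this will never happen again.")
# ok["blocked"] is False; bad["blocked"] is True
```

In practice a keyword pass like this would likely be combined with an LLM-based policy check, but the separation from generation is the important design property.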

Agent Review & Approval (Priority: Medium)

Build Agent Review Dashboard

Effort: L
Description:

Design and implement the web-based review dashboard where support agents interact with AI-generated draft responses. The dashboard must display the original ticket, AI drafts with quality scores, source citations, and provide editing and approval workflows. Agent feedback is captured for continuous model improvement.

Design Goals:
  • Minimize time-to-approve for high-quality drafts
  • Make source citations easy to verify
  • Capture structured feedback without adding friction
Acceptance Criteria:
  • Agents can view ticket details alongside AI-generated draft responses
  • Quality scores and source citations are clearly displayed for each draft
  • Agents can approve a draft and have it posted to Zendesk as an internal note
  • Inline editing preserves the original draft for comparison
  • Agent feedback (quality rating, improvement suggestions) is captured and stored
  • Dashboard shows a queue of tickets assigned to the agent with urgency indicators
Subtasks:
1. Design dashboard wireframes: Create wireframes for the ticket queue view, draft review panel, and editing interface. Get feedback from the support team.
2. Implement ticket queue view: List of assigned tickets with urgency indicators, customer tier badges, and a summary of AI confidence.
3. Build draft review component: Display drafts with quality score breakdowns, source citations, and a diff view for edited drafts.
4. Implement approval workflow: Approve/reject actions that integrate with the Zendesk API to post the response. Include confirmation for edits.
5. Add feedback capture: Simple rating and optional text field after each approval or rejection. Data flows to a feedback database for model tuning.
Technical Notes:

Build with Next.js for fast page loads. Consider keyboard shortcuts for power users (A for approve, E for edit, J/K for navigation). Track time-to-approve as a key metric — the goal is to save agents time, not add steps.

Ready to Architect?

Describe your AI project and walk away with a complete architecture document — proposal-ready, developer-ready, sprint-ready.

Start Building Your Architecture
Guided · Human-in-the-Loop · Exportable
storm of intelligence
AI Risk Prevention Tools

Building tools and resources for robust AI infrastructure.
From idea validation to production evaluation.

© 2026 storm of intelligence. All rights reserved.