
UI/UX Improvement PRD

Source files: Architechture & Research/RapidDraft/UX & UI/UIUX Improvement for RapidDraft.md

Last synthesized: March 2026

Executive Summary

RapidDraft's primary UX blockers are navigation clarity, overloaded primary actions, panel inconsistency, and fragmented review + issue workflows. The core fix is simple: reframe RapidDraft around "Reviews + Findings/Issues" as the north-star workflow, making every tool panel (DFM, Vision, Fusion, Report, DraftLint) feel like part of a single review pipeline rather than a collection of utilities. This document details the current problems, recommended fixes, and a phased implementation roadmap.


Current State: What RapidDraft Does Well

RapidDraft already has the foundation of a power tool:

  • Viewer-first layout (large 3D viewport; minimal chrome) supports inspection tasks well
  • Right-side task panels (Simple DFM Review, Vision Analysis, Fusion Analysis, Report Template Builder) are conceptually clean: "inputs → controls → generate"
  • DraftLint's empty state is straightforward ("Ready for scan… Select a drawing file… then run scan")
  • The breadth of capabilities (3D + 2D + DFM + AI analysis + reporting) is genuinely differentiated

The UX Problems: Why Good Capabilities Feel Scattered

Problem 1: Navigation Is Icon-Heavy, Not Task-Driven

Current state: Multiple vertical icon rails (left and right) plus hamburger menu; "what is where" is unclear

Why it matters: For complex tools, engineers don't tolerate "hunt-and-peck navigation." Users need clear mental models of where to find reviews, findings, reports, and DraftLint.

What it looks like: A user wants to find their previous review → must click through multiple panels and remember which icon corresponds to reviews

Problem 2: Too Many Primary CTAs Compete at Once

Current state: "Create Drawing," "Compare Models," and "Collaborate" all visible in strong orange simultaneously

Why it matters: Users need one "next best action" at a time. Competing CTAs create decision paralysis and make the tool feel cluttered.

What it looks like: A designer opens RapidDraft and sees three equally important actions, unsure which to take first

Problem 3: Implementation Details Leak Into the UI

Current state: Labels like "Open CASCADE STEP translator 7.8" appear as file identities

Why it matters: Backend details harm trust and comprehension. Users should see "Part name / file name / revision," not pipeline internals.

Problem 4: Panel Visual Language Is Inconsistent

Current state: Mix of dark navy (part profile), white (analysis panels), light beige (comments/reviews). Feels like multiple UI systems stitched together.

Why it matters: Inconsistency erodes perceived quality and professionalism. Enterprise engineering software depends on visual consistency for trust.

Problem 5: Collaboration Features Feel Underpowered

Current state: Reviews/Comments panes show "No items yet"; disconnected from viewer and from each other

Why it matters: Review workflows are the core value prop. Without visible engagement signals ("unseen by you," "unresolved counts," "activity," "participant list"), the collaboration layer feels incomplete.


P0: Navigation and Page Model

| Issue | Fix | Effort | Impact | Acceptance Criteria |
| --- | --- | --- | --- | --- |
| Icon-heavy navigation; hierarchy unclear | Implement single primary left sidebar with text labels + icons: Home, Workspaces, Files, Reviews, Findings, Reports, DraftLint, Settings | High | High | New users answer "Where am I?" and "Where are Reviews?" in <5 sec; tree test success ≥80% |
| No context breadcrumbs | Add breadcrumbs + workspace indicator | Medium | Medium | Breadcrumb always shows: Project → Review → File |

P0: Single Primary Action Per Context

| Issue | Fix | Effort | Impact | Acceptance Criteria |
| --- | --- | --- | --- | --- |
| Multiple orange CTAs compete | Define one primary CTA per screen. Example: "Generate Draft" (primary) + secondary actions as outlined/ghost or overflow menu | Medium | High | Visual scan shows only one primary-colored CTA per screen |
| Internal tech terms leak into UI | Replace "Open CASCADE STEP translator 7.8" with "Part name / file name / revision"; put technical details in collapsible "Import details" | Low | Medium–High | No backend component names in primary headers |

P0: Review Container as Mission Control

| Issue | Fix | Effort | Impact | Acceptance Criteria |
| --- | --- | --- | --- | --- |
| Reviews/Comments panes are disconnected | Create Review page with tabs: Files, Findings, Comments, Activity, Export | High | High | Review has: participants, status, unresolved count, activity log; comments deep-link to view states |
| No unified findings model | Introduce Findings schema: title, severity, confidence, evidence, linked geometry; state (New → Triaged → Resolved); actions (Accept, Dismiss, Convert to Issue, Assign) | High | High | Every analysis run emits findings into a table; clicking a finding highlights the geometry |
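
The "comments deep-link to view states" criterion implies a round-trippable viewer-state encoding. The sketch below illustrates one way to do it; the `ViewState` shape and the `#view=` fragment name are assumptions for illustration, not an existing RapidDraft API:

```typescript
// Hypothetical viewer state carried by a comment or finding deep link.
interface ViewState {
  fileId: string;
  camera: [number, number, number];  // eye position
  target: [number, number, number];  // look-at point
}

// Encode the state into a URL fragment so a link can restore the exact view.
function toDeepLink(baseUrl: string, v: ViewState): string {
  return `${baseUrl}#view=${encodeURIComponent(JSON.stringify(v))}`;
}

// Decode it back when the viewer loads; returns null for links with no state.
function fromDeepLink(url: string): ViewState | null {
  const m = url.match(/#view=(.+)$/);
  return m ? (JSON.parse(decodeURIComponent(m[1])) as ViewState) : null;
}
```

Keeping the state in the fragment (rather than the query string) means a shared link restores the view client-side without a server round trip.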

P1: Panel Ergonomics & Consistency

| Issue | Fix | Effort | Impact | Acceptance Criteria |
| --- | --- | --- | --- | --- |
| Inconsistent panel colors (navy, white, beige) | Standardize panel visual language: left sidebar (consistent styling), right task panels (consistent headers with title, help, close buttons) | Medium | Medium–High | All panels follow same header pattern and color scheme |
| Analysis panels feel "expert-only" | Add "guided mode" + "advanced mode"; inline tooltips; "Recommended defaults" badge | Medium | Medium–High | First-time users can run DFM/Vision/Fusion without opening advanced controls |

P1: Output Clarity & Actionability

| Issue | Fix | Effort | Impact | Acceptance Criteria |
| --- | --- | --- | --- | --- |
| Results unclear; where do they go? | Standardize all analysis outputs into Findings: severity, confidence, evidence, actions (Accept, Convert to Issue, Assign) | High | High | Every finding lands in Findings tab; sortable/filterable; each finding can be resolved or converted to an issue |
| Report builder is functional but unclear | Add live preview ("What your report will contain") + template gallery + duplication | Medium | Medium | User can preview before saving; template dependencies are visible |

P2: Trust and Accountability Signals

| Issue | Fix | Effort | Impact | Acceptance Criteria |
| --- | --- | --- | --- | --- |
| Who ran what, when, and how? | Add Provenance: run history, model versions, rule set version, timestamps, audit info in exports | Medium | High | Every export shows "Generated on… using…"; audit trail is available |
| No personal task queue | Add My Work: assigned reviews/findings/issues; filters for "Needs my response" and "Unseen by me" | Medium | High | Users can clear queue; each item links to exact viewer state |

Detailed PRD Requirements

Epic A: Navigation and Information Architecture (P0)

Goal: Users immediately understand where they are and how to navigate to Files/Reviews/Outputs.

Requirements:

  • Replace mixed rails + hamburger with one primary left sidebar
  • Sidebar includes: workspace context (switcher or dropdown), labeled items (icon + text), notifications, profile/settings
  • Persistent breadcrumbs showing: Project → Review → File
  • Consistent focus/selection styling across navigation

CoLab precedent: Updated sidebar + workspace switcher + context-adaptive breadcrumbs

Acceptance criteria:

  • 5-task tree test (Find: Reviews, Findings, Export report, DraftLint, Settings) success ≥80%
  • Breadcrumb always shows current container hierarchy
  • New users can locate primary tasks within 2 clicks
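
As a minimal sketch of how the persistent breadcrumb could be derived, the helper below maps route segments to the Project → Review → File trail. The route shape (`/projects/:id/reviews/:id/files/:id`) is an assumption for illustration, not RapidDraft's actual routing:

```typescript
// Assumed route shape: /projects/:id/reviews/:id/files/:id
// Map known container segments to breadcrumb labels; others are skipped.
const SEGMENT_LABELS: Record<string, string> = {
  projects: "Project",
  reviews: "Review",
  files: "File",
};

// Walk the path two segments at a time (container name, then its id).
function breadcrumbs(path: string): string[] {
  const parts = path.split("/").filter(Boolean);
  const crumbs: string[] = [];
  for (let i = 0; i + 1 < parts.length; i += 2) {
    const label = SEGMENT_LABELS[parts[i]];
    if (label) crumbs.push(`${label} ${parts[i + 1]}`);
  }
  return crumbs;
}
```

Deriving the trail from the route (rather than storing it separately) keeps the breadcrumb correct on deep links and page refreshes.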


Epic B: Review Container as Primary UX Unit (P0/P1)

Goal: All generation tools (DFM/Vision/Fusion/DraftLint) feed into a consistent review artifact with findings, issues, comments, and exports.

Requirements:

  • Review object has:
      ◦ Name, owner, participants, status, due date (optional)
      ◦ Files list (multi-file supported)
      ◦ Findings list (AI-generated)
      ◦ Comments (human)
      ◦ Activity feed
      ◦ Export configuration
  • From any file, create a review in ≤2 clicks
  • Review exports include all findings/comments/decisions in a single package
  • Unresolved count visible at file and review level
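
The Review object above can be sketched as a record type; field names follow the requirements list but the exact shape is an assumption, not an existing schema:

```typescript
// Illustrative Review container record (field names are assumptions).
type ReviewStatus = "open" | "in-progress" | "closed";

interface ReviewComment {
  id: string;
  author: string;
  resolved: boolean;
}

interface Review {
  name: string;
  owner: string;
  participants: string[];
  status: ReviewStatus;
  dueDate?: string;          // optional per the requirements
  fileIds: string[];         // multi-file supported
  findingIds: string[];      // AI-generated findings
  comments: ReviewComment[]; // human comments
}

// Unresolved count surfaced at the review level, per the last requirement.
function unresolvedCount(r: Review): number {
  return r.comments.filter(c => !c.resolved).length;
}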

CoLab precedent: Request reviews + Track issues and reviews; multi-file reviews; exportable review record

Acceptance criteria:

  • Review can be created from file detail page with one button
  • Export includes all findings/comments with links to geometry


Epic C: Viewer + Panel Ergonomics (P0/P1)

Goal: Keep viewer large while making tool panels discoverable, consistent, and non-overwhelming.

Requirements:

  • Dockable panels: left (file/part context + properties), right (task panel: DFM/Vision/Fusion/Report/DraftLint)
  • Panels are resizable and collapsible; auto-collapse into drawers on narrow screens
  • Consistent panel header pattern: title, one-line purpose, help icon, close/dock controls
  • Viewer never shrinks below minimum usable width on 1366px screens

Acceptance criteria:

  • Panels collapse and expand smoothly
  • Minimum viewer width maintained on standard displays
  • Help icon opens contextual documentation


Epic D: Output Modeling - Findings and Issues (P1)

Goal: AI outputs become actionable work items, not just text.

Requirements:

  • Standard Findings schema:
      ◦ title, severity, confidence, evidence snippet, linked geometry/view
      ◦ state: New → Triaged → Resolved (or equivalent)
      ◦ actions: Accept, Dismiss, Convert to Issue, Assign, Comment
  • Findings table with sorting (severity, type, status), filtering, saved views
  • Clicking a finding highlights geometry or opens relevant 2D region
  • Findings can be bulk-actioned (accept multiple at once)
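
A minimal sketch of the Findings schema, lifecycle, and default table sort; field names follow the requirements above, but the exact types are assumptions:

```typescript
// Illustrative Findings schema (names are assumptions, not a final spec).
type Severity = "low" | "medium" | "high" | "critical";
type FindingState = "New" | "Triaged" | "Resolved";

interface Finding {
  id: string;
  title: string;
  severity: Severity;
  confidence: number;    // 0..1
  evidence: string;      // evidence snippet
  geometryRef?: string;  // linked geometry/view for highlight-on-click
  state: FindingState;
}

// Linear lifecycle: New → Triaged → Resolved.
const NEXT_STATE: Record<FindingState, FindingState | null> = {
  New: "Triaged",
  Triaged: "Resolved",
  Resolved: null,
};

function advance(f: Finding): Finding {
  const next = NEXT_STATE[f.state];
  if (next === null) throw new Error(`Finding ${f.id} is already resolved`);
  return { ...f, state: next };
}

// Default table sort: severity first, then confidence, both descending.
const SEVERITY_RANK: Record<Severity, number> = { critical: 3, high: 2, medium: 1, low: 0 };

function sortFindings(fs: Finding[]): Finding[] {
  return [...fs].sort(
    (a, b) =>
      SEVERITY_RANK[b.severity] - SEVERITY_RANK[a.severity] ||
      b.confidence - a.confidence,
  );
}
```

Modeling the lifecycle as a lookup table keeps the allowed transitions explicit, which matters once bulk actions can move many findings at once.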

CoLab precedent: Track issues/reviews with search/filter/sort; file-level unresolved counts; viewer engagement indicators

Acceptance criteria:

  • Every analysis run emits findings into a table
  • Findings are sortable/filterable by status, severity, type
  • Clicking a finding jumps to relevant geometry


Proposed Information Architecture

Home
├── Workspaces / Projects
│   ├── Files
│   │   ├── File Detail
│   │   │   ├── Create / Join Review
│   │   │   └── Run Analysis (DFM / Vision / Fusion / DraftLint)
│   │   └── Findings (generated by analysis)
│   ├── Reviews
│   │   ├── Review Container
│   │   │   ├── Viewer (3D/2D)
│   │   │   ├── Findings (AI)
│   │   │   ├── Comments (Human)
│   │   │   ├── Activity / Provenance
│   │   │   └── Export / Reports
│   │   └── Issues / Tasks
│   ├── My Work (personal queue)
│   ├── Reports / Templates
│   ├── DraftLint (2D Review mode)
│   └── Settings / Standards
└── Profile / Account

Accessibility Requirements (WCAG 2.2 AA)

Focus and Keyboard Support

  • Visible focus indicators on all interactive controls
  • Focus not hidden behind sticky headers/overlays (WCAG 2.2: "Focus Not Obscured")
  • All non-native UI widgets (custom menus, drawers, dialogs) follow keyboard interaction guidance (Escape, Tab trapping, predictable focus return)
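
The Tab-trapping requirement for custom dialogs reduces to a small piece of pure logic: given the focused element's index among the dialog's focusable elements, compute where Tab or Shift+Tab should land, wrapping at both ends. A sketch (the keydown wiring and focusable-element query are left out):

```typescript
// Pure helper for Tab trapping inside a modal: returns the index focus
// should move to, wrapping at both ends; -1 when nothing is focusable.
function nextFocusIndex(current: number, count: number, shiftKey: boolean): number {
  if (count === 0) return -1;
  return shiftKey ? (current - 1 + count) % count : (current + 1) % count;
}
```

In a real dialog this would run in a `keydown` handler for Tab (with `preventDefault`), alongside Escape-to-close and restoring focus to the triggering control on dismiss.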

Target Sizes

  • Toolbar icons and buttons meet minimum target size expectations (WCAG 2.2: "Target Size Minimum")

Contrast

  • Dark panels maintain sufficient contrast for text and UI component boundaries
  • Non-text contrast for icons/borders used as affordances

Validation

  • Keyboard-only walkthrough of 5 critical flows: open file, run DFM, open findings, export report, scan in DraftLint
  • Automated checks (axe, lighthouse) plus manual checks for viewer toolbars and complex grids

Performance Targets (Core Web Vitals)

Target field data at p75:

  • LCP ≤ 2.5 s (Largest Contentful Paint)
  • INP ≤ 200 ms (Interaction to Next Paint; replaces FID)
  • CLS ≤ 0.1 (Cumulative Layout Shift)
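
A monitoring dashboard could classify p75 field values against these targets with a small lookup. The "good" boundaries match the targets above, and the "poor" boundaries (4 s LCP, 500 ms INP, 0.25 CLS) are the published Core Web Vitals thresholds; the function itself is a sketch:

```typescript
// Classify a p75 field value against Core Web Vitals thresholds.
type Rating = "good" | "needs-improvement" | "poor";

const THRESHOLDS = {
  LCP: [2500, 4000],  // ms
  INP: [200, 500],    // ms
  CLS: [0.1, 0.25],   // unitless
} as const;

function rate(metric: keyof typeof THRESHOLDS, value: number): Rating {
  const [good, poor] = THRESHOLDS[metric];
  if (value <= good) return "good";
  return value <= poor ? "needs-improvement" : "poor";
}
```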

RapidDraft-Specific Tactics

  • Use skeleton loaders for panels that load results
  • Virtualize tables for findings/issues/comments (large datasets likely)
  • Add staged progress for long jobs (upload → convert → analyze → render results)
  • Defer non-critical sidebar content until user interaction
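
The virtualization tactic above comes down to rendering only the rows that intersect the viewport. The window math can be sketched as (parameter names are illustrative):

```typescript
// Compute the row window a virtualized findings table should render:
// the rows visible in the viewport plus a small overscan buffer.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  totalRows: number,
  overscan = 3,
): { start: number; end: number } {
  const start = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const end = Math.min(totalRows, Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan);
  return { start, end }; // render rows in [start, end)
}
```

With fixed-height rows this keeps DOM size constant regardless of how many thousands of findings a review accumulates; in practice a library such as a list virtualizer would wrap this logic.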

Phased Implementation Roadmap

Phase 1: Navigation + Panel Consistency (Weeks 1–4)

Output: Users can navigate clearly; panels are consistent

Deliverables:

  • Left sidebar with labeled navigation (Home, Workspaces, Files, Reviews, Findings, DraftLint, Settings)
  • Breadcrumb navigation
  • Consistent panel headers across all task panels
  • Color palette / design system update

Effort: High (UI architecture refactor)
Impact: High (immediate UX improvement)

Phase 2: Review Container + Findings Model (Weeks 5–8)

Output: Review is the primary unit; findings are actionable

Deliverables:

  • Review page with tabs (Files, Findings, Comments, Activity, Export)
  • Findings schema and table with sorting/filtering
  • Integration of DFM/Vision/Fusion/DraftLint outputs into unified Findings
  • "Convert finding to issue" workflow

Effort: High (data model + UI integration)
Impact: High (core workflow redesign)

Phase 3: Output Clarity + Provenance (Weeks 9–12)

Output: Every analysis output is transparent and traceable

Deliverables:

  • "My Work" personal queue
  • Activity feed and provenance tracking
  • Enhanced report builder with live preview
  • Audit trail in exports

Effort: Medium
Impact: Medium–High (trust + accountability)

Phase 4: Advanced Ergonomics (Weeks 13–16, Optional)

Output: Advanced users get power-user features

Deliverables:

  • Custom saved views (favorite filtering/sorting)
  • Bulk actions on findings
  • Advanced DraftLint comparison (overlay, side-by-side)
  • Template customization and sharing

Effort: Medium
Impact: Medium (polish + power-user stickiness)


Success Metrics

Track these after each phase:

  1. Navigation: Task completion time for "find reviews," "navigate to findings," "export report" (target: <30 sec per task)
  2. Panel consistency: User perception of "professional/cohesive" UI (qualitative feedback)
  3. Findings adoption: % of findings that get triaged/resolved (target: >70% within 2 weeks of generation)
  4. Review engagement: # of participants per review, comment/issue creation rate
  5. Performance: LCP, INP, CLS at p75 (target: meet Core Web Vitals thresholds)
  6. User satisfaction: NPS or satisfaction survey post-implementation (target: >50 NPS)