
Problems and Mitigations Reference

Source files: Architechture & Research/RapidDraft/Product Scope & PRDs/MVP Scopes and Problems.md (LIST OF PROBLEMS sections for v0 and v1)
Last synthesized: March 2026
Purpose: Detailed catalog of known hard problems for each MVP version, with concrete mitigation strategies and test methods.


How to Use This Document

This is a reference for engineers building RapidDraft. Each problem is:

  1. Stated clearly — what will actually fail in the real world
  2. Analyzed — why it's hard
  3. Mitigated — what we'll build to prevent it
  4. Tested — the smallest experiment that proves it works

Use this as a checklist during implementation and QA.


MVP v0 Problems

Problem 1: Reliable Drawing Understanding Across Real Templates

What breaks: As a designer, you know every team has different title blocks, layers, note styles, symbols, and weird legacy drafting habits. The same "simple" check breaks the moment you hit a new template or an old drawing with manual edits.

Why it's hard: Drawing semantics vary wildly. A "title block" is a convention, not a standard. Extracting the same facts from 100 different templates requires either extreme flexibility or painful case-by-case tuning.

Mitigation: Start with a tight "supported template set" (2–3 templates). Build a golden corpus of 30–50 real drawings and treat every new customer template as an onboarding task (map fields + update rules). Log "unsupported" elements instead of guessing.
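As a sketch of what per-template onboarding could look like, assuming a hypothetical field map per template (names and attribute layout invented for illustration): extraction consults the map and reports unmapped fields instead of guessing.

```python
# Hypothetical sketch: per-template title-block field mapping.
# Template names, field names, and raw-attribute keys are assumptions.
TEMPLATE_FIELD_MAPS = {
    "acme_a3_v2": {
        "part_number": "TB_PARTNO",
        "revision": "TB_REV",
        "material": "TB_MATL",
        "units": "TB_UNITS",
        "scale": "TB_SCALE",
    },
}

def extract_title_block(template_id: str, raw_attributes: dict) -> dict:
    """Map raw drawing attributes to canonical fields; never guess."""
    field_map = TEMPLATE_FIELD_MAPS.get(template_id)
    if field_map is None:
        return {"status": "unsupported_template", "template": template_id}
    fields = {}
    for field, source_key in field_map.items():
        # Missing keys stay None and are logged as "unmapped", not guessed.
        fields[field] = raw_attributes.get(source_key)
    return {"status": "ok", "fields": fields}
```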

Test method: Take 5 drawings made with the same template. Write an exporter/checker that only validates: part number, revision, material, units, scale. Show a red/green list. Then try the same demo on 5 drawings from a different template and confirm it fails gracefully (flags "unmapped field") instead of producing wrong results.


Problem 2: Stable "This Comment Refers to THAT" (Feature/Dimension) Across Revisions

What breaks: Designers expect "this issue is on that hole callout" to remain true after edits. But after model/drawing changes, IDs and references move, views get rebuilt, dims get recreated. Carry-forward becomes messy.

Why it's hard: Object IDs change. Visual matching is fragile. Context can shift. There's no perfect "stable reference" across edits.

Mitigation: Use layered linking:

  - Best case = direct association (NX internal ID)
  - Fallback = location + text + context matching
  - Always provide a manual "rebind this issue to the new dimension" action
  - Accept that 100% auto-matching is unrealistic
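A minimal sketch of the matching cascade, assuming hypothetical finding and dimension records that carry an NX internal ID, a sheet location, and display text:

```python
def rebind_issue(issue, new_dims):
    """Layered linking: exact ID, then location + text match, else manual.
    `issue` and dimension records are hypothetical dicts with
    'nx_id', 'location' (x, y on sheet), and 'text' keys."""
    # Best case: the NX internal ID survived the edit.
    for dim in new_dims:
        if dim["nx_id"] == issue["nx_id"]:
            return ("auto", dim)
    # Fallback: nearest dimension with identical text, within a tolerance.
    def distance(dim):
        dx = dim["location"][0] - issue["location"][0]
        dy = dim["location"][1] - issue["location"][1]
        return (dx * dx + dy * dy) ** 0.5
    candidates = [d for d in new_dims if d["text"] == issue["text"]]
    if candidates:
        best = min(candidates, key=distance)
        if distance(best) < 25.0:  # mm on sheet; threshold is an assumption
            return ("auto", best)
    # Accept that auto-matching can fail: surface a one-click manual rebind.
    return ("needs_rebind", None)
```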

Test method: Create Rev A and Rev B where you only rename a view or move a dimension. Show that the issue either (a) auto-follows correctly or (b) is marked "needs rebind" with one-click reassignment.


Problem 3: Change Detection That's Actually Useful (Not Noise)

What breaks: Designers don't want an essay. They want: "these 3 things changed and these 5 drawing items must be rechecked." Naive diffs will scream because drawings reorder objects or rebuild views.

Why it's hard: Drawings are complex objects with rebuild logic. Small internal changes produce big diffs. Noise kills credibility.

Mitigation: Use "controlled change pairs" as calibration (one change per revision pair). Only report categorized changes that matter (dimension value change, tolerance change, note change, view scale change, title block change).

Test method: Make 8 revision pairs where each pair changes exactly ONE thing (e.g., note text, tolerance, one dim value). The diff output must list exactly that one change. If it can't, diff isn't ready.


Problem 4: DFM That Doesn't Become Hand-Wavy

What breaks: DFM is only valuable if it's grounded: tool access, minimum radii, bend rules, thickness rules—specific to a process. If it feels like generic advice, designers will ignore it.

Why it's hard: DFM is context-dependent. Different suppliers, machines, and materials have different rules. Encoding rules is straightforward; knowing which rules to start with is hard.

Mitigation: Pick ONE process (machining or sheet metal) and one part family. Encode only the rules that are "obviously true" and agreed with manufacturing/suppliers. Make everything configurable.
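A sketch of what a configurable rule set might look like; the thresholds below are placeholders, not agreed values:

```python
# Placeholder thresholds; every value must be agreed with manufacturing.
MACHINING_RULES = {
    "min_internal_radius_mm": 2.0,
    "max_hole_depth_to_dia": 5.0,
    "min_wall_thickness_mm": 1.5,
}

def check_dfm(features, rules=MACHINING_RULES):
    """Flag only rule violations. `features` is a hypothetical list of
    dicts like {'id': ..., 'type': 'hole', 'depth': 40.0, 'diameter': 6.0}."""
    findings = []
    for f in features:
        if f["type"] == "internal_corner" and f["radius"] < rules["min_internal_radius_mm"]:
            findings.append((f["id"], "internal corner radius below minimum"))
        elif f["type"] == "hole" and f["depth"] / f["diameter"] > rules["max_hole_depth_to_dia"]:
            findings.append((f["id"], "hole depth-to-diameter ratio too high"))
        elif f["type"] == "wall" and f["thickness"] < rules["min_wall_thickness_mm"]:
            findings.append((f["id"], "wall below minimum thickness"))
    return findings
```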

Test method: For machining: take a prismatic part and intentionally create an internal sharp corner, a deep small hole, a thin wall. The demo flags those 3. Nothing else. That proves the DFM engine is concrete, not fluff.


Problem 5: Drawing Checks That Require Semantics (GD&T, Functional Tolerancing Intent)

What breaks: A designer uses GD&T with intent. Software can detect symbols, but deciding "this is correct" depends on the datum scheme and function. That's where tools overreach and lose trust.

Why it's hard: GD&T encodes engineering judgment. You can't automate judgment.

Mitigation: In v0, treat GD&T as "presence/format/consistency checks" only (e.g., missing datum reference, symbol present but not attached, inconsistent units). Avoid judging correctness of the tolerancing strategy.
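A sketch of presence/format checking, assuming a hypothetical feature-control-frame record; note that it never judges whether the datum scheme itself is right:

```python
def check_gdt_presence(fcf_list, datum_labels):
    """Presence/format/consistency only; never judges tolerancing strategy.
    `fcf_list` is a hypothetical list of feature control frames:
    {'id': ..., 'datum_refs': ['A', 'B'], 'attached_to': feature_id or None}."""
    findings = []
    defined = set(datum_labels)
    for fcf in fcf_list:
        if fcf["attached_to"] is None:
            findings.append((fcf["id"], "symbol present but not attached"))
        for ref in fcf["datum_refs"]:
            if ref not in defined:
                findings.append((fcf["id"], f"references undefined datum {ref}"))
    return findings
```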

Test method: Use two drawings:

  - One with a datum callout missing/incorrectly attached
  - One correct

Demo only detects attachment/presence, not whether the scheme is "right."


Problem 6: Teamcenter Integration That Works in More Than Your Lab

What breaks: Every company's Teamcenter is customized. Permissions, dataset naming, release states—different everywhere. Designers just want "pick revision, run review."

Why it's hard: Every customer is a unique snowflake.

Mitigation: Start read-only + export bundle. Add write-back only where permitted. Make the connector modular and expect per-customer mapping.

Test method: Build a "fake Teamcenter" mode: a folder that mimics ItemRev → datasets. Prove your flow end-to-end without TC. Then plug into one real TC environment and show the same selection workflow works.


Problem 7: Collaboration That People Actually Adopt (Without Becoming Jira)

What breaks: Designers hate admin overhead. If creating/closing issues is slower than sending a screenshot, they won't use it. Also: duplicates and messy threads kill trust.

Why it's hard: Adoption requires simplicity AND power, and it's hard to deliver both.

Mitigation: One-click "issue from finding," minimal fields, strong defaults, and automatic carry-forward. Duplicates prevented by design.
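One way to get duplicates prevented by design is a stable fingerprint per finding; the fields hashed here are an assumption:

```python
import hashlib

def finding_fingerprint(finding: dict) -> str:
    """Stable key so re-running checks never creates duplicate issues.
    The fields chosen (check id, sheet, target text) are an assumption."""
    basis = f"{finding['check_id']}|{finding['sheet']}|{finding['target_text']}"
    return hashlib.sha256(basis.encode()).hexdigest()[:16]

def upsert_issues(findings, existing_issues):
    """Create issues only for new fingerprints; carry forward the rest."""
    by_key = {i["fingerprint"]: i for i in existing_issues}
    for f in findings:
        key = finding_fingerprint(f)
        if key in by_key:
            continue  # existing issue keeps its status and comments
        by_key[key] = {"fingerprint": key, "status": "open", "finding": f}
    return list(by_key.values())
```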

Test method: Take a drawing with 10 findings. Convert them into issues in under 60 seconds. Re-run checks and show it doesn't create duplicates and preserves status/comments.


Problem 8: Proof of Fix (Closing the Loop)

What breaks: Designers want confidence: "we fixed it." But after edits, the original evidence may move, and the tool must confirm the issue is gone, not just closed manually.

Why it's hard: You need to re-check and match across edits.

Mitigation: Define closure rules: resolved means "finding no longer present" OR "accepted deviation with rationale." Auto-reopen if the finding returns.
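A sketch of the closure rules as a tiny state machine, reusing the fingerprint idea from Problem 7 (the status names are assumptions):

```python
def reconcile_issue(issue, current_fingerprints):
    """Closure rules: resolved means the finding is gone, or it is an
    accepted deviation with a rationale. Auto-reopen if it returns.
    `current_fingerprints` is the set of fingerprints from the latest run."""
    present = issue["fingerprint"] in current_fingerprints
    if issue["status"] == "open" and not present:
        issue["status"] = "resolved"          # or: prompt the user to close
    elif issue["status"] == "resolved" and present:
        issue["status"] = "open"              # auto-reopen
        issue.setdefault("history", []).append("auto-reopened: finding returned")
    elif issue["status"] == "accepted_deviation":
        pass  # stays closed regardless; rationale already recorded
    return issue
```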

Test method: Create a dangling dimension in Rev A → tool flags it → fix associativity in Rev B → tool shows it gone and auto-closes (or prompts closure). Then break it again in Rev C and show auto-reopen.


MVP v1 Problems

Drawing Generation Problems

Problem 1: One-Click Drawing Generation That Doesn't Produce Junk Drawings

What breaks: NX can generate views, but "good drawings" depend on template discipline, model orientation, and company conventions. If the base views come out weird (wrong front view, bad scaling, cluttered hidden lines), designers will abandon it.

Why it's hard: View selection is an art. Automation looks easy until you see edge cases.

Mitigation: Start with one part family + one template + one agreed "front view rule." Provide a simple "choose orientation" step (or approved view set) instead of guessing.
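A sketch of a per-family generation profile, so the tool refuses rather than guesses when no agreed rule exists (all names are hypothetical):

```python
# Hypothetical per-family generation profile; nothing is guessed at run time.
GENERATION_PROFILES = {
    "bracket": {
        "template": "acme_a3_v2",
        "front_view_rule": "largest_planar_face_normal_to_-Y",  # agreed rule
        "view_set": ["front", "top", "right", "iso"],
        "sheet_size": "A3",
    },
}

def plan_drawing(part_family: str) -> dict:
    """Return the agreed view plan, or refuse instead of improvising."""
    profile = GENERATION_PROFILES.get(part_family)
    if profile is None:
        raise ValueError(f"No generation profile for family '{part_family}'; "
                         "ask the user to choose an orientation/view set.")
    return profile
```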

Test method: Pick 10 similar parts (e.g., brackets). Run "Generate drawing" and check: view set created, correct sheet size, no views off-sheet, readable scale. Count how many need manual rework.


Problem 2: Template + Title Block Mapping Across Companies

What breaks: Title blocks are customized everywhere. Designers expect material/finish/rev/part number to land in the right fields automatically, and they get angry if it's wrong.

Why it's hard: Every company customizes its title blocks and templates differently.

Mitigation: Treat title block mapping as a configurable form per template (not hardcoded). If a field is unmapped, flag it loudly rather than filling something wrong.

Test method: Take one template, map 8 key fields. Generate 5 drawings and confirm fields populate correctly every time. Then switch template and confirm the system reports "unmapped fields" instead of guessing.


Problem 3: Update Drawing After CAD Changes Without Breaking Associations

What breaks: After model edits, drawings rebuild: dimensions can go dangling, views can shift, notes can detach. Designers hate re-dimensioning.

Why it's hard: NX dimension logic is complex. You can't guarantee stability.

Mitigation: v1 should not promise auto-dimensioning. Focus on: regenerate views + detect what broke + give a punch list. Optionally enforce modeling best practices that keep associativity stable.
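A sketch of the punch list, assuming before/after dimension records with an ID, an associativity flag, and a value:

```python
def punch_list(before_dims, after_dims):
    """Report what broke after a drawing update; no auto-dimensioning.
    Dims are hypothetical dicts with 'id', 'is_associative', and 'value'."""
    after_by_id = {d["id"]: d for d in after_dims}
    items = []
    for dim in before_dims:
        updated = after_by_id.get(dim["id"])
        if updated is None:
            items.append((dim["id"], "dimension deleted during update"))
        elif not updated["is_associative"]:
            items.append((dim["id"], "dimension is dangling; rebind or recreate"))
        elif updated["value"] != dim["value"]:
            items.append((dim["id"],
                          f"value changed {dim['value']} -> {updated['value']}"))
    return items
```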

Test method: Create Rev A drawing, then Rev B with a controlled edit (hole moved). Update drawing and measure: how many dims stayed associated vs dangled; tool must report danglers reliably.


Problem 4: Auto-Labeling / View Naming That Matches Drafting Standards

What breaks: Drafting conventions differ: section labels, detail callouts, view names, arrows, reference format. Getting this slightly wrong makes drawings look amateurish.

Why it's hard: There are many conventions; hard to pick the right one.

Mitigation: Start with format checks + enforcement of naming rules, not "smart naming." Provide a small set of company rules ("Section A-A", "Detail B", etc.).

Test method: Generate 5 drawings and verify every section/detail label matches the exact naming-convention regex. Any mismatch must be flagged with a suggested corrected label.
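A sketch of the regex check with a suggested correction; the exact label formats are assumptions to be set per company:

```python
import re

# Company rules as explicit regexes; the exact formats are assumptions.
LABEL_RULES = {
    "section": re.compile(r"SECTION [A-Z]-[A-Z]"),
    "detail": re.compile(r"DETAIL [A-Z]"),
}

def check_label(kind: str, label: str):
    """Format check plus a suggested correction, not 'smart naming'."""
    rule = LABEL_RULES[kind]
    if rule.fullmatch(label):
        return None  # label already conforms
    suggestion = label.strip().upper()
    if rule.fullmatch(suggestion):
        return (label, f"use '{suggestion}'")
    return (label, f"does not match {kind} convention {rule.pattern}")
```

For example, `check_label("section", "Section A-A")` would flag the label and suggest "SECTION A-A".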


Problem 5: Keeping Drawing Checks Credible When RapidDraft Is Generating the Drawing

What breaks: If your tool generates a drawing and then flags tons of issues, it looks incompetent. Designers will say "why did you create this mess?"

Why it's hard: You're both the builder and the inspector.

Mitigation: Separate issues into:

  - NX generation limitations (needs human layout decisions)
  - True errors (metadata missing, dangling dims, required notes missing)

Keep the generated output minimal and clean.

Test method: For a part family, track average findings per generated drawing. Goal: mostly "layout suggestions" and very few "hard errors."


Vision-DFM Problems

Problem 6: Vision-Based DFM That Isn't a Toy

What breaks: Screenshots lie. Angle/zoom/section cuts change what you see. The same part can look "fine" from one view and "bad" from another. Models may hallucinate or over-warn.

Why it's hard: Vision is probabilistic. It's not deterministic geometry.

Mitigation: Use a standardized screenshot set (fixed views + fixed section cuts). Treat vision output as DFM hints with confidence, not pass/fail. Always pair with a few deterministic checks for credibility.
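A sketch of the standardized capture set and the hint shape; view names, section planes, and the schema are assumptions:

```python
# Fixed, standardized capture set so vision input is repeatable.
# View names and section planes are assumptions to be agreed per process.
STANDARD_VIEWS = ["front", "top", "right", "iso"]
STANDARD_SECTIONS = [("X", 0.5), ("Y", 0.5)]  # (axis, normalized offset)

def make_dfm_hint(view_id, region, message, confidence):
    """Vision output is a hint with confidence, never pass/fail."""
    return {
        "kind": "dfm_hint",          # distinct from deterministic findings
        "view": view_id,             # must be one of the standard captures
        "region": region,            # approximate (x, y, w, h) in the image
        "message": message,
        "confidence": confidence,    # model-reported, 0.0 to 1.0
        "requires_confirmation": True,
    }
```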

Test method: Create 20 parts with known DFM problems (sharp internal corners, thin walls, deep slots). Generate standardized screenshots and see if the vision model flags the correct region >70% of the time. Anything below that = keep it as "assistant notes" only.


Problem 7: Vision DFM: "Where Exactly Is the Problem?"

What breaks: A vague statement like "tool access issue" is useless unless the designer can see the exact feature. Designers need a pointer, not philosophy.

Why it's hard: Localization is hard for vision models.

Mitigation: Force the output to reference the specific screenshot + region (even if approximate): "In Section view 2, the slot depth looks high."
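A small guard that could enforce localization, assuming the hint schema from Problem 6: findings without a concrete pointer are rejected:

```python
def validate_vision_finding(finding: dict, standard_views: set) -> str:
    """Reject any vision finding that lacks a concrete pointer.
    Schema is an assumption: {'view': ..., 'region': ..., 'message': ...}."""
    if finding.get("view") not in standard_views:
        return "rejected: does not reference a standard screenshot"
    if not finding.get("region"):
        return "rejected: no region given; a designer cannot locate this"
    # Accepted example: "In section view 2, the slot depth looks high."
    return "accepted"
```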

Test method: For 10 flagged parts, ask a designer (you) to locate the issue in <30 seconds based on the output. If you can't, the feature isn't useful yet.


Problem 8: Mixing Deterministic DFM and Vision DFM Without Confusing the User

What breaks: If both systems disagree, designers won't know what to trust. That destroys adoption.

Why it's hard: Two different signals need clear labels.

Mitigation: Label outputs clearly:

  - Measured checks (rules) = "Hard Findings"
  - Vision checks = "Review Prompts / Potential Risks"

Require manual confirmation for vision items.
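A sketch of the report partition, reusing the 'kind' labels from the hint schema above (assumptions):

```python
def build_report(findings):
    """Always separate deterministic results from vision hints; no mixing.
    The 'kind' values follow the hint sketch above (assumptions)."""
    hard = [f for f in findings if f["kind"] == "hard_finding"]
    hints = [f for f in findings if f["kind"] == "dfm_hint"]
    return {
        "hard_findings": hard,            # measured, rule-based
        "dfm_assistant_notes": hints,     # need manual confirmation
        "unconfirmed": [h for h in hints if h.get("requires_confirmation")],
    }
```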

Test method: Run both on the same part set and ensure the UI/report always separates "Hard Findings" vs "DFM Assistant Notes." No mixing.


Problem 9: Batch Processing and Speed

What breaks: Designers will not wait minutes per part, especially when iterating. NX operations can be heavy.

Why it's hard: Drawing generation + screenshots + checks is compute-intensive.

Mitigation: Cache everything by revision, allow "fast mode" (skip vision, skip heavy checks), and run generation/checks as a background job while the engineer keeps working.
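A sketch of revision-keyed caching with a fast mode; the check functions are stubs standing in for the real passes, and background-job plumbing is omitted:

```python
import hashlib
import json

_CACHE = {}

def revision_key(item_id: str, revision: str, config: dict) -> str:
    """Results are immutable per (item, revision, config), so cache on that."""
    payload = json.dumps([item_id, revision, config], sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def run_checks(item_id, revision, config, fast=False):
    """Fast mode skips the heavy vision pass entirely."""
    key = revision_key(item_id, revision, {**config, "fast": fast})
    if key not in _CACHE:
        results = deterministic_checks(item_id, revision, config)
        if not fast:
            results += vision_checks(item_id, revision, config)
        _CACHE[key] = results
    return _CACHE[key]

def deterministic_checks(item_id, revision, config):
    return []  # placeholder: the rule-based checks

def vision_checks(item_id, revision, config):
    return []  # placeholder: the slow vision DFM pass
```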

Test method: Time 10 parts end-to-end. Set targets: generate+check within acceptable time for your workflow (e.g., <1–2 min/part initially). Track worst cases.


Problem 10: Trust and Responsibility: Who "Owns" the Generated Drawing Quality?

What breaks: If RapidDraft triggers NX drawing creation, people may assume it's "approved." That's risky culturally.

Why it's hard: Clear accountability is a human/org problem, not a tech problem.

Mitigation: Make it explicit: "Generated draft drawing — requires checker sign-off." Embed a checklist and sign-off line in the report.
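A minimal sketch of the readiness gate; the field names and watermark text are assumptions:

```python
REQUIRED_STATUS = "checker_signed_off"
WATERMARK = "GENERATED DRAFT - REQUIRES CHECKER SIGN-OFF"

def can_mark_ready(drawing_record: dict) -> bool:
    """A generated draft cannot become 'ready' until a named checker
    signs off; the watermark stays on until then."""
    return (
        drawing_record.get("watermark") == WATERMARK
        and drawing_record.get("status") == REQUIRED_STATUS
        and bool(drawing_record.get("signed_off_by"))
    )
```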

Test method: Ensure every generated drawing/report includes a clear status watermark and a required sign-off step before it can be marked "ready."


Summary: Testing Strategy

For each problem, follow this pattern:

  1. Implement mitigation → build the safeguard
  2. Run the test method → prove it works on controlled data
  3. Iterate on edge cases → test the boundaries
  4. Document results → what passed, what failed, what's next

This approach, borrowed from mechanical engineering testing, ensures that hard problems are solved systematically, not by luck or hope.