
DFM Pipeline — Live Code Explainer

Source: DFM Benchmarking/dfm_pipeline_explainer.docx — direct inspection of branch codex/dfm-benchmark-sidebar in D:\02_Code\21_RapidDraft_ProductversionFinal_DFMBenchmark
Status: Reference — explains the live codebase as of March 2026
Purpose: Understand what the system is actually doing, not a formal specification


The Single Most Important Mental Model

"RapidDraft is not asking 'what did the CAD file say?' It is asking 'what manufacturing story does this geometry imply, and which rules does that story violate or leave under-specified?'"

The system is a layered engineering pipeline:

  1. CAD geometry understanding
  2. Manufacturing interpretation (Part Facts)
  3. Rule evaluation (standards-backed)
  4. Report presentation (UI payload)

It is not:

  • Full machining physics simulation
  • Live reading of standards text
  • A monolithic AI understanding the part in one step


1. Big Picture: The Pipeline

Question the system answers: Given a STEP model and part context, what manufacturing problems are likely to matter, which process does the part most resemble, and which standards-backed rules does that story violate or leave under-specified?

User selects part and review settings
POST /api/models/{model_id}/dfm/review-v2
PartFactsService.get_or_create(...)        ← CAD geometry extraction
build_extracted_facts_from_part_facts(...)  ← Flatten to review vocabulary
plan_dfm_execution(...)                     ← Process classification + route selection
generate_dfm_review_v2(...)                 ← Rule evaluation
findings + standards + cost + geometry_evidence
DfmReviewSidebar or DfmBenchmarkSidebar

Main inputs: STEP file, selected component, part profile (material, industry), review settings from sidebar
Main outputs: Review-v2 payload containing route recommendations, findings, standards list, standards trace, cost outputs, geometry evidence

The live review path reads: STEP → CAD extraction → Part Facts → review facts → planning → rules → UI payload. The benchmark JSON (Cadex) is used for comparison and calibration, not as the production UI contract.
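The flow above can be sketched as a minimal Python orchestration. This is an illustrative stand-in, not the real implementation: the actual entry points are PartFactsService.get_or_create, build_extracted_facts_from_part_facts, plan_dfm_execution, and generate_dfm_review_v2; the function bodies and fact keys here are invented.

```python
def get_or_create_part_facts(model_id):
    # Stand-in for PartFactsService.get_or_create: the real one runs OCC extraction.
    return {
        "geometry": {"bbox_mm": (80.0, 40.0, 20.0)},
        "manufacturing_signals": {"rotational_symmetry": False, "pockets_present": True},
    }

def build_extracted_facts(part_facts):
    # Stand-in for build_extracted_facts_from_part_facts: flatten the
    # sectioned schema into one review-vocabulary fact map.
    flat = {}
    for section in part_facts.values():
        flat.update(section)
    return flat

def plan_execution(facts, settings):
    # Stand-in for plan_dfm_execution: pick a route from facts + overrides.
    process = settings.get("process_override") or (
        "cnc_turning" if facts.get("rotational_symmetry") else "cnc_milling"
    )
    return {"process": process, "facts": facts}

def generate_review(plan):
    # Stand-in for generate_dfm_review_v2: rule evaluation would happen here.
    return {"route": plan["process"], "findings": [], "facts_used": sorted(plan["facts"])}

payload = generate_review(plan_execution(build_extracted_facts(get_or_create_part_facts("model-1")), {}))
```

The point of the sketch is the layering: each stage consumes only the previous stage's output, which is why the benchmark can swap in Cadex facts without touching the rules.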


2. CAD and Geometry Logic (server/cnc_geometry_occ.py)

The CAD engine is the most important technical part. It is mostly deterministic geometry reasoning built on OCC (OpenCASCADE), not a machine-learning model.

The part is read as a boundary-representation solid (B-Rep): faces, edges, and surfaces — not voxels or a mesh.

Face Inventory

CncGeometryAnalyzer._face_inventory_record(...) — for each face, records:

  • Surface type (plane, cylinder, cone, torus, NURBS B-spline)
  • Area, centroid, bounding box, sample point, sample normal
  • Interior vs exterior classification (heuristic)
  • Surface axis (if revolved)
  • Adjacent faces (via shared edges)

Key insight: A single face rarely tells the manufacturing story. A pocket floor only looks like a pocket floor when you see the surrounding walls. A cylindrical face only looks like a bore or a shaft when you see how it sits in the rest of the part.
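A minimal sketch of what one inventory record might carry. The field names are illustrative, not the real attribute names of CncGeometryAnalyzer._face_inventory_record:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class FaceRecord:
    """Illustrative shape of one face-inventory entry (hypothetical fields)."""
    face_id: int
    surface_type: str                        # "plane", "cylinder", "cone", "torus", "bspline"
    area: float
    centroid: Tuple[float, float, float]
    sample_normal: Tuple[float, float, float]
    interior: bool                           # heuristic interior/exterior call
    axis: Optional[Tuple[float, float, float]] = None   # set for revolved surfaces
    adjacent_faces: List[int] = field(default_factory=list)  # face ids sharing an edge

# A candidate pocket floor: planar, interior, no revolution axis yet.
floor = FaceRecord(7, "plane", 120.0, (10.0, 5.0, -3.0), (0.0, 0.0, 1.0), True)
```

The adjacency list is what lets later detectors reason about a face "in context" rather than in isolation.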

Turning and Rotational Symmetry

detect_turning_from_face_inventory(...) — detects whether the part is organized around a dominant turning axis.

The detector asks: Do enough exterior revolved surfaces line up around one spindle axis to justify calling this a turning-like part? It evaluates:

  • Fraction of exterior revolved area
  • Axial span covered by the cluster
  • Whether that axis agrees with the dominant part dimension

Not asking whether every face is perfectly lathe-made. Asking whether the part is organised around a dominant turning axis.
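A toy version of that test, assuming simplified face records; the thresholds, field names, and axis-quantisation trick are invented for illustration and the real detect_turning_from_face_inventory is more careful (e.g. about antiparallel axes):

```python
from collections import defaultdict

def detect_turning_axis(faces, part_length, area_fraction_min=0.5, span_fraction_min=0.6):
    """Toy turning detector: cluster exterior revolved faces by axis direction,
    then accept the dominant cluster if it covers enough area and axial span."""
    total_area = sum(f["area"] for f in faces)
    clusters = defaultdict(list)
    for f in faces:
        if f["revolved"] and not f["interior"]:
            # quantise the axis direction so nearly-parallel axes group together
            key = tuple(round(c, 2) for c in f["axis"])
            clusters[key].append(f)
    if not clusters:
        return None
    axis, group = max(clusters.items(), key=lambda kv: sum(f["area"] for f in kv[1]))
    area_fraction = sum(f["area"] for f in group) / total_area
    axial_span = max(f["z_max"] for f in group) - min(f["z_min"] for f in group)
    if area_fraction >= area_fraction_min and axial_span / part_length >= span_fraction_min:
        return axis
    return None
```

Note how both gates mirror the prose: a big revolved cluster that covers only a sliver of the part's length is not accepted as the spindle axis.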

Once a primary axis is accepted, turned geometry is separated into: turned diameter faces, turned end faces, and turned profile faces. Shorter/complex lathe parts add detection for OD grooves, end-face grooves, short bores, and circular milled interruptions.

Holes and Bores

Cylinder records are normalised (axis line, diameter, depth, completeness, axial span, neighbouring faces), then grouped by common axis. The system decides whether a set of cylindrical faces behaves like:

  • Through hole
  • Partial hole
  • Stepped hole
  • Bore
  • Exterior turned body (explicitly rejected as a hole)

A bore is simply a hole-like cylindrical region whose geometry suggests a larger coaxial internal diameter. The distinction between a hole and a bore comes from radius, span, axis grouping, and context.
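The axis-grouping step can be sketched as follows, assuming simplified cylinder records (field names and tolerances are invented): two coaxial cylinders with different radii are what later reads as a stepped hole.

```python
def group_cylinders_by_axis(cylinders, dir_tol=1e-3, dist_tol=0.05):
    """Toy coaxial grouping: two cylinder records share a group when their
    axis directions are (anti)parallel and their axis points nearly coincide
    laterally."""
    def coaxial(a, b):
        dot = sum(x * y for x, y in zip(a["axis_dir"], b["axis_dir"]))
        if abs(abs(dot) - 1.0) > dir_tol:
            return False
        # lateral distance from b's axis point to a's axis line
        d = tuple(p - q for p, q in zip(b["axis_point"], a["axis_point"]))
        along = sum(x * y for x, y in zip(d, a["axis_dir"]))
        lateral_sq = sum(x * x for x in d) - along * along
        return lateral_sq <= dist_tol ** 2

    groups = []
    for c in cylinders:
        for g in groups:
            if coaxial(g[0], c):
                g.append(c)
                break
        else:
            groups.append([c])
    return groups
```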

Pockets

Pocket detection is built around the idea of a recessed floor face. It looks for planar faces with:

  • Enough area
  • Enough wall support
  • Enough distance from the outer boundary
  • Enough recess depth

Surroundings determine open vs enclosed pocket. The detector cares about: nearest boundary offset, recess ratio, wall-neighbour count, whether wall neighbours are planar or curved.

Not reading semantics from the file — inferring machining meaning from local topology of a recessed region.
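The gating logic can be sketched as a single predicate over a candidate floor face. All thresholds and field names below are invented for illustration; the real detector works from the face inventory and adjacency data:

```python
def classify_pocket(floor, part_height,
                    min_area=25.0, min_recess_ratio=0.1,
                    min_wall_neighbours=3, enclosed_wall_neighbours=4):
    """Toy pocket test on a candidate planar floor face. Returns
    "enclosed", "open", or None (not a pocket)."""
    if floor["area"] < min_area:
        return None                               # too small to matter
    if floor["wall_neighbours"] < min_wall_neighbours:
        return None                               # not enough wall support
    if floor["recess_depth"] / part_height < min_recess_ratio:
        return None                               # too shallow to call a pocket
    if floor["boundary_offset"] <= 0.0:
        return None                               # floor touches the outer boundary
    # surroundings decide open vs enclosed
    return "enclosed" if floor["wall_neighbours"] >= enclosed_wall_neighbours else "open"
```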

Bosses, Milled Faces, and Grooves

  • Bosses: exterior cylinders/cones that behave like protrusions (exclude main turned body)
  • Milled faces: what remains after turning, holes, pockets, and bosses are claimed; classified as flat milled, flat-side milled, curved milled, circular milled, convex profile-edge milled, concave fillet-edge milled
  • Grooves: short cylindrical bands with axial spans, radii, neighbour radii analysis

Honesty About What the System Does Not Do

The system is not:

  • Simulating chip load, cutting force, tool deflection, thermal growth, or fixture rigidity
  • Performing machining simulation

It is:

  • Performing manufacturability inference from geometry
  • Knowing that a deep pocket with a tiny corner radius is risky because the geometry implies a small tool and poor reach, not because it numerically simulates the milling operation


3. Manufacturing Interpretation: Part Facts (server/part_facts.py)

Part Facts convert raw geometry into manufacturing language. Schema has five sections:

  • geometry: what was measured (bounding box, volume, face counts)
  • manufacturing_signals: interpreted clues (rotational symmetry, turned face fraction)
  • declared_context: user profile inputs (material, selected process, industry)
  • process_inputs: fields the process classifier expects
  • rule_inputs: boolean flags for rule readiness (e.g., pocket_depth_available)

The bridge in server/dfm_part_facts_bridge.py flattens these sections into a simpler fact map with convenience booleans:

  • hole_features, through_hole_features, pockets_present
  • milled_faces_present, boss_features
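A sketch of what the bridge does, assuming the counts live in the sectioned schema; the section and key names are illustrative, not the real bridge's vocabulary:

```python
def build_review_facts(part_facts):
    """Toy bridge: flatten the Part Facts sections into one map, then derive
    convenience booleans from feature counts."""
    facts = {}
    for section in ("geometry", "manufacturing_signals", "declared_context",
                    "process_inputs", "rule_inputs"):
        facts.update(part_facts.get(section, {}))
    # convenience booleans derived from (hypothetical) feature counts
    facts["hole_features"] = facts.get("hole_count", 0) > 0
    facts["through_hole_features"] = facts.get("through_hole_count", 0) > 0
    facts["pockets_present"] = facts.get("pocket_count", 0) > 0
    facts["milled_faces_present"] = facts.get("milled_face_count", 0) > 0
    facts["boss_features"] = facts.get("boss_count", 0) > 0
    return facts
```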

Process Classification

server/dfm/process_classifier.json — heuristic scorer:

  • CNC turning: boosted when rotational_symmetry and turned_faces_present are true
  • CNC milling: boosted by pockets_present, threaded_holes_count, or feature_complexity_score

Process recommendation is a thin decision layer built on top of extracted facts — not a separate AI model.
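A weighted-sum sketch of that scorer. The weight structure mirrors the description of process_classifier.json, but the numeric values are invented for illustration:

```python
# Hypothetical weights in the shape described for process_classifier.json.
CLASSIFIER_WEIGHTS = {
    "cnc_turning": {"rotational_symmetry": 3.0, "turned_faces_present": 2.0},
    "cnc_milling": {"pockets_present": 2.0, "threaded_holes_count": 0.5,
                    "feature_complexity_score": 1.0},
}

def score_processes(facts, weights=CLASSIFIER_WEIGHTS):
    """Weighted-sum heuristic: booleans count as 0/1, numbers as-is.
    Returns the best process plus the full score map."""
    scores = {
        process: sum(w * float(facts.get(key, 0)) for key, w in keys.items())
        for process, keys in weights.items()
    }
    best = max(scores, key=scores.get)
    return best, scores
```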


4. Standards and Rules (server/dfm/)

The runtime does not read standards PDFs and reason over them live. Standards and design-guide knowledge have been encoded into structured JSON assets via manual knowledge encoding.

Bundle Structure

  • rule_library.json: all DFM rules with inputs, severity, standards citations
  • references.json: standards reference library (ISO, DIN, AS, etc.)
  • process_classifier.json: heuristic process scoring weights
  • overlays.json: industry/process-specific rule pack additions
  • roles.json: user role definitions (design engineer, manufacturing engineer, etc.)
  • report_templates.json: report layout templates
  • ui_bindings.json: UI display logic
  • cost_model.json: manufacturing cost model parameters

How Rules Work (Example: CNC-005)

CNC-005: pocket internal corners with small radii drive small tools → depth and cost problems.

{
  "rule_id": "CNC-005",
  "required_inputs": ["pocket_depth", "pocket_corner_radius"],
  "deterministic": true,
  "severity": "warning",
  "citation": "REF-CNC-2"
}

The evaluator _evaluate_cnc_005(...) computes pocket_depth / pocket_corner_radius and flags a violation when the ratio exceeds 8.0.

The chain:

Standard → cited by Rule → Rule evaluates Part Facts → Finding emitted with evidence box

If inputs are missing, the system records an evidence gap instead of pretending it knows the answer.
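A stand-in for that evaluator, including the evidence-gap behaviour. The return-dict shape is hypothetical; only the threshold, inputs, and citation come from the rule definition above:

```python
def evaluate_cnc_005(facts, threshold=8.0):
    """Depth-to-corner-radius check; emits an evidence gap instead of a
    verdict when required inputs are missing."""
    required = ("pocket_depth", "pocket_corner_radius")
    missing = [key for key in required if facts.get(key) is None]
    if missing:
        return {"rule_id": "CNC-005", "status": "evidence_gap", "missing_inputs": missing}
    ratio = facts["pocket_depth"] / facts["pocket_corner_radius"]
    status = "rule_violation" if ratio > threshold else "passed"
    return {"rule_id": "CNC-005", "status": status, "ratio": ratio, "citation": "REF-CNC-2"}
```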

Rule Evaluation Sequence

In _evaluate_plan(...) inside server/dfm_review_v2.py, each rule goes through:

  1. Gather references
  2. List required inputs
  3. Detect missing inputs
  4. Run the registered evaluator if one exists
  5. Classify the result: passed / rule_violation / evidence_gap / unresolved
  6. Update the standards trace
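The sequence can be sketched as a loop over rule definitions. This is an illustrative simplification of _evaluate_plan, not its real code; rule and finding shapes are assumed:

```python
def evaluate_plan(rules, facts, evaluators):
    """Toy per-rule loop: missing inputs -> evidence_gap, no registered
    evaluator -> unresolved, otherwise passed / rule_violation. Also
    accumulates a citation -> rule-ids standards trace."""
    findings, trace = [], {}
    for rule in rules:
        missing = [i for i in rule["required_inputs"] if facts.get(i) is None]
        evaluator = evaluators.get(rule["rule_id"])
        if missing:
            status = "evidence_gap"
        elif evaluator is None:
            status = "unresolved"
        else:
            status = "rule_violation" if evaluator(facts) else "passed"
        findings.append({"rule_id": rule["rule_id"], "status": status})
        trace.setdefault(rule["citation"], []).append(rule["rule_id"])
    return findings, trace
```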


5. Effective Context and Planning

Not all sidebar controls do the same kind of work:

  • Evaluation controls change the actual evaluation path (process, overlay, role)
  • Presentation controls change packaging and display only

Effective Context Resolution (server/dfm_effective_context.py)

Decides whether the process and overlay came from:

  • A profile mapping
  • An explicit override
  • Auto mode

The UI can tell you not just what process was used, but where that decision came from.
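A minimal sketch of that resolution, assuming an explicit override takes precedence over a profile mapping, which takes precedence over auto mode (the precedence order is my assumption, not stated by the source):

```python
def resolve_process(explicit_override, profile, auto_recommendation):
    """Toy precedence sketch (assumed order: override > profile > auto).
    Returns the chosen process plus its provenance label, so the UI can
    report where the decision came from."""
    if explicit_override:
        return explicit_override, "explicit_override"
    if profile.get("process"):
        return profile["process"], "profile_mapping"
    return auto_recommendation, "auto"
```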

Process Planning (server/dfm_planning.py)

  1. Scores candidate processes using process_classifier.json heuristics
  2. Compares AI recommendation to any user override
  3. If mismatch and policy allows: runs both routes
  4. Each route gets: process-specific default packs + overlay pack + selected role + selected template
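Step 3 (the mismatch case) can be sketched as a small route-selection helper; the function name and policy flag are hypothetical:

```python
def plan_routes(ai_process, user_override, allow_dual_route=True):
    """Toy sketch of dual-route planning: when the AI recommendation and a
    user override disagree and policy allows, both routes are planned."""
    if user_override is None or user_override == ai_process:
        return [user_override or ai_process]
    return [user_override, ai_process] if allow_dual_route else [user_override]
```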

6. Benchmarking (Two Lanes)

Two benchmark modes serve different purposes:

  • logic_only: the Cadex feature file is adapted directly to RapidDraft facts. Tests whether the rules evaluated correctly given clean facts.
  • end_to_end: the STEP file goes through full live extraction and is compared to Cadex references. Tests whether geometry extraction landed in the right feature families.

Feature benchmarking: Did our extracted geometry land in the same feature families that Cadex called out, and did we count them correctly?

DFM benchmarking: Did our review produce issue categories that resemble the categories in Cadex DFM output? (Inherently looser — RapidDraft findings are standards-backed rule outputs, not Cadex-native issue labels)
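The feature-benchmark comparison can be sketched as a per-family count diff; the report shape and family names are illustrative, not the real server/dfm_benchmark.py output:

```python
def compare_feature_counts(ours, reference):
    """Toy feature benchmark: compare per-family counts between our
    extraction and a reference (e.g. Cadex). Returns the per-family
    report plus the fraction of families with matching counts."""
    report = {}
    for family in sorted(set(ours) | set(reference)):
        a, b = ours.get(family, 0), reference.get(family, 0)
        report[family] = {"ours": a, "reference": b, "match": a == b}
    matched = sum(1 for r in report.values() if r["match"])
    return report, matched / len(report)
```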

Key Benchmark Cases

  • Sample 2: Early proof that turning-like geometry was being under-surfaced. Made the gap visible early.
  • Sample 6: Stresses the newer short-turning logic — OD grooves, end-face grooves, circular milled faces on a turning-dominant part. Proves the newer lathe-specific work.

7. Current Limitations and Approximations

  • Heuristic feature detection: no guarantee every CAD model maps cleanly into one canonical feature taxonomy; a shallow recess can look like a pocket in one context and a blend in another
  • Live path vs Cadex reference divergence: not always a bug; the two extraction paths are related but not identical
  • UI compresses geometry: the DFM Evidence sidebar shows curated summaries; the backend often knows more than the UI exposes
  • No exact geometric blame map: many findings come from aggregate facts (minimum internal radius, maximum pocket depth); provenance is not yet carried into a precise viewer highlight
  • Standards layer is manually encoded: explicit and testable, but does not automatically stay aligned with every nuance of a standards document

8. What Good Looks Like: The Honest Stack

STEP file
    ↓ OpenCASCADE B-Rep reader
Face inventory (surface type, area, normal, adjacency)
    ↓ Feature detectors (turning, holes, pockets, bosses, milled faces, grooves)
Geometry features (classified, counted, measured)
    ↓ Part Facts schema (geometry / signals / context / inputs)
Manufacturing vocabulary (flat fact map)
    ↓ Process classifier + planning
Selected route + active rule packs
    ↓ Rule evaluators (deterministic threshold checks)
Findings (rule_violation, evidence_gap, passed)
    ↓ Standards trace + cost model
Review-v2 payload
    ↓ UI sidebar
Engineer reviews findings with evidence boxes

Key Code Files

  • server/cnc_geometry_occ.py: face inventory and feature detectors (turning, holes, pockets, etc.)
  • server/part_facts.py: PartFactsService; converts geometry to the Part Facts schema
  • server/dfm_part_facts_bridge.py: flattens Part Facts to the review fact map
  • server/dfm_effective_context.py: resolves effective process/overlay/role context
  • server/dfm_planning.py: process scoring and route planning
  • server/dfm_review_v2.py: rule evaluation pipeline
  • server/dfm_benchmark.py: benchmark runner (logic_only and end_to_end modes)
  • server/dfm_bundle.py: loads and validates the DFM rule bundle JSON files
  • server/dfm/rule_library.json: rule definitions with inputs, severity, citations
  • server/dfm/references.json: standards reference library