Fatigue Agent — Technical Pitch (Slide Content)

Source: Architechture & Research/Fatigue Agent/Pitch/FatigueAI Technical Pitch v2.pptx (v2, February 2026)
Status: Active — use for investor/customer conversations
Audience: Engineering managers, stress analysts, industrial equipment manufacturers

This page captures the full content of the Fatigue Agent technical pitch presentation (v2). The pitch story runs from problem → context → approach → architecture → governance → workflow comparison → validation → pilot CTA.


Slide 1 — Title

Automated Fatigue Post-Processing for Industrial Equipment & Machinery

An agent-based approach to fatigue life prediction from existing FE results — solver-agnostic, deterministic, traceable

February 2026


Slide 2 — Problem: Fatigue Post-Processing Remains Largely Manual

Most industrial equipment manufacturers perform static FEA but handle fatigue assessment manually — or skip it entirely.

The current manual workflow:

1. Extract nodal stresses from FE results
2. Identify critical locations manually
3. Look up S-N data from standard (PDF/book)
4. Rainflow count load history in Excel
5. Calculate damage per Miner's rule
6. Compile report in Word

The result:

- 4–8 hours per analysis
- No traceability of inputs or assumptions
- ±30% variance between analysts


Slide 3 — Context: Your Engineering Environment

Company profile: Machine builders, conveyor OEMs, packaging line manufacturers — Mittelstand / SME

| Dimension | Typical state |
| --- | --- |
| Team size | 1–3 simulation engineers per company |
| Purchasing | Engineering-led decisions |
| Solvers | OptiStruct, Nastran, Abaqus, Ansys Mechanical |
| Structures | Welded steel/aluminium frames, brackets, sheet-metal assemblies |
| Load profiles | Deterministic duty cycles — bending, tension, combined loading |
| Current practice | Static analysis with safety factor 2–5×; often no explicit fatigue check |

Slide 4 — Problem: Consequences of Missing Fatigue Data

Without systematic fatigue assessment, engineering teams face two failure modes:

Under-design:

- Field failures discovered by customers
- Warranty claims, production downtime, reputational risk
- Reactive rather than preventive engineering

Over-design:

- Safety factors of 3–5× without basis
- Excess material cost (+30–40%) and unnecessary weight
- Conservatism masquerading as engineering rigour


Slide 5 — Approach: Agent-Based Fatigue Post-Processing

Operates on existing FE results — no new solver, no changes to your modelling process.

Post-processing only: Works downstream of your existing solver. Reads .op2, .h3d, .odb, .rst result files directly. No licence dependencies, no solver changes.

Deterministic math: Palmgren-Miner damage, rainflow counting, S-N interpolation. No ML predictions on fatigue life — identical inputs produce identical outputs.

Full traceability: Every output links to specific inputs — S-N curve, load case, extraction point, standard clause. Auditable by design.
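The "deterministic math" claim can be made concrete on one slide. A minimal sketch of Palmgren-Miner damage over a Basquin-form S-N curve — the FAT-class convention (stress range in MPa sustained for 2 million cycles) and the slope m = 3 follow IIW practice for welded steel details; the function names are illustrative, not the product's API:

```python
def cycles_to_failure(stress_range_mpa, fat_class=90.0, m=3.0, n_ref=2.0e6):
    """Basquin-form S-N curve: FAT class is the stress range (MPa)
    endured for 2 million cycles; slope m = 3 for welded steel details."""
    return n_ref * (fat_class / stress_range_mpa) ** m

def miner_damage(cycle_counts, **sn_params):
    """Palmgren-Miner linear damage sum over (stress_range, n_applied) pairs."""
    return sum(n / cycles_to_failure(s, **sn_params) for s, n in cycle_counts)

# Pure functions of their inputs: identical inputs always give identical damage.
damage = miner_damage([(120.0, 1.0e5), (80.0, 5.0e5)])
```

Damage ≥ 1.0 indicates predicted fatigue failure under Miner's rule; there is no stochastic or learned element anywhere in the calculation.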


Slide 6 — Workflow: Processing Pipeline

FE result file (.op2, .h3d, .odb, .rst)
    ↓ Result parser
Stress extraction at candidate locations
    ↓ Hotspot detection
Critical location ranking
    ↓ Load cycle processing
Rainflow counting (ASTM E1049)
    ↓ Damage calculation
Palmgren-Miner (FKM / IIW / Eurocode 3)
    ↓ Report generation
Traceable PDF with input hashes, S-N data, method rationale

Standards applied: FKM Guideline · IIW Recommendations · Eurocode 3 · ASTM E1049 (Rainflow) · Goodman / Gerber / FKM R-ratio
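The cycle-processing stage of the pipeline can be sketched in a few lines. This is a three-point rainflow counter in the style of ASTM E1049 (a simplified sketch, not the production parser), checked against the standard's well-known reference history:

```python
def reversals(series):
    """Reduce a load history to its sequence of peaks and valleys."""
    pts = [series[0]]
    for x in series[1:]:
        if x == pts[-1]:
            continue
        if len(pts) >= 2 and (pts[-1] - pts[-2]) * (x - pts[-1]) > 0:
            pts[-1] = x  # still rising/falling: extend the current excursion
        else:
            pts.append(x)
    return pts

def rainflow_count(revs):
    """Three-point rainflow counting (ASTM E1049 style).
    Returns (range, count) pairs; count is 0.5 or 1.0."""
    stack, cycles = [], []
    for r in revs:
        stack.append(r)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])
            y = abs(stack[-2] - stack[-3])
            if x < y:
                break
            if len(stack) == 3:
                cycles.append((y, 0.5))  # range includes the start point
                stack.pop(0)
            else:
                cycles.append((y, 1.0))  # closed cycle: drop its two points
                last = stack.pop()
                stack.pop(); stack.pop()
                stack.append(last)
    for a, b in zip(stack, stack[1:]):   # leftover ranges count as halves
        cycles.append((abs(b - a), 0.5))
    return cycles
```

Run on the reference history [-2, 1, -3, 5, -1, 3, -4, 4, -2], this yields 0.5 cycles at ranges 3, 6 and 9, 1.5 cycles at range 4, and 1.0 cycle at range 8.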


Slide 7 — Technical Depth: Standards-Based Fatigue Data

  • S-N curves per IIW FAT classification (steel, aluminium weld classes)
  • Duty cycle load history with peak detection and sequence counting
  • Rainflow counting matrix per ASTM E1049
  • Built-in material library covering FKM and IIW FAT class data
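The mean-stress (R-ratio) corrections named on the previous slide reduce to short closed-form expressions. A sketch of the Goodman and Gerber corrections under illustrative names (`sigma_u` is the ultimate tensile strength; the product's FKM variant is not reproduced here):

```python
def goodman_equivalent(sigma_a, sigma_m, sigma_u):
    """Equivalent fully reversed amplitude per the (modified) Goodman line."""
    return sigma_a / (1.0 - sigma_m / sigma_u)

def gerber_equivalent(sigma_a, sigma_m, sigma_u):
    """Equivalent fully reversed amplitude per the Gerber parabola."""
    return sigma_a / (1.0 - (sigma_m / sigma_u) ** 2)
```

At zero mean stress both corrections leave the amplitude unchanged; with tensile mean stress Goodman is the more conservative of the two.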

Slide 8 — Analysis Output: Hotspot Identification and Damage Ranking

  • Stress field with auto-detected critical locations
  • Cumulative damage ranked by severity
  • Top 20 locations ranked by damage, with confidence flags
  • Sensitivity analysis for parameter changes (material, load amplitude, safety factors)
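The ranking and sensitivity outputs above are simple to state precisely. A sketch with illustrative names — the sensitivity shortcut assumes a single Basquin slope m, under which damage scales with the cube (m = 3) of load amplitude:

```python
def rank_hotspots(damage_by_location, top_k=20):
    """Locations sorted by cumulative damage, highest first."""
    ranked = sorted(damage_by_location.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_k]

def scaled_damage(base_damage, load_scale, m=3.0):
    """Under a Basquin S-N curve with slope m, stress scales linearly with
    load amplitude, so damage scales with load_scale ** m."""
    return base_damage * load_scale ** m
```

The second function is why fatigue is so sensitive to load assumptions: a 10% increase in load amplitude raises damage by roughly 33% at m = 3.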

Slide 9 — Architecture: System Architecture

Interface Layer
Web UI (React) · File upload · Parameter config · Report download
Agent Layer
LLM orchestrator · Method selection · Input validation · RAG on standards
Compute Engine
Deterministic Python (NumPy/SciPy) · Result parsers (pyNastran, h5py) · Rainflow · Miner's rule

Key constraint: The AI layer handles workflow orchestration and method selection only. All fatigue calculations are deterministic — identical inputs always produce identical outputs.


Slide 10 — Governance: Engineering Governance

Every analysis run produces a complete audit record. The system enforces method boundaries to prevent invalid configurations.

| Control | Detail |
| --- | --- |
| Method approval | Only FKM / IIW / Eurocode methods enabled. No custom or unvalidated approaches. |
| Parameter bounds | State machine enforces valid ranges. Rejects out-of-scope inputs before computation. |
| Traceability | Report includes input file hashes, S-N curve IDs, method rationale, extraction coordinates. |
| Reproducibility | Deterministic compute — same inputs produce identical outputs regardless of run time. |
| Audit log | Timestamped record of all agent decisions, method selections, and user overrides. |
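The traceability control can be demonstrated concretely: hashing the input file ties every report to its exact inputs. A minimal sketch of an audit entry (the field names are illustrative, not the product's actual record schema):

```python
import datetime
import hashlib

def audit_record(result_file_bytes, method, params):
    """Minimal audit entry: the content hash links the report to its inputs."""
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "input_sha256": hashlib.sha256(result_file_bytes).hexdigest(),
        "method": method,
        "params": params,
    }
```

Because the compute layer is deterministic, re-running with a file that hashes to the same digest must reproduce the reported numbers exactly — which is what makes the record auditable rather than merely logged.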

Slide 11 — Compatibility: Supported Formats and Data

| Format | Extensions | Phase |
| --- | --- | --- |
| OptiStruct | .op2, .h3d | Phase 1 (priority) |
| Nastran | .op2 | Phase 1 (via pyNastran — shared format) |
| Abaqus | .odb | Phase 2 (via Abaqus Python API) |
| Ansys | .rst | Phase 2 (via pyMAPDL or dpf-core) |
| Load data | CSV, time series | Phase 1 (duty cycles, vibration spectra) |
| Material data | Built-in library | Phase 1 (FKM / IIW FAT classes — steel, aluminium) |
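The Phase 1 load-data path is plain CSV time series. A sketch of the ingest step feeding the cycle-processing stage — the two-column layout with a header row is an assumption for illustration, not a fixed input specification:

```python
import csv
import io

def read_load_history(csv_text, load_column="load"):
    """Parse a CSV duty-cycle export into a list of load values.
    Assumes a header row naming the columns; extra columns are ignored."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [float(row[load_column]) for row in reader]
```

The resulting list is exactly what the rainflow stage consumes, so a solver result file is not required to validate the load-processing path in isolation.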

Slide 12 — Comparison: Workflow Comparison

| Dimension | Manual process | Agent-based |
| --- | --- | --- |
| Cycle time | 4–8 hours | ~15 minutes |
| Consistency | Analyst-dependent | Deterministic |
| Traceability | Implicit | Explicit — documented |
| Method selection | Manual lookup | Auto-selected |
| Hotspot ID | Visual inspection | Ranked by damage |
| Reporting | Manual Word | Auto-generated PDF |
| Specialist required | Yes | No — guided workflow |

Slide 13 — Validation: Three-Phase Validation Approach

Validated against known benchmarks and pilot partner data before any production use.

Phase 1 — Benchmarks: Validate core engine against published analytical solutions. IIW/FKM benchmark problems. ASTM E1049 reference signals.

Phase 2 — Pilot data: Compare agent outputs against existing manual assessments from pilot partners. Same FE results, same load cases.

Phase 3 — Field correlation: Correlate predictions with actual field failure data. Track prediction accuracy. Build confidence intervals.


Slide 14 — CTA: Pilot Offer

We are looking for 3–5 engineering teams in the industrial equipment sector to validate this approach against real analysis workflows.

What a pilot involves:

- You provide representative FE results and load histories
- We run the agent and compare against your existing assessments
- Joint review of results, method selection, and report quality
- 3–4 month validation period — no cost during pilot

Contact: hello@rapiddraft.com


Pitch Notes

On the "deterministic" claim: This is the most important technical credibility point. The LLM never touches fatigue numbers. It orchestrates the workflow and selects methods. All Miner's rule calculations, rainflow counting, and S-N interpolation happen in deterministic Python. This is not marketed as AI fatigue analysis — it is fatigue analysis with an AI workflow layer.

On competition: FEA tools (ANSYS Mechanical, OptiStruct) have fatigue modules, but they require the analyst to set up every parameter. Dedicated fatigue tools (nCode, fe-safe, FEMFAT) are expensive (€10k–50k+) and targeted at automotive OEMs. The Fatigue Agent targets the 1–3 analyst team at a Mittelstand company that cannot justify those tools.

On adoption path: No changes to existing solver, no new CAD license, no IT project. The engineer exports their FE results, uploads to the agent UI, and gets a traceable fatigue report. This is designed to be a day-one workflow.