Design Software History: Numerical Robustness in Geometry Kernels: History, Failure Modes, and Engineering Playbook

December 08, 2025



Setting the stage for numerical robustness in geometry kernels

Why the conversation never goes away

Geometry kernels sit at a tricky crossroads where algebra meets software engineering, and where manufacturing tolerances meet the cold arithmetic of machines. In practice, every modeler—from traditional B-reps in Parasolid and ACIS to newer field-based systems—must reconcile the mathematical ideal of real numbers with the finite representation of IEEE-754 floating-point. The result is a persistent, industry-wide struggle with “numerical robustness,” the umbrella term for keeping operations consistent, watertight, and predictable even when inputs are messy, scales vary wildly, and operations amplify rounding noise. Over decades, teams at Siemens (Parasolid), Spatial (ACIS), Dassault Systèmes (CGM), PTC (Granite), Autodesk (ShapeManager), ASCON/C3D Labs (C3D), and the Open CASCADE community have poured research and engineering effort into this single problem. The best kernels today are far stronger than those of the 1990s, yet design teams still encounter the same families of pathologies: booleans that refuse to finish, thin features that collapse under shelling, and offsets that produce cusps and slivers. This article maps the root causes, traces the evolution of ideas and institutions that raised the robustness bar, and lays out the practical playbook used by production kernels to bend but not break under ill-conditioned geometry. The conclusion is both sobering and empowering: the problem is hard for structural reasons, but the combination of smarter arithmetic, topology-aware algorithms, disciplined tolerance policy, and large-scale testing continues to raise the floor.

Why numerical robustness is the hardest problem in geometry kernels

The core tension: real geometry versus discrete topology

The heart of robustness is the mismatch between continuous geometry and discrete topology. The geometry of curves and surfaces lives in the world of real numbers and algebraic relations; the topology of a boundary representation (B-rep) is a finite combinatorial structure of vertices, edges, faces, and their adjacencies. Seemingly tiny metric perturbations in floating-point space—on the order of microns in millimeter units, or smaller—can flip topological decisions: whether two edges meet, whether a vertex lies on a surface, whether a loop is closed. Every kernel’s data structures try to bridge this divide with tolerances, but the bridge has to hold under cascades of operations. IEEE-754 floating-point arithmetic introduces machine epsilon-scaled errors at each step; when you chain booleans, offsets, shells, and fillets, these errors can compound or interact in ways that erode adjacency consistency. The effect is most brutal in neighborhoods of geometric degeneracy: nearly tangent intersections, near-coincident faces, or self-approaching trims. A single misclassified intersection point can force a different edge split, breaking edge-face loops and wrecking watertightness. Modern kernels mitigate this with adaptive-precision predicates, interval enclosures, and topology-first strategies, but the essential tension remains: the algebra says “almost equal,” while the B-rep must say “equal or not.” That binary obligation in a noisy numerical environment is why robustness remains the most challenging problem class.

  • Geometry (real-valued): algebraic/numerical descriptions of curves/surfaces.
  • Topology (discrete): explicit adjacencies among vertices, edges, faces.
  • Tolerance-managed bridges: policies that convert “almost equal” into topological glue.
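
To make the "almost equal versus equal" obligation concrete, here is a minimal sketch (in C++, with illustrative names and values, not any kernel's API) of a tolerance-managed vertex-coincidence test. The arithmetic only reports a distance; the tolerance policy is what converts that distance into a binary topological decision, so the same pair of points merges into one vertex under one tolerance and stays as two vertices under another.

```cpp
// Minimal sketch (hypothetical names): a tolerance-managed "same vertex" test.
// The B-rep must make a binary call even though the arithmetic only says "close".
#include <cstdio>

struct Point3 { double x, y, z; };

// Squared distance avoids an unnecessary sqrt and its extra rounding.
double dist2(const Point3& a, const Point3& b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// "Almost equal" becomes "equal" exactly when the gap fits inside the model tolerance.
bool coincident(const Point3& a, const Point3& b, double model_tol) {
    return dist2(a, b) <= model_tol * model_tol;
}

int main() {
    Point3 v1{10.0, 0.0, 0.0};
    Point3 v2{10.0 + 3e-6, 0.0, 0.0};  // a few microns apart in millimeter units
    std::printf("merge at 1e-5 mm: %d\n", coincident(v1, v2, 1e-5));  // 1: one vertex
    std::printf("merge at 1e-7 mm: %d\n", coincident(v1, v2, 1e-7));  // 0: two vertices
    return 0;
}
```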

Failure modes practitioners see in daily modeling

Practitioners experience robustness not as an academic concept but as specific, stubborn failures. After imports, users often see tiny gaps and overlaps along stitched seams; this leads to faces that refuse to knit or solids that remain non-watertight. Fillets and offsets produce sliver faces and edges so short that downstream operations snap them to zero, violating manifold constraints. Near-coincident entities multiply, inviting ambiguity in classification and trimming. After booleans, the resulting B-rep may be non-manifold or self-intersecting, especially when operands are nearly tangent or carry small defects from earlier steps. The 2D/3D mismatch is another frequent culprit: parametric curves on surfaces (p-curves) and their 3D counterparts drift beyond tolerance, leaving trims that no longer agree with their intended spatial locations. Offsets and fillets exacerbate matters because their construction increases curve/surface degree and can inject cusps and degeneracies; this creates trimming loops that are difficult to classify robustly. Shelling and thickening expose thin regions, pinched corners, and inverted normals. Toolmakers add "healing" and "sewing" features, but these are remedial; prevention requires algorithmic strategies that acknowledge the shaky ground on which many inputs—and their units—sit, even as outputs must remain watertight and manifold for manufacturing, meshing, and analysis.

  • Gaps, overlaps, and sliver faces; microscopic “stairs” along stitched seams.
  • Non-manifold or self-intersecting B-reps after boolean operations, shelling, or thickening.
  • 2D/3D mismatch: parametric trims and spatial curves drifting beyond model tolerance.
  • Offsets and fillets producing cusps, degree blow-up, and surface degeneracies.

Why booleans are numerically brittle

Boolean operations—union, intersection, subtraction—concentrate every robustness hazard into one pipeline: intersection, classification, trimming, and topological reassembly. Intersection algorithms must resolve curve-surface and surface-surface contacts under rounding; when configurations are ill-conditioned (nearly tangent, near-coincident, grazing), the numerical signature of an intersection can drop toward noise level, making robust detection and multiplicity decisions fragile. Even when intersections are detected, classification of points and regions as inside/outside relies on predicates that should be stable, yet small perturbations can flip outcomes. This sensitivity propagates to trimming: errors in intersection curves shift trims, creating topological inconsistencies in loops and face adjacency. Finally, reassembly relies on snapping and sewing guided by tolerances. Here, the choice of tolerance regime—absolute, relative, per-feature—determines whether the same model knits cleanly at centimeter scale yet fails at micron scale. Unit conversions exacerbate the ambiguity: what was a safe absolute tolerance in inches becomes aggressive in millimeters. The result is a phenomenon every CAD user knows: small changes to units, feature order, or operand positioning yield dramatically different boolean outcomes, all due to amplified rounding and tolerance choices at critical decision points. The sketch after the list below makes the unit-scaling effect concrete.

  • Intersection amplification: rounding noise elevated by near-tangency and grazing contacts.
  • Classification fragility: inside/outside tests flipping across tolerance thresholds.
  • Tolerance policy drift: absolute versus relative settings changing results across scales.
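
The tolerance-policy drift in the last bullet is easy to reproduce. The sketch below (illustrative numbers, not any kernel's defaults) evaluates the same physical seam gap against the same numeric tolerance before and after an inch-to-millimeter conversion: the gap sews in one unit system and stays open in the other unless the tolerance is scaled with the units.

```cpp
// Sketch (assumed policy, not any specific kernel's): why a fixed numeric
// tolerance that was safe in inches behaves differently once the model is in mm.
#include <cstdio>

// Hypothetical helper: does a seam gap of size `gap` close under tolerance `tol`?
bool gap_closes(double gap, double tol) { return gap <= tol; }

int main() {
    const double tol = 1e-5;        // numeric tolerance carried over unchanged
    double gap_in = 5e-6;           // a small seam gap, measured in inches
    double gap_mm = gap_in * 25.4;  // the same physical gap, now in millimeters

    // Same physical geometry, same numeric tolerance, different outcome:
    std::printf("in inches: %s\n", gap_closes(gap_in, tol) ? "sewn" : "open");  // sewn
    std::printf("in mm:     %s\n", gap_closes(gap_mm, tol) ? "sewn" : "open");  // open
    return 0;
}
```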

A short history: kernels, researchers, and accumulated wisdom

Early B-rep lineage: from ROMULUS to Parasolid and the tolerant template

The modern tolerance-aware B-rep owes much to the Cambridge lineage. In the 1970s and early 1980s, Ian Braid and colleagues at Cambridge/Shape Data developed ROMULUS, one of the first commercially viable solid modelers to embrace boundary representations with robust operators. This lineage seeded ideas that would later appear in Parasolid, which emerged in the late 1980s and, under successive owners—first Shape Data, then Unigraphics Solutions, and later Siemens Digital Industries Software—became the kernel used in countless systems. The key legacy was not merely data structures but a philosophy: embrace finite precision, define model tolerances explicitly, and use them consistently during sewing, splitting, and classification. These concepts were codified in APIs that encouraged applications to adopt a consistent notion of “near” across import, modify, and export flows. The ROMULUS/Parasolid approach set expectations for what a general-purpose kernel could do: boolean on trimmed NURBS, manage topology coherently, and scale from prismatic parts to sculpted surfaces. Although the era’s hardware limited numerical sophistication, the seeds of a tolerant B-rep architecture were planted. The community learned that robust trimming and rigorous adjacency management matter as much as any single operation, and that a kernel’s usability is inseparable from how well it handles the messy, noisy edge cases of industrial geometry.

  • Shape Data (Cambridge) → ROMULUS → Parasolid.
  • Tolerance-centric B-rep modeling sets the pattern still followed today.
  • APIs codify tolerance use in sewing, splitting, and classification.

ACIS and the 1990s kernel explosion

In parallel, Dick Sowar founded Spatial and launched ACIS, another influential kernel that spread rapidly through the 1990s by offering a general-purpose 3D modeling engine to ISVs. ACIS prioritized breadth: broad NURBS support, a flexible feature set, and integration tooling that helped smaller CAD/CAM vendors build competitive products. Its component model, documentation, and developer community made it a default choice for many. The 1990s saw a wave of systems—some built on Parasolid, others on ACIS—competing to bring solids into mainstream mechanical and industrial design. The result was a marketplace of kernels whose robustness improved under customer pressure but still wrestled with the limits of double-precision arithmetic and immature healing practices. ACIS’s success also influenced corporate strategy: Spatial was later acquired by Dassault Systèmes, which would both continue ACIS as a product and invest in its own kernel technology. The availability of robust kernels as off-the-shelf components changed the pace of CAD innovation; independent vendors could now focus on UI, parametrics, and vertical workflows, while relying on kernel teams to push the state of boolean, fillet, and shell operations. The trade-off was clear: robustness issues now had to be solved generically, across disparate application contexts and model qualities, underscoring the importance of portability and strong import diagnostics.

  • Spatial (founded by Dick Sowar) makes ACIS a widely licensed kernel.
  • Componentized kernels catalyze the 1990s CAD proliferation.
  • Robustness driven by broad, heterogeneous customer geometries.

Diversification, consolidation, and the rise of healing

By the 2000s, the kernel landscape diversified and consolidated simultaneously. Dassault Systèmes advanced its internal CGM kernel for CATIA V5 and later 3DEXPERIENCE applications, while also maintaining Spatial/ACIS for the broader market. PTC developed Granite to power Pro/ENGINEER (later Creo). Autodesk forked ACIS to create ShapeManager for Inventor. ASCON spun out C3D (through C3D Labs) from its KOMPAS-3D heritage. Siemens kept investing in Parasolid, which became a de facto standard due to its prevalence across mid-market and enterprise systems. In open source, Open CASCADE emerged as the prominent option, influencing academic and industrial prototyping alike. This period also normalized dual-kernel strategies and “import diagnostics/healing” workflows. As companies recognized that no single kernel could flawlessly interpret every translated file, they prioritized sewing, gap closure, sliver suppression, analytic-to-NURBS unification, and orientation fixes as first-class features. Large OEMs built translators that pooled knowledge of common partner systems and unit conventions, reducing the probability of catastrophic mismatches. The market learned to treat healing as a systematic pipeline rather than an afterthought, and customers grew to expect interactive reports that pinpointed issues like short edges, non-manifold vertices, and inverted normals, together with one-click fixes guided by sensible tolerance policies.

  • Proprietary kernels: CGM, Granite, ShapeManager, C3D, Parasolid.
  • Open source: Open CASCADE as a community hub for experimentation.
  • Healing pipelines become must-have: sewing, gap closing, and normalization.

Academic impact: EGC, robust predicates, and CGAL

While industry iterated on tolerant B-reps, academia formalized the numerical underpinnings. Chee Yap and colleagues articulated the Exact Geometric Computation (EGC) paradigm, which advocates exactness for critical predicates and constructions, often achieved by adaptive precision. Jonathan Richard Shewchuk’s robust predicates for orientation and incircle tests showed how to structure algorithms that are correct across floating-point ranges by escalating precision only when necessary. Libraries like GMP (by Torbjörn Granlund) and LEDA (Kurt Mehlhorn, Stefan Näher) provided big integers/rationals and algorithmic infrastructure. The CGAL project (Kurt Mehlhorn, Herbert Edelsbrunner, and a large community) became a repository of robust geometric algorithms, with filtered predicates, interval arithmetic, and exact kernels. Although many CAD kernels could not afford pure exact arithmetic for all operations, the ideas translated into practical hybrids: filtered predicates first, intervals for conservative classification, and multiprecision only on near-degenerate configurations. The intellectual effect was profound: it reframed robustness not as a bag of patches but as a design principle—separate decision-making (predicates) from construction, and pay for precision only when the data demands it. Over time, these concepts seeped into production-quality intersection, trimming, and boolean frameworks used across the kernel ecosystem.

  • EGC: exactness for key decisions; adapt precision as needed.
  • Shewchuk predicates: orientation/incircle tests with adaptive precision.
  • Infrastructure: GMP, LEDA, and the CGAL library.
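
The filtered-predicate idea translates directly into code. Below is a minimal 2D orientation predicate in the spirit of Shewchuk's adaptive predicates; the error bound is a simplified, conservative stand-in rather than his published constants, and the escalation path uses long double as a placeholder for a genuine multiprecision fallback.

```cpp
// A minimal filtered orientation predicate (sketch). The fast double-precision
// result is accepted only when it clears a conservative rounding-error bound;
// otherwise the test is re-evaluated at higher precision.
#include <cfloat>
#include <cmath>
#include <cstdio>

// Sign of the signed area of triangle (a, b, c): +1 CCW, -1 CW, 0 degenerate.
int orient2d_filtered(double ax, double ay, double bx, double by,
                      double cx, double cy) {
    double detleft  = (ax - cx) * (by - cy);
    double detright = (ay - cy) * (bx - cx);
    double det = detleft - detright;

    // Fast path: if |det| clears the error bound, the sign is trustworthy.
    double errbound = 4.0 * DBL_EPSILON * (std::fabs(detleft) + std::fabs(detright));
    if (det > errbound)  return  1;
    if (det < -errbound) return -1;

    // Slow path: escalate precision only for the near-degenerate cases.
    long double dl = ((long double)ax - cx) * ((long double)by - cy);
    long double dr = ((long double)ay - cy) * ((long double)bx - cx);
    long double d  = dl - dr;
    return (d > 0) - (d < 0);
}

int main() {
    // Fast path: a clearly counter-clockwise triangle.
    std::printf("%d\n", orient2d_filtered(0.0, 0.0, 1.0, 0.0, 0.0, 1.0));   // 1
    // Slow path: an exactly collinear triple with large coordinates falls
    // inside the uncertainty band and is resolved by escalation.
    std::printf("%d\n", orient2d_filtered(0.0, 0.0, 1e16, 1.0, 2e16, 2.0)); // 0
    return 0;
}
```

The design point is that the expensive path runs only when the fast result lands inside the uncertainty band, which in practice is a small fraction of calls.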

Community war stories and the cloud-native turn

Through the 2000s and 2010s, users kept surfacing the same pain—boolean failures, thin features, translation-induced gaps—driving investment in healing, sewing, and validation tools. Vendors embedded diagnostics that traced failures to specific faces or edges, recommended tolerance adjustments, and suggested repairs like merging short edges or replacing analytic/approximated surfaces to unify representation. A new turn came with cloud-native CAD. Parasolid running server-side in Onshape—founded by Jon Hirschtick, John McEleney, Mike Payne, and Dave Corcoran—brought determinism and telemetry at scale. Cloud deployment eliminated per-workstation differences in libraries and numerics, reducing non-determinism. It also enabled massive regression suites: every production failure signature could become a test, every fix validated across thousands of models. Telemetry identified hot spots in operations and geometry classes that triggered the most bugs. Vendors used this to tune tolerance policies, improve snapping and classification heuristics, and streamline healing pipelines. The cloud also encouraged versioned kernels and deterministic replay, making subtle numerical changes easier to deploy and audit. Combined, these practices accelerated the cycle between field failures and core-kernel improvements, shrinking the time users spent in manual diagnosis and increasing confidence that once-fixed bugs stayed fixed.

  • Server-side determinism reduces cross-platform numerical variability.
  • Telemetry drives targeted robustness fixes and smarter defaults.
  • Massive regression suites enforce “once fixed, stays fixed.”

The engineering playbook for robustness in production kernels

Arithmetic strategies: adaptive precision and conservative classification

Production kernels increasingly adopt arithmetic stacks that balance speed with correctness. The canonical approach is to combine fast double-precision computation with filtered predicates: evaluate a test (e.g., side-of-plane, orientation) with IEEE-754; if the result is far from the uncertainty band, accept it; otherwise escalate to higher precision through extended floats, intervals, or exact rationals. Interval arithmetic helps in conservative classification, enclosing uncertainty margins and eliminating false positives in intersection tests. For constructions—building intersection curves, offset surfaces—kernels often keep doubles but add refinement loops and error certificates, ensuring constructed entities stay within declared tolerances. Where filters still fail, a pay-as-you-go multiprecision path catches near-degenerate configurations without imposing a global performance tax. The art is in controlling escalation frequency and avoiding catastrophic slowdowns. Implementations cache predicate outcomes across substeps, normalize coordinates to reduce condition numbers, and scale units to keep values within numerically friendly ranges. Many kernels also use compensated summation or Kahan-like techniques to reduce accumulation error in iterative solvers. The outcome is not mathematical exactness, but a pragmatic regime where most decisions are cheap and reliable, and only the pathological cases incur extra cost—precisely where correctness matters most.

  • Filtered predicates with escalation to multiprecision on demand.
  • Interval arithmetic for conservative, sound classifications.
  • Coordinate normalization and unit scaling to improve conditioning.
  • Compensated summation to reduce iterative drift.
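
As one concrete example from the list above, compensated summation costs a few extra operations per term but prevents small contributions from being silently dropped. The sketch below uses the classic Kahan formulation with contrived inputs that make the loss obvious; it assumes strict IEEE-754 semantics (aggressive fast-math optimization would defeat the compensation).

```cpp
#include <cstdio>
#include <vector>

// Classic Kahan compensated summation: the compensation term c recovers the
// low-order bits that a plain "sum += x" would round away.
double kahan_sum(const std::vector<double>& xs) {
    double sum = 0.0;   // running total
    double c   = 0.0;   // running compensation for lost low-order bits
    for (double x : xs) {
        double y = x - c;   // apply the correction carried from the last step
        double t = sum + y; // big + small: low-order bits of y may be lost here
        c = (t - sum) - y;  // algebraically zero; numerically, exactly what was lost
        sum = t;
    }
    return sum;
}

int main() {
    // 1.0 followed by one million terms of 1e-16 (true total: 1.0000000001).
    // Each tiny term is below half an ulp of 1.0, so naive accumulation drops
    // every one of them; the compensated sum keeps their combined contribution.
    std::vector<double> xs(1000001, 1e-16);
    xs[0] = 1.0;
    double naive = 0.0;
    for (double x : xs) naive += x;
    std::printf("naive:       %.12f\n", naive);          // 1.000000000000
    std::printf("compensated: %.12f\n", kahan_sum(xs));  // ~1.000000000100
    return 0;
}
```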

Geometric/topological safeguards: topology-first booleans and tolerance policy

Arithmetic alone cannot save a kernel from topological chaos; frameworks must be topology-first. Robust boolean pipelines typically split operands along computed intersections before any classification, guaranteeing that edges and faces align to trimming curves. Next comes snapping with consistent tolerance-managed sewing, where vertices and edges are merged under well-defined policies. The tolerance policy itself is crucial: some teams choose absolute model tolerances (e.g., 1e-5 in model units), others adopt relative measures tied to feature scale, and many blend the two with per-feature overrides. Unit-aware scaling prevents silent drift when moving between inches and millimeters. The B-rep data structures enforce manifoldness rules at merge time, rejecting operations that would create non-manifold vertices unless explicitly allowed. Orientation and parametric consistency checks ensure that loops are closed in both parameter and 3D space. To tame drift between parametric and 3D, kernels compute “p-curve to 3D” consistency metrics and tighten trims if errors exceed budgets. These safeguards are complemented by normalization steps that convert analytic geometry (planes, cylinders, spheres) and approximated NURBS into coherent representations, reducing mixed-type surprises. Together, these measures ensure that even when numerics wobble, the topological fabric—edges, loops, and faces—remains coherent and watertight.

  • Split-on-intersection before classification and reassembly.
  • Consistent snapping and sewing under explicit tolerance policy.
  • Unit-aware scaling; absolute vs. relative tolerance strategy.
  • Parametric/3D consistency checks to prevent trim drift.
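
A blended tolerance policy can be as simple as the sketch below, which is hypothetical rather than any specific kernel's API: an absolute floor in model units, a relative term tied to feature scale, unit-aware scaling for imported data, and a per-feature override.

```cpp
// Sketch of a blended, unit-aware tolerance policy (all names illustrative).
#include <algorithm>
#include <cstdio>

struct TolerancePolicy {
    double absolute_tol;    // absolute floor, e.g. 1e-5 in working units (mm)
    double relative_factor; // relative term, e.g. 1e-8 of local feature size
    double unit_scale;      // 1.0 for native mm data, 25.4 for data imported in inches

    // Effective sewing/merge tolerance for a feature of a given characteristic size.
    double effective(double feature_size, double override_tol = 0.0) const {
        if (override_tol > 0.0) return override_tol;       // per-feature override wins
        double abs_t = absolute_tol * unit_scale;           // scale with the data's units
        double rel_t = relative_factor * feature_size;
        return std::max(abs_t, rel_t);                      // never tighter than the floor
    }
};

int main() {
    TolerancePolicy mm_policy{1e-5, 1e-8, 1.0};
    std::printf("small boss (2 mm):  %.3e\n", mm_policy.effective(2.0));         // 1e-5
    std::printf("large panel (5 m):  %.3e\n", mm_policy.effective(5000.0));      // 5e-5
    std::printf("override (1e-4 mm): %.3e\n", mm_policy.effective(2.0, 1e-4));   // 1e-4
    return 0;
}
```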

Operation-specific tactics: intersections, offsets/fillets, shelling/thickening

Each class of operation demands specialized tactics. For intersections and booleans, robust curve-surface and surface-surface intersection requires seed-and-refine strategies: coarse candidates via bounding volumes or interval trees, then Newton-like refinement with backtracking and arc-length parametrization to avoid missed roots. Classification uses multiple cues—signed distances, winding numbers, and local orientation tests—with confidence tracking to trigger escalation when results conflict. Trimming strategies favor stability: generating monotone segments in parameter space, avoiding micro-loops by eliminating oscillatory intersection artifacts, and regularizing endpoints. Offsets and fillets suffer from degree growth and cusps; rolling-ball formulations and medial-axis awareness help select feasible radii, while degree/order management and reapproximation limit complexity. Detecting and removing cusps or self-intersections is essential to avoid invalid trims. Shelling and thickening focus on degeneracy detection: identify thin regions likely to invert, adjust local thickness, or introduce patching strategies where offsets collapse. Some kernels resort to local remeshing/patching and then re-NURBS fitting to preserve classification and manufacturability. Across all operations, fallback paths—e.g., approximating a troublesome exact intersection with a certified spline—trade purity for reliability, provided error bounds are enforced and documented.

  • Intersection: seed/refine, confidence-tracked classification, stable trimming.
  • Offsets/fillets: rolling-ball logic, cusp detection, degree/order control.
  • Shelling/thickening: degeneracy detection, local patching, and fallback fits.
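
The seed-and-refine pattern is easiest to see in one dimension. The toy sketch below intersects a parametric curve with a plane: coarse sampling supplies bracketed candidates, and a Newton iteration with a bisection fallback refines each one while keeping the root bracketed. Production intersectors work in higher dimensions with bounding-volume pruning, but the control flow is the same.

```cpp
// Toy seed-and-refine root finder for f(t) = 0 (sketch, illustrative tolerances).
#include <cmath>
#include <cstdio>
#include <functional>
#include <vector>

using Fn = std::function<double(double)>;

std::vector<double> seed_and_refine(const Fn& f, const Fn& df,
                                    double t0, double t1, int samples) {
    std::vector<double> roots;
    double dt = (t1 - t0) / samples;
    for (int i = 0; i < samples; ++i) {
        double a = t0 + i * dt, b = a + dt;
        if (f(a) * f(b) > 0.0) continue;            // no sign change: no seed here
        double t = 0.5 * (a + b);
        for (int it = 0; it < 60; ++it) {
            double ft = f(t);
            if (std::fabs(ft) < 1e-14) break;       // converged
            double dft = df(t);
            double tn = (dft != 0.0) ? t - ft / dft : 0.5 * (a + b);
            if (tn <= a || tn >= b) tn = 0.5 * (a + b);   // backtrack to bisection
            if (f(a) * f(tn) <= 0.0) b = tn; else a = tn; // keep the root bracketed
            t = tn;
        }
        roots.push_back(t);
    }
    return roots;
}

int main() {
    // Curve C(t) = (cos t, sin t, t) against the plane z = x  =>  f(t) = t - cos t.
    Fn f  = [](double t) { return t - std::cos(t); };
    Fn df = [](double t) { return 1.0 + std::sin(t); };
    for (double r : seed_and_refine(f, df, 0.0, 3.0, 32))
        std::printf("t = %.12f, residual = %.2e\n", r, f(r));
    return 0;
}
```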

Quality assurance at scale: fuzzing, corpora, and determinism

Robustness gains stick only if quality assurance is relentless and representative. Kernel teams curate adversarial model corpora: millions of parts from real workflows, plus synthetic shapes designed to trigger edge cases like near-tangency, skinny triangles, and multi-scale assemblies. Fuzzing introduces small random perturbations in inputs, units, feature order, and tolerance settings to shake out non-determinism. Geometry-diff tools compare topologies, trims, and geometry parametrically, flagging changes down to sub-micron deltas. Cloud deployments enable determinism testing across platforms and toolchains—consistent results on Linux and Windows builds, across compiler versions and CPU architectures. Telemetry in production highlights hot operations and model classes by frequency and failure rate; every failure becomes a regression test. Versioned kernels and reproducible pipelines support “binary search” across releases to isolate the change that altered behavior. The result is a feedback loop where users’ hardest models inform the daily test diet, preventing backsliding. As data volume grows, teams invest in triage automation that clusters failures by signature, assigns severity, and suggests likely subsystems (intersection, classification, sewing) responsible for the deviation. Over time, this industrialized QA transforms hard-won fixes into institutional knowledge embedded in test suites.

  • Adversarial corpora mixing real models with synthetic edge cases.
  • Fuzzing of inputs, units, and feature orders to expose fragility.
  • Geometry-diff with topological and parametric sensitivity.
  • Determinism testing, telemetry-driven regression, and triage automation.
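
A fuzzing harness does not need to be elaborate to be useful. The illustrative loop below (all names hypothetical) perturbs inputs by amounts far smaller than their distance to a decision boundary and flags any classification flip as a failure signature; production versions run entire booleans and geometry-diff the resulting B-reps, but the loop structure is the same.

```cpp
// Illustrative fuzzing harness: a classification far from its decision
// boundary should not flip under sub-picometer jitter.
#include <cmath>
#include <cstdio>
#include <random>

// Toy classifier standing in for a kernel's inside/outside test.
bool inside_unit_circle(double x, double y) { return x * x + y * y < 1.0; }

int main() {
    std::mt19937_64 rng(42);  // fixed seed: runs are reproducible
    std::uniform_real_distribution<double> pos(-2.0, 2.0);
    std::uniform_real_distribution<double> jitter(-1e-12, 1e-12);

    int flips = 0, trials = 0;
    for (int i = 0; i < 1000000; ++i) {
        double x = pos(rng), y = pos(rng);
        double margin = std::fabs(std::sqrt(x * x + y * y) - 1.0);
        if (margin < 1e-9) continue;  // too close to the boundary: skip this case
        ++trials;
        bool base      = inside_unit_circle(x, y);
        bool perturbed = inside_unit_circle(x + jitter(rng), y + jitter(rng));
        if (base != perturbed) ++flips;  // any flip here is a failure signature
    }
    std::printf("%d flips in %d well-separated trials\n", flips, trials);
    return 0;
}
```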

Looking beyond tolerant B-reps: implicit/hybrid modeling and formal methods

While tolerant B-reps remain dominant for manufacturing, alternative representations can sidestep some brittleness. Implicit/field-based modeling—as in systems inspired by nTopology’s workflows—represents shapes as level sets of scalar fields, making booleans algebraically simple (via min/max or smooth blends) and ensuring watertightness by construction. These models handle lattice structures, graded materials, and metamaterials elegantly, and hybrid pipelines convert fields to explicit B-reps only at manufacturing boundaries. Hybrid workflows also go the other way: maintain a B-rep envelope for mating and dimensioning while delegating infill or complex filleting to fields, reducing the topological complexity B-reps must carry. Alongside representational shifts, formal methods are gaining traction. Property-based testing expresses invariants such as manifoldness, watertightness, and orientation consistency; SMT- and theorem-prover-backed checks validate small kernels of logic around snapping and classification. Sound interval analyses ensure that decisions are conservative. Although full formal verification of a production kernel is out of reach, selective application to the most error-prone subroutines creates high-leverage wins. Together, implicit/hybrid representations and formal reasoning offer incremental robustness gains without abandoning the proven ecosystem of tolerant B-reps.

  • Field-based booleans that stay watertight by construction.
  • Hybrid implicit–explicit pipelines for complex interior detail.
  • Property-based testing and selective formal verification of predicates.
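
The "watertight by construction" claim is easiest to see in signed-distance form. The sketch below shows booleans as min/max over two sphere fields plus a polynomial smooth blend; the field values and blend radius are illustrative, and real systems evaluate such fields over adaptive grids before extracting a boundary for manufacturing.

```cpp
// Minimal signed-distance sketch of field-based booleans (illustrative only).
#include <algorithm>
#include <cmath>
#include <cstdio>

// Signed distance to a sphere of radius r centered at (cx, cy, cz): negative inside.
double sphere(double x, double y, double z,
              double cx, double cy, double cz, double r) {
    double dx = x - cx, dy = y - cy, dz = z - cz;
    return std::sqrt(dx * dx + dy * dy + dz * dz) - r;
}

double op_union(double a, double b)        { return std::min(a, b); }
double op_intersection(double a, double b) { return std::max(a, b); }
double op_difference(double a, double b)   { return std::max(a, -b); }  // A minus B

// Smooth union: blends the two fields within radius k instead of creating a crease.
double smooth_union(double a, double b, double k) {
    double h = std::clamp(0.5 + 0.5 * (b - a) / k, 0.0, 1.0);
    return b + (a - b) * h - k * h * (1.0 - h);
}

int main() {
    // Two unit spheres offset along x; query a point between them.
    double x = 0.75, y = 0.0, z = 0.0;
    double a = sphere(x, y, z, 0.0, 0.0, 0.0, 1.0);
    double b = sphere(x, y, z, 1.5, 0.0, 0.0, 1.0);
    std::printf("union:        %+.4f\n", op_union(a, b));
    std::printf("intersection: %+.4f\n", op_intersection(a, b));
    std::printf("difference:   %+.4f\n", op_difference(a, b));
    std::printf("smooth union: %+.4f\n", smooth_union(a, b, 0.25));
    return 0;
}
```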

Conclusion

The perennial challenge and the blended strategy

Numerical robustness sits at the junction of mathematics, algorithms, and the gritty realities of production software. The core tension—continuous geometry versus discrete topology under finite precision—guarantees that no single fix will end the struggle. Decades of evolution across Parasolid, ACIS, CGM, Granite, ShapeManager, C3D, and Open CASCADE, combined with academic advances like Exact Geometric Computation, robust predicates, and the algorithmic infrastructure of CGAL, have steadily raised the floor. Yet ill-conditioned geometry, tolerance ambiguity, and multi-scale workflows continue to generate borderline cases where tiny numerical differences dictate topological outcomes. The winning playbook blends adaptive precision (filtered predicates and pay-as-you-go multiprecision), topology-aware frameworks (split-on-intersection, consistent snapping, and tolerance-managed sewing), disciplined tolerance policy (unit-aware, feature-aware), and relentless at-scale testing (fuzzing, telemetry, geometry-diff, determinism). Looking forward, emerging hybrids—implicit modeling for watertightness, cloud-native determinism for reproducibility, and selective formal verification for key invariants—will deliver incremental robustness gains. But the challenge is perennial because designers keep pushing kernels into tighter tolerances, more aggressive offsets and fillets, and increasingly complex assemblies. Progress, therefore, is a sustained engineering practice, not a destination: each new failure is a breadcrumb toward the next improvement, and robustness remains the quiet, central craft of geometric modeling.



