"Great customer service. The folks at Novedge were super helpful in navigating a somewhat complicated order including software upgrades and serial numbers in various stages of inactivity. They were friendly and helpful throughout the process.."
Ruben Ruckmark
"Quick & very helpful. We have been using Novedge for years and are very happy with their quick service when we need to make a purchase and excellent support resolving any issues."
Will Woodson
"Scott is the best. He reminds me about subscriptions dates, guides me in the correct direction for updates. He always responds promptly to me. He is literally the reason I continue to work with Novedge and will do so in the future."
Edward Mchugh
"Calvin Lok is “the man”. After my purchase of Sketchup 2021, he called me and provided step-by-step instructions to ease me through difficulties I was having with the setup of my new software."
Mike Borzage
November 01, 2025 11 min read

Designers rarely enjoy the luxury of open-ended exploration. They must converge on materials that meet performance targets, cost ceilings, sustainability mandates, and manufacturability constraints—simultaneously. A seemingly simple substitution can ripple across stiffness, damping, corrosion resistance, recyclability, and vendor lead times. In this environment, relying on static tables or ad hoc spreadsheets forces teams to either over-test or accept unjustified risk. Bringing explainable active learning into the tools where decisions are made reconfigures this pressure cooker into a guided, data-efficient navigation problem. Within CAD/CAE, property ranges can be tied directly to geometry and load cases; within PLM, variants and releases inherit provenance about why a selection is defensible. Instead of combing through PDFs or vendor brochures, teams interact with a living, prioritized set of experiments or simulations that maximize information where it matters for the design. The result is a tighter feedback loop: each test or simulation informs the next best action, reducing uncertainty along the specific boundaries that threaten feasibility, certification, or cost. When integrated properly, the system not only proposes candidates but also quantifies risks and articulates the rationale behind them, enabling faster sign-offs and cleaner handoffs between materials science, design engineering, manufacturing, and sustainability stakeholders.
Active learning (AL) is the algorithmic counterpart to an expert’s instinct for “the next most informative test.” Rather than spreading effort evenly, AL concentrates attention on regions of the design–materials space where uncertainty intersects with value. In practice, AL surfaces a ranked set of candidate compositions, process parameters, or test conditions that are expected to reduce uncertainty in properties directly tied to performance or compliance. For low-data regimes common in novel alloys, polymers, composites, or AM feedstocks, AL leverages surrogate models to propose a small number of high-yield steps, often compressing weeks of exploratory testing into days. The spotlight analogy is apt: instead of illuminating everything dimly, AL brightens the precise edges that define pass/fail. Moreover, modern AL treats experiments and simulations as budgeted actions, weighting acquisitions by time, cost, and even CO2 impact. That lets teams decide when to run fast approximate analyses (e.g., a reduced-order CAE or a coarse molecular simulation) and when to escalate to a certified test. The upshot is not merely efficiency; it is targeted clarity, achieved sooner, with explicit trade-offs that match the project’s priorities.
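To make the budgeted-action idea concrete, here is a minimal sketch of cost-, time-, and CO2-weighted ranking of candidate actions. The weights, the use of predictive standard deviation as a proxy for information value, and the example numbers are illustrative assumptions, not a prescribed acquisition function.

```python
import numpy as np

def blended_cost(time_h, cost_usd, co2_kg, w_time=1.0, w_cost=0.02, w_co2=5.0):
    """Collapse time, money, and CO2 into a single comparable cost (weights are illustrative)."""
    return w_time * time_h + w_cost * cost_usd + w_co2 * co2_kg

def rank_actions(pred_std, time_h, cost_usd, co2_kg):
    """Rank candidate tests by predicted-uncertainty reduction per unit of blended cost."""
    value = np.asarray(pred_std)              # proxy: high predictive std = high information value
    cost = blended_cost(np.asarray(time_h), np.asarray(cost_usd), np.asarray(co2_kg))
    score = value / np.maximum(cost, 1e-9)    # information per unit budget
    return np.argsort(-score)                 # best action first

# Example: three candidate actions (coarse sim, reduced-order CAE, certified lab test)
order = rank_actions(pred_std=[0.08, 0.15, 0.30],
                     time_h=[0.5, 2.0, 72.0],
                     cost_usd=[5, 40, 1200],
                     co2_kg=[0.1, 0.8, 6.0])
print(order)
```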
No matter how clever the model, design teams will not adopt it if they cannot answer, “Why this suggestion?” Explainability is not a cosmetic add-on; it is the language through which professionals coordinate and negotiate trade-offs. If a model recommends a new heat-treatment schedule or a composite layup tweak, stakeholders need to see uncertainty bands, constraint margins, and the expected change in objective metrics. They need to visualize which compositional elements or process variables are driving the prediction, how sensitive outputs are to measurement noise, and what minimal change could move a candidate across a threshold (e.g., glass transition temperature or yield strength). With techniques like SHAP for feature attributions, counterfactual generation for actionable “what would it take” questions, and structure-aware explanations for motifs in crystals or polymer graphs, the system can make its reasoning transparent. In this way, explainability underwrites auditability: it documents how choices were made, what alternatives were considered, and why certain risks were accepted. That record becomes crucial when designs move through certification, supplier qualification, or sustainability reviews—turning machine suggestions into defensible engineering decisions.
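As a small illustration of the feature-attribution idea, the sketch below fits a throwaway surrogate on synthetic composition/process data and queries the shap package for a local explanation of one candidate. The model, feature names, and data are invented for the example; only the general shap.Explainer usage pattern is assumed.

```python
import numpy as np
import shap                                   # assumes the shap package is installed
from sklearn.ensemble import RandomForestRegressor

# Toy composition/process table: [Cr fraction, Ni fraction, anneal time (h)] -> yield strength (MPa)
rng = np.random.default_rng(0)
X = rng.uniform([0.10, 0.05, 0.5], [0.30, 0.20, 8.0], size=(200, 3))
y = 400 + 600 * X[:, 0] + 250 * X[:, 1] - 15 * X[:, 2] + rng.normal(0, 10, 200)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Local attribution for one candidate, to compare against global feature importances
explainer = shap.Explainer(model.predict, X[:100])   # background set for the explainer
candidate = X[:1]
attribution = explainer(candidate)
for name, value in zip(["Cr", "Ni", "anneal_h"], attribution.values[0]):
    print(f"{name}: {value:+.1f} MPa contribution vs. background mean")
```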
Embedding explainable AL inside design tools closes the loop between CAD/CAE models, materials databases, laboratory automation, and PLM artifacts. Connecting these systems transforms isolated analyses into a coordinated, learning pipeline. CAD constraints (wall thickness, fillets, temperature exposure) inform feasibility, while CAE-derived sensitivities prioritize which properties to pin down first. Materials databases and ELN/LIMS records seed the initial surrogate models; lab robots or contract test houses execute queued experiments; results flow back into the model registry, tightening uncertainty and updating recommendations. The PLM system serves as the governance backbone: it tracks lineage, versioning, and the mapping of geometry to material cards—including uncertainty intervals and rationale badges. This closure not only accelerates convergence; it also creates a durable paper trail. When teams revisit a design variant, they inherit context, not just numbers: the hypotheses tested, the expected hypervolume improvement that guided choices, and the measured outcomes. Over time, accumulated knowledge shifts the organization from reactive selection to proactive design of materials and processes aligned with product families and manufacturing capabilities.
Explainable AL is agnostic to domain and scales across the materials spectrum. For alloys, it can navigate composition–processing spaces under phase stability and precipitation kinetics constraints; for polymers, it can balance Tg, modulus, and rheology across copolymer ratios and cure schedules; for fiber-reinforced composites, it can co-optimize layup, resin chemistry, and cure cycle to meet buckling and impact criteria; for coatings, it can trade corrosion resistance against VOC limits and application throughput; for additive manufacturing, it links powder characteristics and scan strategies to porosity, anisotropy, and surface finish. The underlying tasks range from property prediction to learning the process–structure–property mapping and qualification envelopes. By treating tests and simulations as actions guided by uncertainty-aware acquisition, the system helps teams qualify new suppliers faster, screen alternative chemistries with fewer samples, and adapt to changing regulations. Crucially, explainability ensures that across these diverse scopes, the “why” is always available—bridging science, engineering, and operations with common, interpretable artifacts that survive the passage from concept to certification.
Robust active learning starts with disciplined data plumbing. Design tools should offer connectors to materials repositories via OPTIMADE, integrate ELN/LIMS for experimental entries, and link PLM artifacts for geometry, variants, and approvals. Life-cycle assessment (LCA) repositories contribute emissions, toxicity, and circularity attributes, letting acquisition functions factor in CO2 or recyclability. To avoid brittle glue code, unify schemas using standards like PIF for materials records and Mendeleev feature sets for elemental descriptors, augmented with explicit processing metadata: thermal profiles, pressure, humidity, toolpaths, and machine IDs. Lineage fields track where each datum came from, its method (DFT, MD, ASTM test, supplier datasheet), and quality scores. This pays dividends when models encounter conflicting measurements or divergent test conditions; the system can discount low-trust sources and present users with reasoned justifications. Even the best models degrade without consistent units, reference states, and provenance. Embedding these checks directly in the CAD/CAE plugin lowers friction: when a designer drags a candidate into a slot, the system verifies schema conformance, flags unit mismatches, and attaches a data quality badge that will travel with the material card through the digital thread.
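A minimal sketch of what schema and unit checking at the plugin boundary could look like follows. The record fields are loosely inspired by PIF-style materials records, but the names, canonical units, and quality-score convention here are assumptions for illustration.

```python
from dataclasses import dataclass

# Hypothetical record schema, loosely inspired by PIF-style materials records;
# field names here are illustrative, not the actual PIF specification.
@dataclass
class PropertyRecord:
    name: str            # e.g. "glass_transition_temperature"
    value: float
    unit: str            # unit as reported by the source
    method: str          # "DFT", "MD", "ASTM_E1356", "supplier_datasheet", ...
    source_id: str       # ELN/LIMS or database identifier for lineage
    quality: float = 0.5 # 0..1 trust score attached as a data-quality badge

CANONICAL_UNITS = {"glass_transition_temperature": "degC",
                   "yield_strength": "MPa"}

def validate(record: PropertyRecord) -> list[str]:
    """Return a list of schema/unit problems; an empty list means the record can enter the pipeline."""
    problems = []
    expected = CANONICAL_UNITS.get(record.name)
    if expected is None:
        problems.append(f"unknown property '{record.name}'")
    elif record.unit != expected:
        problems.append(f"unit mismatch: got '{record.unit}', expected '{expected}'")
    if not record.source_id:
        problems.append("missing lineage: source_id is empty")
    return problems

rec = PropertyRecord("glass_transition_temperature", 112.0, "K", "MD", "eln:run-0042", 0.7)
print(validate(rec))   # -> ["unit mismatch: got 'K', expected 'degC'"]
```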
Modern design workflows span data of widely varying cost and fidelity: quantum-level DFT predictions; coarse-grained MD; calibrated CAE surrogates; historical test campaigns; in-situ process sensors; and vendor datasheets. Rather than flattening these sources, the platform should ingest them as distinct fidelities with explicit lineage and uncertainty. Multi-fidelity Gaussian processes or hierarchical neural nets can learn mappings between cheap approximations and expensive ground truth, while acquisition functions weigh the value of information against budget. This matters because a 30-minute reduced-order simulation might rule out half the candidates before any lab sample is cut. The ingestion pipeline should attach fidelity tags, calibration curves, and cross-fidelity residuals, enabling the UI to show users how much of a prediction rests on analogies versus direct evidence. Supplier data can be powerful if its variability and test protocols are transparent; where they are not, the system can down-weight them and encourage targeted confirmatory tests. Persistent lineage links every predicted property to its evidence base, so when a user asks “why Tg equals 112 °C here,” the system can reveal the DFT, MD, sensor, and test components and their contributions to both mean and uncertainty.
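The following sketch shows one simple way to combine two fidelities, in the spirit of autoregressive multi-fidelity models: a surrogate on the cheap source plus a learned scale factor and residual correction toward the expensive source. The synthetic data, kernel choices, and independence assumption in the uncertainty combination are all illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C, WhiteKernel

# Simplified two-fidelity surrogate (high ~ rho * low + delta(x)), with synthetic stand-in data.
rng = np.random.default_rng(1)
X_lo = rng.uniform(0, 1, (40, 1));  y_lo = np.sin(6 * X_lo[:, 0]) + 0.1 * rng.normal(size=40)
X_hi = rng.uniform(0, 1, (8, 1));   y_hi = 1.2 * np.sin(6 * X_hi[:, 0]) + 0.3 * X_hi[:, 0]

kernel = C(1.0) * RBF(0.2) + WhiteKernel(1e-3)
gp_lo = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_lo, y_lo)

# Scale factor between fidelities estimated by least squares, then a GP on the residual.
lo_at_hi = gp_lo.predict(X_hi)
rho = float(np.dot(lo_at_hi, y_hi) / np.dot(lo_at_hi, lo_at_hi))
gp_delta = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_hi, y_hi - rho * lo_at_hi)

def predict_high(X):
    """High-fidelity prediction with a crude uncertainty combination (assumes independence)."""
    m_lo, s_lo = gp_lo.predict(X, return_std=True)
    m_d, s_d = gp_delta.predict(X, return_std=True)
    return rho * m_lo + m_d, np.sqrt((rho * s_lo) ** 2 + s_d ** 2)

mean, std = predict_high(np.linspace(0, 1, 5).reshape(-1, 1))
print(np.round(mean, 2), np.round(std, 2))
```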
One model rarely fits all. Low-data regimes benefit from Gaussian processes with kernels on compositional and process spaces, especially when monotonicity or smoothness assumptions are justifiable. For crystalline materials and polymers, graph and equivariant neural networks capture structure with symmetries respected—message passing across crystal lattices or polymer chains can tease out motif-level effects on mechanical or thermal properties. Physics-informed models can embed known constraints, such as mixture rules, diffusion limits, or thermodynamic boundaries, improving sample efficiency and credibility. Deep kernel learning blends the best of both worlds: a learned representation feeding a GP head that yields calibrated uncertainties. When CAE exists, reduced-order surrogates (e.g., POD, autoencoders with latent Gaussian processes) bring field quantities into the AL loop, letting the system optimize against criteria like buckling eigenvalues or fatigue damage indexes. A registry of model versions, training data snapshots, and validation metrics allows safe promotion of surrogates into production and rollback when drift or violations are detected, all while surfacing confidence intervals alongside the property values in CAD and CAE panels.
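The registry idea at the end of the paragraph can be sketched as a versioned surrogate entry with a training-data fingerprint and validation gates that control promotion and rollback. Field names, thresholds, and the promotion rule below are assumptions, not a specific product's API.

```python
import hashlib, json
from dataclasses import dataclass

# Minimal illustrative registry entry for a surrogate model.
@dataclass
class SurrogateVersion:
    model_id: str
    version: int
    data_hash: str              # fingerprint of the training-data snapshot
    metrics: dict               # e.g. {"rmse": 12.3, "coverage_90": 0.88}
    status: str = "staged"      # "staged" -> "production" -> "retired"

def snapshot_hash(records: list) -> str:
    return hashlib.sha256(json.dumps(records, sort_keys=True).encode()).hexdigest()[:12]

def promote(candidate: SurrogateVersion, current: SurrogateVersion,
            max_rmse: float = 15.0, min_coverage: float = 0.85) -> SurrogateVersion:
    """Promote the candidate only if it passes validation gates; otherwise keep the current version."""
    ok = candidate.metrics.get("rmse", 1e9) <= max_rmse and \
         candidate.metrics.get("coverage_90", 0.0) >= min_coverage
    if ok:
        candidate.status = "production"
        current.status = "retired"
        return candidate
    return current

v1 = SurrogateVersion("tg_polymerA", 1, snapshot_hash([{"run": 1}]),
                      {"rmse": 11.0, "coverage_90": 0.91}, "production")
v2 = SurrogateVersion("tg_polymerA", 2, snapshot_hash([{"run": 1}, {"run": 2}]),
                      {"rmse": 18.0, "coverage_90": 0.80})
print(promote(v2, v1).version)   # -> 1: the weaker candidate is rejected, v1 stays in production
```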
Without credible uncertainty, active learning cannot prioritize effectively or protect against costly mistakes. The platform should support ensembles, MC dropout, deep kernel GPs, and calibrated posterior sampling to estimate predictive intervals. Critically, decompose uncertainty into epistemic (lack of knowledge) and aleatoric (inherent variability). Epistemic uncertainty tells AL where learning will pay off; aleatoric highlights limits the designer must respect—specifying wider safety margins, additional inspections, or supplier controls. The UI should expose both: confidence bands on predicted stress–strain curves, density ellipses on multi-objective fronts, and “constraint margin” chips indicating the probability of violating stiffness or thermal limits. Calibration diagnostics (coverage of intervals, probability integral transform plots) run continuously in the background, with alerts when models become overconfident due to distribution shift. For composites and AM, where process noise can dominate, the system can estimate heteroscedastic noise models and propagate them through CAE to show the distribution of performance metrics. That transforms uncertainty from a warning label into a design variable, transparently traded against cost and schedule.
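A crude but common way to separate the two kinds of uncertainty is a bootstrap ensemble: disagreement between members approximates epistemic uncertainty, while the members' residual spread approximates aleatoric noise. The sketch below uses synthetic heteroscedastic data and a single scalar noise estimate purely for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Ensemble-based split of predictive uncertainty: member disagreement ~ epistemic,
# average residual variance of the members ~ aleatoric (irreducible noise).
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (150, 2))
noise = 0.05 + 0.20 * X[:, 0]                     # heteroscedastic noise, larger at high X[:, 0]
y = 3 * X[:, 0] + np.sin(4 * X[:, 1]) + rng.normal(0, noise)

members, resid_vars = [], []
for seed in range(10):
    idx = rng.integers(0, len(X), len(X))         # bootstrap resample
    m = GradientBoostingRegressor(random_state=seed).fit(X[idx], y[idx])
    members.append(m)
    resid_vars.append(np.var(y[idx] - m.predict(X[idx])))

def predict_with_uncertainty(X_new):
    preds = np.stack([m.predict(X_new) for m in members])
    mean = preds.mean(axis=0)
    epistemic = preds.std(axis=0)                  # shrinks as data accumulates
    aleatoric = np.sqrt(np.mean(resid_vars))       # irreducible spread the designer must tolerate
    return mean, epistemic, aleatoric

mean, epi, alea = predict_with_uncertainty(np.array([[0.9, 0.5], [0.1, 0.5]]))
print(np.round(epi, 3), round(float(alea), 3))
```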
Inside design tools, AL must feel native. Acquisition functions such as expected improvement, q-expected hypervolume improvement for multi-objective fronts, BatchBALD/EIG for pure information gain, and Thompson sampling for exploration should be selectable templates with sensible defaults. Constraints matter: safe Bayesian optimization avoids proposals likely to fail structural or thermal tests, incorporating CAD-driven feasibility (thickness, radii, temperature exposure) and CAE sensitivities. The system should allow cost-, time-, and CO2-weighted acquisitions so teams can align learning with program objectives. Multi-fidelity AL orchestrates cheap simulations for triage and expensive lab tests for confirmation, guided by adaptive fidelity selection that estimates the marginal value of upgrading evidence. The experiment queue visible in PLM or ELN shows predicted completion times given lab capacity. Designers see not just a ranked list but a rationale card for each action: expected regret reduction, constraint risk, and the anticipated shift in the Pareto front. This makes the loop actionable and auditable, not merely algorithmic.
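A minimal constrained-acquisition sketch follows: standard expected improvement multiplied by the probability of satisfying a constraint, a common heuristic in constrained Bayesian optimization. The stiffness and temperature numbers are invented, and real deployments would typically use a dedicated BO library.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best, xi=0.01):
    """Standard EI for maximization, given posterior mean/std at candidate points."""
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best - xi) / sigma
    return (mu - best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def constrained_ei(mu_f, sig_f, best_f, mu_g, sig_g, g_limit):
    """EI weighted by the probability that constraint g(x) <= g_limit holds."""
    p_feasible = norm.cdf((g_limit - mu_g) / np.maximum(sig_g, 1e-9))
    return expected_improvement(mu_f, sig_f, best_f) * p_feasible

# Example: objective = stiffness (maximize), constraint = peak temperature <= 140 degC
mu_f = np.array([52.0, 55.0, 61.0]);   sig_f = np.array([2.0, 4.0, 6.0])
mu_g = np.array([120.0, 138.0, 150.0]); sig_g = np.array([5.0, 5.0, 5.0])
scores = constrained_ei(mu_f, sig_f, best_f=54.0, mu_g=mu_g, sig_g=sig_g, g_limit=140.0)
print(np.argsort(-scores))   # the boldest candidate is heavily discounted by its constraint risk
```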
Explanations should appear where decisions happen. A “Why this next test?” panel can show uncertainty hotspots on the composition–process map, predicted trade-off shifts if the test confirms the model, and constraint margins expressed as probabilities. Local explanations reveal SHAP attributions for the specific candidate—e.g., which elements, filler fractions, or process parameters most contribute to predicted yield strength—and compare them to global feature importances to avoid overemphasizing idiosyncrasies. Counterfactuals answer “what minimal change achieves target Tg?” with actionable deltas tied to manufacturing feasibility. For structure-aware models, GNN explainers highlight motifs (e.g., BCC vs. FCC features, pendant groups in polymers) influencing properties, and the UI can map these back to levers the team can actually pull (composition tweaks, anneal time, print speed). Prototype examples from historical data provide precedents—similar past candidates with outcomes and notes—offered as references without dictating choices. Together, these surfaces make the model’s reasoning legible, discussable, and anchored in engineering language rather than abstract scores.
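The counterfactual question can be posed as a small optimization: find the smallest feasible change to the recipe that moves the predicted property past the target. The surrogate below is a toy stand-in for a fitted Tg model, and the penalty weight and bounds are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Toy differentiable stand-in for a fitted Tg surrogate over [plasticizer fraction, cure temperature (degC)].
def predict_tg(x):
    plasticizer, cure_T = x
    return 140.0 - 90.0 * plasticizer + 0.05 * (cure_T - 120.0)

def counterfactual(x0, target_tg, bounds):
    """Smallest (L2) feasible change to the recipe that pushes predicted Tg to the target.
    In practice, features should be scaled before taking distances."""
    objective = lambda x: np.sum((x - x0) ** 2) + 50.0 * max(0.0, target_tg - predict_tg(x)) ** 2
    res = minimize(objective, x0, bounds=bounds, method="L-BFGS-B")
    return res.x, res.x - x0

x0 = np.array([0.30, 120.0])                       # current recipe: predicted Tg ~ 113 degC
x_cf, delta = counterfactual(x0, target_tg=120.0, bounds=[(0.05, 0.40), (100.0, 160.0)])
print(f"predicted Tg: {predict_tg(x_cf):.1f} degC, suggested deltas: {np.round(delta, 3)}")
```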
To be adopted, AL must remove clicks, not add them. A materials browser embedded in CAD/PLM should display candidates with rationale badges, confidence intervals, and a one-click “simulate with surrogate” button that runs CAE and propagates uncertainties through to performance metrics. Constraint-aware suggestions should respect geometry, load cases, and environmental exposure; if a user selects an out-of-distribution material, the system flags it with an explanation and proposes a minimal validation plan. Behind the scenes, orchestration uses microservices and a message bus to manage experiment queues to lab robots or external labs, ensure traceable model training via a registry, and deliver CI/CD for surrogates with unit tests and drift monitors. The user never needs to see the plumbing; they experience faster iteration and clearer choices. When a design milestone nears, the system can switch modes to prioritize confirmatory tests that lock down uncertainty on the most business-critical properties, aligning the AL loop with program cadence rather than operating in academic isolation.
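One lightweight way to implement the out-of-distribution flag mentioned above is a Mahalanobis-distance check against the surrogate's training features, sketched below. The threshold quantile and feature set are assumptions; production systems would likely combine several OOD signals.

```python
import numpy as np

# Simple OOD flag: Mahalanobis distance of a candidate's features from the training set.
def fit_ood_checker(X_train, threshold_quantile=0.99):
    mean = X_train.mean(axis=0)
    cov = np.cov(X_train, rowvar=False) + 1e-6 * np.eye(X_train.shape[1])
    inv_cov = np.linalg.inv(cov)
    d_train = np.sqrt(np.einsum("ij,jk,ik->i", X_train - mean, inv_cov, X_train - mean))
    threshold = np.quantile(d_train, threshold_quantile)
    def is_ood(x):
        d = float(np.sqrt((x - mean) @ inv_cov @ (x - mean)))
        return d > threshold, d
    return is_ood

rng = np.random.default_rng(3)
X_train = rng.normal([0.2, 150.0], [0.05, 10.0], size=(500, 2))   # [filler fraction, cure T]
is_ood = fit_ood_checker(X_train)
flag, distance = is_ood(np.array([0.45, 195.0]))                   # far outside the training envelope
print(flag, round(distance, 1))   # -> True, with the distance shown alongside the material-card warning
```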
Trust grows when teams can quantify what the system delivers. Beyond raw accuracy, track sample efficiency (improvement per experiment), cumulative regret against a moving target, and information gain per unit cost or CO2. Time-to-feasible and time-to-certified targets reflect real program milestones. Reliability metrics should include interval coverage (do 90% intervals cover 90% of outcomes?), calibration of predictive distributions, and constraint violation rate, especially under safe-BO. From a business lens, monitor cost reduction, CO2 reduction from greener materials or fewer tests, lab utilization efficiency, and design-cycle compression. Put these metrics in dashboards visible in PLM alongside design status. Expose them at multiple scales: per project, per materials class, and across the portfolio. By making success criteria explicit and multi-objective, organizations prevent a false dichotomy between “model accuracy” and “engineering value,” aligning incentives and ensuring active learning genuinely codifies the way teams already make hard trade-offs.
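Two of these metrics are simple enough to sketch directly: interval coverage and cumulative regret. The example values stand in for a short campaign and exist only to show the calculation.

```python
import numpy as np

def interval_coverage(y_true, lower, upper):
    """Fraction of outcomes falling inside their predicted intervals (target ~0.90 for 90% intervals)."""
    y_true, lower, upper = map(np.asarray, (y_true, lower, upper))
    return float(np.mean((y_true >= lower) & (y_true <= upper)))

def cumulative_regret(best_possible, observed_best_so_far):
    """Sum of per-round gaps between the best achievable value and the best found so far."""
    return float(np.sum(np.asarray(best_possible) - np.asarray(observed_best_so_far)))

# Example dashboard numbers for a three-round campaign
cov = interval_coverage([110, 118, 125], [105, 110, 118], [115, 116, 128])
reg = cumulative_regret([130, 130, 130], [112, 121, 127])
print(f"90% interval coverage: {cov:.2f}, cumulative regret: {reg:.0f}")
```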
Before the AL engine directs real budgets, validate policies and models. Offline replay on historical campaigns can simulate what the system would have proposed and how quickly it would have converged, revealing sensitivity to noise and data gaps. Rolling-origin backtests prevent temporal leakage, mimicking how knowledge accumulates in reality. In live projects, run shadow mode: the system makes recommendations, but experts proceed with their plan, comparing outcomes via A/B testing against expert heuristics. Robustness checks cover out-of-distribution detection for novel chemistries or processes, and adversarial stress testing with injected noise or conflicting datasheets. Treat validation as continuous rather than one-off; whenever the data distribution shifts (new supplier, new process window), re-run backtests and update thresholds for uncertainty-based escalation. This operationalizes skepticism constructively, converting it into a repeatable process that strengthens the system as it scales.
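A rolling-origin backtest can be sketched in a few lines: sort records by time, train only on data before each cutoff, and score on the next window. The model, split count, and synthetic data below are placeholders for a real surrogate and campaign history.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

def rolling_origin_backtest(X, y, timestamps, n_splits=4, model_factory=lambda: Ridge()):
    """Train only on records dated before each cutoff, score on the next window (no temporal leakage)."""
    order = np.argsort(timestamps)
    X, y = np.asarray(X)[order], np.asarray(y)[order]
    cutoffs = np.linspace(len(y) // 2, len(y), n_splits + 1, dtype=int)
    scores = []
    for start, end in zip(cutoffs[:-1], cutoffs[1:]):
        if end <= start:
            continue
        model = model_factory().fit(X[:start], y[:start])
        scores.append(mean_absolute_error(y[start:end], model.predict(X[start:end])))
    return scores

rng = np.random.default_rng(4)
t = np.arange(120)
X = rng.uniform(0, 1, (120, 3))
y = 2.0 * X[:, 0] + X[:, 1] + 0.05 * rng.normal(size=120)
print(np.round(rolling_origin_backtest(X, y, t), 3))   # error per rolled-forward window
```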
Governance transforms explainable AL from a clever assistant into an enterprise asset. Provenance includes model cards describing intended use, limits, and training data; dataset versioning; and explicit geometry–material linkages captured in PLM. Privacy and IP constraints can be respected through federated learning with secure aggregation across suppliers and differential privacy when sharing insights without exposing recipes. Safety is enforced via safe-BO guardrails and automatic escalation to domain experts when risk or uncertainty exceeds thresholds. Human roles are clear: materials scientists curate hypotheses and interpret structure-aware insights; design engineers set constraints and evaluate trade-offs; data scientists steward models and calibration. Decision logs capture rationale, overrides, and outcomes to ensure learning flows both ways—humans teaching the system and the system informing humans. Finally, scalability requires batch scheduling under lab constraints and multi-armed bandit routing across instruments, ensuring the queue maximizes information while respecting maintenance and calibration windows. Governance, in short, makes the system auditable, equitable, and durable.
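The escalation-and-logging pattern can be sketched as a simple routing rule that either queues a candidate automatically or escalates it to a domain expert, while appending a structured entry to the decision log. The thresholds and field names are illustrative assumptions.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Illustrative escalation rule and decision-log entry; thresholds and fields are assumptions.
@dataclass
class DecisionLogEntry:
    timestamp: str
    candidate_id: str
    action: str           # "auto_queue", "escalate_to_expert", "expert_override"
    predicted_std: float
    constraint_risk: float
    rationale: str

def route(candidate_id, predicted_std, constraint_risk, std_limit=0.15, risk_limit=0.05):
    """Queue automatically when risk and uncertainty are low; otherwise escalate to a domain expert."""
    escalate = predicted_std > std_limit or constraint_risk > risk_limit
    entry = DecisionLogEntry(
        timestamp=datetime.now(timezone.utc).isoformat(),
        candidate_id=candidate_id,
        action="escalate_to_expert" if escalate else "auto_queue",
        predicted_std=predicted_std,
        constraint_risk=constraint_risk,
        rationale="uncertainty/risk above threshold" if escalate else "within safe-BO guardrails",
    )
    print(json.dumps(asdict(entry)))     # in practice this is appended to the PLM decision log
    return entry

route("alloy-0472", predicted_std=0.22, constraint_risk=0.03)   # -> escalate_to_expert
```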
Embedding explainable active learning directly into CAD/CAE/PLM turns materials selection from static lookup into a defensible, data-efficient optimization loop aligned with the realities of product development. Near term, teams see faster convergence to viable materials, fewer costly tests, and clearer cross-functional conversations anchored by uncertainty, trade-offs, and constraint margins. Over the longer horizon, standardized provenance threads through the digital lifecycle, autonomous labs execute prioritized queues, and physics-augmented foundation models co-optimize material, process, and geometry. Crucial enablers are robust data plumbing, calibrated uncertainty quantification, constraint-aware acquisition, and explanations tied to designer intent and certification needs. As these foundations mature, organizations move from point solutions to a platform mindset: reusable surrogates, reusable governance, and reusable workflows that carry learning across products and programs. The destination is not automated decisions; it is augmented decisions—human judgment strengthened by transparent, continuously improving evidence.
Begin with a narrow, high-impact property target where decisions repeatedly stall—say, Tg for a polymer family, yield strength for additive alloys, or corrosion resistance for a coating—then deploy a small surrogate with explicit uncertainty and a simple acquisition like EI under a safe-BO constraint. Wire in essential data pipelines (OPTIMADE, ELN/LIMS, PLM) with PIF-based schemas and basic lineage. Introduce multi-fidelity triage (cheap sim, targeted tests) and surface explanations: SHAP attributions, constraint margins, and a “Why this next test?” panel. As confidence grows, expand to multi-objective fronts with qEHVI, add physics-informed constraints, and integrate lab orchestration and CI/CD for surrogates. Parallel to capability expansion, institutionalize governance: model cards, calibration checks, OOD detection, and decision logs. By proceeding iteratively, teams realize immediate benefits while laying a durable foundation for the long-term arc—one where materials discovery and selection are not isolated acts but continuous, explainable learning woven into the digital thread of design and manufacturing.
