November 20, 2025 12 min read

The modern idea of a reusable, portable “appearance” stems from the earliest analytic reflectance models that artists and engineers could actually tune. In 1975, Bui Tuong Phong proposed the now-iconic Phong reflection model, which—by separating ambient, diffuse, and specular components—gave practitioners a controllable recipe for shiny surfaces and the first broadly adopted approximation of highlights. Jim Blinn’s 1977 refinement, commonly called Blinn-Phong, altered the specular term to rely on a half-angle vector, both improving numerical stability and producing more convincing, broader highlights. These were not physically correct, but they were delightfully predictable, a virtue that made them fixtures of early CAD and DCC. The 1980s brought a microfacet turn toward physics with Cook–Torrance (Robert L. Cook and Kenneth E. Torrance, 1982), which modeled surfaces as distributions of tiny mirrors and introduced parameters like roughness and Fresnel reflectance. Greg Ward’s 1992 model embraced anisotropy for brushed metals and hairline finishes, while Michael Oren and Shree Nayar’s 1994 rough-diffuse model accounted for micro-occlusion and surface self-shadowing. Collectively, these models cemented a vocabulary—“specular,” “roughness,” “Fresnel,” “anisotropy”—that became the alphabet of appearance definition, long before anyone spoke of PBR as a unified doctrine. Even in their limitations, these models seeded the notion that materials are parameter sets that could be described, traded, and reused across scenes, studios, and eventually products.
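The difference between the two classic specular terms is small enough to fit in a few lines. Below is a minimal sketch (function names are ours, not from any particular renderer) of the Phong reflect-vector highlight next to Blinn’s half-angle variant; all direction vectors are assumed unit-length and pointing away from the surface:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_spec(light, view, normal, shininess):
    # Phong (1975): reflect the light direction about the normal,
    # then raise its alignment with the view direction to a power.
    r = tuple(2.0 * dot(normal, light) * n - l for n, l in zip(normal, light))
    return max(dot(normalize(r), view), 0.0) ** shininess

def blinn_phong_spec(light, view, normal, shininess):
    # Blinn (1977): use the half-angle vector between light and view;
    # cheaper per-pixel and better behaved at grazing angles.
    h = normalize(tuple(l + v for l, v in zip(light, view)))
    return max(dot(normal, h), 0.0) ** shininess
```

For the same `shininess`, the half-angle form yields broader highlights—one reason artists found it more forgiving to tune.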
As offline rendering matured, renderers became de facto arbiters of how materials were built and shared. Pixar’s RenderMan, commercialized in 1989 under the technical stewardship of Pat Hanrahan and colleagues, introduced the RenderMan Shading Language (RSL). With RSL, studios could author reusable shaders as programmable building blocks—surface, displacement, light—that evolved into internal libraries of gold, car paint, or human skin. mental ray, developed by Rolf Herken’s mental images, followed a similar trajectory: its integration into 3ds Max, Maya, and Softimage, and the spread of LightWave 3D, created a culture of “preset” materials distributed in renderer-specific formats. Early CAD and product visualization workflows absorbed these ideas but simplified them; the prevailing logic was “color + gloss + bump”, with plenty of artistic latitude but little energy conservation or spectral rigor. In effect, each renderer became its own kingdom, with slightly different BRDF assumptions, texture slot names, and parameter ranges. The result was an uneasy mix: materially expressive for experienced technical directors, yet fragile when moving assets between applications. This renderer gravity set the stage—both culturally and economically—for libraries tied to platforms and for consultancies that specialized in translating looks from one ecosystem to another.
By the mid-1990s, material authors began to codify distinctive industrial looks that demanded layered, directionally dependent behavior. Automotive visualizers stacked clearcoat layers over colored basecoats to mimic metallic flakes and the characteristic off-specular tint of automotive finishes; brushed metals and hairline plastics needed anisotropy aligned to model UVs or procedural direction fields. Because no common standard existed, these effects were ad hoc—custom shaders in RenderMan or mental ray node networks—whose behavior hinged on the renderer’s integrator, sampling strategy, and tone mapping. Concurrently, Ken Perlin’s procedural noise (and its richer descendants like fBm and Worley noise) seeded a procedural texturing culture: wood grain, marble, and cellular patterns could be parameterized and regenerated at any resolution, removing reliance on low-resolution bitmaps. This era also saw early experiments with flakes and fabrics using layered specular lobes and perturbed normals to hint at complex microgeometry. While these constructions were not strictly predictive of measured reality, they crystallized a powerful idea: specialized materials could be encapsulated, named, and configured through a small set of artist-facing controls, encouraging authors to share recipes that felt like reusable assets long before the word “material library” was fashionable.
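The procedural idea is easy to illustrate. Here is a minimal sketch of fractional Brownian motion (fBm) built on hash-based value noise—a simpler stand-in for Perlin’s gradient noise, with the same resolution-independent character; the function names and hash constants are illustrative, not from any published implementation:

```python
import math

def value_noise_1d(x, seed=0):
    # Hash-based lattice noise with smoothstep interpolation
    # (a simple stand-in for Perlin's gradient noise).
    def hash01(i):
        h = (i * 374761393 + seed * 668265263) & 0xFFFFFFFF
        h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
        return (h & 0xFFFF) / 0xFFFF
    i = math.floor(x)
    t = x - i
    t = t * t * (3.0 - 2.0 * t)      # smoothstep fade curve
    return hash01(i) * (1.0 - t) + hash01(i + 1) * t

def fbm(x, octaves=4, lacunarity=2.0, gain=0.5):
    # Fractional Brownian motion: sum noise octaves at increasing
    # frequency and decreasing amplitude, then normalize to [0, 1].
    total, amp, freq, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amp * value_noise_1d(x * freq)
        norm += amp
        amp *= gain
        freq *= lacunarity
    return total / norm
```

Feed `fbm` into a color ramp and you have wood grain or marble at any resolution—exactly the property that freed authors from low-resolution bitmaps.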
Once a shared vocabulary and a taste for presets existed, a market was inevitable. Renderer vendors shipped bundled “starter” materials to ease onboarding. Third-party publishers pressed CDs brimming with tiled textures, procedural shader networks, and bump maps—sold through bookstores and early e-commerce as “textures and shaders.” The plugin boom of the late 1990s and early 2000s intensified this trend. SplutterFish’s Brazil r/s and V-Ray by Chaos Group (co-founded by Vladimir Koylazov and Peter Mitev) nurtured communities where shader presets and look-dev snippets became sellable value. Forums and magazines posted shader balls—consistent spheres rendered under the same light rig—to communicate what a material “looked like” across engines, foreshadowing today’s conventions. The sales pitch was dual: speed and trust. Presets compressed expertise into parameters, helping freelancers and studios bid confidently. In hindsight, these modest catalogs were the proto-marketplaces that would later blossom into subscription platforms. Crucially, they familiarized buyers with the notion that a material is an asset, distinct from geometry or lighting—and therefore worth curating, versioning, and paying for.
Empirical capture reframed materials from imaginative recipes into verifiable, scientific objects. The CUReT database (1999) cataloged diverse textures with view-dependent imagery, while the MERL BRDF database (Wojciech Matusik et al., 2003) delivered isotropic BRDFs measured over dense hemispherical sampling. For the first time, researchers and toolmakers could test shader models against ground truth—demonstrating where Phong-like lobes broke down and where microfacet distributions excelled. In parallel, a new wave of physically based, often spectral renderers—Maxwell Render from Next Limit, Indigo Renderer, Fryrender—marketed the promise that if you modeled energy transport correctly, materials would “just look right.” Even if their caustics-first demos oversold practical throughput, they influenced expectations: energy conservation, Fresnel-correct specular, and importance sampling became table stakes. The empirical turn also encouraged better naming and parameterization. Rather than arbitrary “specular level,” authors began to speak of index of refraction, diffuse albedo, and roughness with clear physical meaning. This was a crucial psychological shift: when parameters map to measurements, appearances become portable artifacts instead of renderer lore, paving the way for the standards that would define the 2010s.
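That shift from “specular level” to physical quantities is concrete: a dielectric’s normal-incidence reflectance follows directly from its index of refraction, and Schlick’s approximation extends it across angles. A minimal sketch (helper names are ours):

```python
def f0_from_ior(ior):
    # Normal-incidence reflectance of a dielectric in air,
    # derived from the Fresnel equations at 0 degrees.
    return ((ior - 1.0) / (ior + 1.0)) ** 2

def fresnel_schlick(cos_theta, f0):
    # Schlick's approximation: reflectance rises toward 1.0
    # at grazing angles (cos_theta -> 0) for any material.
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5
```

A glass-like IOR of 1.5 gives the familiar 4% baseline reflectance—the same constant that later became the default dielectric specular in PBR pipelines.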
Between 2010 and 2015, a consensus crystallized around physically based shading that prioritized artist usability without betraying first principles. Brent Burley’s 2012 paper on Disney’s physically based shading presented an intuitive, energy-conserving “principled” model with modular lobes—diffuse, specular, clearcoat, sheen—driven by perceptual ranges. Real-time engines embraced simplified yet consistent parameterizations. The games industry converged on two interchangeable workflows: metal/rough and spec/gloss. By 2014–2015, engines like Unreal Engine and Unity shipped PBR pipelines that normalized roughness maps and metalness logic, aligning authoring expectations across VFX, games, and product viz. Khronos Group’s glTF 2.0 (2017) embedded a baseline PBR model—base color, metallic, roughness, normal, occlusion, emissive—into a portable scene format, making interchange not just possible but expected. The effect was profound. Material creators could publish a single set of textures and reasonably predict appearance across contexts; educators and tool vendors could teach one mental model; and QA teams could build “golden shader balls” to validate fidelity. While specialized features—transmission with dispersion, thin-film interference, subsurface scattering—remained uneven across engines, the center of gravity moved decisively toward parameter compatibility.
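The two workflows are interchangeable in one direction without loss: metal/rough textures can be rewritten as spec/gloss. A minimal sketch of the commonly used mapping (the ~4% dielectric specular is the conventional default; function name is ours):

```python
def metal_rough_to_spec_gloss(base_color, metallic, roughness, dielectric_f0=0.04):
    # Metals have no diffuse term and tint their specular with base color;
    # dielectrics keep base color as diffuse with a ~4% gray specular.
    diffuse = tuple(c * (1.0 - metallic) for c in base_color)
    specular = tuple(dielectric_f0 * (1.0 - metallic) + c * metallic
                     for c in base_color)
    gloss = 1.0 - roughness
    return diffuse, specular, gloss
```

The reverse conversion is lossier—arbitrary colored speculars on non-metals have no exact metal/rough equivalent—which is one reason metal/rough became the interchange baseline in glTF 2.0.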
As PBR parameters stabilized, attention turned to describing materials independently of any one renderer’s codebase. Larry Gritz’s Open Shading Language (OSL) emerged in 2010 as a production-proven, renderer-agnostic language focused on closures—abstract BSDF building blocks that renderers could evaluate consistently. NVIDIA’s Material Definition Language (MDL), announced in 2015, formalized materials as layered, physically based constructs with clear separation between appearance and implementation, allowing the same MDL asset to render across Iray, Omniverse, and other MDL-capable backends. Industrial Light & Magic released MaterialX (2015), later stewarded by the Academy Software Foundation (from 2019), to standardize node graphs for shaders, textures, and look-dev, while Autodesk’s Standard Surface aligned DCC defaults around a principled uber-shader. Each effort tackled a different slice: OSL for programmable closures, MDL for portable definitions with a strong PBR semantics, MaterialX for interoperable graphs and look-dev IO. Together, they built the scaffolding for libraries that could outlive a renderer migration—critical for studios with multi-decade archives and for enterprises enforcing global CMF consistency across visualization stacks.
The success of compact PBR maps didn’t erase the reality that many materials are dominated by view- and light-dependent phenomena. Research into SVBRDFs and BTFs captured fabrics with directional sheen, car paints with sparkling flakes, and leathers whose pores self-shadow, but the data volumes were burdensome for DCC pipelines and especially for real-time delivery. Compression schemes and neural decoders have chipped away at the size problem, yet authoring and editability remain challenging. Meanwhile, “shader ball” conventions became ubiquitous in product pages and documentation, with neutral HDR environments and consistent camera angles. These controlled comparisons exposed subtle incompatibilities—how one engine’s clearcoat tints differently, how another interprets normal map scale—underscoring the cost of nonstandard parameters. Communities responded with converter utilities and calibration scenes, but the lesson stuck: portability requires not only shared maps but shared semantics and numerics. In practice, many pipelines settled on a layered approach: PBR textures for the backbone, augmented with masked clearcoat or sheen, reserving heavy SVBRDF/BTF representations for hero fabrics and automotive flake paints where the visual payoff justified the pipeline overhead.
Once PBR stabilized and scanners matured, appearance libraries became businesses with distinct strategies. Allegorithmic’s Substance platform, founded by Sébastien Deguy and acquired by Adobe in 2019, popularized procedural, parametric materials packaged as SBSAR files. With Substance 3D Designer and Painter, materials could be authored once and exposed via sliders—wood planks with adjustable age, marble with controllable veins—then distributed via the Substance 3D Assets library. Quixel Megascans, founded by Teddy Bergsman Lind and acquired by Epic Games in 2019, normalized high-quality scanned PBR surfaces and 3D assets and made them free for Unreal Engine users, a strategic lock-in that tied content gravity to the engine. Independents flourished: Poliigon (Andrew Price), Arroway Textures, RD-Textures, AmbientCG (Lennart Demes), Greyscalegorilla Plus, and Textures.com offered specialized catalogs with consistent presentation. Chaos, the company behind V-Ray, expanded through acquisitions (including Corona, now Chaos Czech) and innovations like VRscans measured materials and the Chaos Cosmos library; Chaos merged with Enscape (2022), signaling further consolidation between AEC real-time visualization and renderer-driven asset ecosystems. The value proposition sharpened: speed to photoreal, breadth of categories, and trust in measured or validated fidelity—all delivered via subscriptions and APIs that feed DCC, CAD, and real-time engines.
Beyond entertainment, manufacturers and AEC needed traceable digital surrogates of real finishes. X-Rite’s AxF (Appearance Exchange Format) and TAC7 scanners standardized the capture and interchange of measured materials, including effect coatings and textiles with complex anisotropy. AxF support spread across Autodesk VRED, Dassault Systèmes 3DEXCITE DELTAGEN, Luxion KeyShot, PTC Creo, and Siemens NX, enabling color and finish teams to evaluate form-language models with confidence that virtual and physical samples align. Vizoo’s XTex scanners and the Material Exchange platform brought “digital material twin” workflows to fashion and interiors, where mills could publish verified appearances for brand review. Color management anchored these flows: Pantone/X-Rite ecosystems, combined with multi-angle spectrophotometers like the BYK instruments, kept albedo and effect color within tolerances and documented via measurement data. The heart of the promise is traceability. When a digital appearance is backed by measurement files, acquisition metadata, and device characterization, CMF teams can move earlier to virtual approval gates, compressing development cycles without gambling on subjective looks. This shift is as much process innovation as visual fidelity: materials become tracked, versioned entities within enterprise systems, not ad hoc shader presets on a designer’s workstation.
As visual fidelity rose, the border between DCC and engineering blurred. Luxion KeyShot, co-founded by Claus and Henrik Wann Jensen, mainstreamed drag-and-drop photorealism for engineers, backed by GPU/CPU hybrid rendering and a large library accessible through the KeyShot Cloud. SOLIDWORKS Visualize, tracing heritage to Bunkspeed and NVIDIA Iray, put studio-quality rendering into mainstream mechanical CAD. Autodesk pushed across VRED for automotive review, 3ds Max for broader DCC, and Arnold for production rendering workflows; Siemens NX added Ray Traced Studio; Dassault Systèmes advanced 3DEXCITE for marketing-grade imagery. Real-time visualization dovetailed with standardized PBR assets: Unreal Engine’s Datasmith and Twinmotion, and Unity Reflect, ingested CAD geometry and materials while retaining PBR semantics. The common thread is consistency. A brushed aluminum authored as a rough, anisotropic metal in a DCC tool should present consistently in a VR review or a marketing render. By harmonizing default shaders (e.g., Autodesk Standard Surface) and leaning on interchange (glTF 2.0, USD/USDShade), vendors lowered the translation tax. In practice, teams build libraries once, deploy to multiple surfaces—engineering reviews, configurators, XR demos—and expect predictable behavior everywhere.
Under the hood, the economics and infrastructure of material libraries became as important as their visuals. Subscription catalogs, per-asset licensing, and enterprise site agreements coexist, often bundling content to drive adoption of rendering platforms or engines. Interoperability is the lubricant. MDL, MaterialX, USD/USDShade, and glTF each contribute a piece: portable definitions, node graphs, and baseline PBR semantics. Vendors ship converters that mitigate parameter mismatches—squaring roughness definitions, reconciling IOR conventions, and fixing normal map tangents. Increasingly, appearance assets carry metadata beyond visuals: Environmental Product Declarations (EPDs) for sustainability, compliance notes (flammability, VOC content), supplier IDs, finish process constraints (e.g., anodization options), and availability by region. This metadata flows into PLM/PDM systems so that a material chosen for design review maps cleanly to procurement and sustains traceability through manufacturing and marketing. The value unlock is twofold. First, materials become data products with lifecycle context, not just images. Second, vendors can deliver analytics: usage telemetry, duplication detection, and lineage tracking to prevent the proliferation of near-duplicate finishes. The net effect is a tighter loop between CMF intent, engineering feasibility, and visualization reality, all mediated by standardized, monetized appearance assets.
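Two of the mismatches converters must reconcile are simple enough to show directly. A minimal sketch (assuming the common convention of squaring perceptual roughness into a GGX-style alpha, and the DirectX-vs-OpenGL disagreement over the normal map green channel; function names are ours):

```python
def perceptual_to_microfacet_alpha(roughness):
    # Many engines expose a "perceptual" roughness slider and square it
    # into the alpha parameter fed to GGX-style microfacet distributions;
    # converters must know which value a texture actually stores.
    return roughness * roughness

def flip_normal_green(normal_rgb):
    # DirectX-style normal maps store the green (Y) channel inverted
    # relative to OpenGL-style maps; moving between conventions without
    # this flip makes bumps read as dents under raking light.
    r, g, b = normal_rgb
    return (r, 1.0 - g, b)
```

Small as they look, errors in either step are exactly the kind of drift that calibration scenes and golden shader balls are built to catch.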
Across five decades, photorealistic material libraries evolved from hand-tuned shader presets into standardized, measured, and monetized assets that straddle DCC, CAD, and real-time engines. Early BRDF innovations—Phong, Blinn-Phong, Cook–Torrance, Ward, Oren–Nayar—taught the industry to think in parameters. Renderer ecosystems and procedural texturing made those parameters shareable recipes with cultural weight. Measurement databases like CUReT and MERL elevated conversations from taste to truth, while spectral and physically based renderers reset expectations for energy conservation and Fresnel correctness. The PBR era translated physics into practice: Disney’s principled shading, games’ metal/rough conventions, and glTF 2.0’s baseline model created a lingua franca for base color, roughness, metalness, normals, and occlusion. Material languages—OSL, MDL, MaterialX—and common surfaces (Autodesk Standard Surface) secured cross-application consistency. Meanwhile, business ecosystems matured: Substance’s procedural SBSARs, Quixel’s scanned Megascans, and Chaos’s VRscans established catalogs as ongoing services. In parallel, AxF and enterprise-grade scanners brought measured fidelity to design and AEC, tying digital twins to physical verification. What began as renderer-specific lore has become an infrastructure where materials are first-class assets with IDs, metadata, and governance.
Commercial success flowed directly from technical consolidation. PBR consistency lowered the cost of authoring once and deploying everywhere, while glTF 2.0 ensured portable baselines for interchange. Material languages and schemas—OSL closures, MDL’s layered semantics, MaterialX graphs—gave vendors confidence that libraries would survive renderer migrations. Scanning hardware and formats (X-Rite TAC7 and AxF, Vizoo XTex) created supply: measured appearances that enterprises could trust in decision-making. Together, these forces enabled marketplaces and platform strategies: Adobe’s Substance ecosystem centered on editable, compact SBSARs; Epic’s Megascans tied premium scans to Unreal as a competitive moat; Chaos paired rendering engines with VRscans and Cosmos to drive end-to-end adoption. The commercial pattern is consistent: bundle materials to accelerate time-to-value, price via subscriptions to align incentives, and differentiate on breadth, fidelity, and integration depth. As standards settled, barriers to entry dropped for specialists (e.g., Arroway, RD-Textures, AmbientCG), expanding the long tail and fostering healthy competition that in turn nudged vendors toward better interoperability.
Despite progress, parity is imperfect. Layered/coated materials—automotive clearcoats, thin films, pearlescents—still render differently across engines due to varying microfacet distributions, Fresnel implementations, and sampling strategies. Anisotropy remains sensitive to tangent frame conventions and UV continuity, while subsurface scattering varies by diffusion model and random-walk settings. The spectral vs. RGB tradeoff persists: spectral rendering captures dispersion and wavelength-dependent absorbance, but RGB pipelines dominate for speed, necessitating approximations that can drift under colored lights or in extreme angles. SVBRDF/BTF datasets and flake microgeometry are heavy; compression reduces size but can complicate editability and introduce artifacts. Interoperability also faces seams: MDL, MaterialX, USDShade, and engine-specific models overlap but are not identical, prompting converter chains that inevitably lose nuance in edge cases. Measured assets create a fidelity vs. editability dilemma—how to tweak a scanned velvet without breaking its physical plausibility or ballooning data size. And finally, governance is nontrivial: keeping libraries deduplicated, metadata current, and supplier relationships synchronized inside PLM/PDM demands discipline and tooling that many organizations are still building.
The next decade points toward deeper convergence and smarter representations. Expect wider adoption of open shading models like MaterialX’s emerging OpenPBR, harmonized across DCC, CAD, and real-time engines so that clearcoat, sheen, and transmission semantics match out of the box. Neural appearance capture and compression will shrink SVBRDF/BTF-scale assets into deployable packages with learned decoders, while inverse rendering and differentiable frameworks—such as Mitsuba 2 and NVIDIA’s neural materials research—make scanned-quality assets editable through optimization, not hand-tuned guesswork. On the enterprise side, materials will deepen as digital twins: spectral data, manufacturing process metadata (e.g., bake curves, flow coat parameters), and sustainability attributes (EPDs, recycled content) will travel from supplier to PLM to visualization, informing both design and marketing claims. USD will likely serve as the scene backbone, with USDShade linking to MaterialX graphs and MDL or OSL kernels as execution targets, while glTF continues as a performant distribution format for runtime consumption. The strategic arc is clear: make appearances portable, truthful, lightweight, and governable. When those four align, material libraries stop being mere skin and become the connective tissue between design intent, engineering feasibility, and compelling visualization across screens, headsets, and showrooms.
