Technical Notes
SpectralNN Paint Mixer is intentionally not a pure neural painter. The base engine carries the physical prior, the residual stage corrects its repeatable blind spots, and the shared artifact contract keeps runtimes and training tooling pointed at the same model family.
Step 1
Mix inputs stay explicit
The runtime starts from pigment colors plus integer parts, not from an already-baked RGB blend.
Step 2
The base engine reconstructs pigment-like reflectance and performs subtractive mixing before any learned correction happens.
Step 3
The residual network sees the same mix inputs plus the base result and predicts only the correction delta.
Step 4
The public mixer stays small while still allowing experiments with alternative base engines or residual stages.
Runtime Shape
MixPortion[]
-> BaseMixEngine.mixOrNull(...)
-> ResidualCorrectionModel.correct(...)
-> SrgbColor
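The pipeline above can be sketched as a pair of interfaces plus one composing function. The names follow the runtime shape, but the exact signatures (field names, null handling) are assumptions, not the published API:

```typescript
// Sketch of the pipeline seam; field names and signatures are assumptions.
type SrgbColor = { r: number; g: number; b: number }; // channels in [0, 1]

interface MixPortion {
  color: SrgbColor; // pigment color, not a pre-blended result
  parts: number;    // integer parts of this pigment in the mix
}

interface BaseMixEngine {
  // Returns null when the engine cannot handle the inputs.
  mixOrNull(portions: MixPortion[]): SrgbColor | null;
}

interface ResidualCorrectionModel {
  // Receives the same mix inputs plus the base result.
  correct(portions: MixPortion[], base: SrgbColor): SrgbColor;
}

function mix(
  engine: BaseMixEngine,
  residual: ResidualCorrectionModel,
  portions: MixPortion[],
): SrgbColor | null {
  const base = engine.mixOrNull(portions);
  return base === null ? null : residual.correct(portions, base);
}
```

The seam between the two stages is the whole extension point: swapping either interface implementation leaves the composing function untouched.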
RGB interpolation is good at screen blending, not paint behavior. Pigments absorb and reflect different wavelengths unevenly, so two paints that look vivid on their own can collapse into muted mixtures that naive channel interpolation never predicts well.
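A toy calculation makes the gap concrete. This is not the real engine, just an illustration: averaging channels predicts gray for blue plus yellow, while even a crude subtractive model (multiplying reflectance-like channels) already leans toward the muted green real paints produce:

```typescript
// Toy reflectance-like channels in [0, 1]; the specific values are
// illustrative assumptions, not measured pigment data.
const blue = [0.1, 0.2, 0.9];
const yellow = [0.9, 0.9, 0.1];

// Naive additive blend: per-channel average.
const additive = blue.map((c, i) => (c + yellow[i]) / 2);

// Crude subtractive blend: each paint filters what the other reflects.
const subtractive = blue.map((c, i) => c * yellow[i]);

console.log(additive);    // near-gray, roughly [0.5, 0.55, 0.5]
console.log(subtractive); // dark and green-leaning, roughly [0.09, 0.18, 0.09]
```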
The default base engine is `spectral_ks_v1`. It reconstructs pigment-like reflectance and mixes in spectral space, then converts the result back to sRGB for runtime output. This is where most of the physically plausible behavior comes from.
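A common way to build a "spectral K/S" engine is single-constant Kubelka-Munk mixing: convert each band's reflectance to a K/S ratio, average the ratios by part weight, and convert back. Whether `spectral_ks_v1` uses exactly this form is an assumption; the sketch shows the general technique:

```typescript
// Single-constant Kubelka-Munk relations, per reflectance band R in (0, 1].
const toKs = (r: number): number => ((1 - r) ** 2) / (2 * r);
const toR = (ks: number): number => 1 + ks - Math.sqrt(ks * ks + 2 * ks);

// Mix reflectance spectra by part-weighted averaging in K/S space.
// spectra[i] is pigment i's per-band reflectance; parts[i] its integer parts.
function mixSpectra(spectra: number[][], parts: number[]): number[] {
  const total = parts.reduce((a, b) => a + b, 0);
  const out: number[] = [];
  for (let b = 0; b < spectra[0].length; b++) {
    let ks = 0;
    for (let i = 0; i < spectra.length; i++) {
      ks += (parts[i] / total) * toKs(spectra[i][b]);
    }
    out.push(toR(ks));
  }
  return out;
}
```

Because K/S grows steeply as reflectance falls, dark pigments dominate the average, which is why K/S mixing reproduces the "a little black swamps a lot of white" behavior that linear channel averaging misses.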
The learned stage does not replace the base engine. It learns a compact correction on top of the base result, which keeps the model focused on systematic errors instead of relearning the whole mixing problem from scratch.
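The correction contract can be sketched as: the network emits only a per-channel delta, which is added to the base result and clamped back into gamut. The `predictDelta` callback stands in for the trained network; its shape and the clamp range are assumptions:

```typescript
type Rgb = [number, number, number];

// Apply a residual correction: base result plus a small predicted delta,
// clamped to [0, 1]. predictDelta is a placeholder for the trained network.
function applyResidual(
  base: Rgb,
  predictDelta: (base: Rgb) => Rgb,
): Rgb {
  const delta = predictDelta(base);
  return base.map((c, i) => Math.min(1, Math.max(0, c + delta[i]))) as Rgb;
}
```

Predicting a delta instead of a full color means an untrained or zeroed network degrades gracefully to the base engine's physically plausible output.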
Kotlin and JavaScript consume the same canonical JSON artifact. That file carries the runtime contract, mixing parameters, normalization vectors, network shape, and provenance metadata. The runtime can stay thin because the artifact is explicit.
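A hypothetical shape for that artifact is sketched below. The field names are invented for illustration, but each group mirrors something the text says the file carries:

```typescript
// Hypothetical artifact schema; every field name here is an assumption.
interface MixerArtifact {
  contractVersion: string;                        // runtime contract
  baseEngine: string;                             // e.g. "spectral_ks_v1"
  mixingParams: Record<string, number>;           // base-engine parameters
  normalization: { mean: number[]; std: number[] }; // input normalization vectors
  network: { layerSizes: number[]; weights: number[][] }; // residual net shape
  provenance: { trainedAt: string; datasetRevision: string };
}
```

Because everything the runtime needs is spelled out in one file, a port only has to deserialize and execute; it never hard-codes training-time decisions.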
The fixture corpus is just as important as the artifact. Curated parity cases and residual-stage fixtures keep the Kotlin and JavaScript ports honest and make regression checking much cheaper than visual spot checks alone.
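A parity check over such fixtures might look like the following, assuming each fixture pairs mix inputs with an expected sRGB output and ports must agree within a per-channel tolerance. The fixture format and tolerance are assumptions:

```typescript
interface Fixture {
  portions: { color: [number, number, number]; parts: number }[];
  expected: [number, number, number];
}

// Run every fixture through a port's mix function and collect failures
// where any channel deviates beyond the tolerance.
function checkParity(
  fixtures: Fixture[],
  mix: (portions: Fixture["portions"]) => [number, number, number],
  tolerance = 1e-4,
): string[] {
  const failures: string[] = [];
  fixtures.forEach((f, i) => {
    const got = mix(f.portions);
    const maxErr = Math.max(...got.map((c, ch) => Math.abs(c - f.expected[ch])));
    if (maxErr > tolerance) failures.push(`fixture ${i}: max channel error ${maxErr}`);
  });
  return failures;
}
```

Running the same corpus against both ports turns "do Kotlin and JavaScript agree?" into a mechanical pass/fail list instead of a visual judgment.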
Future quality improvements come from curated ground-truth samples rather than widening the public API. The runtime stays stable while the data and artifact keep evolving.
The pipeline split exists so different developers can try another base engine without rewriting the rest of the stack. The important rule is that the correction artifact must declare which base engine it was trained against.
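That rule is cheap to enforce at load time. A minimal sketch, assuming the artifact and the runtime each expose a base-engine identifier (the parameter names are invented):

```typescript
// Refuse to pair a correction artifact with a base engine it was not
// trained against; a residual model only knows the blind spots of the
// engine that produced its training targets.
function assertEngineMatch(artifactBaseEngine: string, runtimeEngineId: string): void {
  if (artifactBaseEngine !== runtimeEngineId) {
    throw new Error(
      `artifact was trained against "${artifactBaseEngine}" but runtime uses "${runtimeEngineId}"`,
    );
  }
}
```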
The validation story is intentionally split across real consumers. Kotlin is validated through a local-publish consumer integration path, and JavaScript is validated by this landing page using the actual runtime in a browser-facing demo.
The dataset gallery is the quickest way to inspect the current curation state before and after training runs. It brings the legacy core set together with the newer measured and observed additions in one visual browser.
Design Principle
The runtime surface is intentionally small: colors, portions, mixers, artifacts, and the pipeline seam. Training, curation, and evaluation keep changing underneath that surface, which is exactly why they should stay in tooling and artifacts rather than in the public API.