SpectralNN Paint Mixer

Technical Notes

Why this mixer is built as a two-stage pipeline.

SpectralNN Paint Mixer is intentionally not a pure neural painter. The base engine carries the physical prior, the residual stage corrects its repeatable blind spots, and the shared artifact contract keeps runtimes and training tooling pointed at the same model family.

Step 1

Mix inputs stay explicit

The runtime starts from pigment colors plus integer parts, not from an already-baked RGB blend.
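As a rough illustration, an explicit mix input can be very small. The field names below are assumptions for this sketch, not the actual API:

```kotlin
// Hypothetical sketch of an explicit mix input; field names are illustrative.
data class SrgbColor(val r: Int, val g: Int, val b: Int)

// Which pigment, and how many integer parts of it go into the mix.
data class MixPortion(val pigment: SrgbColor, val parts: Int)

// "2 parts ultramarine, 1 part cadmium yellow" stays recoverable as data
// instead of being collapsed into a single pre-blended RGB value.
val mix = listOf(
    MixPortion(SrgbColor(18, 10, 143), parts = 2),
    MixPortion(SrgbColor(255, 246, 0), parts = 1),
)
```

Keeping the parts explicit is what lets later stages reason about ratios at all.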

Step 2

BaseMixEngine runs first

The base engine reconstructs pigment-like reflectance and performs subtractive mixing before any learned correction happens.
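The seam for that first stage might look like the following sketch. The signature is inferred from the runtime shape diagram and may differ from the real interface; the averaging engine is a deliberately naive stand-in, not the spectral implementation:

```kotlin
data class SrgbColor(val r: Int, val g: Int, val b: Int)
data class MixPortion(val pigment: SrgbColor, val parts: Int)

interface BaseMixEngine {
    // Null signals "no physically meaningful mix" (e.g. an empty portion
    // list), so callers are forced to handle the failure case explicitly.
    fun mixOrNull(portions: List<MixPortion>): SrgbColor?
}

// Naive stand-in for illustrating the contract: per-channel weighted
// average. The real spectral engine mixes in reflectance space, not RGB.
object AverageEngine : BaseMixEngine {
    override fun mixOrNull(portions: List<MixPortion>): SrgbColor? {
        val total = portions.sumOf { it.parts }
        if (total == 0) return null
        fun channel(pick: (SrgbColor) -> Int) =
            portions.sumOf { pick(it.pigment) * it.parts } / total
        return SrgbColor(channel { it.r }, channel { it.g }, channel { it.b })
    }
}
```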

Step 3

ResidualCorrectionModel nudges

The residual network sees the same mix inputs plus the base result and predicts only the correction delta.
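A sketch of the correction contract, with the trained network replaced by an injected delta function (the real model is a small dense net shipped in the artifact; this only shows the shape of the stage):

```kotlin
data class SrgbColor(val r: Int, val g: Int, val b: Int)
data class MixPortion(val pigment: SrgbColor, val parts: Int)

// Sketch only: predictDelta stands in for the trained network. The model
// sees the same portions plus the base result, and outputs a bounded
// correction delta rather than a whole new color.
class ResidualCorrectionModel(
    private val predictDelta: (List<MixPortion>, SrgbColor) -> IntArray,
) {
    fun correct(portions: List<MixPortion>, base: SrgbColor): SrgbColor {
        val d = predictDelta(portions, base)
        fun clamp(v: Int) = v.coerceIn(0, 255)
        return SrgbColor(clamp(base.r + d[0]), clamp(base.g + d[1]), clamp(base.b + d[2]))
    }
}
```

Because the stage only ever adds a delta, a badly trained model degrades toward the base result instead of replacing it with nonsense.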

Step 4

PipelinePaintMixer composes both

The public mixer stays small while still allowing experiments with alternative base engines or residual stages.
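The composition itself can stay tiny. A sketch with the two stages as injected functions (stand-ins for the real interfaces):

```kotlin
data class SrgbColor(val r: Int, val g: Int, val b: Int)
data class MixPortion(val pigment: SrgbColor, val parts: Int)

// The public mixer only sequences the stages; swapping either one is a
// constructor change, not a rewrite of the stack.
class PipelinePaintMixer(
    private val base: (List<MixPortion>) -> SrgbColor?,
    private val residual: (List<MixPortion>, SrgbColor) -> SrgbColor,
) {
    fun mixOrNull(portions: List<MixPortion>): SrgbColor? {
        val baseResult = base(portions) ?: return null // base engine first
        return residual(portions, baseResult)          // residual only nudges
    }
}
```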

Runtime Shape

Residual learning is a correction strategy, not the whole engine.

MixPortion[] 
  -> BaseMixEngine.mixOrNull(...)
  -> ResidualCorrectionModel.correct(...)
  -> SrgbColor

Problem

RGB interpolation models additive screen blending, not paint behavior. Pigments absorb and reflect different wavelengths unevenly, so two paints that look vivid on their own can collapse into muted mixtures that naive channel interpolation never predicts well.

  • Paint mixing is subtractive, not additive.
  • The same pair can behave very differently at 1:1 versus 3:1.
  • Good results need a physical prior or a strong approximation of one.
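The first bullet is easy to see with plain arithmetic: averaging a saturated blue against a saturated yellow lands on gray, while real blue and yellow paints mix toward green.

```kotlin
// Naive per-channel averaging of a saturated blue and a saturated yellow.
// Additive channel math yields a neutral gray; subtractive paint mixing
// would head toward green instead.
fun averageRgb(a: IntArray, b: IntArray) = IntArray(3) { (a[it] + b[it]) / 2 }

val blue = intArrayOf(0, 0, 255)
val yellow = intArrayOf(255, 255, 0)
val mixed = averageRgb(blue, yellow) // [127, 127, 127]: gray, not green
```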

Base Engine

The default base engine is `spectral_ks_v1`. It reconstructs pigment-like reflectance and mixes in spectral space, then converts the result back to sRGB for runtime output. This is where most of the physically plausible behavior comes from.

  • It is deterministic and language-portable.
  • It gives the model a strong starting point before learning anything.
  • It can be swapped in the future through the explicit `BaseMixEngine` interface.
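These notes don't spell out `spectral_ks_v1`'s exact formulation, but a generic single-constant Kubelka–Munk mix over one reflectance band gives the flavor of mixing in spectral space:

```kotlin
import kotlin.math.sqrt

// Generic single-constant Kubelka–Munk relations (not the actual
// spectral_ks_v1 code): reflectance <-> K/S for one spectral band.
fun toKs(r: Double) = (1 - r) * (1 - r) / (2 * r)
fun toReflectance(ks: Double) = 1 + ks - sqrt(ks * ks + 2 * ks)

// Averaging happens in K/S space, weighted by integer parts; subtractive
// "darker than the linear average" behavior falls out of the nonlinearity.
fun mixBand(reflectances: List<Double>, parts: List<Int>): Double {
    val total = parts.sum().toDouble()
    val ks = reflectances.zip(parts).sumOf { (r, p) -> toKs(r) * p } / total
    return toReflectance(ks)
}
```

Mixing a light band (0.9) with a dark band (0.1) at 1:1 lands well below the linear midpoint of 0.5, which is exactly the behavior RGB averaging misses.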

Residual Model

The learned stage does not replace the base engine. It learns a compact correction on top of the base result, which keeps the model focused on systematic errors instead of relearning the whole mixing problem from scratch.

  • Inputs include the paint colors, ratios, and base mixed color.
  • The artifact carries normalization stats and dense weights.
  • The model is tied to the base engine through `baseEngineId`.
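Why do normalization stats travel with the artifact? Both runtimes must preprocess features identically, or the shared dense weights produce different corrections in each runtime. A sketch, with field names assumed:

```kotlin
// Sketch: per-feature standardization using stats carried in the artifact.
// If Kotlin and JavaScript disagreed on these stats, the same dense
// weights would yield different corrections in each runtime.
class FeatureNormalizer(private val mean: DoubleArray, private val std: DoubleArray) {
    fun normalize(features: DoubleArray): DoubleArray {
        require(features.size == mean.size && features.size == std.size)
        return DoubleArray(features.size) { i -> (features[i] - mean[i]) / std[i] }
    }
}
```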

Shared Artifact

Kotlin and JavaScript consume the same canonical JSON artifact. That file carries the runtime contract, mixing parameters, normalization vectors, network shape, and provenance metadata. The runtime can stay thin because the artifact is explicit.

  • One artifact format for both runtimes.
  • `baseEngineId` prevents mismatched pipeline wiring.
  • The generated language wrappers are derived from the same source artifact.
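A sketch of the wiring guard the `baseEngineId` bullet implies; the schema fields here are assumptions about the artifact format:

```kotlin
// Hypothetical artifact header; the real schema carries more fields
// (mixing parameters, normalization vectors, network shape, provenance).
data class ArtifactHeader(val baseEngineId: String)

// Refuse to wire a residual artifact onto a base engine it was not
// trained against.
fun requireCompatible(artifact: ArtifactHeader, engineId: String) {
    require(artifact.baseEngineId == engineId) {
        "Artifact trained against '${artifact.baseEngineId}', runtime wired to '$engineId'."
    }
}
```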

Shared Fixtures

The fixture corpus is just as important as the artifact. Curated parity cases and residual-stage fixtures keep the Kotlin and JavaScript ports honest and make regression checking much cheaper than visual spot checks alone.

  • Curated parity tracks end-to-end expected outputs.
  • Residual parity isolates the learned correction stage.
  • The browser demo is another consumer of the same runtime, not a special case.
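A parity check can be as simple as replaying recorded cases through a runtime and bounding the error. The fixture shape here is an assumption:

```kotlin
import kotlin.math.abs

// Hypothetical fixture case: recorded inputs plus the expected output.
data class ParityCase(val inputs: List<Double>, val expected: List<Double>)

// Replay a case through a runtime and bound the worst per-component
// error, instead of relying on visual spot checks.
fun passes(case: ParityCase, run: (List<Double>) -> List<Double>, tol: Double = 1e-6): Boolean {
    val actual = run(case.inputs)
    return actual.size == case.expected.size &&
        actual.zip(case.expected).all { (a, e) -> abs(a - e) <= tol }
}
```

The same fixture file can drive this check in both the Kotlin and JavaScript runtimes, which is what keeps the two ports honest.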

Fine-Tuning

Future quality improvements come from curated ground-truth samples rather than widening the public API. The runtime stays stable while the data and artifact keep evolving.

  1. Start from the baseline artifact and the current checkpoint.
  2. Add or refine curated ground-truth cases with source notes.
  3. Export training data against the chosen base engine.
  4. Warm-start the residual model instead of retraining from cold.
  5. Compare against fixtures before promoting the next artifact.

Experimentation

The pipeline split exists so different developers can try another base engine without rewriting the rest of the stack. The important rule is that the correction artifact must declare which base engine it was trained against.

  • Swap the base engine, then regenerate curated training data.
  • Retrain or fine-tune the residual model against that new prior.
  • Use shared fixtures to measure whether the swap is actually better.

Validation

The validation story is intentionally split across real consumers. Kotlin is validated through a local-publish consumer integration path, and JavaScript is validated by this landing page using the actual runtime in a browser-facing demo.

  • Kotlin validation happens through a published-consumer integration path.
  • JavaScript validation happens through the demo site and shared tests.
  • The same artifact and fixtures support both sides.

QA Dataset

The dataset gallery is the quickest way to inspect the current curation state before and after training runs. It brings the legacy core set together with the newer measured and observed additions in a single visual gallery.

  • Use it to spot label or color mismatches before trusting a dataset.
  • Check coverage gaps across hue families and ratio ranges.
  • Compare new sources against the curated core before folding them into training.

Design Principle

Keep the public API boring, keep the model iteration flexible.

The runtime surface is intentionally small: colors, portions, mixers, artifacts, and the pipeline seam. Training, curation, and evaluation keep changing underneath that surface, which is exactly why they should stay in tooling and artifacts rather than in the public API.