Recolour What Matters: Region-Aware Colour Editing via Token-Level Diffusion

Yuqi Yang1,2, Dongliang Chang1,2, Yijia Ling1,2, Ruoyi Du1, Zhanyu Ma1,2
1Beijing University of Posts and Telecommunications
2Beijing Key Laboratory of Multimodal Data Intelligent Perception and Governance
Figure 1. Editing results of ColourCrafter under varying reference colours. Each row shows the input image and its edited outputs conditioned on different RGB references. As the reference colours vary smoothly from left to right, ColourCrafter produces continuous and precise recolouring with consistent structure and texture.

Abstract

Colour is one of the most perceptually salient yet least controllable attributes in image generation. Although recent diffusion models can modify object colours from user instructions, their results often deviate from the intended hue, especially for fine-grained and local edits. Early text-driven methods rely on discrete language descriptions that cannot accurately represent continuous chromatic variations. To overcome this limitation, we propose ColourCrafter, a unified diffusion framework that transforms colour editing from global tone transfer into a structured, region-aware generation process. Unlike traditional colour-driven methods, ColourCrafter performs token-level fusion of RGB colour tokens and image tokens in latent space, selectively propagating colour information to semantically relevant regions while preserving structural fidelity. A perceptual Lab-space loss further enhances pixel-level precision by decoupling luminance and chrominance and constraining edits within masked areas. Additionally, we build ColourfulSet, a large-scale dataset of high-quality image pairs with continuous and diverse colour variations. Extensive experiments demonstrate that ColourCrafter achieves state-of-the-art colour accuracy, controllability, and perceptual fidelity in fine-grained colour editing.
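The abstract mentions a perceptual Lab-space loss that decouples luminance and chrominance and restricts supervision to masked regions. The paper's exact formulation is not reproduced here; the following is a minimal NumPy sketch under assumed per-channel squared-error terms, with hypothetical weights `w_lum` and `w_chroma` (a real implementation would operate on decoded latents in an autodiff framework).

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert sRGB values in [0, 1], shape (..., 3), to CIELAB (D65 white)."""
    rgb = np.asarray(rgb, dtype=np.float64)
    # Inverse sRGB gamma (linearise)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ (standard sRGB matrix, D65 illuminant)
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ m.T
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])  # normalise by D65 white
    # CIELAB f-function with the standard linear segment near zero
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def lab_space_loss(pred_rgb, target_rgb, mask, w_chroma=1.0, w_lum=0.1):
    """Masked Lab loss: chrominance (a, b) weighted separately from luminance (L),
    so colour edits are supervised without over-penalising brightness changes."""
    diff = (srgb_to_lab(pred_rgb) - srgb_to_lab(target_rgb)) ** 2
    m = np.asarray(mask, dtype=np.float64)[..., None]  # broadcast over channels
    lum = (diff[..., :1] * m).sum() / (m.sum() + 1e-8)
    chroma = (diff[..., 1:] * m).sum() / (2 * m.sum() + 1e-8)
    return w_lum * lum + w_chroma * chroma
```

Separating the L term from the (a, b) terms is what makes the decoupling explicit: the two weights can trade off structure preservation (luminance) against chromatic accuracy.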

Colourful Dataset

Figure 2. Examples from the ColourfulSet dataset. The edited images span multiple object categories under various target colour references.

Method Overview

Figure 3. Overview of the ColourCrafter pipeline. (1) Dataset construction: Using Flux.1-Kontext, we generate diverse image-colour pairs and employ a Vision-Language Model (VLM) to filter samples for consistency, fidelity, and realism. The corresponding RGB references are extracted to build the high-quality dataset ColourfulSet. (2) Training: The original image, target colour reference, and text prompt are jointly fed into the diffusion model, which is optimised with both Diffusion and Lab-space losses to enhance chromatic accuracy and perceptual consistency. (3) Inference: Given an input image, an RGB reference, and a prompt, ColourCrafter performs fine-grained, structure-preserving, and perceptually natural colour editing.
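Step (1) pairs each filtered image with an extracted RGB reference. The paper's exact extraction procedure is not detailed on this page; one plausible sketch is to average the pixels inside the edited object's region. Both the function name and the mask input below are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def extract_rgb_reference(image, mask):
    """Return the mean RGB colour over a boolean object mask.

    image: float array of shape (H, W, 3), values in [0, 1].
    mask:  array of shape (H, W); nonzero marks the edited region.
    """
    mask = np.asarray(mask).astype(bool)
    if not mask.any():
        raise ValueError("mask selects no pixels")
    return image[mask].mean(axis=0)  # shape (3,): the reference colour
```

A mean is the simplest choice; a dominant-colour estimate (e.g. the largest cluster from k-means over masked pixels) would be more robust to shading and highlights.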

Experimental Results

Figure 4 (a–c). Comparison with other methods. In each panel, the first column shows the reference colours and original images. The comparison demonstrates that our method achieves more precise and fine-grained colour editing while preserving structural integrity and background consistency.

Citation

  @article{yang2026recolourmatters,
        title={Recolour What Matters: Region-Aware Colour Editing via Token-Level Diffusion},
        author={Yuqi Yang and Dongliang Chang and Yijia Ling and Ruoyi Du and Zhanyu Ma},
        journal={arXiv preprint arXiv:2603.18466},
        year={2026},
        url={https://arxiv.org/abs/2603.18466}
  }