Throughout the Admire pipeline, color management must be carefully considered to produce a final image in which the visual disconnection between “mixed realities” (studio and user images) is minimized. To achieve this, the user capture conditions (lighting, capture device), compression for transmission, the in-studio display of the user video feed, the studio capture conditions, and the final shading and compression of the broadcast image are all parameters that must be balanced. While the physical light values encoded or displayed at each step can be measured more or less objectively, whether this goal has been achieved can ultimately only be judged subjectively by the viewer, because “visual disconnection” is a product of complex perceptual interpretation involving the entire visual system. While viewers will most likely be contextually aware that the program hosts and remote participants are not actually present in the same location, they will judge the image on the consistency of lighting color and directionality when deciding whether to suspend their disbelief and comfortably accept the illusion.
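As a rough sketch (and not a description of the actual Admire implementation), one can picture the two “realities” as light passing through different chains of color transforms before reaching the broadcast frame. The stage matrices below are hypothetical placeholders; in practice each would have to be characterized from measurements of the device or codec in question.

```python
import numpy as np

def apply_matrix(rgb, m):
    """Apply a 3x3 color transform to an (N, 3) array of linear RGB values."""
    return rgb @ m.T

# Hypothetical per-stage transforms (identity placeholders for illustration only).
USER_CAMERA = np.eye(3)      # user capture device response
CODEC_SHIFT = np.eye(3)      # color shift introduced by transmission compression
STUDIO_DISPLAY = np.eye(3)   # in-studio display of the user feed
STUDIO_CAMERA = np.eye(3)    # studio capture conditions
BROADCAST_GRADE = np.eye(3)  # final shading of the broadcast image

def studio_path(scene_rgb):
    """Light from the studio scene reaches the broadcast image directly."""
    return apply_matrix(scene_rgb, BROADCAST_GRADE @ STUDIO_CAMERA)

def user_path(scene_rgb):
    """Light from the user's scene passes through every intermediate stage."""
    chain = BROADCAST_GRADE @ STUDIO_CAMERA @ STUDIO_DISPLAY @ CODEC_SHIFT @ USER_CAMERA
    return apply_matrix(scene_rgb, chain)

# A neutral grey patch lit identically in both locations should match in the final
# frame; any difference between the two paths is a candidate source of disconnection.
grey = np.array([[0.18, 0.18, 0.18]])
print(np.abs(studio_path(grey) - user_path(grey)))
```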

Although it is by now certainly a dated and tired example for the color science community, #thedress still serves as a fantastic example of how contextual lighting judgements in images can result in varied visual experiences between viewers. For the uninitiated (on the off chance that any yet exist), #thedress refers to a particular photograph of a dress made of blue and black fabric which instead appears to a large minority of observers as white and gold (Figure 1). It has been postulated that the most likely cause of this discrepancy lies in differing subconscious interpretations of the illumination source in the photograph. Some observers interpret the photograph “correctly” as representing blue and black fabrics under yellow-tinted illumination, while others interpret the scene as being lit by blue light (as a result of the overexposed background of the photo), resulting in a percept in which the black fabric appears a dull gold and the blue fabric appears white. While such extreme perceptual discrepancies in photographs, where different viewers place image details in different color naming categories, are truly rare, one can assume that images which trigger interpretation discrepancies subtle enough to fall under the radar of our limited linguistic ability to communicate color percepts amongst ourselves are relatively common. In addition to the variance in interpretation which can occur between observers viewing the same display, the problem of verifying our system “in the wild” is further exacerbated by the reproduction variability between viewers’ television models, or even individual units, and by the lighting conditions under which they are viewed, as was briefly discussed in our previous blog post “Admire’s beautiful colors.”

Figure 1. “The dress” in question. Photographed by Cecilia Bleasdale.
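To put rough numbers on the two interpretations described above, the disagreement can be caricatured with a von Kries-style illuminant-discounting calculation: the same pixel values, divided by different assumed light sources, yield different surface color estimates. The pixel and illuminant values below are illustrative, not measured from the actual photograph.

```python
import numpy as np

# Illustrative pixel values for the two fabrics in the photograph.
fabric_light = np.array([0.60, 0.62, 0.72])   # the "white or blue?" stripes
fabric_dark  = np.array([0.40, 0.36, 0.29])   # the "gold or black?" stripes

# Two competing interpretations of the scene illuminant.
warm_illuminant = np.array([1.00, 0.92, 0.75])  # yellow-tinted indoor light
cool_illuminant = np.array([0.82, 0.88, 1.00])  # bluish daylight from the overexposed background

def discount(pixel, illuminant):
    """Von Kries-style discounting: estimate the surface color by dividing out
    the assumed illuminant, channel by channel."""
    return pixel / illuminant

# Observers who assume warm light recover a bluish stripe and a near-black stripe...
print(discount(fabric_light, warm_illuminant), discount(fabric_dark, warm_illuminant))
# ...while observers who assume bluish light recover a near-white stripe and a dull gold one.
print(discount(fabric_light, cool_illuminant), discount(fabric_dark, cool_illuminant))
```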

At a recent special session on color held by the Society of Motion Picture and Television Engineers (SMPTE), I led a roundtable discussion on the ill-posed nature of the color management problem. During this session, I detailed how the end stage of any complete color management pipeline (the viewer) is best conceptualized as a secondary color grading step. Seen through this lens, it is a process personalized to the current image and its interpretation by an observer, much as color correction consists of adjustments personal to the taste of the artist, yet constrained by the content captured in the image and the needs of the story. While it is still unclear how this concept could be practically implemented in a way that improves the quality of motion picture experiences for viewers, acknowledging it when developing color pipelines would represent a significant improvement over current pipelines, which either regard the end viewer as a predictable and consistent receiver of images or ignore them outright. This is especially true when models aiming to represent the visual processing of the “average” observer are included, as they can inspire false confidence and serve as a shaky base for the development of image processing tools, causing those tools to fail in the face of new contexts. The full SMPTE session can be viewed here: https://youtu.be/RIWS-5Bu4HY.
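Purely as an illustration of this framing, and carrying on from the pipeline sketch above, one could think of the viewer as one final, unobserved transform appended to the chain. The parameters below are hypothetical placeholders rather than a model of any real viewer or display.

```python
import numpy as np

def viewer_response(broadcast_rgb, tone_exponent, ambient_white):
    """One last, per-household 'grade': a crude per-channel adaptation to room
    lighting followed by a per-viewer tone response. Purely illustrative."""
    adapted = np.asarray(broadcast_rgb) / np.asarray(ambient_white)
    return np.clip(adapted, 0.0, 1.0) ** tone_exponent

# Two viewers receiving identical broadcast pixels end up with different final images.
frame = np.array([[0.30, 0.40, 0.55]])
print(viewer_response(frame, tone_exponent=1.0, ambient_white=(1.00, 0.97, 0.90)))
print(viewer_response(frame, tone_exponent=1.2, ambient_white=(0.92, 0.96, 1.00)))
```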

As for the Admire project, and the validation of the mixed reality illusion in the final system, we may rely, as we always have, on our personal satisfaction with the quality of the images as well as the collected feedback of viewers.

Trevor Canham /UPF