Surface texture is among the most salient haptic characteristics of objects; it helps in object identification and enhances the feel of the device being manipulated. Although textures are well established in computer graphics, only recently have they been explored in haptics. Haptic textures improve the realism of haptic interaction and provide cues for many tasks in telemanipulation and for designing machines in virtual space. A haptic texture display can help visually impaired people experience 3D art in virtual museums and perceive the features of artworks (G. Jansson et al. 2003). It can also be used by home shoppers on internet shopping sites.

Researchers have developed many sophisticated haptic texture display methods. The texture display methods developed so far fall into three types of construction: 1) real surface patch presentation (S. Tachi et al. 1994)(K. Hirota & M. Hirose 1995), in which a real contact surface, positioned arbitrarily in 3D space, simulates a partial model of the virtual object; a typical system, developed by Minsky, was called the Sandpaper system. 2) Multiple-pin vibratory presentation, which requires a specialized pin-array device that can dynamically or statically exert pressure on the skin (Ikei et al. 1997)(Ikei 1998)(Masami 1998). 3) Single-point sensing with contact-tool-driven presentation. In recent years, researchers have tried to combine kinesthetic and tactile devices to enrich the haptic presentation and enhance the performance of texture display (Ikei 2002). However, such developments make the systems more complicated and more expensive.

In category 3), when the operator explores the virtual sculpture's surface by performing natural hand movements, an artificial force is generated by the interaction with the virtual surface through a haptic interface. Because interacting with a virtual textured surface generates complex force information, the key issue for this class of methods is computing the contact forces in response to interactions with the virtual textured surface and applying them to the operator through the force-reflecting haptic interface. This paper first reviews relevant previous work in texture force modeling and introduces the principle of image-based feature extraction and contact force modeling. An implementation of the algorithm is then presented. Finally, the experimental results are discussed.

Hari presents a method to record perturbations while dragging the tip of the PHANToM over a real surface and to play them back using the same device (Lederman et al. 2004)(Vasudevan & Manivannan 2006). Ho et al. present a modification of ‘bump maps’ borrowed from computer graphics to create texture in haptic environments (C.-H. Ho et al. 1999); this method perturbs surface normals to create the illusion of texture. Dominguez-Ramirez proposed that texture could be modelled as a periodic function. Siira and Pai present a stochastic approach to haptic textures aimed at reducing the computational complexity of texturing methods (Juhani & Dinesh 2006). Fritz and Barner follow this up with two stochastic models for generating haptic textures (Fritz & Barner 1996). S. Choi and H. Z. Tan study the perceived instabilities arising from current haptic texture rendering algorithms during interaction with textured models (Choi & Tan 2004). Miguel proposed a force model based on a geometry model (Miguel et al. 2004). These contact force models can be classified into four kinds.

The sensor-based approach, which uses devices such as a scanning electron microscope, can potentially reproduce tactile impressions most precisely, but a sensor equivalent to human skin is not commonly available and is too expensive (Tan 2006). The geometry-based approach involves intricate microscopic modelling of the object surface and requires considerable computation time to resolve the contact state between a finger and the surface. Stochastic and deterministic models, such as the sinusoid and random stochastic models, are simple to implement and can generate contact force samples that are perceptually different from each other, but the produced contact force does not map onto real textures. In the deterministic models, the variable parameters are amplitude, frequency, and texture coordinate; these controllable parameters are limited and cannot fully describe the characteristics of a texture, such as its trend and direction.

To model the texture force during contact with a texture, its geometrical shape and material properties must be captured. However, precise measurement of minute shape or bumpiness is not easy, since it requires special measurement apparatus. Ikei adopted the method of using a photograph to obtain the geometrical data of a texture, applying a histogram transformation to obtain the intensity distribution of the image.

Although the height profile of a surface is not directly the intensity of the tactile sensation perceived, it is among the data most closely related to the real stimulus. The object's material should belong to a category that produces an image reflecting its height map. Based on this hypothesis, a novel image data-based texture force model is proposed. We propose that a photographic image is equivalent to geometrical data as long as the photo is properly taken. The image data are processed with Gauss filters and the micro-geometric features are acquired. Based on the height map, the constraint forces in the tangential and normal directions are modelled. The texture force is then applied to the operator by the DELTA haptic device. The principle diagram of the haptic display system is shown in Figure 1. During exploration of the surface of a virtual object, the user perceives the force stimulus from the DELTA haptic device on the hand.

A virtual texture is presented by a 3-DOF texture force. To simulate the contact force as if touching a real textured surface, we propose representing the surface texture using a photograph. To ensure precise discrimination and high similarity to actual object textures, the image features were processed in the following steps.

Image sampling and pre-processing. The original images were taken with a digital camera. Each colour image was first transformed into a grey-scale image. To acquire the contour of the textured surface from 2D image data, the key requirement is that the image's brightness intensity roughly matches the height map of the texture protrusions. Homomorphic filtering was therefore adopted to eliminate the effect of non-uniform illumination.

In image processing, the Gauss filter is a common low-pass filter that, in the frequency domain, attenuates high frequencies and leaves low frequencies unchanged; the result is a smoothing of the edges in the image. For a texture image, the low-frequency components usually correspond to large continuous spatial regions, while the high-frequency components usually correspond to texture edges, which coincide with the alternation of geometrical height in space. Here we use a uniform Gauss filter to smooth the texture image and then subtract the filtered image from the original; the remaining ‘noise’ constitutes the texture model.
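A minimal sketch of this step, assuming the pre-processed grey-scale image is available as a NumPy array; the filter width sigma and the normalization are illustrative choices, not values reported by the authors.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def height_map_from_image(gray, sigma=5.0):
    """Estimate a texture height map from a grey-scale image.

    The image is low-pass filtered with a Gaussian kernel (removing the
    large-scale shading), and the filtered image is subtracted from the
    original.  The residual high-frequency component is taken as the
    micro-geometric height profile of the texture.
    """
    gray = gray.astype(np.float64)
    low_pass = gaussian_filter(gray, sigma=sigma)   # smooth, large-scale component
    residual = gray - low_pass                      # high-frequency "noise" = texture
    # Normalize to a convenient height range (illustrative scaling).
    residual /= max(np.abs(residual).max(), 1e-9)
    return residual
```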

In this model the texture force magnitude is proportional to the local height, k·d(x, y), where k is a constant of proportionality and d(x, y) is the height map of the texture at texture coordinate (x, y) within the surface. The direction of the force is normal to the reference surface that forms the contour of the whole object.

To constrain the avatar to the virtual surface, a maximum penetration depth H is specified. If the penetration depth exceeds H, the normal constraint force is set to a constant value that lies within the output range of the haptic device.
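The sketch below combines the proportional texture force of the previous paragraph with the penetration clamp described here; the gains k and stiffness, the depth H, and the saturation value F_max are hypothetical parameters chosen for illustration, not the paper's values.

```python
def normal_contact_force(height_map, ix, iy, penetration,
                         k=200.0, stiffness=1000.0, H=0.005, F_max=5.0):
    """Normal-direction contact force at pixel (ix, iy) of the height map.

    The texture component is proportional to the local height d(x, y),
    and a penalty-style term pushes the avatar back toward the surface.
    If the penetration exceeds the maximum depth H, the constraint force
    saturates at a constant value within the device's output range
    (the paper limits output forces to about 5 N).
    """
    d = height_map[iy, ix]               # local height from the image-based map
    f_texture = k * d                    # proportional texture force, k * d(x, y)
    f_constraint = stiffness * penetration
    if penetration > H:
        f_constraint = F_max             # saturate the constraint force
    return min(f_texture + f_constraint, F_max)
```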

Using the techniques discussed in this paper, the texture force display system shown in Figure 4 was built around a 3-DOF DELTA haptic device capable of producing the force stimuli. It was connected to a 2.66 GHz Pentium PC, with graphical display on an LCD screen. During exploration of the virtual textured surface, the user perceives the variation of the reactive force between the virtual probe and the virtual textured surface. The screen shows the virtual space in which a red dot, an avatar representing the fingertip of the physical hand on the right, interacts with a sculpture. The DELTA haptic device held by the operator reflects forces to the user whose sensations approximate the effect of exploring a real object with the same shape.

To evaluate the presentation quality of the haptic texture display system, two other commonly used texture force models were compared: a sinusoid model (Tan, 2006) and a random stochastic model (Siira, 2006). The sinusoid model defines the height map as a sinusoidal function of the texture coordinates.

In the random stochastic model, the texture force is driven by computer-generated white noise rand(), whose adjustable parameters are the mean μ and variance σ, derived from the statistical properties of the texture image sample. A higher variance produces a rougher texture. To ensure that no texture force is applied when the user is not moving, F is set to zero below a small velocity threshold.
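For reference, a sketch of the two baseline models as we understand them from the text; since the original equations are not reproduced in this excerpt, the sinusoid's amplitude/frequency form and the Gaussian shape of the noise are assumptions.

```python
import numpy as np

def sinusoid_height(x, y, amplitude=0.5, freq=200.0):
    """Sinusoid model: height map as a periodic function of texture coordinates
    (assumed separable sine form; the paper's exact expression is not shown)."""
    return amplitude * np.sin(2 * np.pi * freq * x) * np.sin(2 * np.pi * freq * y)

def stochastic_force(speed, mu, sigma, v_threshold=1e-3):
    """Random stochastic model: white-noise texture force with mean mu and
    standard deviation sigma taken from the texture image statistics.
    No force is produced below a small velocity threshold."""
    if speed < v_threshold:
        return 0.0
    return np.random.normal(mu, sigma)
```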

This experiment compared the effectiveness of the texture force models described above; texture forces generated by each model were presented to the subjects in a random sequence. While using the DELTA device to slide the proxy over the virtual textured surface, subjects felt the texture force and were asked to select which method felt the most realistic. The trials were repeated three times with different texture samples, and Table 1 shows the mean selection results.

The results indicate that the proposed model is superior to the others. One explanation may be that its height map originates from a real texture image, which contains more information, such as the orientation of the texture, than the other two models.

This experiment evaluated the proposed model's legibility. Subjects were asked to wear an eye mask so that the procedure was carried out without visual observation, and four different texture samples were rendered haptically to each subject. The four corresponding texture images were then shown, and the subject was asked to match the texture they saw to what they had felt previously. Because the human haptic memory span is limited, the number of samples a person can remember and identify in one continuous haptic perception trial is 3 to 6, so in this experiment each rendered group included only four samples. The four texture samples were metal, brick, wood, and brick (a, b, c, d in Fig. 5). Each subject performed the task under the same conditions three times. The correct match rates are given in Table 2.

This experiment also estimated the presentation quality of the haptic texture model. A haptic texture was rendered to the subjects with our method, and subjects were shown a group of texture images, among which the rendered one appeared, and asked to identify which image had just been rendered. Each group contained six samples.

The correct match rate in Experiment II is higher than in Experiment III. A likely reason is that, in Experiment II, the number of match pairs is limited and the subject's force perception can be compared against the other samples.

The experiments imply that our model is distinctly superior to the other two haptic texture models; the haptic texture rendering system produces noticeably different reflected forces for the users. One important reason is that the texture force model is derived from the height profile of the texture image using the Gauss filter. We must also concede, however, that what we feel is still not quite what we see. Due to the complex nature of the haptic rendering pipeline and the human somatosensory system, it remains difficult to expose all the factors contributing to such perceptual artefacts. The haptic interface research laboratory at Purdue University has investigated the unrealistic behaviour of haptic textures. In our system, one reason for the limited matching rate lies in the haptic device: because the haptic texture is delivered through the device, performance is closely tied to it. To generate the stimuli of small texture forces, the output force is limited to a range of 5 N. However, the DELTA device has a relatively large damping force compared with the magnitude of the texture force, which affects the actual output of the system. Moreover, a real stainless-steel surface has almost infinite stiffness and cannot be penetrated by the fingertip or a probe, whereas an impedance implementation of a virtual surface has limited stiffness due to the output limits of the haptic device. Another factor that may affect haptic perception in our system is the mapping between the pixel coordinates of the image and the world coordinates of the haptic device: since the resolution of the image is fixed, a larger movement workspace requires interpolation or approximation.

This work rapidly and effectively models the texture force and simulates the haptic stimuli experienced while the user touches the textured surface of a 3D object. The height map of the texture is acquired using a Gauss filter in the frequency domain, and the texture force and friction force are modelled from this height map. Because the height map is derived from image data, the processing is simple and no specialized 3D geometric scanning device is required. In the present system the textured objects are regular shapes, for example cuboids and columns. In the future, the haptic texture model could be improved by combining it with a random signal based on stochastic resonance (SR), and further psychophysical experiments could be carried out to tune the parameters. The system will also be applied to objects with irregular shapes for wider use.

Despite the importance of the appearance of human skin for theoretical and practical purposes, little is known about visual sensitivity to subtle skin-tone changes, and whether the human visual system is indeed optimized to discern skin-color changes that confer some evolutionary advantage. Here, we report discrimination thresholds in a three-dimensional chromatic-luminance color space for natural skin and skinlike textures, and compare these to thresholds for uniform stimuli of the same mean color. We find no evidence that discrimination performance is superior along evolutionarily relevant color directions. Instead, discriminability is primarily determined by the prevailing illumination, and discrimination ellipses are aligned with the daylight locus. More specifically, the area and orientation of discrimination ellipses are governed by the chromatic distance between the stimulus and the illumination. Since this is true for both uniform and textured stimuli, it is likely to be driven by adaptation to mean stimulus color. Natural skin texture itself does not confer any advantage for discrimination performance. Furthermore, we find that discrimination boundaries for skin, skinlike, and scrambled skin stimuli are consistently larger than those for uniform stimuli, suggesting a possible adaptation to higher order color statistics of skin. This is in line with findings by Hansen, Giesel, and Gegenfurtner (2008) for other natural stimuli (fruit and vegetables). Human observers are also more sensitive to skin-color changes under simulated daylight as opposed to fluorescent light. The reduced sensitivity is driven by a decline in sensitivity along the luminance axis, which is qualitatively consistent with predictions from a Von Kries adaptation model.

Skin color and texture are used by humans in processing and accomplishing a variety of tasks, such as face recognition (Bar-Haim, Saidel, & Yovel, 2009), judgments of health (Stephen, Law Smith, Stirrat, & Perrett, 2009), and evaluation of attractiveness (Fink et al., 2008; Fink, Grammer, & Thornhill, 2001; Stephen et al., 2009). Communication of skin color has also been proposed as an important factor driving the evolution of human color vision (Changizi, Zhang, & Shimojo, 2006). However, relatively little is known about the performance of human observers in telling apart subtle changes in skin color.

Classical definitions of color appearance and color-difference metrics (defined through discrimination thresholds) have relied on the use of uniform color stimuli. Many studies in the past have measured discrimination thresholds for uniformly colored light fields and color patches (MacAdam, 1942; Melgosa, Hita, Poza, Alman, & Berns, 1997; Poirson & Wandell, 1990; Poirson, Wandell, Varner, & Brainard, 1990). These measurements have been used to develop color-appearance spaces such as CIE 1976 UCS (International Commission on Illumination [CIE], 2004), CIELAB (CIE, 2004), and CIECAM02 (Moroney et al., 2002), some of which are also associated with color-difference metrics such as ΔELAB and ΔECAM02. The aim of these color spaces is to propose a description of color based on appearance, where equal distances traversed in the color space correspond to roughly equal perceived differences in appearance. Although these theories offer critical insights into the early mechanisms of human color vision, such as color opponency, they do not provide a convincing framework to study natural polychromatic stimuli. The response of the visual system to these stimuli is more complex, and relatively less understood. For instance, Webster and Mollon (1997) showed that the human visual system adapts to color distributions in natural scenes. In particular, when natural (or naturallike) stimuli are presented to observers, their color perception has been shown to be affected by factors such as the object's textural properties (Vurro, Ling, & Hurlbert, 2013) and the observer's memory of the object (Olkkonen, Hansen, & Gegenfurtner, 2008). Consequently, attempts to define and estimate discrimination surfaces for polychromatic stimuli have been relatively fewer and more recent. Montag and Berns (2000) compared luminance thresholds for textures and uniform patches and found the luminance thresholds for textures to be higher by a factor of 2. Hansen, Giesel, and Gegenfurtner (2008) and Giesel, Hansen, and Gegenfurtner (2009) estimated chromatic thresholds in an isoluminant plane for uniform patches, natural objects, and polychromatic textures with color distributions similar to natural stimuli.

In this article, we investigate how the human visual system responds to an ecologically important class of polychromatic natural stimuli: human skin. We do so by estimating discrimination thresholds for skin and skinlike patches not in an isoluminant chromaticity plane but in a more informative chromaticity-luminance color space. In the first experiment, we estimate thresholds for skin stimuli from two distinct ethnicities under simulated daylight and fluorescent lighting. We compare these thresholds to those obtained for uniform colors. We find that thresholds for skin are higher than those for uniform patches. The change in the magnitude of these thresholds with illumination is mediated by a luminance, and not a chromatic, mechanism. In the second experiment, we investigate how discrimination thresholds are affected by the color of the illuminant and the mean color of the stimulus. Our results suggest that the chromatic discrimination ellipses change size with their chromatic distance from the ambient illuminant. Taken together, our data indicate that the human visual system shows adaptation to the spatio-chromatic structure of skin. In addition, our results on the discrimination thresholds for skin stimuli lend themselves to a variety of applications such as the evaluation of skin prostheses and algorithms for automated dermatological examination.

This section gives methodological details common to all the experiments described in this article. Experiment-specific details are described in the corresponding sections to avoid confusion.

All experiments were carried out in a lightproof anechoic chamber fitted with an overhead luminaire (GLE-M5/32; GTI Graphic Technology Inc., Newburgh, NY). Two illumination modes from the overhead luminaire were used—metameric daylight and cool-white fluorescent light. In the first experiment, an additional dark condition was also used, wherein the overhead luminaire was switched off. The light reaching the screen in each luminaire mode was measured using a spectroradiometer (PR-650; Photo Research Inc., North Syracuse, NY) and a standard white reflective tile placed at the same position as the center of the stimulus (which was presented on a screen). The measured spectral power distributions of the two illuminants are shown in Figure 1a.

Experimental methods. (a) Spectral power distributions of the overhead illuminants: Simulated daylight (6100 K, blue line) and cool-white fluorescent lighting (3900 K, yellow line). They are labeled D65 and TL84, respectively, as they are approximately metameric with standard D65 (6500 K) and TL84/F11 (4000 K) illuminants. (b) Simulated skin patches used as reference stimuli in Experiment 1. Skin patches (Caucasian and Chinese) were simulated using the metameric daylight and cool-white fluorescent illuminants shown in (a). Note that the images in the illustration are not color accurate. (c) Generation of the test patch. The blue points represent the color of each pixel in the reference patch. The black arrows show the direction of the test vector which was added to the reference patch. The result is a displacement of the color of each pixel in the direction of the test vector, leading to a transformed image (test patch). (d) An example of the stimulus presented during the four-alternative forced-choice odd-one-out task. The participants were asked to identify the test patch (lower right in the illustration).

In all experiments, thresholds were estimated using a four-alternative forced-choice task. Four patches were simultaneously displayed on the screen, of which three were copies of a single reference patch, while one—the test patch—differed in color (Figure 1d). The observer's task was to indicate the odd one out by pressing the corresponding button on a response box. The test patch was generated by adding a test vector in 3-D CIELAB color space (CIE, 2004) to each pixel of the reference patch. The process is illustrated in Figure 1c.
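A sketch of how such a test patch could be produced, assuming the reference patch is an sRGB image in [0, 1] and using skimage's CIELAB conversions as a stand-in for the calibrated display pipeline described in the article (the conversion white point here is skimage's default, not the display white point quoted below).

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def make_test_patch(reference_rgb, test_vector_lab):
    """Shift every pixel of the reference patch by a fixed CIELAB test vector.

    reference_rgb   : H x W x 3 float array in [0, 1]
    test_vector_lab : length-3 array (delta L*, delta a*, delta b*)
    """
    lab = rgb2lab(reference_rgb)                     # per-pixel CIELAB coordinates
    lab_shifted = lab + np.asarray(test_vector_lab)  # rigid translation in color space
    return np.clip(lab2rgb(lab_shifted), 0.0, 1.0)   # back to display RGB
```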

The CIELAB space was chosen because of its wide acceptance as a uniform color space. This is achieved through nonlinear compression of opponency channels and a normalization to a reference white point. Intuitively, equal steps in CIELAB space correspond to (roughly) equal changes in the color appearance of the stimulus. The white point used for the normalization of this CIELAB space was fixed as the white point of the display used for the experiment, with CIE xyY coordinates (CIE, 2004) of [0.28 0.30 106.1445]T. The thresholds were estimated along 14 directions such that the CIELAB space was sampled evenly. Six of these coincided with the cardinal ±L*, ±a*, and ±b* directions, while the other eight directions were along the centroids of the eight octants. During the experiment, the length of the test vector in each direction was controlled by the QUEST adaptive algorithm (Watson & Pelli, 1983), leading to 14 interleaved staircases. Theoretically, the measured threshold corresponded to an 86% score on the psychometric function. In the best-case scenario, each staircase lasted approximately 40 trials, although if observers made frequent errors, some lasted for as many as 90 trials.
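A sketch of how the 14 measurement directions could be generated: the six cardinal CIELAB directions plus the unit vectors through the centroids of the eight octants. The ordering and implementation details are illustrative.

```python
import numpy as np
from itertools import product

def measurement_directions():
    """Return 14 unit vectors in CIELAB: +/-L*, +/-a*, +/-b*, plus the
    eight octant centroids (all +/-1 sign combinations, normalized)."""
    cardinal = [np.array(v, float) for v in
                [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1)]]
    octants = [np.array(signs, float) / np.sqrt(3.0)
               for signs in product((1, -1), repeat=3)]
    return cardinal + octants   # 6 + 8 = 14 directions
```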

The stimuli were presented on a color-calibrated monitor (ColorEdge CG243W; EIZO Corporation, Hakusan, Japan) using the ViSaGe graphics system (Cambridge Research Systems Ltd., Rochester, UK). The participants were seated 175 cm away from the screen. At this distance, the opposing edges of the individual 5 cm × 5 cm patches subtended an angle of ≈1.65° at the observer's retina (while the diagonals subtended an angle of ≈2.3°). In all cases except the dark condition, the screen was covered by a gray cardboard sheet with cutouts such that only the four patches remained visible. This occluded the self-luminous background, forcing the observer to further adapt to the ambient illumination. It also made the patches appear less like images presented on a self-luminous screen, akin to what is often described as an object mode of stimulus presentation (Tangkijviwat, Rattanakasamsuk, & Shinoda, 2010). We think this is a more ecologically valid method of presenting stimuli such as natural or known textures and surfaces on a computer screen.

In the dark condition, the gray cardboard was removed and the stimuli consisted of the four patches against a gray background of the same chromaticity as the simulated daylight from the luminaire (x = 0.32, y = 0.34) at 20 cd/m2. This chromaticity was chosen in order to avoid arbitrary adaptation to the textured self-luminous stimuli while also ensuring that the dark condition remained comparable to the luminaire-illuminated D65 condition.

The observer responses were collected using a mechanical-contact response box (RB-350, Cedrus Corporation, San Pedro, CA). The experiment was programmed in MATLAB (MathWorks, Natick, MA) using the CRS (Cambridge Research Systems) Toolbox. The ellipsoid fitting and data analysis were performed in MATLAB and R.

The study began with an initial briefing where the four-alternative forced-choice odd-one-out task was explained and participants were instructed on how to use the response box. During this briefing the experimenter also mentioned that the study was designed to measure the observers' ability to differentiate between small changes in skin appearance. Next, the participants were tested for color-normal vision using the Cambridge Colour Test (Regan, Reffin, & Mollon, 1994). Due to the nature of the study, only participants with normal color vision were allowed to continue.

To avoid observer fatigue, testing was carried out in blocks held on separate days. Only one illumination condition was tested per block, due to the prohibitively high stabilization period of the luminaire. Before testing, the corresponding light source was allowed to stabilize for at least half an hour. Each block was divided into several sessions, each corresponding to the measurement of a separate discrimination boundary. Since the discrimination boundary was estimated by measurements along 14 directions, each session consisted of 14 randomly interleaved staircases operating along different directions in color space.

The sessions began with a test run of 30–50 easy trials (ΔELAB ≥ 5) which were not part of the main experiment; the objective was to facilitate adaptation to the ambient illumination while at the same time making sure that the observers understood and remembered the task. After the test run, the observers remained in the lightproof chamber for another minute for further adaptation, before a long beep signaled the start of the main experiment. At this point, the observers pressed a button to start the presentation of the trials. Each trial consisted of on-screen display of the stimulus corresponding to a randomly chosen staircase, which timed out after a maximum of 5 s (the stimulus was displayed throughout the duration of the trial). If the observer failed to respond within 5 s in a given trial, the experiment advanced to the next randomly chosen staircase while the state of the original staircase (for which the observer did not register a response) was not changed. A response or time-out was signaled by a beep, after which the experiment moved on to the next trial.

The thresholds for each condition were estimated thrice. There were breaks of 5–10 min. between the sessions, during which the observers were allowed to exit the lightproof chamber.

Ethical approval was gained from the University of Liverpool Ethics Sub-Committee, and the study was performed in accordance with the ethical standards laid down in the Declaration of Helsinki. Participants were recruited from the student population of the University of Liverpool. All subjects were reimbursed for their time. Prior to participation, informed consent was gained from each subject.

CIELAB is a device-independent uniform color space, with respect to a given white point. It expresses color as three numerical values: L* for the lightness and a* and b* for the green–red and blue–yellow chromatic components. CIELAB includes a von Kries–type adaptation constant to account for appearance changes due to illumination changes. CIE 1976 UCS, on the other hand (whose axes are commonly labeled as u′ and v′), is a uniform chromaticity-scale diagram. It is a projective transformation of the CIE xy chromaticity diagram (CIE, 2004) designed to yield a more uniform perceptual color spacing. Its axes roughly denote the red–green and yellow–blue colors. Both spaces are attempts to improve perceptual uniformity of the standard tristimulus CIE XYZ space, but CIE 1976 UCS does not make any assumptions about the adaptational state of the visual system.

While CIELAB allows for sampling of the color-appearance space in a uniform manner, it also makes it difficult to compare absolute visual sensitivity across illumination conditions. Since we were interested in studying the effect of illumination changes, we analyzed our results in a color space composed of the CIE 1976 UCS and a scaled luminance axis. The luminance axis was scaled by 1/100 so that it followed the order of magnitude of the chromaticity values, akin to methods previously employed by Melgosa et al. (1997) for reporting suprathreshold ellipsoids for surface colors. We will refer to this space as the u′v′Y′ space, where Y′ is the scaled version of the CIE luminance coordinate Y.
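The u′v′Y′ coordinates follow directly from CIE XYZ via the standard CIE 1976 UCS projection; the only extra step is the 1/100 luminance scaling described above. A brief sketch:

```python
def xyz_to_upvpYp(X, Y, Z):
    """Convert CIE XYZ to the u'v'Y' space used for analysis.

    u' and v' are the CIE 1976 UCS chromaticity coordinates, and
    Y' is the luminance scaled by 1/100 so that it has the same
    order of magnitude as the chromaticity values.
    """
    denom = X + 15.0 * Y + 3.0 * Z
    u_prime = 4.0 * X / denom
    v_prime = 9.0 * Y / denom
    return u_prime, v_prime, Y / 100.0
```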

Thresholds for each participant in each condition were transformed to the u′v′Y′ space and averaged over the three repetitions. For each set of 14 average thresholds (along each of the 14 directions of measurement), an ellipsoid centered at the mean stimulus color was fitted by minimizing the total least-squared distance of the points from the ellipsoid surface. This resulted in one fitted ellipsoid per observer per condition. A detailed mathematical description of the ellipsoid fitting is provided in Appendix 1.

A set of meaningful parameters such as axis lengths and projections (see Figure 2 and Appendix 1) was extracted from the ellipsoids for analysis. One of the main parameters in our analysis was the volume of the ellipsoids, as it can be interpreted as a measure of the number of nondiscriminable stimuli given a fixed reference stimulus (the center of the ellipsoid). To further explore the discrimination boundaries, we divided the analysis into two parts: an analysis of the projections of these ellipsoids on the chromatic plane (theoretically, the envelope of chromatic discrimination ellipses across luminance) and an analysis of the luminance projections of the ellipsoids (both projections are illustrated in Figure 2b). This analysis of the discrimination ellipsoid in terms of luminance and chromaticity projections was driven by the independence in the chromaticity and luminance projections of discrimination ellipsoids reported by other researchers (Melgosa, Pérez, El Moraghi, & Hita, 1999) and later verified by results from the present study. It also breaks down the complicated 3-D discrimination boundaries into components that are relatively easier to interpret.
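A minimal sketch of how a centered ellipsoid can be fitted to the 14 averaged thresholds and the analysis parameters extracted. For simplicity it minimizes the algebraic residual of the quadric form rather than the total least-squares geometric distance described in Appendix 1, so it is only an approximation of the authors' procedure.

```python
import numpy as np

def fit_centered_ellipsoid(points, center):
    """Fit (x - c)^T Q (x - c) = 1 to threshold points around a reference color.

    points : N x 3 array of averaged thresholds in u'v'Y' space
    center : length-3 mean stimulus color (the ellipsoid center)
    Returns the symmetric matrix Q, the ellipsoid volume, and the semi-axis lengths.
    """
    d = np.asarray(points, float) - np.asarray(center, float)
    # Design matrix for the six unique entries of the symmetric matrix Q.
    A = np.column_stack([d[:, 0] ** 2, d[:, 1] ** 2, d[:, 2] ** 2,
                         2 * d[:, 0] * d[:, 1],
                         2 * d[:, 0] * d[:, 2],
                         2 * d[:, 1] * d[:, 2]])
    q, *_ = np.linalg.lstsq(A, np.ones(len(d)), rcond=None)
    Q = np.array([[q[0], q[3], q[4]],
                  [q[3], q[1], q[5]],
                  [q[4], q[5], q[2]]])
    eigvals = np.linalg.eigvalsh(Q)          # assumes a positive-definite fit
    semi_axes = 1.0 / np.sqrt(eigvals)       # semi-axis lengths of the ellipsoid
    volume = 4.0 / 3.0 * np.pi * semi_axes.prod()
    return Q, volume, semi_axes
```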

Ellipsoid parameters and projections. Ellipsoids were fitted to 14 thresholds in each condition for each observer. (a) Ellipsoid parameters. The semiaxis lengths and orientations were extracted from the fitted ellipsoids. (b) Ellipsoid projections. The fitted ellipsoids were projected on the chromaticity plane and the luminance axis for further analysis. Please note that this is only an illustration to demonstrate the projections; the actual ellipsoids showed a much closer alignment with the vertical luminance axis.

The analysis was carried out using standard toolboxes in R and MATLAB. Circular variables were analyzed using directional statistics (Fisher, 1953) through the circular package in R and the CircStat toolbox (Berens, 2009) in MATLAB. Both these packages use routines primarily based on the work of Jammalamadaka and Sengupta (2001).

The aim of the first experiment was to investigate how discrimination boundaries for skin differ from those for the corresponding mean uniform color, and to determine the influence of lighting condition and skin ethnicity on the nature of these changes. The discrimination boundaries were estimated under three different ambient lighting conditions (dark, daylight D65, and cool-white fluorescent TL84), using calibrated images of two skin types (Caucasian and Chinese). The choice of the two skin types was based on recent reports that ethnicity is highly correlated with the colorimetric yellowness of skin (Xiao et al., 2017).

Images of a Caucasian and a Chinese female face were captured under controlled D65 lighting in a Verivide DigiEye light booth using a calibrated Nikon D7000 camera. The images were calibrated for size by including a marker of known dimensions in the frame. Patches approximately 5 × 5 cm were cropped from the forehead regions of two selected images (one image per ethnicity). The cropping was done such that the patches looked uniformly lit, planar, and textured. Care was taken to minimize cues besides color and texture, such as obvious illumination gradients, shadows, furrows, wrinkles, blemishes, and facial and stray hair. These cropped patches were then used for the reconstruction of the reflectance spectrum at each pixel (Agahian, Amirshahi, & Amirshahi, 2008; Babaei, Amirshahi, & Agahian, 2011; Shen, Cai, Shao, & Xin, 2007; Xiao et al., 2016) using a silicone skin-color chart manufactured by Spectromatch. Compared to standard calibration techniques such as the MacBeth chart, this skin-specific calibration provides better accuracy within the specific region of skin gamuts.

Here, λ is the wavelength in the visible spectrum, L(λ) is the spectrum of the illuminant, r̂_pixel(λ) is the estimated reflectance spectrum calculated for each pixel, x̄_i(λ) is the ith CIE 1931 XYZ color-matching function, and X_i is the ith tristimulus coordinate corresponding to x̄_i(λ). In this experiment, values of L(λ) correspond to the spectral power distribution curves shown in Figure 1. The color gamuts of the patches simulated using both overhead illuminants are shown in Figure 3a, while the luminance and chromatic projections of these distributions are shown in Figure 3b. The first row shows plots of luminance (ordinate) against the u′ coordinate (abscissa), while the second row shows u′v′ chromaticity plots (v′ ordinate, u′ abscissa).
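A sketch of the per-pixel simulation implied by Equation 1, assuming the illuminant spectrum, reconstructed reflectance, and CIE 1931 color-matching functions are sampled on a common wavelength grid; the normalization constant is an assumption, since the article's exact scaling convention is not reproduced in this excerpt.

```python
import numpy as np

def simulate_pixel_xyz(illuminant, reflectance, cmfs, wavelengths):
    """Tristimulus values of one pixel under a given illuminant.

    illuminant  : L(lambda), spectral power distribution (length N)
    reflectance : estimated reflectance spectrum of the pixel (length N)
    cmfs        : N x 3 array of the CIE 1931 x-bar, y-bar, z-bar functions
    wavelengths : common wavelength grid in nm (length N)
    """
    radiance = illuminant * reflectance                 # light reflected by the pixel
    XYZ = np.trapz(radiance[:, None] * cmfs, wavelengths, axis=0)
    # Assumed normalization: scale so that a perfect white reflector has Y = 100.
    k = 100.0 / np.trapz(illuminant * cmfs[:, 1], wavelengths)
    return k * XYZ
```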

Color distribution of the stimuli. (a) The color distribution in the u′v′Y space. The skin patches were simulated (Equation 1) using D65 and TL84 illuminant spectral power distributions (Figure 1a) and pixel-wise reconstructed reflectance spectra. Displayed stimuli were always consistent with the ambient illumination in the test booth. (b) Luminance and chromatic spreads of the stimuli. Patches are shown column-wise (left column, Caucasian; right column, Chinese). The first row shows the luminance spreads (luminance along the ordinate, u′ along the abscissa), while the second row shows chromatic spreads (v′ ordinate, u′ abscissa).

A more detailed description of the color distribution of the two patches is shown in Table 1. The specific skin images used were prototypical images for both ethnicities with a mean color and luminance approximately in the center of the distribution for each ethnicity (Xiao et al., 2016). The gamut of the Chinese patch was found to have a higher luminance range in each illumination condition, and the gamuts for each ethnicity showed higher volumes and areas in D65 compared to TL84. A principal-components analysis of the chromatic projections of the gamuts further showed that the variance explained by the first principal component was reasonably high in all cases. An analysis of the orientation of this first principal component showed that while the color distribution of the Caucasian patch varied along the u′ axis in both illumination conditions, the Chinese patch showed variation along an inclined axis, with the inclination changing markedly with the illuminant.

The color distributions of the two skin patches (Caucasian and Chinese) described using five parameters. Notes: Mean = the mean color of the patches in u′v′Y space. Volume = calculated by fitting a convex hull to the distributions in u′v′Y′ space. Luminance range = calculated by using maximum and minimum luminance values in the distributions. Area = calculated by fitting a convex hull to the chromatic projections of the data on the u′v′ plane. Orientation of first PC (principal component) = calculated by performing a principal-components analysis on the chromatic projections and computing the angle made by the first principal component with the positive u′ axis.

All stimuli were generated by applying the procedure described under The task and stimulus generation. For the measurement of skin thresholds, the reference images were the skin patches described under Skin images: Acquisition and simulation. In addition, discrimination thresholds for two uniform color patches were also measured using the same procedure as the skin patches. These two uniform color patches corresponded to the mean CIELAB colors of the two skin patches (Caucasian and Chinese), respectively.

The experiment was conducted in two stages. In both stages, the protocol outlined earlier under Experimental protocol was used. The first stage measured thresholds for skin stimuli in 18 participants. In the second stage, eight of the 18 participants were recalled for the measurement of uniform skin-color discrimination thresholds.

Within a given illumination block (see the first Experimental protocol section), stimuli derived from the two ethnicities (Caucasian and Chinese) were tested alternately, three times each, leading to a total of six sub-blocks. On average, the observers responded to approximately 40 trials per staircase; and since there were 14 interleaved staircases, each sub-block consisted of at least 550 trials lasting from 20 to 25 min. A total of 252 thresholds (3 illuminants × 2 patch ethnicities × 3 repetitions × 14 measurement directions) were measured for each type of stimulus—skin images and uniform patches—amounting to about 7.5 hr of testing per participant per stimulus type. The participants were compensated for their time with a fee.

Ellipsoids fitted to mean thresholds. (a) Skin patches (N = 18). (b) Uniform colors (N = 8). The average thresholds across observers are marked with small spheres of the corresponding color, with the corresponding standard errors being marked as black lines through the spheres oriented along the direction of measurement.

This is also reflected in the plot of the ellipsoid volumes shown in Figure 5a. To further examine these discrimination volumes, they were projected on the luminance axis and the u′v′ chromaticity plane (for details, see Data analysis and Appendix 1). The length of the luminance projection (a line segment) and the area of the chromaticity projection (an ellipse) are shown in Figure 5b and 5c, respectively. Both luminance and chromatic thresholds are higher for skin stimuli than for uniform patches. Furthermore, the luminance projections for skin images are, on average, larger in TL84 than the other two illumination conditions.

Discrimination ellipsoid parameters with 95% confidence intervals (Cousineau, 2005; Morey, 2008). Only observers common to both conditions (N = 8) are considered. The parameters are derived by fitting ellipsoids to each observer's threshold data. The colors of the bars code the ambient illumination. (a) Ellipsoid volume. (b) Length of the luminance projection. (c) Area of the projected chromatic ellipse. (d) Orientation of the chromatic ellipse's major axis. A detailed derivation of these parameters is given in Appendix 1.

Figure 6 shows the mean chromatic projections (discrimination ellipses) on the u′v′ chromaticity plane. The chromatic ellipses for skin images (solid lines) are larger than those for the corresponding uniform patches (dashed lines). It is also interesting to note that while the area of these chromatic ellipses changes between the two ethnicities (being higher for the Chinese skin patch), there is little variation within the illumination conditions. Besides the area, the orientation of these ellipses (Figure 5d) also shows an interesting trend: The ellipses for the TL84 illumination condition differ markedly in their orientation from the dark and D65 conditions. These effects are also reflected in the individual observer data (Supplementary File S1).

Furthermore, the azimuths of the chromatic ellipses for skin stimuli (solid lines in Figure 6) show a systematic variation. This variation could be explained in two plausible ways. First, we observe that the azimuths for both the patches seem to be aligned with the daylight locus. This supports the theory that discrimination thresholds are minimally orthogonal to the caerulean line—the line representing natural illuminants (Danilova & Mollon, 2010); and observers tend to confuse colors that lie along the daylight locus more than the colors that lie orthogonal to it. A second explanation could be that the alignment of the ellipses is influenced by the color gamut of the respective skin patches. This is similar to the results obtained by Hansen et al. (2008), who found that isoluminant discrimination ellipses roughly follow the direction of maximum chromatic variation in natural stimuli (banana, orange, and lettuce). These two explanations are by no means exclusive, and could be reconciled by the very interesting possibility that color distributions of natural surfaces and textures under varied lighting conditions fall maximally along the daylight locus.

In the simulated daylight and fluorescent conditions, the reference images represent ecologically valid simulations where the ambient illumination is consistent with the simulated appearance of the skin patch. The dark condition, on the other hand, does not represent an ambient illumination and is always inconsistent with the rendered skin patch (which is simulated using the D65 illuminant). The patches in this condition could easily be made out by the observers to be self-luminous images displayed on a screen. Even so, an interesting observation can be made if one compares the simulated daylight and the dark conditions. Although the two conditions use stimuli simulated using the same illuminant (luminaire D65), they have different viewing parameters in terms of the display mode (object mode in D65 vs. self-luminous surface patch in dark), the surround (gray cardboard reflecting ambient lighting in D65 vs. self-luminous gray screen in the dark condition), and the luminance of the illumination (≈51 cd/m2 simulated daylight from an overhead luminaire in D65 vs. ≈20 cd/m2 simulated daylight from the surround in the dark condition). Bearing this in mind, we observe that the chromatic projections of the discrimination ellipsoids under these two conditions display remarkably similar orientations (Figure 5d), whereas the discrimination ellipsoids themselves differ in overall volume (Figure 5a). This could suggest that while the chromatic mechanisms which respond to the skin stimulus depend on the spectrum of the foveal stimulus (which is the same in both dark and D65 conditions), the relative activations of these mechanisms are influenced by the adaptation conditions (which differ markedly between the two conditions).

Figure 7 shows the ratio of ellipsoid parameters for skin images and the corresponding uniform patches. Skin images are harder to discriminate, with ellipsoid volumes about 2–3 times larger than those for uniform patches (Figure 7a). This difference is found along both chromatic (Figure 7b) and luminance (Figure 7c) dimensions, though the ratios are higher for chromatic projections. A similar increase in chromatic thresholds was reported by Hansen et al. (2008) for natural textures, and for synthetic textures with color distributions similar to natural textures. Montag and Berns (2000) also reported similar effects in luminance thresholds. A possible explanation of these results could lie in the proposition by Webster and Mollon (1997) that polychromatic natural stimuli entail not only adaptation to the mean luminance of the scene but also a contrast adaptation to the color distribution within the scene. They reasoned that although light adaptation could adjust for changes in mean color, it cannot compensate for changes in the statistics of the color distributions. They further proposed that contrast-adaptation mechanisms might operate by whitening the stimulus color distribution based on changes in postreceptoral channel tunings, with new tunings emerging due to inhibition between channels which produce the most correlated responses (Atick, Li, & Redlich, 1993; Barlow & Földiák, 1989; Webster & Mollon, 1997). Considering that in the current study the observer could view the entire scene (the interior of the testing booth), one cannot ignore contrast adaptation regardless of whether the tested stimuli were uniform or textured. Even so, it is likely that the amount of possible contrast adaptation in case of uniform stimuli was lower than that for the simulated skin patches (since there is no contrast within a uniform foveal stimulus). Thus, the observers were comparatively less adapted, and hence less capable of constancy or discounting the illuminant in the uniform color condition, which in turn would predict better discrimination performance or lower thresholds compared to skin stimuli.

Ratio of ellipsoid parameters measured for skin and uniform patches. (a) Ellipsoid volume. (b) Area of chromatic-projection ellipses. (c) Luminance-projection length. The ratios for all three parameters are greater than unity, indicating that skin images have larger discrimination ellipsoids compared to uniform patches of the corresponding mean colors. This increase in size is observed for both luminance and chromaticity projections.

So far, we have reported the discrimination thresholds in terms of the parameters of fitted discrimination boundaries. In industrial processes such as 3-D printing of skin prostheses, measurements are often made along the axes of the color space. Thus, to better use our data in practical applications, it is important to analyze the projections of discrimination boundaries onto the u′ and v′ axes.

Chromatic discrimination ellipses for skin stimuli (see Appendix 2) show that the just-noticeable differences along the two axes are, in general, not equal. While the distortion is very high in the artificial fluorescent (TL84) illuminant, simulated daylight illumination (D65) produces roughly similar thresholds along the two axes. In D65, the thresholds range from 0.005 to 0.01 along either axis (for comparison, uniform color just-noticeable differences in u′v′ are around 0.005), and the v′ thresholds are about 0.7 times the u′ thresholds. Thus, to a first approximation under a daylight illuminant, the commonly used u′v′ space can indeed be quite useful to predict whether two skin patches will look the same.

In Experiment 1 we showed that discrimination thresholds for skin stimuli are higher than those for uniform patches, and that both sets of stimuli are affected by the illumination condition. But what are the properties of the stimulus and the ambient illumination which drive these thresholds? This was investigated in Experiment 2; the specific question addressed here was to what extent the illuminant and the mean location of the stimulus in color space affect the discrimination thresholds. In Experiment 1, the simulated skin patches were ecologically valid (i.e., the appearance of each stimulus was simulated such that they were consistent with the ambient illumination). For Experiment 2, the color distributions of these ecologically valid stimuli from Experiment 1 were translated such that the mean colors in the two illumination conditions were swapped (Figure 8), while their relative distributions remained intact. Our hypothesis was that if the thresholds are simply driven by the location of the textures in color space, swapping of the means should also swap the discrimination thresholds of the stimuli in the two illumination conditions.

Stimuli used in Experiment 2. (a) Generation of polychromatic reference stimuli for Experiment 2, shown in u′v′Y color space. Only the Caucasian patch was used in this experiment. The mean colors of the Caucasian patch simulated under D65 and TL84 are labeled “D” and “T,” respectively. In Experiment 2, the reference stimulus to be tested in D65 was generated by translating the distribution under D65 such that its mean was shifted to T. Similarly, the reference stimulus tested under TL84 was generated by translating the TL84 skin distribution such that its mean was shifted to D. This essentially swapped the means of the two distributions while maintaining the relative positions of the colors (the relative distribution). Thus, in Experiment 2, T is the mean color of the reference stimulus tested under D65, while D is the mean color of the stimulus tested under TL84. (b) Luminance and chromatic spreads of the simulated skin patches (left column) and the stimuli generated by swapping their relative distributions in CIELAB space (right column). In each case, the color of the points represents the illumination used for testing the corresponding patch (light and dark blue: D65; yellow and brown: TL84). The first row shows the luminance spreads (luminance along the ordinate, u′ along abscissa) while the second row shows chromatic spreads (v′ ordinate, u′ abscissa).

In Experiment 1, the reference stimuli were color-accurate renderings of skin patches such that their appearance was consistent with the ambient illumination (D65 or TL84). In Experiment 2, the reference stimuli were obtained by translating the color distribution of simulated skin under one illuminant (say D65) such that its mean moved to the mean color of simulated skin under the other illuminant (in this case, TL84). Note that this manipulation, while swapping the means of the stimuli under the two illuminants, maintains their original relative color distributions (Figure 8a). Since the swap involved colors measured under different illuminants, it was carried out in the CIELAB space, which has some degree of inbuilt adaptation. To reduce the testing time per participant, only stimuli based on the original Caucasian patch were tested, and the ecologically inconsistent dark condition was dropped. Thresholds for uniform patches derived from the mean CIELAB colors of the stimuli were also measured.

The experiment followed the same protocol as Experiment 1, except that only a subset of the observers from Experiment 1 (N = 6) was recruited. In total, 168 thresholds (2 illuminants × 2 stimulus types [uniform and textured] × 3 repetitions × 14 measurement directions) were measured, amounting to about 6 hr of testing per participant.
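The threshold count follows directly from the factorial design; a purely illustrative enumeration (Python) is shown below.

```python
from itertools import product

# Factorial design of Experiment 2, as described in the text.
illuminants    = ["D65", "TL84"]
stimulus_types = ["uniform", "textured"]
repetitions    = range(3)
directions     = range(14)   # measurement directions in color space

conditions = list(product(illuminants, stimulus_types, repetitions, directions))
print(len(conditions))       # 2 * 2 * 3 * 14 = 168 thresholds
```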

In Figure 9 we show the results from Experiment 2, along with a subset of the results from Experiment 1 for comparison (only participants common to all tested conditions are shown). The ellipsoid volumes for textured (“Image”) and uniform stimuli in Experiment 2 (“Swapped means,” right subpanel, Figure 9a) show the same trend as in Experiment 1 (left subpanel, Figure 9a), with the volumes for textured stimuli being larger than those for uniform stimuli. In Experiment 1 this difference was distributed over both the luminance and chromatic thresholds (left subpanels of Figures 9b and 9c, respectively). This was not the case in Experiment 2: while thresholds along the luminance axis are similar for textured and uniform patches (right subpanel, Figure 9b), the areas of the chromatic projections are markedly higher under D65 than under TL84 (right subpanel, Figure 9c), suggesting a differential effect of illumination on chromatic-discrimination performance.

Figure 9. Parameters of discrimination boundaries from Experiments 1 and 2. (a) Ellipsoid volumes. (b) Luminance projections of the discrimination ellipsoids. (c) Areas of the chromatic-discrimination ellipses. (d) Orientations of the major axes of the chromatic-discrimination ellipses with respect to the positive u′ axis. (e) Average chromatic ellipses for polychromatic stimuli from Experiments 1 and 2. (f) Average chromatic ellipses for uniform stimuli from Experiments 1 and 2. The color of the bars indicates the ambient illumination (yellow: TL84; blue: D65). Labels T and D refer to the mean colors (in u′v′Y color space) of the Caucasian skin patch simulated under the TL84 and D65 illuminants (see Figure 8a). In (a–d), Experiment 1 is shown in the left subpanel (“Skin”) and Experiment 2 in the right subpanel (“Swapped means”). The solid ellipses in (e–f) show the results for Experiment 1, while dashed ellipses show the results for Experiment 2. Only observers common to both experiments are shown (N = 6).

The mean chromatic ellipses are shown in Figure 9e (polychromatic stimuli) and 9f (uniform stimuli). The symbols and conventions used in these plots are the same as in Figure 6, except that the solid ellipses now denote data from Experiment 1 and the dashed ellipses data from Experiment 2. Furthermore, the orientation of the major axis across observers is shown in Figure 9d. As in Experiment 1, we observe a strong effect of the ambient illumination on the orientation of the ellipses, with the TL84 ellipses being closer in orientation to the u′ axis than the D65 ellipses for both polychromatic and uniform stimuli.
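As a point of reference for the quantities reported in Figures 9c and 9d, the sketch below computes the area and the major-axis orientation (relative to the positive u′ axis) of a chromatic ellipse, assuming it is parameterized by a 2 × 2 symmetric positive-definite matrix Q whose threshold contour satisfies x⊤Qx = 1. This parameterization is an assumption on our part; the fitting procedure actually used in the study is not reproduced here.

```python
import numpy as np

def ellipse_area_and_orientation(Q):
    """Area and major-axis orientation (degrees from the positive u' axis) of
    the ellipse {x : x.T @ Q @ x = 1}, with Q a 2x2 symmetric positive-definite
    matrix in (u', v') coordinates (assumed parameterization).
    """
    evals, evecs = np.linalg.eigh(Q)           # eigenvalues in ascending order
    # Semi-axis lengths are 1/sqrt(eigenvalue), so the smallest eigenvalue
    # corresponds to the major axis; the area is pi / sqrt(det Q).
    area = np.pi / np.sqrt(np.prod(evals))
    major = evecs[:, 0]                        # eigenvector of smallest eigenvalue
    angle = np.degrees(np.arctan2(major[1], major[0])) % 180.0
    return area, angle
```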

The change in discrimination-ellipsoid volume from Experiment 1 to Experiment 2 is not consistent with respect to the illumination or the mean color of the stimulus (Figure 9a). However, resolving the discrimination volume into chromatic and luminance projections yields more consistent trends. The luminance projections of the discrimination ellipsoids are larger under the fluorescent TL84 illuminant, irrespective of the location in color space (Figure 9b). This effect of the ambient illumination (also observed in Experiment 1) is qualitatively consistent with a simple von Kries model (Chauhan et al., 2014), which, given the illuminants used in the experiment (Figure 1a), predicts higher luminance thresholds under the TL84 illuminant than under simulated daylight.
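A minimal sketch of the von Kries argument is given below. The LMS white-point values are placeholders rather than the study's calibration data; the sketch only illustrates why, under von Kries scaling, predicted thresholds grow with the cone excitations of the adapting white, and does not reproduce the model of Chauhan et al. (2014).

```python
import numpy as np

# Placeholder LMS excitations of the two adapting white points; these are NOT
# the calibrated illuminant data used in the study.
white_d65  = np.array([0.95, 1.00, 1.05])
white_tl84 = np.array([1.10, 1.12, 0.80])

def von_kries_adapt(lms, white):
    """Von Kries adaptation: each cone signal is divided by the corresponding
    cone excitation of the adapting white."""
    return np.asarray(lms) / np.asarray(white)

# A physical cone increment delta yields an adapted increment delta / white,
# so reaching a fixed criterion c in adapted units requires a physical
# increment of c * white: predicted thresholds scale with the adapting
# excitation. With the calibrated illuminants used in the study, this scaling
# predicts higher luminance thresholds under TL84 than under D65 (see text).
criterion = 0.01
print(criterion * white_d65)    # predicted per-cone thresholds under D65
print(criterion * white_tl84)   # predicted per-cone thresholds under TL84
```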

The area of the chromatic projections, on the other hand, appears to be modulated by the distance of the reference stimulus from the chromaticity of the ambient illumination. Figure 10a shows the area of the chromatic ellipses from both experiments as a function of the distance between the reference stimulus and the illuminant chromaticity. The observers are coded by color, while the shape of the marker codes the experiment (circles: Experiment 1; triangles: Experiment 2). In addition, vertical lines indicate the lengths of the principal axes of the stimulus color distribution, taken here as an approximate measure of the spread of the distribution (coded by the illuminant color: blue for D65 and yellow for TL84). Although the sampling is insufficient to draw a strong conclusion, we observe that the area of the ellipse tends to increase with the chromatic distance. This effect has also been reported by Giesel et al. (2009) for areas of discrimination ellipses measured along an isoluminant plane using natural stimuli and textures. Taken together, the luminance and chromatic effects (luminance thresholds are governed by the ambient illumination, while chromatic thresholds depend on the chromatic distance of the stimulus from the illuminant) do explain the trend observed in discrimination volumes across both experiments so far. Furthermore, both effects are observed for textured and uniform stimuli alike, suggesting that they reflect global mechanisms, likely driven by the mean adaptation to the illumination condition.
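The two quantities plotted in Figure 10a can be made concrete with the short sketch below (Python/NumPy): the Euclidean u′v′ distance between the mean reference chromaticity and the illuminant chromaticity, and the extent of the stimulus chromaticity distribution along its principal axes. The inputs are illustrative, and the exact computation used by the authors may differ.

```python
import numpy as np

def chromatic_distance(ref_uv, illum_uv):
    """Euclidean distance in the u'v' chromaticity plane between the mean
    reference chromaticity and the illuminant chromaticity."""
    return np.linalg.norm(np.asarray(ref_uv) - np.asarray(illum_uv))

def principal_axis_extents(uv_points):
    """Extent of a chromatic distribution along its principal axes.

    uv_points : (N, 2) array of u'v' chromaticities of the stimulus pixels
    (hypothetical input). Returns the range of the projections onto each
    principal component, ordered from largest to smallest variance.
    """
    centered = uv_points - uv_points.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]            # largest variance first
    proj = centered @ evecs[:, order]          # project onto principal axes
    return proj.max(axis=0) - proj.min(axis=0)
```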

Figure 10. (a) Area of chromatic ellipses as a function of distance (in the u′v′ chromaticity plane) from the illuminant. The data for each observer are shown in a different color. For reference, we also use dashed vertical lines to show the extent of the skin gamut along the direction of the first principal component (D65: blue; TL84: yellow). (b) Ratio of ell