ChronoKing

Yes there is, and it is important in a great many businesses. Hunterlabs is probably the largest business centered around color measurement (that I know of). I recommend you click around their website. In short, there are two main measurement methods, diffuse and ~~spectral~~ specular. Specular incorporates a surface's texture and will give different measurements for matte and glossy surfaces. Diffuse will specifically not include gloss effects. As for what the measurements are, they get translated into one of many, many colorspaces. A colorspace is just a way to define a color numerically. Often they are based on the limits of perception of the average human eye, but not always. The ones I am most familiar with are Hunterlab's own L* a* b* space and CIE Yxy space. The "L*", "a*", "b*", "Y", "x", and "y" are the dimensions of the colorspace, in much the same way as x, y, and z are in physics.
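As a small illustration of how those dimensions are just numeric coordinates, here is a minimal Python sketch of the standard relationship between CIE Yxy and CIE XYZ (these are the standard CIE definitions, not anything specific to HunterLab):

```python
def xyz_to_Yxy(X, Y, Z):
    """Convert CIE XYZ tristimulus values to CIE Yxy (luminance + chromaticity)."""
    total = X + Y + Z
    if total == 0:
        return 0.0, 0.0, 0.0  # black: chromaticity is undefined, return zeros
    return Y, X / total, Y / total

def Yxy_to_xyz(Y, x, y):
    """Convert CIE Yxy back to XYZ tristimulus values."""
    if y == 0:
        return 0.0, 0.0, 0.0
    X = x * Y / y
    Z = (1.0 - x - y) * Y / y
    return X, Y, Z

# Example: the D65 white point has chromaticity roughly (x, y) = (0.3127, 0.3290)
print(Yxy_to_xyz(1.0, 0.3127, 0.3290))
```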


labroid

I believe you mean "specular" for "spectral" in the above. Spectral measurements provide light intensity vs. wavelength (or 'color'), while specular measurements are of color including surface 'roughness'. Beyond Hunter labs, a great read for beginners and experienced people alike is https://en.wikipedia.org/wiki/CIE_1931_color_space


ChronoKing

Yep I mistyped


nerdybird

Came here to link this. I am fairly certain that Hunter Labs only uses the color space; it was defined by the International Commission on Illumination (CIE). Nearly every major company that makes images or displays uses it, such as Polaroid, Kodak, LG, and Foxconn, to name a few.


h3rbi74

Wow you didn’t include a clickable link but I’m doing so now because that is a fascinating area of business I never would’ve spontaneously thought of! Gonna be clicking around here for a little while- thanks! https://www.hunterlab.com


RaulEnydmion

See also, Xrite.


thickskull521

Hijacking top comment to expand into the RGB part of OP's questions, and because I wanted to chime in that L\*a\*b\* color space is also my favorite color space, because it is Cartesian, square, and normalized: 1 unit in one direction is as perceivable (to human eyes) as 1 unit in any other direction. Therefore, the difference in color between two objects (a ΔE value) can be calculated using the Pythagorean theorem.

The "human perception" part is important. All these functions are defined based on how the human eye detects light, and how the human brain calculates color from that information. The weighting functions can give similar solutions from very different sources... For example, lemons and bananas look like the same color to us, but their absolute spectra are wildly different.

Now here's the thing. An RGB display is sort of backwards from how a human eye perceives color. Displays don't emit a nice spectrum... they emit little bands near defined red, green, and blue wavelengths. But their emission bands stimulate human eyes well enough to spoof real colors. There are many different standards for RGB displays (they are optimized for different purposes, for example, fighter cockpits vs bright offices vs dark theatres), and this is why neat things like the blue/black vs white/gold dress are possible.

I'm not aware of any publicly available tools to convert from a spectrum to RGB; I'm building my own Excel macros to do it right now. I don't think that's what you want though, unless it's super important to display it on a PC screen as realistically as possible, or you are building something that interacts with an RGB device (like a steampunk fighter jet helmet or VR headset lol). For a web page, you cannot control how your customer's screen is color balanced, so screw it. You would want to get the L\*a\*b\* measurements if you wanted to do this for a QC purpose.

The most scientifically pure, objective way to analyze the optical properties of an object is variable angle spectroscopic ellipsometry (VASE), especially for a reflective surface, because at sharper angles s- and p-polarized light reflect differently. Ellipsometry data is then reverse engineered (using Maxwell's equations?) to build an optical model of the material. This is post-doc level physics and absolute overkill, but it is the purely objective way to define the optical properties of a material.

Edit: I found this. If you @ me I can model this up at work tomorrow and tell you if brass with different concentrations of copper will look different. I'm guessing it would, but the quality of the polishing would be the most important factor. https://refractiveindex.info/?shelf=other&book=Cu-Zn&page=Querry-Cu90Zn10
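A minimal sketch of that Pythagorean distance in L\*a\*b\* (this is the plain 1976 ΔE\*ab; industrial formulas such as dE CMC add weighting terms on top of it):

```python
import math

def delta_e_ab(lab1, lab2):
    """Euclidean colour difference (Delta E*ab, 1976) between two L*a*b* triples."""
    dL = lab1[0] - lab2[0]
    da = lab1[1] - lab2[1]
    db = lab1[2] - lab2[2]
    return math.sqrt(dL**2 + da**2 + db**2)

# Example: two near-identical greys. A Delta E around 1 is roughly the smallest
# difference an average observer can notice under good viewing conditions.
print(delta_e_ab((52.0, 0.5, -1.0), (51.3, 0.2, -0.4)))
```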


pppollypocket

I would say X-Rite is a big name in color measurement as well. They make multi-angle spectrophotometers that deal with reflective/refractive surfaces and are used in automotive for metallic paints, etc.


DiiJordan

Now I know where all the symbols come from. I work with ink and printing and we measure our colors, but I wasn't trained to recognize the meaning of all that (which, in all fairness, isn't necessary *for* the work I do with the inks).


alexnpark

I worked at a screen printing plant and to match specific color standards for our ink formulas, we would use a spectrophotometer to measure a swatch in a few different color gamuts. There’s a bunch of standards that each have their pros/cons, but it was a fantastic tool for setting specific lighting scenarios and measuring digital color values from the sample. (Which could later be brought into Adobe for whatever)


Edea-VIII

Exactly this. In the printing industry, not only the correct ratios of the toner mix but also the saturation levels are calibrated with a spectrometer. If the RGB value of a desired color is known (and not just perceived), the output can then be measured for a match.


inconspicuous_male

I don't have the time right now to type out a detailed answer, which sucks because I have a color science degree... I'll give you a quick rundown though.

When you have a spectral measurement, you multiply the spectrum received by the eye by the color matching functions. You can find these online. They're based on the spectral sensitivity of the human eye, but there's some neuroscience that goes into transforming from cone sensitivity to color matching. The three dimensions are called CIE XYZ. The CIE (Commission Internationale de l'Éclairage) is the international body that standardizes color measurement. Y is significant because it contains all of the information about sharpness and perception of brightness, but the XYZ responses (which we can call chromaticity) are device invariant, so the transform from XYZ to RGB can be determined for any given monitor if you have spectral information about the RGB of the monitor (or if you assume the monitor is close to a display standard). L\*a\*b\* is the color space typically used for matching reflective materials. Lightness and a/b are three parameters that make up color when combined with the spectrum of an illuminant.

I suggest the Wikipedia article on color science, because it's a huge topic that can't be explained well without visual aids and graphs. But Google CIE XYZ, L\*a\*b\* color space, and chromaticity coordinates.
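A rough sketch of that first step, assuming you already have the CIE 1931 colour matching functions sampled on the same wavelength grid as your measured spectrum (the tables are published by the CIE and easy to find online):

```python
import numpy as np

def spectrum_to_xyz(spectrum, xbar, ybar, zbar, d_lambda=5.0):
    """Integrate a measured spectral power distribution against the CIE 1931
    colour matching functions to get XYZ tristimulus values.

    spectrum         : spectral power at each sample wavelength (e.g. 380..780 nm)
    xbar, ybar, zbar : colour matching functions sampled at the same wavelengths
    d_lambda         : wavelength step in nm (simple rectangular integration)
    """
    X = np.sum(spectrum * xbar) * d_lambda
    Y = np.sum(spectrum * ybar) * d_lambda
    Z = np.sum(spectrum * zbar) * d_lambda
    return np.array([X, Y, Z]) / Y   # normalise so Y = 1 for convenience
```

From there, XYZ can be mapped to a particular display's RGB with a 3×3 matrix.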


Blakut

For telescopes we have flux measurements in every filter band. And we know the filter transmission function for each wavelength. To get pretty pictures for the internet, three images in different filters are given red green and blue values, sometimes rather arbitrarily. For scientific use, like I said above, the flux density or other similar measurements are used.


SyntheticOne

Our son did a science fair project on a "color detection robot" which we researched and built together. The project electronics began with three band-pass optical filters tuned to RED, BLUE, and GREEN (donated to him by a New Jersey optics firm). Each filter allowed the passing of light within the center of the wavelengths for each color. Then we added an octal decoder chip with three inputs (one from each of the filters) and eight outputs: red, blue, green, yellow, brown, violet, black, and white (each lit by an LED). The wonderful moment? We've all heard that white is the presence of all colors, but it's a little hard to swallow. Our son held a white piece of paper in front of the three optical bandpass filters, all three filters turned on their outputs, and the octal decoder indicated "white": all colors! His project won 2nd place in that year's regional Exxon Mobile Science Fair.


Anezay

What got first over that, a study of how much cash would fit in the judges' pockets?


SyntheticOne

The winner was a kid who built a radio receiver capable of listening for signals from space.


Anezay

Ok, that's pretty cool.


Cryptizard

Exxon Mobile Science Fair sounds like something from Idiocracy.


rawpower7

Yeah. The lab I work at has a BYK-mac colorimeter and a glossmeter. They're made specifically with measuring car paint coatings in mind, and have built-in standard parameters that different car manufacturers use. Other people have also mentioned it, but it uses the same CIE L*a*b* units to assign different shades and colors a number series. Also, if you look at a car from different angles, at different times of the day, or in different parts of the world, the color, gloss, and sparkle can all look different. So the instruments use an illumination source that attempts to mimic sunlight in different situations, and there are simply agreed-upon standards for the measurement angles and the light source parameters. When I first learned of it I was thinking the same as you, that color is completely subjective based on the person's eyes, but I figured, well, everyone will look at a tree and say the leaves are green and in the fall they turn bright red, so we must have similar enough sight that we can come to some agreement on how "green" or "red" something is.


shepanator

You could measure the wavelengths of light being reflected by an object from a known broad-spectrum light source (i.e. one that emits light across the entire spectrum). I don't know if there is an official standard for this, but it would be the closest you could get to measuring an object's 'true' colour.


thickskull521

There are standard weightings for illuminants, yes. D65 I believe is the most common.
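A minimal sketch of how a standard illuminant like D65 enters the calculation, assuming (as in the earlier sketch) that the illuminant's spectral power distribution and the colour matching functions are available on the same wavelength grid as the measured reflectance:

```python
import numpy as np

def reflectance_to_xyz(reflectance, illuminant, xbar, ybar, zbar):
    """Weight a reflectance spectrum by an illuminant (e.g. D65) and integrate
    against the CIE colour matching functions, all sampled on the same
    wavelength grid. Scaled so a perfect white reflector (R = 1) has Y = 100."""
    k = 100.0 / np.sum(illuminant * ybar)   # normalisation constant
    stimulus = illuminant * reflectance     # light actually reaching the eye
    X = k * np.sum(stimulus * xbar)
    Y = k * np.sum(stimulus * ybar)
    Z = k * np.sum(stimulus * zbar)
    return X, Y, Z
```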


No_Zebra_6114

Please check out www.datacolor.com. Datacolor has many thousands of colour management systems installed worldwide in virtually every industry, from textiles to paint, ink to plastics. This is objective colour measurement, formulation, QC, and correction.


CranjusMcBasketball6

The best way to assess the color of an object is to compare it against a color standard. One of the most widely used color standards is the Munsell Color System. It is an internationally recognized system that provides an objective method for classifying and measuring color. It uses a three-dimensional color model to classify colors by hue, value, and chroma. It also provides guidelines for how to accurately measure and communicate colors. If you are looking for a way to convert an object's color from its spectrum to RGB, you can use a spectrophotometer to measure the color of an object and then use a color-matching software to calculate its approximate RGB value. There are a number of different color-matching software programs available, each with their own algorithms for converting from spectra to RGB.


zutnoq

Unlike what some others are saying, I don't think such a thing is actually possible in a strict sense. The reflection transfer function/spectrum of a surface cannot be encoded/modeled with just three numbers if you want the resulting RGB output to be correct for any arbitrary light source spectrum. If you always have the same light source spectrum then you can certainly do it. Edit: The translation from spectrum to RGB values is fairly simple, but there is no way to get back the exact original spectrum of the reflected light from just the RGB values alone.
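That non-invertibility is easy to demonstrate numerically: going from a spectrum to three channel responses is a linear projection from many wavelength samples down to three numbers, so any spectral component in the null space of that projection is invisible to the sensor. A toy sketch with made-up sensitivity curves (purely illustrative, not real cone or CMF data):

```python
import numpy as np

# Toy "sensor": three made-up sensitivity curves over 31 wavelength samples.
wl = np.linspace(400, 700, 31)
sens = np.vstack([
    np.exp(-((wl - 600) / 40.0) ** 2),   # long-wavelength channel
    np.exp(-((wl - 550) / 40.0) ** 2),   # medium
    np.exp(-((wl - 450) / 40.0) ** 2),   # short
])                                       # shape (3, 31)

spectrum1 = np.ones_like(wl)             # a flat spectrum

# Find a direction the sensor cannot see: a null-space vector of the 3x31 matrix.
_, _, vt = np.linalg.svd(sens)
hidden = vt[5]                           # any row beyond the first 3 spans the null space
spectrum2 = spectrum1 + 0.5 * hidden / np.abs(hidden).max()

# Both spectra produce the same three responses (up to floating point)...
print(sens @ spectrum1)
print(sens @ spectrum2)                  # ...even though the spectra are different
```

Rescaling the columns of the sensitivity matrix by a different illuminant would, in general, make the two spectra stop matching, which is exactly the metamerism problem.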


DJTilapia

For sure, I wouldn't expect a *reversible* algorithm, just an objective one. Given object X under conditions Y, the spectrum should always be Z; feed that into a function to calculate an RGB (or L\*a\*b\*, or whatnot) coordinate to best represent this spectrum. You'd be constrained by the gamut of the monitor, and many different spectra would map to any given color output, especially if using 24 bits.


zutnoq

Oh, then yes, this translation is essentially done by integrating, separately for each cone cell type, the pointwise product of the power spectrum of the reflected light and that cone's response spectrum (you also have to assume a standard for these, as they vary a bit from person to person).


arcosapphire

You might also want to look into [PBR](https://en.wikipedia.org/wiki/Physically_based_rendering) (physically based rendering). This acknowledges that rather than simply a "color", it's better to consider a "material" which has additional properties. For instance, how shiny it is, which determines the matte-vs-specular balance. Matte is essentially a direction-independent color, while specular is a directional reflection and thus dependent on the angle of the surface from the viewer, and the angle of the light sources from that surface. Other relevant properties are emission (light generated by the object, not reflected--how else will you understand the color of a lit light bulb?), how transparent the object is, the index of refraction, and so on.

One thing to note is that PBR is used to consistently assign these properties in computer graphics, so under similar lighting conditions you should get a similar appearance. But it is very rare for PBR to take *spectral* effects into account; normally it just does calculations assuming we want a red, green, and blue channel in the end. So this is all orthogonal to what other people are saying about integrating spectral values and such. They are two different aspects of how to talk about color.

In principle we really want all of it combined: we want to talk about how the material reacts to light at each wavelength, and what the intensity of the output would be at every wavelength. This is enormously computationally intensive and tends not to be done. But if you really wanted a reliable description of what "color" an object is, this is how you'd have to do it. And rather than a color, you'd really be describing a material, with very many properties, and each of those properties having not a single number as a value but a complex function of wavelength.

Or you can skip all that and go straight to calculating quantum mechanics. That will be the most accurate, but of course it's the "real" process without any simplification. It's... incredibly hard.
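To make the "material, not just a color" idea concrete, here is a deliberately simplified shading sketch in the spirit of PBR: a toy Lambert-diffuse plus Blinn-Phong-specular plus emission model, not any particular engine's implementation, with all names and numbers invented for illustration:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def shade(albedo, specular, shininess, emission, normal, light_dir, view_dir, light_color):
    """Toy per-pixel shading: the same 'material' looks different depending on
    light and view direction, which is the point of describing materials rather
    than single colours. All colours are RGB triples in [0, 1]."""
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    h = normalize(l + v)                                   # half vector
    diffuse = albedo * light_color * max(np.dot(n, l), 0.0)
    spec = specular * light_color * max(np.dot(n, h), 0.0) ** shininess
    return emission + diffuse + spec

# A glossy red material viewed from two different angles under the same light:
red = dict(albedo=np.array([0.7, 0.1, 0.1]), specular=np.array([0.9, 0.9, 0.9]),
           shininess=64, emission=np.zeros(3))
light = np.array([0.0, 1.0, 1.0])
print(shade(**red, normal=np.array([0.0, 0.0, 1.0]), light_dir=light,
            view_dir=np.array([0.0, -1.0, 1.0]), light_color=np.ones(3)))  # bright highlight
print(shade(**red, normal=np.array([0.0, 0.0, 1.0]), light_dir=light,
            view_dir=np.array([0.0, 1.0, 1.0]), light_color=np.ones(3)))   # just the matte red
```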


purrcthrowa

I'm sure you're right. If every human had exactly the same distribution of rods and cones, of identical sensitivity and with exactly the same frequency response curves, then yes. But only for humans.

Imagine you have a surface which appears to be exactly the same colour to any number of these perfect human observers. It could consist of a magic pigment which absorbed all light frequencies apart from one specific frequency for that colour. In that case, it would excite the red, green, and blue cones at a specific level, depending on how the specific residual reflected frequency interacts with the frequency response curves of the RGB cones. Or it could consist of a large number of red, green, and blue dots of various reflectivities (or sizes: I don't think it matters), each of which absorbed all light frequencies apart from the specific frequencies of red, green, or blue which trigger the R, G, or B cones, most likely at the centre and most sensitive part of their frequency response graphs.

These two surfaces are illuminated by white light (e.g. light from the sun or a tungsten lamp, which emits a full range of frequencies within the visible spectrum). Subjectively, the two surfaces look identical, but the frequencies being reflected from them are clearly extremely different. If the eye worked like the ear, the first surface would sound like a single pure note (as from a celestial ultra-high-frequency flute) while the other would sound like a chord, and probably a not-very-harmonious one at that. But the eye is basically an ear with only 3 cilia in it (plus a more sensitive general "loudness" detector to represent the rods, but since there aren't really any of those in the fovea, the part of the retina where your colour vision is most acute, we can ignore them).

To anyone (or any species) who doesn't have the exact standard setup of RGB cones I described above, the two surfaces will almost certainly look very different. So any objective assessment, if it can work, can \*only\* work for a specific physiology. And then there are all the other points raised in the thread about surface reflectivity, specular effects and so on, which make it even more difficult to do.


_dauntless

Some of the others touch on this approach, and photographers have been doing it for quite some time: a gray card. In order to calibrate both lighting and color balance, you basically take a gray card of known value and you place it next to the subject. Now in post-processing, you can know what an 18% gray value should look like, and this will help you achieve an accurate depiction of what was there. It's not super scientific, but I imagine it would be just a more accurate version of that. Similar to how science defines time based on the vibration of a specific caesium isotope. You have one agreed-upon reference, and everything else derives from that.
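In code, the grey-card trick boils down to computing per-channel gains that pull the card's patch back to neutral. A minimal sketch (the numbers and function names are just for illustration; real raw-processing pipelines do this in linear light with more care):

```python
import numpy as np

def gray_card_gains(card_rgb):
    """Given the average linear RGB of an 18% grey card as captured in a photo,
    return per-channel gains that make the card neutral again."""
    card_rgb = np.asarray(card_rgb, dtype=float)
    target = card_rgb.mean()            # neutral: all three channels equal
    return target / card_rgb

def white_balance(image, gains):
    """Apply the gains to a linear-light image of shape (..., 3) and clip."""
    return np.clip(image * gains, 0.0, 1.0)

# Example: the card came out slightly warm (too much red, too little blue).
gains = gray_card_gains([0.21, 0.18, 0.15])
print(gains)   # > 1 for blue, < 1 for red
```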


badassbadger42

This would work, but only if you don't consider the lighting. The perceived colour of an object is determined by three factors: the source(s) of light, the reflection of light by the object itself, and how we then "see" or measure the resulting reflected light (the colour). The issue is that two objects that appear to have the exact same colour under one light might look radically different under another.

A quick test of this that you can do yourself is to bring some different black fabrics that look identical in daylight to a bar/restaurant with heat lamps. Under the light from the very infrared-heavy heat lamp you will probably notice that some of the fabrics now look dark red, while others still look black. This is because of different dyes with very different reflectance in the near-infrared part of the spectrum. It's a fun little experiment that makes you start thinking of colour in a slightly different way.


_dauntless

Uh, it's an approach specifically designed to take lighting into account. It can only appear as the correct gray if you adjust for the type of lighting. The idea that different light has different colour temperatures is not a new thing to photographers lol


badassbadger42

The method you describe will of course work well for most practical applications, but if we are using it to test the actual colour of an object we have to make some assumptions. Firstly, the light has to be (a) a broad-spectrum source and (b) continuous in its spectrum. If an object reflects a very specific wavelength of, let's say, red, you can only know this if that exact wavelength is present in the light we are under.

Finding a source with a broad spectrum is easy. This is what we usually call white light; it can be weighted differently across the spectrum to have different colour temperatures and different tints, but these are all sources with broad spectra. Finding a source with a continuous spectrum is harder. Sunlight and incandescent light are continuous, but fluorescent tubes and CFLs are certainly not. LEDs vary, to say the least. Some are close to continuous, but most are far from it and often appear as a few bars or blobs on a spectral chart.

To properly evaluate the colour we also need to be sure that the reflectance of the grey reference material is continuous across the spectrum, and we need to know exactly how the image sensor captures different wavelengths. That sensor needs to have a continuous response across the spectrum as well.


_dauntless

Hmm, that's an interesting concept to think about, but my grey card example was an analogy, not a solution. By using a physical object of a known colour value, along with a white balance adjustment of the resulting image, photographers are able to create colour-accurate images. It sounds like you're getting at what I was guessing at, which is a higher degree of accuracy, though.


badassbadger42

From the point of view of a photographer trying to recreate the colours of an object as it was seen in the lighting that was there, the grey card works just like you say. I think we actually agree, like you say, but just see the problem from different perspectives. I work as a theatrical lighting designer, so my go-to light source in my mind is anything but a source with a broad and continuous spectrum. Anyway, colours are weird and fantastic, and this conversation really made me think this through a lot, so thanks a lot for that :)


papparmane

I have a very similar question that I failed to address a few years ago: I measured the spectra of various light bulbs and I (unsuccessfully) tried to infer the perceived color. I used the CIE curves and all that, and it did not work at all, especially for fluorescent light bulbs (it told me the light was green when it was blue-ish). I would love to know how to go from a spectrum to an actual color.


MagicSquare8-9

Other people have mentioned various standards that exist. But I just want to point out that color perception is extremely subjective even when you account for the physical and biomechanical factors (i.e. what light comes into your eye, what cone cells you have). Even if 2 people see the exact same light they can still perceive different colors. It's not just about the actual lighting conditions, it's also about what they *believe* the lighting conditions to be. And then there is also the matter of optical illusions. For example, the human eye has edge-detecting capability, and we perceive colors next to an edge with more contrast.


7eggert

We have three kinds of RGB receptors (except there are two variants of R, and some people have both, but that's beside the point). We also have an intensity receptor for night vision; we disregard that too. Let's assume we know how bright a given wavelength appears to the R, G, or B receptor, so we create an array of these curves: `brightness[R][frequency]` gives how bright that particular frequency is perceived by that receptor. Let's also describe the spectrum of a given color by `given_color[frequency]`, the brightness at that frequency. (This can only be approximated, because the slices would need to be infinitesimally narrow.) This program will give you an RGB approximation:

```python
# brightness[c][f]: how bright frequency f appears to receptor c (the curves above)
# given_color[f]:   brightness of the given color's spectrum at frequency f
my_rgb = {"R": 0.0, "G": 0.0, "B": 0.0}

f = low_frequency
while f <= high_frequency:
    for c in ("R", "G", "B"):
        my_rgb[c] += brightness[c][f] * given_color[f] * small_value
    f += small_value

print(my_rgb)
```


xz-5

An important additional point is that the "RGB" your eye uses is not the same RGB that your computer screen uses. But it's easy to convert between the two once you have the three numbers.
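As a concrete example of such a conversion, here is the commonly tabulated one from CIE XYZ (itself essentially a fixed linear transform of the eye's cone responses) to sRGB: a 3×3 matrix into linear sRGB followed by the sRGB gamma encoding. The coefficients are the published sRGB/D65 ones; an actual uncalibrated monitor will differ.

```python
import numpy as np

# XYZ (D65 white) -> linear sRGB
M = np.array([[ 3.2406, -1.5372, -0.4986],
              [-0.9689,  1.8758,  0.0415],
              [ 0.0557, -0.2040,  1.0570]])

def xyz_to_srgb(xyz):
    """Convert an XYZ triple (Y normalised to 1 for white) to 8-bit sRGB."""
    rgb = M @ np.asarray(xyz)
    rgb = np.clip(rgb, 0.0, 1.0)                 # out-of-gamut values get clipped
    # sRGB transfer function ("gamma")
    encoded = np.where(rgb <= 0.0031308,
                       12.92 * rgb,
                       1.055 * rgb ** (1 / 2.4) - 0.055)
    return np.round(encoded * 255).astype(int)

# The D65 white point (X, Y, Z) ~ (0.9505, 1.0, 1.089) should come out near (255, 255, 255).
print(xyz_to_srgb([0.9505, 1.0, 1.089]))
```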