Understanding sRGB (Part 2)

In the previous post we explained most of the theory behind sRGB. We have a little more theory to cover, and then we can see how to use these concepts in practice and why they are useful in game development, with examples in OpenGL and Unity.

We said that the transfer function for linear sRGB (sometimes referred to simply as "RGB", or "linear space") is, as the name suggests, linear, while for sRGB it is not. But what is a transfer function? Essentially, it is a simple function which returns a color intensity given a number; in other words, it “maps” numbers to color intensities. If we express the intensity as a float, for example, we could say the minimum intensity is 0.0 and the maximum is 1.0 (intensities from 0 to 1 are also said to be “normalized”), and if we also express numbers in the same way, they too will range from zero to one. The “number” here is what our application uses or image data stores, and the “color intensity” is what your monitor will actually display.

Many image formats use only one byte to store a primary color, so numbers range from 0 (0.0) to 255 (1.0).
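To make that mapping concrete, here's a minimal sketch of the byte-to-normalized conversion (the function names are just illustrative):

```python
def byte_to_normalized(b):
    """Map a stored byte (0..255) to a normalized number (0.0..1.0)."""
    return b / 255.0

def normalized_to_byte(x):
    """Map a normalized number back to the nearest storable byte."""
    return round(x * 255.0)
```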
Here's what the transfer functions for linear sRGB and sRGB look like:

The red curve is the sRGB (non-linear) function, while the blue line is linear sRGB.
On the X axis you have the numbers, on the Y axis you have the resulting color intensity.

Now let’s take a closer look at the sRGB (non-linear) function, which is approximately a power function with exponent 2.2:

The numbers we are using are the same, 0 to 1, but they map to different intensities. In linear sRGB, “0.5” means an intensity of 0.5, for example, but in sRGB it means much less, about 0.2. So (R=0.5, G=0.5, B=0.5) is middle grey in linear sRGB but a darker color in sRGB.
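You can check that number yourself with the gamma-2.2 approximation of the sRGB curve (a sketch; the function name is mine):

```python
def srgb_to_intensity(x):
    """Decode a stored sRGB number (0..1) to the displayed intensity,
    using the gamma-2.2 approximation of the sRGB curve."""
    return x ** 2.2

# A stored value of 0.5 decodes to roughly 0.218 intensity
mid_grey = srgb_to_intensity(0.5)
```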

If you try to visualize how the numbers from 0 to 1 map onto that curve, sampling every 0.1 for example, you will notice that the darker intensities sit quite close to each other, but as the color intensity gets lighter, the samples grow further apart. This means sRGB has more precision in dark tones than in light ones, while in linear sRGB the precision is the same for all intensities.
If we had infinitely many numbers, precision wouldn’t matter, because you would be able to express every color intensity in either sRGB or linear sRGB. However, colors are usually stored in a limited number of bytes, so precision does matter.
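A quick sketch (again using the gamma-2.2 approximation) makes this concrete: compare how much displayed intensity changes between two adjacent 8-bit values near black versus near white:

```python
def intensity_step(byte):
    """Intensity difference between two adjacent 8-bit sRGB-encoded values,
    using the gamma-2.2 approximation of the sRGB curve."""
    lo = (byte / 255.0) ** 2.2
    hi = ((byte + 1) / 255.0) ** 2.2
    return hi - lo

dark_step = intensity_step(10)    # tiny step: fine precision in the shadows
light_step = intensity_step(250)  # much bigger step: coarse precision in highlights
```

Near black, one 8-bit step changes the intensity far less than one step near white, which is exactly the “more precision in dark tones” the curve buys us.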

As we said in the previous post, all modern monitors expect the image in sRGB format. So if you set a color byte to 122, they will interpret it as roughly 0.2 intensity, not 0.5. If you store your image in linear space, you will end up with an image that is darker than you intended it to be.

If you are confused about the reason for all this, don't worry; everything will become clear by the end of this post.

The first thing you may be wondering is: why isn't the sRGB function linear? Why bother mapping numbers with a curve, getting 0.5 -> 0.2, when 0.5 -> 0.5 is much easier to understand?

There are basically two reasons for this.
Let's start with the first: do you remember those big old cathode-ray computer monitors and televisions we used some years ago? For electrical reasons we won't cover here, their input/output curve looks exactly like the sRGB curve. Which means if you tell one of those old monitors "okay, here's an input of 0.5", it will display an intensity of about 0.2 instead.
So it is convenient to store images in sRGB to begin with: the number stored in the image is the input to the television/monitor, with no further conversions or precision loss.
So one reason is compatibility with these monitors.

However, modern monitors do not have this issue and their response is linear, so why not throw away all those old things and store everything in linear format?
Here's the second reason: the human eye happens to discern dark tones better than light tones, so giving more precision to dark tones makes a lot of sense. Why waste precision on something your eye cannot discern? That's why modern monitors use sRGB and, despite being linear, "mimic" the behaviour of the old monitors by applying the same curve. The sRGB curve usually has an exponent of 2.2 (the exponent is also called "gamma"), so converting between the two spaces is just a matter of raising to 2.2 in one direction and to 1/2.2 in the other. Modern monitors also allow you to customize this exponent to make images appear generally darker or brighter. (But usually this is not needed, as changing the backlight luminosity is more appropriate in most cases.)
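As a sketch of that back-and-forth conversion (using the simple 2.2 approximation; the official sRGB curve is actually piecewise, with a short linear segment near black, but 2.2 is close enough to illustrate the idea):

```python
GAMMA = 2.2

def srgb_to_linear(x):
    """Decode: stored sRGB number -> linear intensity."""
    return x ** GAMMA

def linear_to_srgb(x):
    """Encode: linear intensity -> stored sRGB number."""
    return x ** (1.0 / GAMMA)
```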

Okay that's all for theory! In the next post we will get our hands dirty and write some code in OpenGL and Unity.

Stay tuned!


Understanding sRGB

Back in school, you were probably told that you can obtain every possible color by mixing only three "primary" ones, like red, green and blue.
Here's the shocking truth: actually, this is not possible. You can't obtain all visible colors by mixing only three colors (or four, or more, for that matter).
It is true, however, that red, green and blue mixed together capture all visible “hues” you can see. That is, any color you can name, like “yellow”, “violet” or “pink”, can be made out of those three.
However, even if you choose the red, green and blue to mix very carefully, you will not be able to make a very saturated orange, violet or pink, for example, despite the fact that your eye can see that color without problems, so it’s still a visible color.

Wait, what? How do computers show every possible color by emitting red, green and blue light, then?
In fact, they don't! Displays can only emit a pretty wide range of what you can see, but it would be a lie to say "all colors". The reason for this is that almost every modern display adopts the sRGB color space, and they can only display colors that are in that space.
To understand what this sRGB is all about, let's start with the eye and human color perception.

Wavelengths of visible light

Light is an electromagnetic wave and, as such, it propagates with a wavelength, frequency and amplitude. The human eye is able to "capture" light with a wavelength from approximately 390 to 700 nanometers, which is what we perceive as "visible light". Light outside that range is not visible to our eye and gets fancy names such as "infrared", "ultraviolet", "microwaves", "x-rays", etc.
This "invisible light" is used on countless occasions, from medical applications and wireless communications to warming up food in a microwave oven.

So how does the human eye "capture" the visible light we are interested in? It does so by using three types of cones, which are stimulated by light in three different ways: one is particularly sensitive to high-frequency light (blue), the second to middle frequencies (green), and the third to low frequencies (red). Aha! So our eye does use three kinds of receptors (a tristimulus) to capture light, so by using at least three colors for our monitors, maybe we are on the right track.

Unfortunately, it is not as simple as that.
Remember, I said they are “particularly” sensitive to that light, not “only”. In fact, if we take a look at the response curve of each cone, we can see that they greatly overlap:

Sensitivity (Y axis) of each cone for each wavelength (X axis)

What this means is that there is no color (no single wavelength) that stimulates only one of those cones. Looking at the graph, that would mean finding a value of X at which only one curve is greater than zero, and there is none. To make matters worse, light of a single wavelength (also called pure monochromatic light) does not exist in nature for physical reasons, so a ray of light will always be a mix of frequencies. (Even lasers, which come close to emitting a single wavelength, cannot emit exactly one.)

Tests have been conducted to represent, mathematically, all visible colors, and it has been found that they can be expressed with three mathematical coordinates, XYZ. Each of these coordinates constitutes the intensity of an “imaginary color”, a hypothetical color that no physical light source can produce. These three “imaginary colors” combined together can make any possible color, and this time for real!
As we are dealing with three quantities here, you can guess we can express them in an XYZ Cartesian cube. However, only part of this cube is made of actually visible colors.
To visualize visible colors more conveniently, these three coordinates are further transformed, and an XY graph (ignoring Z) looks like this:

Visible color gamut

The third coordinate, Z, changes the overall brightness of the colors you see in the above graph.
So, as you can see, all visible colors together form a tongue-like shape, and this is called the “color gamut”.
Interesting... but, wait a second, how can we see all visible colors in this image on our monitor to begin with, if we just said monitors cannot display all colors?
Yes, we are cheating here. The image uses only colors your display can show, but in reality part of that shape is made up of colors your monitor is unable to reproduce.
However, the real usefulness of this XY representation is that it lets us tell which portion of the visible colors three chosen primaries can reproduce. This is as simple as connecting them to form a triangle on the gamut: what’s inside the triangle is what you can reproduce.

Now we can finally take a look at what the "sRGB" triangle looks like:

sRGB color gamut

As you can see, sRGB does a pretty good job of including most of the reds, but fails to reproduce colors towards saturated green in particular. You may be wondering: why not choose primary colors which are nearer to the edges or, ideally, on the edges, to include more colors? One reason is cost: producing phosphors for monitors that reproduce colors near the edges is expensive. The other reason lies in the fact that we have a limited number of bits to express colors, so a wider gamut doesn't necessarily mean a better appearance: the bigger the gamut, the more "spread out" the reproducible colors will be.
sRGB is a very good compromise between cost and appearance, and thus is widely used. There are, however, other color spaces such as "AdobeRGB" which have a wider gamut, meant especially for photography and printing. There are even color spaces which have four primary colors instead of three, forming a quadrilateral, but most often the benefits don't outweigh the costs.
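To make the "inside the triangle" test concrete, here's a sketch using the published xy chromaticities of the sRGB primaries (the helper names are mine):

```python
# Published xy chromaticities of the sRGB primaries (corners of the triangle).
R = (0.64, 0.33)
G = (0.30, 0.60)
B = (0.15, 0.06)

def _side(o, a, p):
    """Signed-area test: which side of the edge o->a the point p lies on."""
    return (a[0] - o[0]) * (p[1] - o[1]) - (a[1] - o[1]) * (p[0] - o[0])

def in_srgb_gamut(p):
    """True if the chromaticity p = (x, y) lies inside the sRGB triangle."""
    d1, d2, d3 = _side(R, G, p), _side(G, B, p), _side(B, R, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)
```

For example, the D65 white point (0.3127, 0.3290) falls inside the triangle, while a highly saturated green like (0.10, 0.80) falls outside it, even though your eye can see it just fine.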

Now that we understand which colors are inside the sRGB color space and which primary colors are chosen as R, G and B, we need a way to identify the intensity of each of them using numbers, so that we can store them in a computer. If we call "0" the minimum intensity and "1" the maximum intensity, with the other intensities in between, the most straightforward way is to express that as-is in a number from 0.0 to 1.0. *
This is what is often called "linear sRGB", because it is sRGB with a linear transfer function (a linear conversion from numbers to light intensities). Often it is referred to as simply "RGB".

However, the original transfer function for sRGB is not linear. We will see why it is not, how that impacts the image, and why you should be aware of it while using OpenGL, or even tools like Unity, in the next post!

* In OpenGL, for example, RGB8 texture format uses one byte per primary color, which means 256 different intensities for each and a total of roughly 16 million possible combinations.


Premium Vs. Freemium

If you read the title, you probably know what I'm talking about: two of the most discussed models in the mobile gaming world.

In case you don't know what these terms mean, I'll give you a brief explanation.

Premium games are games which you buy upfront and play as much as you want (this is what you're used to doing if you play console or PC games).

Freemium games are free, but during gameplay you can purchase virtual currencies, power-ups or other goodies which usually let you advance at a faster pace than you normally would. These are called IAPs in the App Store (which stands for In-App Purchases).

A quite recent article from App Annie shows that Freemium is generally much more profitable than Premium in the App Store. This means that, strange as it may seem, free apps are making more money than paid apps. Why is that?

I think the reason behind this is that, before the iPhone and Android came out, people weren't really used to paying for games on their phones. On a console, instead, it sounds pretty natural to buy a game before you can play it. And it's not surprising that the majority of today's console/PC gamers hate freemium while mobile gamers generally dislike premium: they just want to play on their phone for free, and if they like the game they'll end up spending money on IAPs.

The main problem with Freemium is that it's extremely hard to balance. Gameplay may suffer from being too crippled or slowed down to make room for IAPs, and players can easily get annoyed and give the game a bad rating for being too IAP-heavy, even if the developer thought the balance felt just right.

Another model, which is kind of a middle ground between Premium and Freemium, consists of giving the app away for free and offering a "full" version or additional content as an IAP. This is what games like Pangolin, Ruzzle, Hardest Game Ever and many others are doing. I personally believe this kind of IAP is the fairest, and I think I will take this route for my next game and see how it goes.

If you have any thoughts as player/developer feel free to leave a comment below! :)


Best App Ever Nominations

Every year the folks at 148Apps hold the Best App Ever Awards, in which users can nominate what they think are the best apps in different categories. If you enjoyed playing Tiny Stack or feel it is deserving, please consider voting for it in one or more of the following categories! :)

If you wish to vote, just click one of these buttons and then "Nominate" (no registration is required):

Nominate Tiny Stack for Best Puzzle Game
Nominate Tiny Stack for Most Addictive Game


Available now on the AppStore!

Tiny Stack is now available on the App Store, and it costs less than a cup of coffee!
If you own an iPhone/iPad/iPod and bought my game I'd love to hear comments from you!
Here's the iTunes link.

Enjoy ;)



Tiny Stack out Tuesday 16th!

Okay, the wait is almost over! Tiny Stack comes to the App Store on October 16th!

Check out the game trailer above! ;)


"Tiny Stack" will be released soon!

My first iOS game finally has an official name: Tiny Stack!
And it's going to be released very soon (I still don't know the exact date, but I believe it will be available on Apple's App Store in the first half of October 2012).

The game will be priced at $0.99 and supports the iPhone 3GS or later, the iPod touch 3rd gen or later, and all iPads. Full support for the new iPhone 5 is included as well!

For the latest screenshots and news you can visit my website at http://www.penguinbit.com and my Twitter account: http://www.twitter.com/penguibit.

Stay tuned! :)