Optimized Parallax Backgrounds

While I was designing the visuals for Twiniwt, I wanted various parallax animations for the background, but without blowing up the game size.

We value keeping the game size as small as possible, because not all parts of the world share the same network bandwidth privileges, yet everyone deserves the privilege of having a little fun. Also, there is something inherently uncomfortable about the idea of a 100MB puzzle game. But those are not only games, are they? It is interesting to what lengths the freemium model has to go to become profitable. Anyway.

Here we go.

We need a simple background first.

It is very heavily blurred, which also helps with quantizing and dithering the image and storing it as a colormap.

Now we need maps to use as parallax layers. They all have to be seamless on the x-axis: three layers for the silhouettes, and one for modifying the silhouettes with fog.

We compose all this information into a single image. The image looks like this when the channels are composed.
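If you ever script this step instead of composing the channels in an image editor, the packing itself is trivial. Here is a minimal C++ sketch of the idea; the buffer names are made up for illustration. The three silhouette masks go into the red, green and blue channels, and the fog map into alpha, which is the layout the shader below expects.

// A minimal packing sketch, not our actual pipeline. Assumes four
// 8-bit grayscale maps of equal size; all names are illustrative.
#include <cstdint>
#include <vector>

std::vector<uint8_t>
 pack_layers( const uint8_t* silhouette0, const uint8_t* silhouette1,
              const uint8_t* silhouette2, const uint8_t* fog,
              size_t pixel_count )
{
    std::vector<uint8_t> rgba( pixel_count * 4 );

    for ( size_t i = 0; i < pixel_count; i++ )
    {
        rgba[ i * 4 + 0 ] = silhouette0[ i ]; // layer 0 -> red
        rgba[ i * 4 + 1 ] = silhouette1[ i ]; // layer 1 -> green
        rgba[ i * 4 + 2 ] = silhouette2[ i ]; // layer 2 -> blue
        rgba[ i * 4 + 3 ] = fog[ i ];         // fog map -> alpha
    }

    return rgba;
}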

I used DXT5-compressed DDS files. If you use PNG as your final asset format, or export to PNG at some stage, be aware that your graphics suite might discard the color information of fully transparent pixels, which effectively destroys the asset.

We will also need a vignette map to divide the final color by, in order to nicely frame the composition.

All layers in place, it looks like this:

Here is the GLSL fragment shader I wrote for cocos2d-x. The engine automatically prefixes the shader code with some convenience definitions (the CC_* uniforms below), but the idea is there if you want to use it in another engine.

// Copyright (C) 2017 Kenan Bölükbaşı - 6x13 Games

#ifdef GL_ES
precision lowp float;
#endif

varying vec2 v_texCoord;

// Layer descriptors: columns 0-2 are the silhouette layers and
// column 3 is the fog; rgb is the layer color, w the scroll speed.
uniform mat4 u_parallax;

// Sample the packed parallax map, scrolled horizontally over time
// by the layer's speed ( pd.w ). The t coordinate is remapped from
// [ .5, 1 ] to [ 0, 1 ], since the map only covers half the screen.
vec4
 lookup( vec4 pd )
{
    return texture2D( CC_Texture1,
                      vec2( v_texCoord.s + mod( pd.w * CC_Time[ 0 ], 1. ),
                            v_texCoord.t * 2. - 1. ) );
}

void
 main( void )
{
    // Start with the blurred base background.
    vec3 bg = texture2D( CC_Texture0, v_texCoord ).rgb;

    // The parallax layers only exist in half of the screen.
    if ( v_texCoord.t > .5 )
    {
        vec4 fog_pd = u_parallax[ 3 ];
        float fog_f = lookup( fog_pd ).w * .05;

        for ( int i = 0; i < 3; i++ )
        {
            vec4 mnt_pd = u_parallax[ i ];

            // Fade the layer color towards the fog color; slower
            // ( more distant ) layers receive more fog. Then blend
            // the result in, using the silhouette mask stored in
            // channel i of the parallax map.
            bg = mix(
             bg,
             mix( mnt_pd.rgb, fog_pd.rgb, fog_f * inversesqrt( mnt_pd.w ) ),
             lookup( mnt_pd )[ i ] );
        }
    }

    // Divide by the vignette map to frame the composition.
    gl_FragColor.rgb = bg / texture2D( CC_Texture2, v_texCoord ).r;
    gl_FragColor.a   = 1.;
}

I am pretty sure this shader can be made much better. Please do not hesitate to share modifications, and I will edit the post.

We also store all the color palette and movement speed information separately and load it as a uniform mat4:

{
    .76f, .67f, .49f, .01f,
    .80f, .57f, .27f, .07f,
    .78f, .43f, .25f, .25f,
    .80f, .80f, .50f, .17f
}

We do this because we want the palette to be specific to the mood of each background, and it is also much easier to experiment with colors this way.
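For reference, uploading the matrix in cocos2d-x 3.x could look something like the sketch below; "program" stands for the GLProgram compiled from the fragment shader above, which is an assumption about your setup.

// A minimal sketch, assuming cocos2d-x 3.x; "program" is assumed
// to be the GLProgram compiled from the fragment shader above.
static const float layer_data[ 16 ] = {
    .76f, .67f, .49f, .01f,
    .80f, .57f, .27f, .07f,
    .78f, .43f, .25f, .25f,
    .80f, .80f, .50f, .17f
};

auto state = cocos2d::GLProgramState::getOrCreateWithGLProgram( program );

// Mat4 reads a column-major float array, so each row of the literal
// above becomes one u_parallax[ i ] column: rgb = color, w = speed.
state->setUniformMat4( "u_parallax", cocos2d::Mat4( layer_data ) );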

In short:

PARALLAX.DDS: 342K
Microsoft DirectDraw Surface (DDS), 1024 x 256, DXT5

BG.PNG      : 91K
PNG image data, 1621 x 1024, 4-bit colormap, non-interlaced

VIGNETTE.PNG: 216K (shared among all backgrounds)
PNG image data, 811 x 512, 8-bit grayscale, non-interlaced

All the high-definition assets cost us ~400K per background. This way, we were able to fit three completely different background styles into less than 1.5MB in Twiniwt.

Rules of Thumb

Finally, achieving the best packing for games requires a holistic approach to development. It influences decisions made by artists as well as developers.

Some rules of thumb for anyone who works on production assets:

Know your file formats.

PNG, for example, is NOT “the format that stores an alpha channel and compresses losslessly.” The details matter. The PNG specification defines multiple ways to store both color and transparency information. (Fortunately, PNG is also not your best option for final assets.)

In his seminal CppCon 2014 talk, Mike Acton describes the developer’s job as “to solve data transformation problems.” As such, people creating production assets should be aware of what they are really feeding into that transformation. This is not the job of a technical artist; it is the responsibility of the digital artist.

Know your tools.

Not everything in file format specifications is well or strictly defined, and implementations are far from perfect. Different tools may vary in the way they interpret files, so know how your tools handle the import and export of your assets.

Bake.

This doesn’t seem like it needs reminding, but nowadays we tend to embrace “best practices” that favor flexibility, which can carry unnecessary calculations into runtime. If the distance from the camera is always the same, chances are the amount of blur is always the same, too. It doesn’t matter how amazing your focus blur shader is; you can just bake the blur.

FOLLOW-UP

My good friend, Marcel Smit, reviewed the post and made some great comments regarding compression and PNG format problems. I believe they should be part of the post. Here we go:

I was thinking for the parallax scrolling you could compress it even further by using only black and white and storing the images with one bit per pixel and RLE-compression. You could blur the images after loading them.

RLE, short for run-length encoding, is a very simple method that has been around for a long time. No need for a lengthy technical description: a run of repeated values is simply stored as a count followed by the value, so WWWWBBB becomes 4W3B.
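If you want to try Marcel’s scheme, the decoding side could look roughly like the following sketch. The exact format is my assumption, not his: a stream of alternating run lengths, starting with a run of zero-pixels.

// A rough sketch of 1-bit RLE decoding, under an assumed format:
// alternating run lengths, the first run being zero-pixels.
#include <cstdint>
#include <vector>

std::vector<uint8_t>
 rle_decode_1bit( const uint8_t* runs, size_t run_count )
{
    std::vector<uint8_t> pixels;
    uint8_t value = 0; // first run is background ( zeros )

    for ( size_t i = 0; i < run_count; i++ )
    {
        pixels.insert( pixels.end(), runs[ i ], value );
        value ^= 255; // flip between 0 and 255 for each run
    }

    return pixels;
}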

I found a very nice and super fast blurring algorithm:
https://github.com/memononen/fontstash/blob/master/src/fontstash.h#L987
It’s a clever trick to quickly blur an image. You’d need to do both a horizontal and a vertical pass, each in constant time, to get a 2D gaussian blur.

I have yet to try this method and see how it fares, but the idea makes so much sense that I can’t see any reason it wouldn’t work better.
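To make the separable idea concrete, here is my own rough sketch of a single box-blur pass over one row; this is not the constant-time fontstash trick itself. Running it over every row, then every column, blurs in 2D, and repeating both passes approaches a gaussian.

// One horizontal box-blur pass over an 8-bit row: a plain moving
// average. The fontstash code linked above is much cleverer.
#include <cstdint>
#include <vector>

void
 box_blur_row( uint8_t* row, int width, int radius )
{
    std::vector<uint8_t> src( row, row + width ); // read from a copy

    for ( int x = 0; x < width; x++ )
    {
        int sum = 0, n = 0;

        for ( int k = -radius; k <= radius; k++ )
        {
            int xi = x + k;
            if ( xi >= 0 && xi < width ) { sum += src[ xi ]; n++; }
        }

        row[ x ] = ( uint8_t )( sum / n );
    }
}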

There are other reasons to avoid PNG. Like you said, it leaves out colors for translucent pixels. This is BAD when you’re doing bilinear filtering: the GPU also samples adjacent pixels, but these may be white, black, or whatever color your export tool used to replace the colors it optimized away!

I can’t stress enough the importance of tools and libraries like ImageMagick, GraphicsMagick and GEGL when you need to find out what is really going on with your production assets. You can batch-revise invisible properties of your assets in a matter of seconds. For example, you can back up your assets and batch-remove all the alpha channels to reveal how BAD the problem really is.

I had to write fix-up code for this for Riposte. For each transparent pixel, I used the average color of the non-translucent neighboring pixels. Another reason to avoid PNG is that it is horribly slow to decompress. Reading raw data or TGAs, or decompressing your own format, is likely much faster.
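For anyone facing the same issue, the fix-up Marcel describes could look roughly like this. This is my own sketch of the idea, not Riposte’s actual code: every fully transparent pixel gets the average color of its non-transparent 4-neighbors, so bilinear filtering has sane colors to sample.

// A rough sketch of the alpha fix-up idea described above.
#include <cstdint>
#include <vector>

void
 bleed_colors( std::vector<uint8_t>& rgba, int w, int h )
{
    const std::vector<uint8_t> src = rgba; // read from a copy

    for ( int y = 0; y < h; y++ )
    for ( int x = 0; x < w; x++ )
    {
        size_t p = ( ( size_t )y * w + x ) * 4;
        if ( src[ p + 3 ] != 0 ) continue; // only touch transparent pixels

        static const int off[ 4 ][ 2 ] =
         { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };
        int sum[ 3 ] = { 0, 0, 0 }, n = 0;

        for ( int i = 0; i < 4; i++ )
        {
            int nx = x + off[ i ][ 0 ], ny = y + off[ i ][ 1 ];
            if ( nx < 0 || nx >= w || ny < 0 || ny >= h ) continue;

            size_t q = ( ( size_t )ny * w + nx ) * 4;
            if ( src[ q + 3 ] == 0 ) continue; // skip transparent neighbors

            for ( int c = 0; c < 3; c++ ) sum[ c ] += src[ q + c ];
            n++;
        }

        if ( n > 0 )
            for ( int c = 0; c < 3; c++ )
                rgba[ p + c ] = ( uint8_t )( sum[ c ] / n );
    }
}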

Thanks again, Marcel!