The raster part is easy. You work with high resolution and scale down.
I simply use ImageMagick for this, because I am not at all happy with the quality of the downscaling implementations in other tools. They may have improved over time, of course, but I do not find that worth testing every now and then.
This task usually requires too much manual involvement, so I will not provide a batch method either.
That’s all. Almost all our production assets in Twiniwt are exported using this simple tool.
You can download the extension as a zip file. I am placing it in the public domain. Put the contents in the extensions subdirectory of your Inkscape home directory. In GNU/Linux, this is ~/.config/inkscape/extensions/. It is probably the same in Mac OS X. Unfortunately, I haven't tested this extension in Windows at all, but I am sure it will work out of the box once you locate the extensions directory.
However, learning how to modify it, and writing your own production tools is more important.
An Inkscape extension with a GUI requires two files: an XML-formatted “.inx” file, and the actual implementation module.
Let’s first write the interface and specify the meta data.
Those belong to our sprite.inx file.
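To make the two-file structure concrete, here is a minimal, hypothetical sketch of what such an interface file could look like. The parameter names, menu placement, and extension id are illustrative only, not the actual sprite.inx:

```xml
<inkscape-extension xmlns="http://www.inkscape.org/namespace/inkscape/extension">
  <_name>Sprite Export</_name>
  <id>org.example.sprite_export</id>
  <dependency type="executable" location="extensions">sprite.py</dependency>
  <param name="directory" type="string" _gui-text="Export directory"></param>
  <param name="image" type="string" _gui-text="Image name"></param>
  <effect needs-live-preview="false">
    <object-type>all</object-type>
    <effects-menu>
      <submenu _name="Export"/>
    </effects-menu>
  </effect>
  <script>
    <command reldir="extensions" interpreter="python">sprite.py</command>
  </script>
</inkscape-extension>
```

Each `param` here becomes a field in the extension's dialog, and its `name` becomes an attribute on `self.options` in the Python module.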
        if not self.options.image:
            inkex.errormsg("Please enter an image name")
            return None
        return (self.options.directory, self.options.image)

    def check_dir_exists(self, dir):
        if not os.path.isdir(dir):
            os.makedirs(dir)  # create the directory if it is missing
Dealing with asset scaling in 2D games can be confusing, to say the least. There are many ways to handle it. Which one to choose depends on what you expect. The problem is, you usually do not know what to expect from a responsive interface.
So beware, most scaling strategies don’t respond well to display ratio changes. They will pretend to work on your test device, then they will fail you so badly you will wish you had bookmarked this blog post earlier, which is now. Seriously.
I will now explain one asset scaling strategy that has worked great for us in all four games we have released so far. It is not my invention. I found the original method here, and I was also influenced by this Cocos2d-x forum post, which uses a different approach to achieve similar behaviour.
Note: In order to make things easier to grasp, I will pretend your game always works full screen regardless of device type. For windowed-mode on a PC, interpret the term “display resolution” as “window resolution”.
Now, there are four things you expect your engine to handle for you.
Choosing the most suitable set of graphics according to display resolution.
Globally scaling the selected graphics to fit the screen.
Letting you ignore all these and totally forget about the display while coding the game itself.
Earning you money, success, and preferably a slice of New York Cheesecake.
Sadly, at 6×13 Games, we couldn’t come up with a reliable way for the engine to handle the last one, either. So I will only explain the asset scaling part.
We will use the Cocos2d-x engine for the examples and rely on its terminology. But Cocos only provides a basic set of transform policies, so the same idea applies to any 2D engine. The engine source is available, so you can implement the missing pieces in your own engine as well.
This actually has nothing to do with resolution. It is about framing.
Cocos2d-x will autoscale the whole scene to somehow fit the frame. The resolution policy is where you tell the engine what your understanding of “fit” is, what kind of behaviour you expect from it. The options are: Exact Fit, No Border, Show All, Fixed Height, and Fixed Width.
I know, you need No Border!
Maybe you do. But let me help you, you really don’t. What you really want is a little more complicated than that.
You do not want a weird display aspect ratio messing with your precious interaction area, making your UI buttons too small to press. That is what you get with No Border. And that is why we are rolling a better solution.
So, keep reading.
You need a safe area! An area that is not only guaranteed to be shown to user, but also guaranteed to cover as much screen space as possible. So I present you, the safe area:
Anything outside the yellow area is just decoration that prevents the user from seeing black borders. You never, ever put something the user really needs to see outside the Safe Area.
The safe area is the center 480×320-unit portion of our 570×360-unit game area (units, not pixels). It has an aspect ratio of 1.5.
How do we guarantee that?
We calculate the Reference Axis first. Then we choose either the Fixed Height or the Fixed Width policy, according to that axis: if the Reference Axis is the y-axis, we choose Fixed Height; otherwise, Fixed Width.
Whatever our Reference Axis is, it better be Fixed.
I believe the above animation clearly shows what a Reference Axis is. Below is what it mathematically means:
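Put as a small Python sketch (the helper names are mine, not the engine's), the rule simply compares the display ratio with the Safe Area ratio:

```python
# Safe Area ratio: 480x320 units -> 1.5
SAFE_RATIO = 480 / 320

def reference_axis(display_w, display_h):
    # A display wider than 1.5 can always show the full Safe Area
    # width, so the height is the limiting dimension: the y-axis.
    return "y" if display_w / display_h >= SAFE_RATIO else "x"

reference_axis(1920, 1080)  # 16:9 -> 'y', so Fixed Height
reference_axis(768, 1024)   # portrait -> 'x', so Fixed Width
```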
You can download our Safe Area reference SVG as well as three sample sizes exported, as a single zip file.
Now that we have decided how the engine should behave, it needs to know what portion of the screen to use for that behaviour. In Cocos2d-x terms, this is the Design Size.
Scenes are sizeless. They are just Cartesian space with an origin, and possibly some stupid protagonist running around, trying to rescue the damsel who can actually take care of herself. Go away, creepy protagonist!
Anyway. In order for Cocos to fit the scene contents into the frame, it needs to know which portion of the scene we actually consider to be “the scene”. Hence, the Design Size.
In our case, Design Size is the dimensions of our Safe Area.
The moment we set the Design Size and Resolution Policy, the engine will start acting like an adult. Below is how it reacts to both ratio and physical size changes.
To sum up:
Cocos will scale to fit.
Resolution Policy tells it “how” to fit.
Design size tells it “what” to fit.
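The whole setup can be sketched in a few lines of engine-agnostic Python, with my own names; in Cocos2d-x, the equivalent call is GLView::setDesignResolutionSize with a Fixed Height or Fixed Width policy:

```python
# Sketch, not the engine's API: given the Design Size and the Fixed
# policy, the scale and the visible design-space area follow directly
# from the display resolution.
DESIGN_W, DESIGN_H = 480, 320  # the Safe Area, in units

def fit(display_w, display_h):
    if display_w / display_h >= DESIGN_W / DESIGN_H:
        policy, scale = "FIXED_HEIGHT", display_h / DESIGN_H
    else:
        policy, scale = "FIXED_WIDTH", display_w / DESIGN_W
    # the design-space rectangle that actually ends up on screen;
    # it always contains the whole Safe Area
    visible = (display_w / scale, display_h / scale)
    return policy, scale, visible
```

For a 1920×1080 display this yields Fixed Height with a scale of 3.375 and a visible design-space width of about 569 units, so the full 480-unit Safe Area plus some decoration is shown.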
Great, it works! That’s it!
This would be the happy ending of our asset scaling adventure, if the only crappy, non-standardized piece of hardware on our way was the display. Far from it.
We also need our runtime assets to be fast to process, fit the video memory nicely, and suffer as little aliasing as possible during scaling.
All of them require one thing: having multiple versions of our assets and picking the set of assets to load according to the display resolution, at runtime.
More precisely, we want the density (the definition) of our assets to be close to the expectations of the particular hardware. Because if your mobile device has a Standard Definition display, chances are it also has video processing power and memory that can only handle Standard Definition, or less. Mobile manufacturers rarely skip leg day.
Also, there are many algorithms for image scaling, with varying quality and performance characteristics. You want your realtime scaling to be of the fastest kind, which also means lower quality. Therefore, it is best to have your images prescaled to, or reproduced at, the closest size.
So, we need to support multiple resolutions as part of our asset scaling strategy.
In order to do that, we put each set of assets in a different resource directory.
We use three size variants, and a directory for each: “small”, “medium”, and “large”.
We will pick the best possible size according to the display dimension of our Reference Axis, in pixels. After that, there is no special procedure to “pick” the directory: you simply add that particular directory to the Resource Search Path of your engine, omitting the others so they are not even visible to the engine.
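A sketch of that choice, with hypothetical pixel cutoffs (assuming the y-axis is the Reference Axis and the 1x/2x/4x scheme used later in this post):

```python
# Hypothetical cutoffs: each variant's Safe Area height in pixels,
# assuming content scales of 1, 2 and 4 over a 320-unit design height.
VARIANTS = (("small", 320), ("medium", 640), ("large", 1280))

def pick_directory(reference_axis_px):
    # first variant that covers the display; fall back to the largest
    for name, px in VARIANTS:
        if px >= reference_axis_px:
            return name
    return VARIANTS[-1][0]

# e.g. add only this directory to the engine's Resource Search Path
pick_directory(480)   # a 480px-tall display -> 'medium'
pick_directory(1080)  # 1080p -> 'large'
```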
Content Scale Factor
We also associate a scale factor with each of those directories, so that the engine knows how the assets map to the design size. It is simple: if the assets in “medium” are twice the definition of your Design Size, the scale is 2. I keep that in a resource JSON file that holds other information as well.
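The actual JSON is not reproduced here, but as a hypothetical illustration (the numbers match the 1x/2x/4x scheme of the background textures below), the mapping boils down to:

```python
# Content scale factor = asset pixels per design unit. "medium" at
# twice the definition of the 480x320 Design Size gets a scale of 2.
DESIGN_H = 320
variant_px = {"small": 320, "medium": 640, "large": 1280}  # y-axis pixels

scales = {name: px // DESIGN_H for name, px in variant_px.items()}
# scales == {'small': 1, 'medium': 2, 'large': 4}
# e.g. director.setContentScaleFactor(scales[chosen_variant])
```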
Ok, you are all set! Now you need to fill your scenes with beautiful assets.
Applying Safe Area strategy to your asset production is pretty straightforward. Just grab the reference assets I shared above, and comply with the boundaries.
But if you really want to preserve every bit of quality you can in a generic way, I have two more recommendations for you.
First, not everything has to be pixel perfect. Rather, try to keep your content scaling uniform among all the assets, because that will, in turn, keep the distortion and aliasing characteristics uniform. No one ever died from a little less definition. We lived just fine watching Video CDs for years, after all.
However, watching a VCD side-by-side with a 4K movie now? That would have lasting effects. The point is, don’t let the player’s eyes compare asset densities. Keep it uniform.
Second, raster and vector assets require opposite treatment. You use raster graphics for more organic asset types, which turn out better if you work with high-resolution sources and scale down. Work with exactly four times the size of your biggest production asset version, and scale by 50%, 25%, and 12.5%. So if you used our asset scheme, your background textures would have the following attributes:
small/bg.png PNG image data, 570 x 360, 8-bit/color RGBA
medium/bg.png PNG image data, 1140 x 720, 8-bit/color RGBA
large/bg.png PNG image data, 2280 x 1440, 8-bit/color RGBA
source/bg.png PNG image data, 4560 x 2880, 8-bit/color RGBA
Easy! Everyone already knows that.
Now, with vector graphics, you want almost the exact opposite. You want your source asset dimensions to be 570×360px, precisely the size of your smallest production asset variant. And you want to export for each resolution one by one, not export once and scale up. Because if a line sits on a pixel border at your smallest resolution, it will sit on a pixel border at every resolution, as long as you keep doubling the resolution. This guarantees pixel-perfect output.
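The doubling claim is easy to check numerically (a throwaway sketch, not part of any tool):

```python
# A point at an integer unit coordinate maps to a whole pixel at the
# smallest export width (570px == 570 units), and stays on a whole
# pixel at every doubling.
def pixel_pos(unit_x, export_w, design_w=570):
    return unit_x * export_w / design_w

for w in (570, 1140, 2280):
    assert pixel_pos(123, w) == int(pixel_pos(123, w))
```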
Of course, you can use the same method for raster graphics as well, but with raster graphics, the priority is seldom the quality at the pixel level.
Lastly, exporting for multiple resolutions is boring labor. If you want to automate your multi-resolution asset workflow with custom GUI tools, please check out my new post: Multi-Resolution Asset Workflow Automation.
Alright! That was long. It took me two full days to prepare this post. So please, do not hesitate to share and comment. Especially if you tried the above method in your games and had problems, or success, drop me a line. Good luck!
While I was designing the visuals for Twiniwt, I wanted various parallax animations for the background, but without blowing up the game size.
We value keeping the game size as small as possible, because not all parts of the world share the same network bandwidth privileges, yet everyone deserves the privilege of having a little fun. Also, there is something inherently uncomfortable about the idea of a 100MB puzzle game. But they are not only games, are they? It is interesting to what lengths the freemium model has to go to become profitable. Anyway.
Here we go.
We need a simple background first.
It is very heavily blurred, which also helps with quantizing and dithering the image, and storing it as a colormap.
Now we need maps to use as parallax layers. They all have to be seamless along the x-axis: three layers for silhouettes, and one for modifying the silhouettes with fog.
We compose all this information into a single image. The image looks like this when the channels are composed.
I used DXT-5 compressed DDS files. If you use PNG as your final asset format, or export to PNG at some stage, be aware that your graphics suite might try to ignore color information of fully transparent pixels, which effectively destroys the asset.
We will also need a vignette map to divide the final color by, in order to nicely frame the composition.
All layers in place, it looks like this:
Here is the GLSL fragment shader I wrote for Cocos2d-x. The engine automatically prefixes the shader code with some convenience definitions, but the idea is there, if you want to use it in another engine.
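The shader code itself is not reproduced here, but the per-pixel idea can be sketched in plain Python. The helper names, layer order, and blend math below are my assumptions, not the shipped GLSL:

```python
# Rough per-pixel sketch of the compositing idea, not the real shader.
def mix(a, b, t):
    # linear interpolation, like GLSL's mix()
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def shade(bg, layers, fog, vignette, palette):
    # bg and the palette entries are RGB tuples in 0..1; layers holds
    # the three silhouette coverages, each already sampled at its own
    # parallax-offset coordinate; fog fades the silhouettes out.
    color = bg
    for coverage, tint in zip(layers, palette):
        color = mix(color, tint, coverage * (1.0 - fog))
    # dividing by the vignette map frames the composition
    return tuple(min(c / max(vignette, 1e-4), 1.0) for c in color)
```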
We keep the palette in the shader because we want it to be specific to the mood of each background, and also because it is way easier to experiment with colors this way.
Microsoft DirectDraw Surface (DDS), 1024 x 256, DXT5
BG.PNG : 91K
PNG image data, 1621 x 1024, 4-bit colormap, non-interlaced
VIGNETTE.PNG: 216K (shared among all backgrounds)
PNG image data, 811 x 512, 8-bit grayscale, non-interlaced
All high-definition assets cost us ~400K per background. This way, we were able to fit three completely different background styles in less than 1.5MB in Twiniwt.
Rules of Thumb
Finally, achieving the best packing for games requires a holistic approach to development. It reflects on decisions made by artists as well as developers.
Some rules of thumb for anyone who wants to do production assets:
Know your file formats.
PNG, for example, is NOT just “the format that stores an alpha channel and compresses losslessly.” The details matter. The PNG specification defines multiple ways to store both color and transparency information. (Fortunately, PNG is also not your best option for final assets.)
In his seminal CppCon 2014 talk, Mike Acton describes the developer’s job as: “to solve data transformation problems.” As such, people creating production assets should be aware of what they are really feeding into that transformation. This is not the job of a technical artist, this is the responsibility of a digital artist.
Know your tools.
Not everything in file format specifications is well or strictly defined. And implementations are far from perfect. Different tools may vary in the way they interpret files. So know how your tools handle the import and export of your assets.
This doesn’t seem like it needs reminding. But nowadays, we tend to embrace “best practices” that favor flexibility, which sometimes can carry unnecessary calculations into runtime. If the distance from the camera is always the same, maybe the amount of blur is the same. It doesn’t matter if you have that amazing focus blur shader, you can just bake the blur.
My good friend, Marcel Smit, reviewed the post and made some great comments regarding compression and PNG format problems. I believe they should be part of the post. Here we go:
I was thinking for the parallax scrolling you could compress it even further by using only black and white and storing the images with one bit per pixel and RLE-compression. You could blur the images after loading them.
RLE, short for run-length encoding, is a very simple method that has been around for a long time. No need for a lengthy technical description: you store each run of identical values once, together with its length.
I have yet to try this method and see how it fares, but the idea makes so much sense that I can't see any reason it wouldn't work better.
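Marcel's 1-bit-plus-RLE idea can be sketched like this (my own encoding scheme, untested in production):

```python
# Store each row of a 1-bit silhouette as (value, run length) pairs,
# then blur after decoding, at load time.
def rle_encode(bits):
    runs = []
    prev, count = bits[0], 1
    for b in bits[1:]:
        if b == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = b, 1
    runs.append((prev, count))
    return runs

def rle_decode(runs):
    return [value for value, length in runs for _ in range(length)]

row = [0] * 10 + [1] * 6 + [0] * 4         # one row of a 1-bit mask
assert rle_decode(rle_encode(row)) == row  # lossless round trip
```

Long horizontal runs of sky and ground are exactly what silhouette layers are made of, which is why this compresses so well.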
There are other reasons to avoid PNG. Like you said, it leaves out colors for translucent pixels. This is BAD when you're doing bilinear filtering, as the GPU also samples adjacent pixels, but these may be white/black/whatever color your export tool used to replace the colors it optimized away!
I can’t stress enough the importance of tools/libraries like ImageMagick, GraphicsMagick and GEGL, when you need to find what is really going on with your production assets. You can batch revise invisible properties of your assets in a matter of seconds. For example you can backup and batch remove all alpha channels to reveal how BAD it really is.
I had to write fix-up code for this for Riposte. I used the average color of the non-translucent neighboring pixels. Another reason to avoid PNG is that it is horribly slow to decompress. Reading raw data or TGAs, or decompressing your own format, is likely much faster.
The kind that once heard the aphorism “knowledge is power” as a kid, and took it all too seriously. The kind that could probably spell the Latin version of the phrase. The kind that poured most of their stat points into “wisdom”, hoping to be the wizard of the story.
Of course, one rarely questions what “power” actually is. Let’s define it as the ability to influence the state of the environment as well as the behaviour of its agents.
Knowledge definitely was power, once.
The new world weakened it. The characteristics of what we call knowledge are vastly different and more fragmented now. It is still important, maybe even more so, but not nearly as powerful.
I like to retrofit one of the notes from Newton’s alchemy texts to depict the new knowledge.
The vital agent diffused through everything in the earth is one and the same. And it is a mercurial spirit, extremely subtle and supremely volatile, which is dispersed through every place.
The new knowledge will change whenever you are sleeping, whenever you look the other way. It will change whenever you blink.
It will regress, and it will get revised and deprecated. It will be staged, and it will be branched.
The new knowledge is a repository in version control. As such, it needs an active maintainer.
In a world of assets and liabilities, knowledge is only potentially an asset, but always a liability.
The good thing is, even though it doesn’t make you more powerful, it definitely makes you better. It gives you perspective. It gives you the ability to fill the new, interdisciplinary roles that are emerging, as long as you actively maintain at least one of those repositories.
I am a programmer, with knowledge and experience in computer graphics. I studied architecture, and I did organization and event management for some years. I settled on the game industry, not only because I love games, but also because I can apply all of this knowledge in games.
Whenever people ask me why I include my work as an architect as vocational experience in my game developer CV, I remind them of a particular architect:
Christopher Alexander, whose research into patterns of architectural design and urban planning in the 1960s helped shape how we design large-scale software projects today. His work was required reading in CS circles. He heavily influenced research on object-oriented programming, as well as the design of C++. The whole Design Patterns movement was directly based on Alexander's work.
Job titles are products of a well-defined, well-tested distribution of work, not a definitive categorization of knowledge and expertise. Multiple areas of knowledge may be hard to actively maintain, let alone apply. But they are meaningful, as long as you are able to specialize in one. The others, even when deprecated, will keep making you better.