This implementation uses a lot of RAM - around 470 Megs. There's a reason for this.
The important constraints of Flash are very small files (in the 100k to 8 Megs range), until recently no access to 3D cards, until recently no multicore access, code speed limitations with ActionScript 3 (and much, much, much worse still with ActionScript 2), and a software renderer that isn't incredibly quick for realtime performance.
The strengths of Flash, for my purposes, are an API that works as a pretty great client-side massively distributed Photoshop (taking into account its drawing API, particularly blendmode support, 8-bit alpha, its filter support, its hierarchical 2D parent/child object support, and its caching/blitting support via BitmapData / copyPixels), good support for crunching files down and bundling them for easy deployment in small tidy files, and, crucially, access to huge amounts of system memory on the hosting PC.
So... Punitively small file sizes, slowish code execution, relatively slow real-time renderer, BUT really full-featured 2D renderer, strong support for image caching / blitting, and massive, massive amounts of memory.
Thus, Flash (prior to the recent integration of 3D card support) is strongest with art styles and approaches that combine that powerful-but-slow 2D render with massive amounts of caching.
This is one such approach.
I've taken the simplest approach possible here - a handful of giant transparent bitmap layers drawn into at level load, then parallaxed around and scaled by perspective in real-time. It's really, really, really, really simple.
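To make the per-frame math and the memory bill concrete, here's a minimal sketch in Python (the demo itself is ActionScript; these function names, layer counts, and layer dimensions are my own illustrative assumptions, not numbers from the demo):

```python
# Hypothetical sketch of the per-frame layer math: each pre-rendered layer
# is offset (parallaxed) and scaled by its depth before being blitted.

def layer_transform(cam_x, cam_y, depth):
    """Depth 1.0 is the gameplay plane; larger values are farther away.
    Returns the (offset_x, offset_y, scale) used when drawing a layer."""
    scale = 1.0 / depth                      # farther layers draw smaller...
    return -cam_x * scale, -cam_y * scale, scale  # ...and scroll slower

# Why the RAM cost is large: each layer is a level-sized 32-bit bitmap.
def layer_bytes(w, h):
    return w * h * 4                         # 4 bytes per pixel (8-bit ARGB)

# For example, four hypothetical 6400 x 4800 layers land right around
# the figure quoted above:
total = 4 * layer_bytes(6400, 4800)          # 491,520,000 bytes, ~469 MB
```

The point of the arithmetic is just that a handful of full-level-sized ARGB bitmaps accounts for hundreds of megabytes on their own, before any other allocations.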
Lineage, and What this Approach is Good For
There was a window in the late 80's through the mid 90's where developers of 2D action games explored bitmap-based parallax scrolling techniques to add visual depth to their games. It was a hallmark of the 16-bit era, especially of platformers and shmups. Good examples include Shadow of the Beast on the Amiga, Thunder Force IV on the Genesis, Ranger-X on the Genesis, the floors in the arcade Street Fighter 2, topdown sections of the arcade Thunderblade, and the Genesis M.U.S.H.A. These games were exploring the intersection of art and parallax in a very technical way, using many layers or even line scrolling to simulate rich 3D depth. At their best, they didn't just look like a bunch of flat matte paintings scrolling around in the background.
In the mid-90's, developers largely abandoned this approach with the advent of 3D hardware accelerators and the rise of the PlayStation and Nintendo 64. With this change, screen-facing bitmaps (possibly with transparent edges) stopped being the drawing primitive of choice, replaced by 3D textured triangles with a Z-Buffer rasterized to the 2D screen. At the same time, game developers also moved away, for a long while, from games with strictly 2D controls and cameras that didn't rotate or move in three-dimensional space. These bitmap parallax techniques, whatever their merits and charms, basically demand cameras that can translate in a fixed 2D plane, possibly rotating around the axis perpendicular to that plane or zooming towards and away from it, but not rotating or translating in general 3D space. So they were basically dropped too.
More recently, with the rise of indie gaming and some smaller games, we've watched the return of interest in games with these older 2D cameras. People are handling the transition in a few ways. Plenty of very retro games, like Super Meat Boy, just avoid parallax as part of their aesthetic. Others, like the recent 2D Super Mario Brothers games, Odin Sphere, or most vector Flash games that have parallax, feature a handful of extremely differentiated, flat / bitmap (or 2D vector) planes in their backgrounds, obviously separated but often with lovely, sharp, high resolution 2D art. Finally, some games, like Bionic Commando Rearmed or Shadow Complex, repurpose 3D game engines and tool chains and apply a fixed, limited 2D camera, and so their worlds naturally exhibit subtle, rich, deep parallax effects as the 2D camera moves. Like nearly all 3D games, their worlds are ultimately built of tiny textured (and shaded) 3D triangles.
Tessellating worlds into small 3D triangles and using a Z-Buffer to handle sorting has some overwhelming advantages, particularly with 3D cameras, and it's become our default method of rendering for some obvious reasons. Nevertheless, it does come with some really significant drawbacks. One is that silhouettes are boxy unless an enormous number of triangles is used, because the edges of triangles always show up as lines. Unfortunately, human eyes are highly sensitive to silhouettes and high frequency details, so this is actually quite a drawback. Further, because of the Z-Buffer, those edges have to be extremely sharp and precise - the edges of objects can't really be partially transparent or smudged, because transparency requires back-to-front sorting for drawing, and textured triangles, by default, have extremely sharp and precise edges anyway. Another drawback is that anything with extremely complicated edges - grass, hair, fur, trees and leaves, fuzzy cloth - ends up being extremely difficult to draw, often being at once really expensive to render in frame rate terms while also not being entirely convincing visually. Now, obviously, tons of work (and increased compute power) go into addressing these tricky-to-render subjects in big budget AAA games, and lots of games handle the topics with verve and gusto... and, if you need a 3D camera (like you would for a first person game), there's really no easy alternative anyway. And so ultimately you just throw bigger and bigger graphics cards and computational power at the problem, and you special case enough stuff, and you get in the ballpark of what you're looking for.
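The transparency point above can be made concrete. A Z-Buffer resolves each pixel with a "keep the nearest fragment" test, which works when edges are fully opaque - but alpha "over" compositing is order-dependent, which is why partially transparent edges demand back-to-front sorting instead. A tiny sketch (illustrative only, one color channel):

```python
def over(src, alpha, dst):
    """Standard 'over' blend of one color channel: src at the given
    alpha composited on top of dst."""
    return src * alpha + dst * (1.0 - alpha)

bg = 0.0
# Two overlapping fragments, both at 50% alpha, drawn in the correct
# back-to-front order (far fragment 1.0 first, near fragment 0.5 second):
back_to_front = over(0.5, 0.5, over(1.0, 0.5, bg))   # 0.5
# The same two fragments drawn in the opposite order:
wrong_order   = over(1.0, 0.5, over(0.5, 0.5, bg))   # 0.625
# Different results - so a per-pixel "keep the nearest" depth test
# can't substitute for sorting once alpha is involved.
```

This is the underlying reason translucent or smudged edges sit so awkwardly in a Z-Buffered pipeline.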
From one perspective, in my demos here, I'm returning to the thread that was dropped at the transition to 3D cards. Shadow of the Beast and Thunder Force IV and M.U.S.H.A. took a very bitmap oriented approach to parallax / 3D in game worlds with 2D cameras, but the limitations of their hardware were significant. Specifically, their machines had tiny amounts of RAM, they only had 1-bit transparency (rather than the 8-bit alpha we assume now), they were generally working with specialized tile-based rendering hardware (with some added cleverness, of course), they targeted machines with pretty low resolutions, and they didn't have access to drawing with arbitrary scaling and rotation with smoothing support. With their small amount of RAM, the possibility of making aggressive use of bitmap caching, and offscreen intermediate buffers, was quite limited too. Their rendering constraints meant that what we would now think of as reasonable amounts of overdraw weren't really an option to be much exploited and explored, the general idea of different kinds of drawing blendmodes was largely foreign, and the idea of general purpose filtering, like Gaussian blurs, was not a reasonable consideration either.
Flash, of course, since Flash 8, has swept all of those constraints off the table, and I'm taking advantage of every single one of those new freedoms here. In general this approach has enough limitations that it would always be a curious alternative to the dominant approach we take now with 3D cards, of course, but for contexts that share the particular constraints that Flash has had until recently (small file sizes, slow compute power, huge amounts of RAM, great 2D drawing API), if you're willing to accept a 2D camera, I think this is a pretty fun and interesting approach.
If you wanted to maintain the high-memory / blitting-oriented / low-compute / no-3D-card set of trade-offs here but with substantial memory improvements (I bet it would drop the memory usage down below 100 or 150 Megs here), one option would be to make the caching and drawing more selective, relying on an octree or quadtree or some other well chosen data structure to account for all the empty space. I probably wouldn't have taken that approach in AS2, but it would work fine in AS3. I rather wish I had had a chance to go that route. Those trade-offs would have maintained the appealing "detail doesn't matter for real time performance" aspect (with all expensive world drawing done at level load). One nice feature of the current approach is that drawing and erasing into the world in real time is trivial and easy, which can be a really fun and interesting feature for lots of game designs. That would be more complicated with the approach I'm describing here. Another lovely feature of the current approach is that I can apply (and thus cache) filters simply to the bitmap layers all at once at the end of level loading, after I've drawn the world into them, producing some great visual style effects. This gets much more complicated with sparse / segmented bitmaps representing the world.
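Here's a rough sketch of how that selective caching might look (my own illustration, not code from the demo): cached bitmap tiles are stored in a quadtree so that empty regions of a level cost no memory at all, and only the tiles overlapping the viewport get blitted each frame. All names and sizes are hypothetical.

```python
# Hypothetical quadtree of cached bitmap tiles; empty space stores nothing.

class QuadNode:
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size
        self.tile = None        # cached bitmap, attached at a populated leaf
        self.children = None    # four sub-quadrants, allocated only on demand

    def insert(self, px, py, tile, min_size=256):
        """Descend toward (px, py), subdividing down to min_size, then
        attach the cached tile at the leaf that contains the point."""
        if self.size <= min_size:
            self.tile = tile
            return
        half = self.size // 2
        if self.children is None:
            self.children = [
                QuadNode(self.x,        self.y,        half),
                QuadNode(self.x + half, self.y,        half),
                QuadNode(self.x,        self.y + half, half),
                QuadNode(self.x + half, self.y + half, half),
            ]
        idx = (1 if px >= self.x + half else 0) + \
              (2 if py >= self.y + half else 0)
        self.children[idx].insert(px, py, tile, min_size)

    def visible_tiles(self, vx, vy, vw, vh):
        """Yield (x, y, tile) for every cached tile overlapping the
        viewport rectangle - the only tiles that need blitting."""
        if vx >= self.x + self.size or vx + vw <= self.x or \
           vy >= self.y + self.size or vy + vh <= self.y:
            return              # no overlap; skip this whole subtree
        if self.tile is not None:
            yield (self.x, self.y, self.tile)
        if self.children:
            for child in self.children:
                yield from child.visible_tiles(vx, vy, vw, vh)
```

The real-time drawing/erasing and whole-layer filter caching the text mentions both get harder with this structure, since edits and filters have to be routed to (and stitched across) the affected tiles.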
Another approach, springboarding off my work here, would be to scroll the internal contents of the bitmap layers in real time, rather than just moving the viewport over untouched bitmap layers. Each bitmap layer would be made somewhat larger than the screen area, and the engine would draw into the edges of the layers as the camera pans. So, for a 640 x 480 game, each layer might be 840 x 680, with the offscreen edges being drawn into as the camera pans. This would bring memory consumption down significantly, maybe enough to make this approach work for a higher resolution game. As it stands, the current technique probably uses too much RAM for a game running at 1024x768 or higher. Doing this would also make much larger integrated levels possible; the current approach uses RAM in direct proportion to level size, which makes large levels memory hungry. On the other hand, this change would push a lot of level-drawing work back into real time rather than level load, where Flash is less strong. In particular, level detail returns to being a real-time performance concern rather than a level-load-time concern. It would also make real-time drawing and erasing of the world trickier or, more likely, off the table, and applying filters to the drawn world would be too taxing as well.
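The bookkeeping for that variant can be sketched like this (again an illustration under the numbers from the text, not the demo's code): the buffer margin and the strip computation are assumptions of mine.

```python
# Hypothetical scrolling-buffer layer: the buffer is slightly larger than
# the screen, and panning only requires redrawing the newly exposed strips.

SCREEN_W, SCREEN_H = 640, 480
MARGIN = 100                                 # extra pixels on each edge
BUF_W = SCREEN_W + 2 * MARGIN                # 840
BUF_H = SCREEN_H + 2 * MARGIN                # 680

# Per-layer cost is now fixed regardless of level size:
PER_LAYER_BYTES = BUF_W * BUF_H * 4          # 2,284,800 bytes, ~2.2 MB

def exposed_strips(old_x, old_y, new_x, new_y):
    """Return world-space rects (x, y, w, h) that must be freshly drawn
    after the buffer origin moves from (old_x, old_y) to (new_x, new_y).
    Everything else in the buffer is reused from the previous frame.
    (On diagonal moves the corner lands in both strips; the small
    overdraw is harmless for a sketch like this.)"""
    dx, dy = new_x - old_x, new_y - old_y
    strips = []
    if dx > 0:    # camera moved right: expose a strip on the right edge
        strips.append((old_x + BUF_W, new_y, dx, BUF_H))
    elif dx < 0:  # moved left
        strips.append((new_x, new_y, -dx, BUF_H))
    if dy > 0:    # moved down
        strips.append((new_x, old_y + BUF_H, BUF_W, dy))
    elif dy < 0:  # moved up
        strips.append((new_x, new_y, BUF_W, -dy))
    return strips
```

Comparing ~2.2 MB per layer against level-sized layers shows where the memory win comes from - and also why level drawing moves back into real time, since those edge strips must be rendered as the camera moves.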
One Last Observation on Rendering Approaches
Coming from the land of hyper-constrained resources that was game development in the 80's and 90's, throwing around half a gig of RAM to get the results shown here can seem... well, borderline immoral.
But the truth is, if a player is playing a high quality Flash game, and it's occupying their attention entirely, right now there's actually a reasonable chance they have that amount of RAM sitting around going unused anyway. It goes against old instincts, but it is true.
And that leads me to a broader point. The history of game technology has been peculiar in that almost all constraints have improved all at the same time. The Atari VCS had no RAM, a terrible processor, no hard drive, tiny ROMs, and no modem / ethernet connection. It was bad at everything all at once. A PC in the early 90's might've had an 80 Meg hard drive, 4 Megs of RAM, a 33 Mhz processor, no meaningful graphics hardware accelerator, and a slow modem. All of those traits have been boosted hugely for a modern machine.
One consequence of this particular evolution of technology is that we haven't (outside of places like the demoscene) seen aggressive work done on exploring techniques for exploiting strongly asymmetrical constraints in games. We never really had an era where people were making games that assumed the computational power of a modern GPU but only 64k of RAM. We never really had an era where game makers were working with the computational power and graphics constraints of a Commodore 64, but had gigabit ethernet connections streaming massive amounts of data in real time.
Unusually, widely deployed browser-based Flash games really did represent such an era in many ways, as I've detailed above. It's possible, as mobile devices acquire more and more RAM (as of 2013, plenty ship with 2 Gigs of RAM) but battery life remains a crucial constraint, that rendering techniques and art styles that heavily prioritize caching and blitting over more complex per-frame computations might have some very important use cases. A similar case could be made for laptop batteries, too. But I haven't actually done any real tests on this topic.