Multi-threading is one of those things which initially has a magical sound to it, and as long as you don’t pay too close attention it keeps its fairy-dust glimmer. You may even end up using it once or twice without ever trying to write a high-performance application. The real point of this article, however, is to discuss some of the finer points of multi-threading, so we’ll be skipping the high-level languages such as Java and C#, as well as fancy high-level, cross-platform libraries such as Qt, and instead dive straight into the messy, low-level stuff. Do mind your step 🙂
The primary language for writing high-performance, multi-threaded applications is C++. Since C++ lacks native multi-threading support (until C++11 becomes official and fully implemented, that is), one has to pick from a variety of threading APIs, the choice depending on the target platform. If you only ever run the application on a single operating system, and never plan to port it, you can just go with whatever native API your OS provides, be it on Windows, Linux or OS X. This is not a bad choice, as any other library you pick will just build upon this native threading API.
If you’re like me, however, and would like to make porting your applications to other platforms as easy as a simple recompile, then you have to pick a library which offers this functionality and portability. There are various ones out there, including Intel Threading Building Blocks (TBB) and Boost Thread, each with its own advantages and disadvantages. Intel TBB is the more high-level of the two, using abstractions to hide many of the gritty details of multi-threading, whereas Boost Thread (BT from here on) is quite low-level, making you do the resource management yourself.
The crucial detail when designing a multi-threaded application is the balance between efficiency of execution and maintainability. While I haven’t used Intel TBB myself yet, it seems like it could make applications a lot easier to build, and as long as they keep working that’s fine; I suspect it’s hell to debug when something goes wrong, though. With a low-level library such as BT there are far fewer layers between you and the system, making maintenance and debugging easier. In theory, of course.
For BT, launching a thread is as easy as:
```cpp
MyClass worker(1);              // our freshly initialized class (a functor with operator())
boost::thread myThread(worker); // run it in a new thread
```
That’s it. Where TBB might be easier is when you have to manage a large number of threads, but that’s something you’ll have to evaluate for yourself.
As I pointed out earlier, in the end it is always the platform’s native threading API which is used for the actual threading. These APIs aren’t too dissimilar: a new thread is created and assigned a task to run. So-called mutexes, spinlocks and other synchronization structures are then used to ensure that a piece of data shared by multiple threads isn’t accessed simultaneously, as this could lead to undesirable behaviour, data corruption and crashes. At this level things are still quite easy to understand.
The part where it gets messy is when you move on to the actual hardware implementation which makes all this possible. Before the arrival of multi-core processors, multi-threading truly was an illusion, as there were never two tasks active simultaneously. Instead the OS’s task scheduler would swap tasks in and out, giving each a time slice to do its thing before its state was saved again and another task’s state restored. With multi-core processors two or more tasks can be active simultaneously, yet if you look at, for example, the statistics provided by your operating system’s task manager, and particularly the number of active threads, you’ll see that it’s far higher than the number of cores in your system. For me it’s above 1,000 active threads as I write this.
Task-switching is thus still a very common practice, and here we run into the first hurdle of reliable multi-threading: the OS’s task scheduler. As described earlier, it’s the piece of code which determines which task gets to run and in which order. Countless approaches to task scheduling exist, each suited to particular scenarios. In an embedded, real-time OS such as QNX the emphasis is on exact time slices and timing, so that any scheduled task runs on time and for exactly as long as it has to. For a desktop OS such as Windows there’s no such need and scheduling is far looser. It’s a pretty chaotic environment anyway, so if a task doesn’t run for exactly 100 ms, few will notice.
So in essence your threads will be competing with all the other threads active at that time. Don’t count on exact timing, and expect some of your threads to be waiting on results from other threads, depending on the design you’re using. On a hardware level, threading is more smoke and mirrors than the clean and pristine world of software would have us believe. The task scheduler can allocate your threads poorly, reducing performance significantly. Moving a thread between cores invalidates all the data it had previously gathered in the old core’s L1 and L2 caches, forcing it to start all over again on the core it’s been assigned to. Similarly, task switching on a single core ‘pollutes’ the caches with data your thread doesn’t need, also reducing performance.
One can pin threads to a single core to prevent such problems, but whether that’s the right choice again depends on the situation. It’s a good idea to try multiple approaches and see what works best. Use accurate timing methods and perform multiple runs (at least 5 or so) to rule out glitches and to ensure you get useful data to base your decision on. Repeat this for every platform and, in the case of Linux and similar OSs which allow you to swap out the task scheduler, for each task scheduler you will be deploying the application on.
Of course, we are talking about high-performance multi-threading here. If you are just running a processing task in a thread so as not to disrupt the UI thread, then by all means use Qt’s threading functionality or similar, even if its abstractions and sometimes poor documentation can make it more of a headache than taking the low-level approach.
Next article should be on the Android game project again. Until then,
My apologies for the slow pacing of this series. Things are pretty hectic for me at the moment, with work and personal circumstances. Fortunately today I had some spare time to put the following together:
The above was created in 3D Studio Max 2010. It consists of four rectangles: floor, large side, small side and window. In addition, two instanced pillars and a tilted pyramid are used. The two small sides/walls are instances of each other, meaning that they mirror each other’s appearance and other qualities except for position and orientation. When you copy a selected item using Ctrl+V, you are offered the choice of making either a copy or an instance. A copy is independent; an instance is as described earlier. Making the right choice here can make life much easier 🙂
To prepare the elements of this scene for use with jPCT-AE we will have to export each of them to a suitable 3D object file format. I picked OBJ (Wavefront .obj) because it’s the most common and lightweight of them all: http://en.wikipedia.org/wiki/Wavefront_.obj_file
The demo provided on the jPCT-AE page uses 3DS format files, which are relatively large and complex to parse. OBJ files, on the other hand, are plain ASCII and can be read and corrected by a human with a text editor, which also makes them far easier to debug. Another thing the demo does is serialize the 3D object files using a serialization function only available in the desktop version of the engine. We’ll look at that later, as it can provide a nice speed boost when loading resources. For now we’ll keep it simple.
In 3D Studio Max you can export to OBJ files using the standard export feature, which I assume you can find. Select the mesh you wish to export, go to export and select ‘Export Selected’. On the next screen pick a name for the mesh and select OBJ as the file type. Next you get the OBJ export dialogue. The default settings are fine, although you want to make sure you’re exporting materials too (the first two checkboxes). Check the 3DS manual if you want to know what each option does. Do be sure to click the Map Export button. On this screen you’ll want to disable ‘use map-path’, enable ‘extended map-params’ and ‘convert bitmaps’, and then pick a suitable map format. This will be used for the textures applied to the mesh. I picked PNG here, since that’s a nicely portable and small format for mobile applications. TGA is more suitable for high-end purposes, but isn’t as widely supported.
Now, just keep exporting each mesh in the scene until you’re done and have a folder filled with OBJ, MTL (material) and PNG files. In the next installment I’ll show you how to take all these loose bits and reassemble them into a coherent whole again, inside the game environment 🙂
When looking at a finished 3D game’s levels it’s often hard to visualize how they were put together, or which tools were used. The easiest way to understand the basics is to imagine building a house from scratch. You start off with the foundation, either the floor or a terrain mesh in the game. Then you add the walls and ceiling, the rest of the bits which make up the structure or organic environment.
I won’t go too far into the basics, so I’ll assume that you know a bit about vertices and 3D meshes already. There are two important things to keep in mind while putting a game’s level together: mesh count and vertex count. When designing a level it’s a good idea to minimize the vertex count per mesh, say a wall or floor segment. The mesh count in a level is also important, as each mesh is rendered separately unless you’re batching a number of identical elements, say a hallway filled with tiles. OpenGL allows you to render all those elements in one go, which is a lot more efficient. Don’t go overboard with this, though, as not everything you batch may be visible, and rendering invisible geometry is wasted effort which lowers the frames per second (FPS) count.
Fortunately the latter detail should be taken care of by the game engine, and we’ve got one. If it’s smart enough it’ll handle such rendering issues for us, leaving just the vertex count issue. The number of vertices we can render in a scene on a given hardware configuration isn’t a static number, sadly. The textures, lighting and particle effects applied to the scene all take processing time away from rendering raw, unshaded vertices. Finding the combination of vertex count, texture detail and effects that works best in a scene will be a matter of trial and error. See where the FPS drops below 25, or whatever feels playable, and tune down the level of detail until things run smoothly.
That about covers it for the basics of designing a game level… the exact looks of it will depend on the game’s setting, and the total size of a level on the maximum vertex count. In upcoming articles I’ll show some detailed examples of how to put a level together using a demo level.
Moving on to the next important step in designing a level, the level editor and mesh and map editing tools. What a level editor is I probably won’t have to explain too much. It’s where you take all the bits and pieces, put a mesh up somewhere, stretch and manipulate it until it’s the right size and shape and apply a texture and other maps to it.
A quick note about maps here, as they’re an essential part of making a level look more than just ‘nice’. They come in a bewildering number of types, including regular texture maps, bump maps, height maps, shadow maps, environment maps and so on. Each applies a different effect to a mesh: to colourize it, give it depth, allow it to be used for terrain, add detailed shadow effects, or give the illusion of an environment reflecting in a surface. Sadly the Android platform is sufficiently limited that we cannot use many of those maps, and especially not dynamic lighting. Look forward to a PC-oriented series of articles on more advanced object mapping 🙂
When it comes to creating meshes and maps, there’s a huge number of tools out there, both free and paid. Personally I use Autodesk 3D Studio Max 2010 and Adobe Photoshop Extended CS5. The former is good for creating meshes, the latter for colourizing and creating other map types. For more advanced mesh editing and mapping work there are applications such as Mudbox (http://en.wikipedia.org/wiki/Autodesk_Mudbox) which can do things like 3D sculpting, which is hard to do in 3D Studio Max or its direct competitor, Autodesk Maya. You’ll soon find that you’ll want to use more than just a few tools, as each is better at certain tasks.
I have heard good things about Blender as of late. Back when I first used it in 2002 or so it was absolutely unusable, with a user interface only its own developer could like, and which was less intuitive than Vi in general usage. Recently the UI has been completely redesigned, however, and it should now be a nice, free (open source) alternative to paid apps such as 3DS Max and Maya, both of which cost about 1,000 Euro apiece. For Photoshop I can’t really think of any good, free alternatives, though. The GIMP has a very steep learning curve while still doing far less than Photoshop can. Maybe find a used copy of Photoshop CS4, which isn’t that much of a step down from CS5. I’ll gladly hear of free alternatives you are using successfully; the best ones I’ll list in a future article 🙂
Next article should be more than just a wall of text. I’m currently quite busy finishing up this Android project for a client, which is taking away most of the time I could be spending on making fun Android games. Life just isn’t fair, is it?
Today I got started with the jPCT-AE game engine for Android (http://www.jpct.net/jpct-ae/). Even with the shortage of documentation, it’s not too hard to figure out how to use it. Download the release, get the single JAR file and put it into a ‘libs’ folder you create in the root folder of the Android project. In Eclipse you then open the project properties, Java Build Path option and add the JAR from there to the project. Congratulations, you just installed jPCT for Android 🙂
The ‘Hello World’ demo project as presented in the jPCT wiki shows quite well how to use the API, though you have to use the documentation provided in the release ZIP file to get a full understanding of its workings. I won’t replicate the entire demo project here, but you can find it in the wiki: http://www.jpct.net/wiki/index.php/Hello_World_for_Android
You first initialize the engine, then set up a GLSurfaceView (http://developer.android.com/reference/android/opengl/GLSurfaceView.html) to act as the rendering surface. The renderer is connected to this surface and everything moves from there. You can use the input events from the touchscreen (touch, move, etc.) to manipulate the game world.
The basics of the renderer function are as follows:
```java
world = new World();
world.setAmbientLight(20, 20, 20);

sun = new Light(world);
sun.setIntensity(250, 250, 250);

// Create a texture out of the icon...:-)
Texture texture = new Texture(BitmapHelper.rescale(
        BitmapHelper.convert(getResources().getDrawable(R.drawable.icon)), 64, 64));
TextureManager.getInstance().addTexture("texture", texture);

cube = Primitives.getCube(10);
cube.calcTextureWrapSpherical();
cube.setTexture("texture");
cube.strip();
cube.build();
world.addObject(cube);

Camera cam = world.getCamera();
cam.moveCamera(Camera.CAMERA_MOVEOUT, 50);
cam.lookAt(cube.getTransformedCenter());

SimpleVector sv = new SimpleVector();
sv.set(cube.getTransformedCenter());
sv.y -= 100;
sv.z -= 100;
sun.setPosition(sv);
```
Very easy to get started with. With the demo running, you’ll see a cube on the screen with the standard Android icon used as texture. Touching the screen allows you to spin and manipulate the cube. Hello indeed 🙂
Next up is creating something more closely resembling a game world, and play with the lighting. Expect screenshots soon 🙂
Before I start off with the whole development series, I’d like to talk about how to get started with Android development and my thoughts on some parts of it. I already gave some of my thoughts on parts of it in the previous post, but I would like to expand on them.
I started with Android development early this year, deciding it would be nice to get in on this fancy smartphone thing. There are four main options if you want to get into smartphone development: iOS from Apple, Android from Google, Windows Phone 7 (WP7) from Microsoft and WebOS from Hewlett-Packard (HP). The development languages for these platforms are, respectively: Objective-C, a custom Java version with most of the standard Java library replaced, C#, and C++.
In terms of development ease, WP7 and iOS rank pretty high. They also feature fast and accurate emulators to test applications without having to resort to testing on a hardware device. Often the emulator is faster than the real hardware. In comparison, Android makes for a frankly rather poor showing.
The Android SDK emulator uses QEMU as its base for ARM emulation, but only runs in interpreted mode, meaning that all of the virtual hardware’s registers and GPU operations are emulated in software. This results in a speed penalty of between 10x and 100x compared to the real hardware. In brief this means that when it’s emulating a 1 GHz single-core ARM chip (Cortex-A8 or so), the emulated speed is comparable to maybe 10-100 MHz. The GPU part is even slower, as doing OpenGL and similar operations in software is just beyond slow.
In short, this means that you won’t be using the emulator much and should basically consider it only a bridging method until you can get your hands on real hardware. I got a Huawei IDEOS X5 (U8800) for this reason. This is a mainstream to high-end phone, with a 0.8-1 GHz Cortex-A8 CPU and one of the better mobile GPUs currently available for smartphones. Uploading an application to it via USB and running applications on it, even the more demanding ones, is quick and painless, and makes using the emulator afterwards an extremely painful experience. Summarized: get a real Android device for testing 🙂
While WP7 and iOS use standard languages with standard APIs, Android is the odd one out. Knowing Java before programming for Android isn’t required, and in some areas it will probably just confuse you. Google thought that things like non-blocking (non-modal) dialogue windows were a good idea, meaning that you can’t ask for user input and simply wait for the dialogue to return. Code will be branching off all the time, ensuring that you keep having to patch up unexpected behaviours. In many ways the API feels immature and incomplete.
I have a lot of C++/Qt experience, and I must say that while Qt is similar to the Android API in some respects, the former is mature, feature-complete and always adding new ways to make things run more efficiently. The latter needs a lot more work before it can be called mature. Considering that Android is still very new this isn’t surprising, especially with Google not investing much time and effort into it in the beginning.
Why I chose Android instead of iOS or WP7 largely comes down to the costs and trouble involved in getting your applications out there. Both iOS and WP7 are very much walled gardens: you have to pay to be recognized as a developer, have your application approved before it can be sold on the platform’s single application market, and so on. Android is attractive because it’s free, open and offers the most freedom, even if the API and tools aren’t as shiny. Android’s marketshare is growing the most rapidly partly for this reason, too, causing a big demand for Android developers.
The best IDE for Android development is Eclipse combined with the ADT plugin. This offers the best level of integration with ADB, the tool used to push applications onto a phone and retrieve data, but also with the debugger and such. There’s the IntelliJ IDEA IDE, but its level of Android integration isn’t nearly as refined.
So there you have it, a big rant from an Android developer why she dislikes the platform she develops for and why it’s still the best platform 🙂
Welcome to the first post on this new blog. Here I’d like to talk a bit about what you can expect to find here the coming time. For details about myself, I’ll refer you to my personal website. All I’ll say regarding myself is that I have been a professional software developer for a number of years now, and have experience with a wide variety of languages and tools. How’s that for qualifications? 🙂
The primary reason why I’m starting this blog is to detail the development of a new Android game I recently came up with. I have been professionally programming for the Android platform since early this year and have developed a number of applications, including for other companies. While I do not particularly enjoy programming in Java (I’m more of a procedural-style C/C++ programmer by preference), smartphones nevertheless are an interesting platform to develop games for, if only because of the radically different interface and mobility compared to a PC and laptop.
While I won’t be spilling the beans on the game’s idea yet, I will be blogging about the low-level issues I encounter and the libraries, game engine, etc. I will be using. I’ll also be blogging about Android and other programming issues not directly related to the game. You may encounter posts about C++, ASM, VHDL and a variety of other languages I use too, so be prepared 🙂
Back to the game. The main issue with developing games for Android, or basically any kind of multimedia-heavy application, is the lack of support for advanced audio features in particular. While OpenGL ES (1.0, 1.1, 2.0) is supported, with ES 2.0 common in recent phones, there’s no audio support beyond basic stereo playback. iOS on the iPhone/iPad has had OpenAL support since the beginning, and its video codec support is also very strong. Assuming you stick to the Apple-endorsed formats, naturally.
Only with Android 2.3 did we get OpenSL ES support, a competitor of sorts to OpenAL, developed by Khronos. Both are 3D positional audio libraries and APIs, so beyond the annoyance of porting existing audio code between the two APIs, it’s good to have native support. Unfortunately Android 2.3 isn’t that widely used yet, with the Android Market statistics showing a usage share of under 25%: http://developer.android.com/resources/dashboard/platform-versions.html
Android 2.1 and 2.2 combined still account for a marketshare of about 71%, making them the main targets for development. Other Android versions – including 3.x – can basically be ignored at this point. The target I use for my Android development is 2.1 and up, although I suspect that most if not all of the applications I’ve developed also run on 1.6. Anyway, our target for new development clearly should be 2.1, which will make it run on all Android devices except the 3.3% which run 1.5 or 1.6, and those no doubt run on antiquated hardware anyway.
Then the OpenGL version. Again, we look at the Android Market: http://developer.android.com/resources/dashboard/opengl.html
Clearly OpenGL ES 2.0 is the way to go, with over 90% marketshare. If the game engine we end up using supports 1.1 as well, then that’s just a bonus. Next up are screen resolutions. Here we see that normal screens at hdpi densities are most common. This means WVGA800 (480×800), WVGA854 (480×854) and 600×1024 with a DPI of 240, according to the Android documentation. My own Android device (Huawei U8800 IDEOS X5) is an 800×480 device with ~240 DPI, running Android 2.2.3.
Finally, back to audio. If on sub-2.3 versions of Android we do not have access to a proper audio API for games, what should we do? The best answer I have found is to use a ported version of OpenAL Soft, the non-hardware-accelerated version of OpenAL. Instructions on how to use it can be found here: http://pielot.org/2010/12/14/openal-on-android/
There are some issues with OpenAL on Android, particularly because the underlying audio hardware often isn’t designed for low-latency operation. Do be aware of this and stay mindful of the limitations of the platform. For this reason I only intend to use basic positional sound in the game.
At long last, the game engine. After looking around for a few days, I think I have found the best engine for Android in terms of feature set and documentation: http://www.jpct.net/jpct-ae/. This is the Android port of the Java-based jPCT game engine. It’s fairly basic and doesn’t take too much time to understand. From what I gather it should have complete OpenGL ES 2.0 support soon: http://www.jpct.net/forum2/index.php/topic,2067.0.html. It’s also a 3D engine, unlike the many 2D game engines also available for Android 🙂 My game is going to be 3D, of course.
I guess that this about wraps up the low-level preparations for an advanced 3D Android game. I hope to update soon with more progress 🙂