My new game development book got published

October 10, 2015

Some people may have noticed a drop in published content on this blog for a while. Part of it was due to working on a new book for Packt Publishing, titled ‘Mastering AndEngine Game Development’, which was finalised and published last month. For those interested, it can be purchased both at the Packt store [1] and at Amazon [2].

This book is an in-depth look at how to go from ‘making a basic mobile game’ using a game engine such as AndEngine [3] to making a truly advanced (mobile) game: using 3D assets in a 2D game with OpenGL ES, dynamic and static lighting, frame-based and skeletal animation, anti-aliasing, GLSL shaders, 3D sound and advanced sound effects using OpenAL & OpenSL, and much more. While it’s aimed at extending AndEngine-based games, it’s written in a generic enough manner that it should be useful for those using other game engines, on Android or other platforms.

This is my first published book, but it probably won’t be my last. In the meantime I will try to step up the publication of content on this blog again, with both programming and electronics-related postings. Please stay tuned :)



Creating a Websocket server with Websocket++

September 16, 2015

Recently I had to add a Websocket server to a C++ project. Some research showed that the options here aren’t too many: there are a few C-based options, and one can of course pick the Websocket module from the POCO libraries [1] if one desires a C++ approach. Since this particular project is written in C++ I much preferred a purely C++ solution, preferably stand-alone. Ultimately I picked the (creatively named) Websocket++ library [2], also referred to as Websocketpp. The main arguments for it were, as mentioned, that it is an object-oriented C++ solution without significant dependencies, and that it is easy to integrate thanks to being a header-only library.

Websocket++ is a fairly modular library, making heavy use of templating to assemble various configurations, endpoints and the like into one coherent whole. As the basis for the transport module one can, for example, pick between iostreams (very slow) and ASIO, and for the latter one can choose between Boost ASIO and stand-alone ASIO. There is also the option of avoiding C++11 features entirely and using the Boost alternatives instead. Since this project involved a number of compile targets, not all of which featured a C++11-capable compiler, the final configuration involved a Boost dependency for its ASIO and system libraries, as well as various other header-only dependencies.

After starting the actual integration of the library into my project, I did however find out that the quality of the documentation is very… sub-optimal. The documentation is split between the GitHub site and the author’s own site, with most of this documentation being completely and utterly outdated. Only after significant amounts of trial and error did I manage to get a fully working implementation. To save others the trouble, I would like to hereby present a (simplified and altered) version of my implementation. I hope it will be useful.

Let’s move on to the header file of our implementation:

#include "websocketpp/server.hpp"
#include "websocketpp/config/asio_no_tls.hpp"

With these two includes we pick the Websocket++ server role and make the ASIO configuration without TLS available, meaning no encrypted connections are supported.

class WebsocketServer {
public:
	static bool init();
	static void run();
	static void stop();

	static bool sendClose(string id);
	static bool sendData(string id, string data);
	
private:
	static bool getWebsocket(const string &id, websocketpp::connection_hdl &hdl);
	
	static websocketpp::server<websocketpp::config::asio> server;
	static pthread_rwlock_t websocketsLock;
	static map<string, websocketpp::connection_hdl> websockets;
	static LogStream ls;
	static ostream os;
	
	// callbacks
	static bool on_validate(websocketpp::connection_hdl hdl);
	static void on_fail(websocketpp::connection_hdl hdl);
	static void on_close(websocketpp::connection_hdl hdl);
};

Our class definition implements a static class. This will allow us to use the Websocket functionality from multiple classes. Websocket++ itself is thread-safe, so all we have to worry about is concurrent access to our own data structures and variables.

Moving on to the implementation, we first see the usual namespace merging and static initialisations:

// namespace merging
using websocketpp::connection_hdl;

// static initialisations
websocketpp::server<websocketpp::config::asio> WebsocketServer::server;
map<string, connection_hdl> WebsocketServer::websockets;
pthread_rwlock_t WebsocketServer::websocketsLock = PTHREAD_RWLOCK_INITIALIZER;
LogStream WebsocketServer::ls;
ostream WebsocketServer::os(&ls);

Next is initialising the library and the server instance:

bool WebsocketServer::init() {
	// Initialising WebsocketServer: set up the ASIO transport.
	server.init_asio();

When using the ASIO transport option, we call its init method here.

	// Set custom logger (ostream-based) for both the access and the error log.
	server.get_alog().set_ostream(&os);
	server.get_elog().set_ostream(&os);

We may want to redirect the logging output to our own logging method. Websocket++’s basic logger allows us to set an ostream alternative for the standard std::cout and std::cerr. We will look at this in more detail later on.

	// Register the message handlers.
	server.set_validate_handler(&WebsocketServer::on_validate);
	server.set_fail_handler(&WebsocketServer::on_fail);
	server.set_close_handler(&WebsocketServer::on_close);

Next we set the message handlers. These are all callback methods we will define in a moment.

	// Listen on port.
	int port = 8082;
	try {
		server.listen(port);
	} catch(websocketpp::exception const &e) {
		// Websocket exception on listen. Get the reason via e.what().
		return false;
	}

With all the configuration done, we can start listening using the transport framework; this is done with the listen() call on the server object. This method is not exception-free, so we surround it with a try/catch block.

	// Start accepting connections.
	websocketpp::lib::error_code ec;
	server.start_accept(ec);
	if (ec) {
		// Can log an error message with the contents of ec.message() here.
		return false;
	}
	
	return true;
}

Finally we start accepting connections. We just need to start the server proper now, which is done in the following function:

void WebsocketServer::run() {
	try {
		server.run();
	} catch(websocketpp::exception const &e) {
		// Websocket exception. Get the message via e.what().
	}
}

Again, this is another method which isn’t exception-free, so we surround it with a try/catch block. The other thing to note here is that run() blocks: when we shut down the server at some point, we have to wait for this blocking call to return before we, for example, terminate the thread it runs on.
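
As a rough sketch of how this can be wired up (the thread function here is hypothetical and not part of the original code), one could run the blocking run() call on a dedicated pthread, matching the pthread usage elsewhere in this class, and join that thread during shutdown:

// Hypothetical thread entry point: runs the blocking Websocket event loop.
void* websocketRunner(void* /*arg*/) {
	WebsocketServer::run();
	return 0;
}

// During application start-up:
pthread_t wsThread;
pthread_create(&wsThread, 0, websocketRunner, 0);

// During application shutdown:
WebsocketServer::stop();   // Makes the blocking run() call return...
pthread_join(wsThread, 0); // ...after which we can safely join the thread.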

void WebsocketServer::stop() {
	// Stopping the Websocket listener and closing outstanding connections.
	websocketpp::lib::error_code ec;
	server.stop_listening(ec);
	if (ec) {
		// Failed to stop listening. Log the reason using ec.message().
	}
	
	// Close all existing websocket connections.
	string data = "Terminating connection...";
	map<string, connection_hdl>::iterator it;
	for (it = websockets.begin(); it != websockets.end(); ++it) {
		websocketpp::lib::error_code ec;
		server.close(it->second, websocketpp::close::status::normal, data, ec); // send close frame.
		if (ec) { // we got an error
			// Error closing websocket. Log the reason using ec.message().
		}
	}
	
	// Stop the endpoint.
	server.stop();
}

Shutting down the Websocket server is fairly straightforward: first we stop listening, which means we will no longer accept new connections. Next we go through all of the websocket connections we still have and close every single one of them. Finally we call stop() on the server object. This isn’t strictly necessary, but it ensures that the transport backend is completely shut down and that any remaining connections are forcefully terminated.

Let’s move on to actually accepting new connections. For this we can use a number of handlers [3], including the open and validate handlers. I picked the validate handler, since it allows one to filter incoming connections and reject any which do not authenticate properly or the like:

bool WebsocketServer::on_validate(connection_hdl hdl) {
	websocketpp::server<websocketpp::config::asio>::connection_ptr con = server.get_con_from_hdl(hdl);
	websocketpp::uri_ptr uri = con->get_uri();
	string query = uri->get_query(); // returns empty string if no query string set.
	string id;
	if (!query.empty()) {
		// Split the query parameter string here, if desired.
		// We assume we extracted a string called 'id' here.
	}
	else {
		// Reject the connection if no query parameters were provided, for example.
		return false;
	}
	
	if (pthread_rwlock_wrlock(&websocketsLock) != 0) {
		// Failed to write-lock websocketsLock.
	}
	websockets.insert(std::pair<string, connection_hdl>(id, hdl));
	if (pthread_rwlock_unlock(&websocketsLock) != 0) {
		// Failed to unlock websocketsLock.
	}
	
	return true;
}

This code shows how to obtain the connection behind a connection handle from Websocket++ and to extract the URI including its query parameter string from it.

Here we assume that the connecting client has to provide a string-based ID, though one can also use another identifier, depending on the implementation. We use pthread-based locking around the websockets map to ensure no concurrent access takes place on this data structure, and insert the new websocket handle with its ID as the key.
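
As an illustration (this helper is hypothetical and not part of the original implementation; it also skips URL-decoding), the query string could be split along these lines to recover the ‘id’ parameter:

// Hypothetical helper: extract the value of 'key' from a query string of the
// form "id=foo&bar=baz". Returns an empty string if the key is not found.
string getQueryParameter(const string &query, const string &key) {
	size_t pos = 0;
	while (pos < query.size()) {
		size_t end = query.find('&', pos);
		if (end == string::npos) { end = query.size(); }
		size_t eq = query.find('=', pos);
		if (eq != string::npos && eq < end && query.substr(pos, eq - pos) == key) {
			return query.substr(eq + 1, end - eq - 1);
		}
		
		pos = end + 1;
	}
	
	return string();
}

// In on_validate(), after obtaining the query string:
// string id = getQueryParameter(query, "id");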

We may also wish to implement the fail() and close() handlers:

void WebsocketServer::on_fail(connection_hdl hdl) {
	websocketpp::server<websocketpp::config::asio>::connection_ptr con = server.get_con_from_hdl(hdl);
	websocketpp::lib::error_code ec = con->get_ec();
	// Websocket connection attempt by client failed. Log the reason using ec.message().
}

void WebsocketServer::on_close(connection_hdl hdl) {
	// Websocket connection closed.
}

For the fail handler, we can obtain the connection as before, and extract the error code object to learn the reason behind the failure.

The close handler should generally be fairly boring, but it can be informative to have the confirmation in a log or such of a successfully closed connection.

Moving on, we just have to look at how to send data to such a socket.

bool WebsocketServer::sendData(string id, string data) {
	connection_hdl hdl;
	if (!getWebsocket(id, hdl)) {
		// Sending to non-existing websocket failed.
		return false;
	}
	
	websocketpp::lib::error_code ec;
	server.send(hdl, data, websocketpp::frame::opcode::text, ec); // send text message.
	if (ec) { // we got an error
		// Error sending on websocket. Log the reason using ec.message().
		return false;
	}
	
	return true;
}

This function obtains the appropriate connection handle based on the ID, then proceeds to write the provided data to this connection. The getWebsocket() method is a trivial STL map-based find; a minimal sketch of it follows below. Do not forget to lock the map while performing the find on it.
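
For completeness, here is a minimal sketch of what getWebsocket() could look like, using a read lock on the map in line with the pthread conventions used above:

bool WebsocketServer::getWebsocket(const string &id, websocketpp::connection_hdl &hdl) {
	if (pthread_rwlock_rdlock(&websocketsLock) != 0) {
		// Failed to read-lock websocketsLock.
		return false;
	}
	
	bool found = false;
	map<string, connection_hdl>::iterator it = websockets.find(id);
	if (it != websockets.end()) {
		hdl = it->second;
		found = true;
	}
	
	pthread_rwlock_unlock(&websocketsLock);
	return found;
}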

Lastly, how to close a socket:

bool WebsocketServer::sendClose(string id) {
	connection_hdl hdl;
	if (!getWebsocket(id, hdl)) {
		// Closing non-existing websocket failed.
		return false;
	}
	
	string data = "Terminating connection...";
	websocketpp::lib::error_code ec;
	server.close(hdl, websocketpp::close::status::normal, data, ec); // send close frame.
	if (ec) { // we got an error
		// Error closing websocket. Log the reason using ec.message().
		return false;
	}
	
	// Remove the websocket from the map, locking it as in on_validate().
	pthread_rwlock_wrlock(&websocketsLock);
	websockets.erase(id);
	pthread_rwlock_unlock(&websocketsLock);
	
	return true;
}

Here we again obtain the proper connection handle, only this time we use the ‘close’ method instead of ‘send’. We can send a close reason using a string, or just send an empty string.

Finally the ID is erased from the websockets map and the now invalid connection handle with it.

With this we have everything we need for the Websocket server, except for one thing: the redirecting of the logging output from Websocket++. We saw earlier that we use the set_ostream() method on the logging interfaces. In the class declaration we saw this mysterious ‘LogStream’ type and an ostream, and again in the static initialisations.

What happens here is that this LogStream class is a custom implementation of std::streambuf, assigned to an std::ostream object which then replaces the standard output streams for Websocket++’s logging. For the actual streambuf implementation, one would use something like this:

class LogStream : public streambuf {
	string buffer;
	
protected:
	int overflow(int ch) override {
		buffer.push_back((char) ch);
		if (ch == '\n') {
			// End of line: write the buffer to the logging output and clear it.
			buffer.clear();
		}
		
		return ch; // Return traits::eof() to signal failure.
	}
};

We just override the virtual overflow() method of the streambuf class. Since we do not set up an output buffer, the streambuf ‘overflows’ on every single character written to it, and thus our overflow() method is called for each character.

Using a string as buffer, we capture each received character and check whether it is a newline character or not. If it is we have a complete line which we can then write to whatever logging functionality we use in our project. After this we empty the buffer string and continue with the new line.

In conclusion, I must say that despite the effort it cost me to get a working integration of Websocket++ into my project, I do think it was worth it. Technically it is a well-designed library with a lot of cool features, and thanks to its template-based nature it is easy to extend and configure for different purposes. Its main weakness is simply the outdated, incomplete and occasionally wrong documentation and examples. Hopefully this article will fix at least part of that problem :)



2014 in review

January 2, 2015

The stats helper monkeys prepared a 2014 annual report for this blog.

Here's an excerpt:

The Louvre Museum has 8.5 million visitors per year. This blog was viewed about 70,000 times in 2014. If it were an exhibit at the Louvre Museum, it would take about 3 days for that many people to see it.

Click here to see the complete report.

Categories: Uncategorized

Power Supply Design Part 1: Unregulated Linear Supplies

December 28, 2014

I recently stumbled over a particularly interesting specimen in the family of cheap unregulated power supplies, also lovingly referred to as ‘wallwarts’. Here is the unit in all its prestigious glory:


The label seems to claim it’s been certified, but lists no manufacturer or other useful info beyond the useless model number. Inside we find the following:



What we have here is pretty much the most basic unregulated power supply one can construct, though the bleeder resistor was technically not required. Such luxury. In diagram form we get the following circuit:

We see the transformer, four diodes (1N4001 or better) forming a bridge rectifier (two extra diodes are cheaper than a center-tapped transformer), the smoothing cap (1,000 uF, 16V) and bleeder resistor (100 Ohm, 1/2W?). 230VAC goes straight into the transformer and is stepped down to the desired voltage.
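
As a quick sanity check on that smoothing cap (with an assumed load, since the label tells us nothing useful): for a full-wave bridge rectifier on 50 Hz mains, the peak-to-peak ripple voltage is roughly V_ripple ≈ I_load / (2 × f_mains × C). Assuming, say, a 100 mA load on the 1,000 uF capacitor, that gives 0.1 / (2 × 50 × 0.001) ≈ 1 V of ripple, which is acceptable for undemanding loads but a lot for anything expecting a clean supply.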

Now, let’s talk safety. While this circuit will work fine when nothing goes wrong, it is a good idea to consider the two most likely failure scenarios a circuit like this may encounter in the real world. The first is a surge, say from a nearby lightning strike, or an internal short-circuit. The second is when the connected device short-circuits, or its output connector or wires short out. The first scenario results in a massive surge into the adapter; the second will pull more and more power through the circuit until something fails.

With this circuit, a surge or internal short will simply be passed on through the device, into the output and into the connected device. This forms a major electrocution and fire risk. Beyond the circuit failing and cutting off power that way, there are no safety features for this scenario. The same is true for the excessive power draw scenario: here it will keep drawing power until something in the circuit likely blows up, catches fire, or both.

While a transformer in theory electrically isolates a circuit, it has a so-called breakdown voltage at which current will pass straight from the primary into the secondary winding(s), causing a short. During a surge scenario this is likely to happen, depending on the quality of the insulating tape between the windings. One should always consider the scenario where a short forms inside a transformer or related components.

So how to protect against this scenario? There are multiple ways to go about it, but the easiest and cheapest one has to be the humble fuse:

When the current becomes too much or the voltage too high, the fuse will melt or trip, depending on the type of fuse used. One can use resettable (PTC) fuses if one wants recovery to be easy: once cooled down they automatically reset. Regular glass fuses are even cheaper, though probably not as desirable in a closed, maintenance-free unit like a wallwart. There are more options than fuses, of course. One can also look at MOVs, crowbar (zener plus SCR) and clamp (zener plus transistor) overvoltage protection.

At any rate the message should be clear: unregulated linear power supplies are easy and cheap, but one should not skimp on the safeties.



New Project: NGS CPU Architecture

August 30, 2014

For those looking at the scarcity of posts on this blog and wondering what in the world happened to me, I can offer the following explanation: personal (health) issues, as well as embarking on writing a book for Packt Publishing on AndEngine game development, have taken up most of my time recently. Unfortunately I haven’t had much opportunity to write on this blog for that reason. Fortunately, however, I have not been sitting completely idle and have begun a new project which at least some may find interesting.

The project is a custom CPU architecture I have been wanting to develop for a while now. ‘Great’, I can hear some of you think, ‘Another CPU architecture, why would we need another one?!’ The short version is that this is a pretty experimental architecture, exploring features and designs not commonly used in any mainstream CPU architectures. Consider it a bit of a research project, one aimed at developing a CPU architecture which may be useful for HPC (high-performance computing) as well as general-purpose computing.

The project’s name is ‘Nyanko Grid-scaling System’, or NGS for short. Currently I’m working on the first prototype – a simplified 16-bit version of NGS – featuring only a single ALU. This prototype is referred to as ‘NGS-16’. Even then it has many of the essential features which I think make this into such an interesting project, including:

– unclocked design: all components work without a central clock or pipeline directing them.
– task scheduler: integrating the functionality of the software-based scheduler of an OS.
– virtual memory management: virtual memory management done in hardware.
– driver management: drivers for hardware devices are either in hardware, or directly communicate with the CPU.

Essentially this means that there’s no software-based operating system (OS) as such. A shell will be required to do the actual interfacing with human beings and to instruct the NGS task scheduler to launch new processes, but there is no OS in the traditional sense. While this also means that existing operating systems cannot be ported to the NGS architecture in any realistic fashion, it does not mean that applications cannot be compiled for it. After porting a C/C++ toolchain (GCC or LLVM) to NGS, the average C/C++-based application would only be some library-wrangling and a recompile away from functioning.

Moving back to the present, I’m writing NGS-16 in VHDL, with the Lattice MachXO2-7000 [1] as the target FPGA. The basic structure has been laid out (components, top entities, signals), with just the architecture implementations and debugging/simulation left to finish. While this prototype is taking the usual short-cuts (leaving out unneeded components, etc.) to ease development, it should nevertheless be a useful representation of what the NGS architecture can do.

The FPGA board I’ll be using is actually produced by a friend, who called it the FleaFPGA following the name of his company: Fleasystems [2]. As you can see on the FleaFPGA page [3], it offers quite a reasonable amount of I/O, including VGA, USB (host), PS/2, audio and an I/O header. The idea is to use as much of this hardware as possible with the initial range of prototypes. I also have another FPGA board (Digilent Nexys 2, Spartan 3E-based), which offers similar specifications (LEs and I/O). Depending on how things work out I may also run NGS-16 on that board. Ultimately I may want to build my own FPGA board aimed specifically at running NGS.

Over the coming months I’ll be blogging about my progress with this NGS-16 prototype and beyond, so stay tuned :)



Categories: NGS, VHDL

2013 in review

January 26, 2014

The stats helper monkeys prepared a 2013 annual report for this blog.

Here’s an excerpt:

The Louvre Museum has 8.5 million visitors per year. This blog was viewed about 71,000 times in 2013. If it were an exhibit at the Louvre Museum, it would take about 3 days for that many people to see it.

Click here to see the complete report.

Categories: Uncategorized

Pointers Into Arrays: What You’re Really Dereferencing

January 26, 2014

This one falls under the heading of things you should definitely know as a C/C++ programmer, but which are easy to get wrong by accident. For me it was while working on a Sliding Discrete Fourier Transform (SDFT) implementation in C++ that I stumbled over this gotcha. When I got nonsense output from the algorithm I took a long, detailed look at all aspects of it, until finally a friend pointed me at something I had overlooked because my brain had been telling itself that it couldn’t possibly be something that simple.

First of all, a little bit of theory on arrays in C/C++: an array is just a series of bytes in memory which you tell the compiler is special. You also give it a type, which doesn’t change the bytes themselves, but hints to the compiler how it should treat the array in certain operations. This type is usually char, but can be anything else as well, including int and float. The trick here is that the compiler thus knows not only the type this array contains, but also how many bytes go into each element of the array.
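
To make that concrete, here is a small illustration (not from the original post, and assuming typical type sizes):

char c[4]; // 4 elements of 1 byte each.
int  i[4]; // 4 elements of (typically) 4 bytes each.

// Pointer arithmetic steps by the element size: c + 1 advances by 1 byte,
// while i + 1 advances by sizeof(int) bytes. Likewise, *c reads a single
// byte, whereas *i reads a full int.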

Now, arrays like this are usually allocated on the heap, which in C++ takes the following form:

char* pChar = new char[16];

This gives us a single pointer into the array, which we have told the compiler contains char types. The pointer we have is thus a pointer to char. This matters for the next step, where we attempt to read 32-bit floating point values (float) from the array: we want to get 4 bytes at a time from an array which is said to contain chars, which are a single byte each. The naive but wrong approach is the following:

float f = *pChar;

Here we hope that the compiler will be so kind as to deposit four bytes from the array into our four-byte destination type. Unfortunately compilers aren’t that nice, and thus from our dereferenced char pointer we only get a single byte, namely the char value it was pointing at, which is then numerically converted into a float.

To actually obtain four bytes from the array in one go we need to talk a bit with the compiler. This is also called ‘casting’, whereby we tell the compiler that we want to stop pretending that this blob of bits is a certain type and that we’d rather have the compiler treat it as something else. This is a common technique in many applications, in which the otherwise unusable void type plays an instrumental role. Fortunately we don’t have to go that far here. All we want in this case is to let the compiler know that we want to have this array treated as a series of floats instead of chars:

float f = *((float*) pChar);

What we do here via some delicious brackets magic to make things flow in the right order, is to first cast the char pointer we have into a float pointer, which means that it now points at four bytes instead of just one. When we thus dereference the result we are copying four bytes into the destination instead of one. Mission accomplished.

It is possible to go even fancier here than in the above example using C++’s myriad of fancy casting mechanisms, but for basic casting as we need here the C-style method suffices. It’s best to leave those for special cases anyway, as they tend to be significantly more specialized and unnecessary for casting of basic types.
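
For reference, the C++-style cast that corresponds to the C-style cast used above is reinterpret_cast, which makes the pointer reinterpretation explicit:

float f = *(reinterpret_cast<float*>(pChar));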

Hopefully the above will be useful to someone, whether a beginner or a more advanced C/C++ user, even just as a quick reminder. I know I could have used it a few days ago :)


