The WordPress.com stats helper monkeys prepared a 2014 annual report for this blog.
Here's an excerpt:
The Louvre Museum has 8.5 million visitors per year. This blog was viewed about 70,000 times in 2014. If it were an exhibit at the Louvre Museum, it would take about 3 days for that many people to see it.
I recently stumbled over a particularly interesting specimen in the family of cheap unregulated power supplies, also lovingly referred to as ‘wallwarts’. Here is the unit in all its prestigious glory:
The label seems to claim it’s been certified, but lists no manufacturer or other useful info beyond the useless model number. Inside we find the following:
What we have here is pretty much the most basic unregulated power supply one can construct, though the bleeder resistor was technically not required. Such luxury. In diagram form we get the following circuit:
We see the transformer, four diodes (1N4001 or better) forming a bridge rectifier (two extra diodes are cheaper than a center-tapped transformer), the smoothing cap (1,000 uF, 16V) and bleeder resistor (100 Ohm, 1/2W?). 230VAC goes straight into the transformer and is stepped down to the desired voltage.
Now, let’s talk safety. While this circuit will work fine when nothing goes wrong, it is a good idea to consider the two most likely failure scenarios a circuit like this may encounter in the real world. The first is a surge, say from a nearby lightning strike, or an internal short-circuit. The second is when the connected device short-circuits, or its output connector or wires short out. The first scenario results in a massive surge into the adapter; the second will pull ever more power through the circuit until something fails.
With this circuit, a surge or internal short will simply be passed on through the device, into the output and into the connected device. This forms a major electrocution and fire risk. Beyond the circuit failing and thereby cutting off power, there are no safety features for this scenario. The same is true for the excessive power draw scenario: here the circuit will keep drawing power until something in it likely blows up, catches fire, or both.
While a transformer in theory electrically isolates a circuit, it has a so-called breakdown voltage at which current will pass straight from the primary into the secondary winding(s), causing a short. During a surge scenario this is likely to happen, depending on the quality of the insulating tape between the windings. One should always consider the scenario where a short forms inside a transformer or related components.
So how to protect against this scenario? There are multiple ways to go about it, but the easiest and cheapest one has to be the humble fuse:
When the current or voltage becomes too high, the fuse will melt or trip, depending on the type of fuse used. Thermal fuses are a good choice if easy resetting is desired: once cooled down they reset automatically. Regular glass fuses are even cheaper, though probably less desirable in a closed, maintenance-free unit like a wallwart. There are more options than fuses, of course: one can also look at MOVs, as well as crowbar (zener plus SCR) and clamp (zener plus transistor) overvoltage protection.
At any rate the message should be clear: unregulated linear power supplies are easy and cheap, but one should not skimp on the safeties.
For those looking at the scarcity of posts on this blog and wondering what in the world happened to me, I can offer the following explanation: personal (health) issues, as well as embarking on writing a book for Packt Publishing on AndEngine game development, have taken up most of my time recently. Unfortunately this has left me little opportunity to write on this blog. Fortunately, however, I have not been sitting completely idle, and have begun a new project which at least some of you may find interesting.
The project is a custom CPU architecture I have been wanting to develop for a while now. ‘Great’, I can hear some of you think, ‘Another CPU architecture, why would we need another one?!’ The short version is that this is a pretty experimental architecture, exploring features and designs not commonly used in any mainstream CPU architectures. Consider it a bit of a research project, one aimed at developing a CPU architecture which may be useful for HPC (high-performance computing) as well as general-purpose computing.
The project’s name is ‘Nyanko Grid-scaling System’, or NGS for short. Currently I’m working on the first prototype – a simplified 16-bit version of NGS featuring only a single ALU – referred to as ‘NGS-16’. Even so, it has many of the essential features which I think make this such an interesting project, including:
– unclocked design: all components work without a central clock or pipeline directing them.
– task scheduler: integrating the functionality of the software-based scheduler of an OS.
– virtual memory management: handled entirely in hardware.
– driver management: drivers for hardware devices are either in hardware, or directly communicate with the CPU.
Essentially this means that there’s no software-based operating system (OS) as such. A shell will be required to do the actual interfacing with human beings and to instruct the NGS task scheduler to launch new processes, but no OS in the traditional sense. While this also means that existing operating systems cannot be ported to the NGS architecture in any realistic fashion, it does not mean that applications cannot be compiled for it. After porting a C/C++ toolchain (GCC or LLVM) to NGS, the average C/C++-based application would be only some library-wrangling and a recompile away from functioning.
Moving back to the present, I’m writing NGS-16 in VHDL, with the Lattice MachXO2-7000 as the target FPGA. The basic structure has been laid out (components, top entities, signals), with just the architecture implementations and debugging/simulation left to finish. While this prototype takes the usual short-cuts (leaving out unneeded components, etc.) to ease development, it should nevertheless be a useful representation of what the NGS architecture can do.
The FPGA board I’ll be using is actually produced by a friend, who named it the FleaFPGA after his company: Fleasystems. As you can see on the FleaFPGA page, it offers quite a reasonable amount of I/O, including VGA, USB (host), PS/2, audio and an I/O header. The idea is to use as much of this hardware as possible with the initial range of prototypes. I also have another FPGA board (Digilent Nexys 2, Spartan 3E-based), which offers similar specifications (LEs and I/O). Depending on how things work out I may also run NGS-16 on that board. Ultimately I may want to build my own FPGA board aimed specifically at running NGS.
Over the coming months I’ll be blogging about my progress with this NGS-16 prototype and beyond, so stay tuned 🙂
The WordPress.com stats helper monkeys prepared a 2013 annual report for this blog.
Here’s an excerpt:
The Louvre Museum has 8.5 million visitors per year. This blog was viewed about 71,000 times in 2013. If it were an exhibit at the Louvre Museum, it would take about 3 days for that many people to see it.
This one falls under the heading of things you should definitely know as a C/C++ programmer, but which are easy to get wrong by accident. For me it was while working on a Sliding Discrete Fourier Transform (SDFT) implementation in C++ that I stumbled over this gotcha. When I got nonsense output from the algorithm, I took a long, detailed look at every aspect of it, until finally a friend pointed me at something I had overlooked until then, because my brain kept telling itself that it couldn’t possibly be something that simple.
First of all, a little bit of theory on arrays in C/C++: an array is just a series of bytes in memory which you tell the compiler is special. You also give it a type, which doesn’t do anything to the bytes themselves, but hints to the compiler how it should treat the array in certain operations. This type is usually char, but can be anything else as well, including int and float. The trick here is that the compiler thus knows not only which type the array contains, but also how many bytes go into each element of the array.
Now, arrays are usually allocated on the heap, which in C++ takes the following form:
char* pChar = new char[1024];
This gives us a single pointer into the array, which we have told the compiler contains char values. The pointer we have is thus also of type char*. This matters for what comes next, where we attempt to read 32-bit floating point values (float) from the array. We want to get 4 bytes at a time from an array which is said to contain chars, each a single byte. The naive but wrong approach is the following:
float f = *pChar;
Here we hope that the compiler will be so kind as to deposit four bytes from the array into our four-byte destination type. Unfortunately compilers aren’t that nice: dereferencing a char pointer yields just a single byte, namely the char value it points at, which is then converted to a float. A byte containing the value 65 thus becomes 65.0f, not the four-byte bit pattern we were after.
To actually obtain four bytes from the array in one go, we need to talk a bit with the compiler. This is called ‘casting’, whereby we tell the compiler that we want to stop pretending this blob of bits is a certain type and would rather have it treated as something else. This is a common technique in many applications, in which the otherwise unusable void* type is instrumental. Fortunately we don’t have to go that far here. All we want in this case is to let the compiler know that this array should now be treated as a series of floats instead of chars:
float f = *((float*) pChar);
What we do here, with some delicious bracket magic to make things happen in the right order, is first cast the char pointer into a float pointer, which means it now points at four bytes instead of just one. When we then dereference the result, we copy four bytes into the destination instead of one. Mission accomplished.
It is possible to get even fancier here than in the above example using C++’s myriad casting mechanisms (static_cast, reinterpret_cast and friends), but for basic casting as we need here the C-style method suffices. It’s best to leave those for special cases anyway, as they tend to be significantly more specialized and unnecessary for casting between basic types.
Hopefully the above will be useful to someone, whether a beginner or a more advanced C/C++ user, even just as a quick reminder. I know I could have used it a few days ago 🙂
While recently working on a project involving C++, Qt and networking using TCP sockets, I came across a nasty surprise in the form of blocking socket functions not working on Windows with QTcpSocket. Not working as in fundamentally broken, with a bug report dating back to early 2012, when Qt 4.8 was new. After a quick look through the relevant Qt source code (the native socket engine and kin) to determine whether it was something I could reasonably fix myself, I concluded that this was not a realistic option due to the amount of work involved.
Instead I found myself drawn to the option of implementing a QObject-based class wrapping the native sockets on Windows (Winsock2) and elsewhere (POSIX/Berkeley). Having used these native sockets before, I could only think of how easy it’d be to write such a wrapper class for my needs. These needs involved TCP sockets, blocking functions and client-side functionality, all of which take only a little effort to implement. Any further features could be added on an as-needed basis. The result is the NNetworkSocket class, which I put on GitHub today in a slightly expanded version.
As I suspect that it won’t be the first add-on/drop-in/something else class I’ll be writing to complement the Qt framework, I decided to come up with the so very creative name of ‘Nt’ for the project. Much like ‘Qt’, its pronunciation is obvious and silly: ‘Qt’ as ‘cute’ and ‘Nt’ as ‘neat’. Feel free to have a look at the code I put online including the sample application (samples/NNetworkSocket_sample). Documentation will follow at some point once the class has matured some more.
As usual, feel free to provide feedback, patches and donations 🙂
Related to my previous post involving a project using Java sockets, I’d like to write about an issue I encountered while debugging that project. Allow me to first describe the environment and setup.
The Java side, an extended version of the class described in the linked post, ran as the client on Android, specifically a Galaxy Nexus device running Android 4.2.2 and later 4.3. Its goal was to send locally collected arrays of bytes over the socket to the server after connecting. The server was written in C++, with part of the networking handled by the Qt framework (QTcpServer) and the actual communication via native sockets, on a Windows 7 Ultimate x64 system.
The problem occurred upon the Android client connecting to the server: the connection would be handled fine, and the thread to handle the native socket initialized and started as it should. After that, however, no data would ever be received on the server side of the client socket. Checks using select() showed that no data ever arrived in the buffer. Verification with a telnet client (PuTTY) showed that the server was able to receive data just fine, and thus that the issue had to lie with the client side, i.e. the Android client.
Inspection with the Wireshark network traffic sniffer during the communication between the Android client and the server showed a normal TCP sequence, with SYN, SYN-ACK and ACK packets followed by a PSH-ACK from the client carrying the first data. This was followed by an ACK from the server, indicating that the server’s network stack had at least acknowledged the data packet. Everything seemed in order, although it was somewhat remarkable that the first client-side ACK had the exact same timestamp in Wireshark as the PSH-ACK packet.
Mystified, I stumbled upon a few posts on Stack Overflow in which it was suggested that calling Thread.sleep() after the connecting phase would resolve this. Trying this solution with a 500 ms sleep period, I found that the client-server communication suddenly went flawlessly. The only question that remains is why.
Looking through the TCP specifications I didn’t find anything definite, even though the evidence so far suggests that the ACK on a SYN-ACK cannot be accompanied by a PSH or similar at the same time. Possibly something else plays a role here, but the lesson seems to be that in the case of non-functioning Java sockets, one simply has to wait a little while before starting to send data. I’d gladly welcome further clarification on why this happens.