
Purgatory or Hell: Escape from eternal Alpha status

January 17, 2021

Many of us will have laughed and scoffed at Google’s liberal use of the tag ‘Beta software’ these past years. Is the label ‘Beta’ nothing more than an excuse for any bugs and issues that may still exist in code, even when it has been running in what is essentially a production environment for years? Similarly, the label ‘Alpha’ when given to software would also seem to seek a kind of indemnity for any issues or lacking features: to dismiss any issue or complaint raised with the excuse that the software is still ‘in alpha’.

Obviously, any software project needs time to develop. Ideally it would have a clear course through the design and requirements phase, smooth sailing through Alpha phase as all the features are bolted onto the well-designed architecture, and finally the polishing of the software during the Beta and Release Candidate (RC) phases. Yet it’s all too easy to mess things up here, which usually ends up with a prolonged stay in the Alpha phase.

A common issue that leads to this is too little time spent in the initial design and requirements phase. Without a clear idea of what the application’s architecture should look like, both the features and the architecture end up being designed on the spot during the Alpha phase. This is akin to starting to build a house before the architectural plans are drawn up, simply because one has a rough idea of what a house looks like.

When I began work on the NymphCast project [1] a few years back, all I had was a vague idea of ‘streaming audio’, which slowly grew over time. The demise of Google’s ChromeCast Audio product prompted me to look at what that product did, and at what people valued in it. By that time NymphCast was little more than a concept and an idea in my head, and I’m somewhat ashamed to say that it took me far too long to work out solid requirements and a workable design and architecture.

Looking back, what NymphCast was at the beginning of 2020 – when it got a sudden surge of attention after an overly enthusiastic post from me on the topic – was essentially a prototype. A prototype is somewhat like an Alpha-level construction, but one never meant to be turned into a product: it’s a way to gather information for the design and requirements phase, so that a better architecture and product can be developed. Realising this was essential for me to take the appropriate steps with the NymphCast project.

With only a vague idea of one’s direction and goals while in the Alpha phase, one can be doomed to stay there for a long time, or even forever. After all, when is the Alpha phase ‘done’, when one doesn’t even have a clear definition of what ‘done’ actually means in that context? Clearly one needs to have a clear feature set, clear requirements, a clear schedule and definition of ‘done’ for all of those. Even for a hobby project like NymphCast, there is no fun in being stuck in Alpha Limbo for months or even years.

After my recent post [2] on the continuation of the NymphCast project after a brief burn-out spell, I have not yet gotten the project into a Beta stage. What I have done is frozen the feature set, and together with a friend I’m gradually going through the remaining list of Things That Do Not Work Properly Yet. Most of this is small stuff, though the small stuff is usually the kind of thing that will have big consequences on user friendliness and overall system stability. This is also the point where there are big rewards for getting issues fixed.

The refactored ring buffer class has had some issues fixed, and an issue with a Stop condition was recently resolved. The user experience on the player side has seen some bug fixes as well. This is what Alpha-level testing should be like: the hunting down of issues that impede a smooth use of the software, until everything seems in order.

The moral of this story, then, is that before one even writes a line of code, it’s imperative to have a clear map of where to go and what to do, lest one become lost. The second moral is that it’s equally imperative to set limits. Be realistic about the features one can implement this time around, and sort the essential from the ‘nice to have’. Get it right this time, and there will always be a new development cycle after the release into production, in which one gets to tear everything apart again and add new things.

Ultimately, the Alpha phase ends when it’s ‘good enough’. The Beta phase ends when the issue tracker begins to run dry. Release Candidates exist because life is full of unexpected surprises, especially when it concerns new software. Yet starting the Alpha phase before putting together a plan makes as much sense as walking into the living room at night without turning a light on because ‘you know where to walk’.

Fortunately, even after you have repeatedly bumped your shins against furniture and fallen over a chair, it’s still not too late to turn on a light and do the limping walk of shame 🙂

Maya

[1] https://github.com/MayaPosch/NymphCast
[2] https://mayaposch.wordpress.com/2020/12/27/nymphcast-on-getting-a-chromecast-killer-to-a-beta-release/

NymphCast: on getting a ‘ChromeCast killer’ to a Beta release

December 27, 2020

It’s been a solid nine months since I first wrote about the NymphCast project [1] on my personal blog [2]. That particular blog post ended up igniting a lot of media attention [3], as it also began to dawn on me how much work would still be required to truly get it to a ‘release’ state. Amidst the stress from this, the 2020 pandemic and other factors, the project ended up slumbering for a few months as I tried to stave off burn-out on the project as a whole.

Sometimes such a break from a project is essential, to be able to step back instead of bashing one’s head against the same seemingly insurmountable problems over and over as they threaten to drown one in an ocean of despair, frustration and helplessness. You know, the usual reason why ‘grinding’, let alone a full-blown death march, is such a terrible thing in software development.

One thing I did do during that time off was to solve one particular issue that had made me rather sad during initial NymphCast development: that of auto-discovery of NymphCast servers on the local network. I had attempted to use DNS Service Discovery (DNS-SD, mDNS) for this, but ran into the issue that there is no cross-platform solution for mDNS that Just Works™. Before reading up on mDNS I had in my mind a setup where the application itself would announce its presence to the network, or to a central mDNS server on the system, as that made sense to me.

Instead I found myself dealing with a half-working solution that basically required Avahi on Linux, Bonjour on MacOS and something custom installed and configured on Windows, not to mention other desktop operating systems. On the client side things were even more miserable, with me finding only a single library for mDNS that was somewhat easy to integrate. Yet even then I had no luck making it work across different OSes, with the running server instances regularly not found, or requiring specific changes to the service name string to get a match.

The troubleshooting there was one factor that nearly made me burn out on the NymphCast project. Then, during that break I figured that I might as well write something myself to replace mDNS. After all, I just needed something that spit out a UDP Broadcast message, and something that listened for it and responded to it. This idea turned into NyanSD [4], which I wrote about before [5].

I have since integrated NyanSD into NymphCast on the server and client side, with the result that I have had no further problems with service discovery, regardless of the platform.

Other aspects of NymphCast were less troublesome, but mostly just annoying, such as getting a mobile client for NymphCast. Originally I had planned to use a single codebase for the graphical NymphCast Player application, using Qt’s Android and iOS cross-platform functionality to target desktop and mobile platforms. Unfortunately this ran into the harsh reality of Qt’s limited Android support and spotty documentation [6]. This led me to work on a standard, native Android application, written in Java for the GUI and using JNI to reuse the same C++ client codebase. This way I only have to port the Qt-specific code to its Java/Android equivalent on the Android side.

Status at this point is that all features for the targeted v0.1 release have been implemented, with testing ongoing. An additional feature that got integrated at the last moment was the synchronisation of music and video playback between different NymphCast devices, for multi-room playback and similar. The project also saw the addition of a MediaServer [7], which allows clients to browse the media files shared by the server, and start playback of these files on any of the NymphCast servers (receivers) on the network. I also refactored the in-memory buffer to use a simple ringbuffer instead of the previous, more complicated buffer.
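
As a rough illustration of the concept, a minimal ring buffer could look like the sketch below. This is a simplified, single-threaded stand-in and not the actual NymphCast buffer class, which also has to deal with locking, seeking and streaming-specific state.

// Simplified ring buffer sketch; illustration only, not the NymphCast implementation.
#include <vector>
#include <cstdint>
#include <cstddef>
#include <algorithm>

class RingBuffer {
	std::vector<uint8_t> data;
	std::size_t readIdx = 0;	// Next byte to read.
	std::size_t writeIdx = 0;	// Next free slot.
	std::size_t used = 0;		// Bytes currently stored.
	
public:
	explicit RingBuffer(std::size_t capacity) : data(capacity) { }
	
	// Write up to 'len' bytes, returning how many actually fit.
	std::size_t write(const uint8_t* src, std::size_t len) {
		std::size_t n = std::min(len, data.size() - used);
		for (std::size_t i = 0; i < n; ++i) {
			data[writeIdx] = src[i];
			writeIdx = (writeIdx + 1) % data.size();
		}
		
		used += n;
		return n;
	}
	
	// Read up to 'len' bytes, returning how many were available.
	std::size_t read(uint8_t* dst, std::size_t len) {
		std::size_t n = std::min(len, used);
		for (std::size_t i = 0; i < n; ++i) {
			dst[i] = data[readIdx];
			readIdx = (readIdx + 1) % data.size();
		}
		
		used -= n;
		return n;
	}
	
	std::size_t size() const { return used; }
};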

In order to get the v0.1 development branch out of Alpha and into Beta, a few more usage scenarios have to be tested, specifically the playback of large media files (100+ MB), both with a single NymphCast receiver and a group, and directly from a client as well as using a MediaServer instance. The synchronisation feature has seen some fixes recently already while testing it, but needs more testing to make it half-way usable.

A major issue I found with this synchronisation feature was the difficulty of determining local time on all the distinct devices. With the lack of a real-time clock (RTC) on Raspberry Pi SBCs in particular, I had to refactor the latency algorithm to rely only on the clock of the receiver that acts as the master receiver. This will likely require more tweaking over the coming time to get the de-synchronisation below 100 ms.

I think that in the run-up to a v0.1 release, the Beta phase will be highly useful in figuring out the optimal end-user scenarios, both in terms of easy setup and configuration and in terms of day-to-day usage. This is the point where I pretty much have to rely on the community to get a solid idea of which ideas are good and which patterns should be avoided.

That said, it’s somewhat exciting to see the project now finally progressing to a first-ever Beta release. Shouldn’t be more than a year or two before the first Release Candidate now, perhaps 🙂

Maya

[1] https://github.com/MayaPosch/NymphCast
[2] https://mayaposch.blogspot.com/2020/03/nymphcast-casual-attempt-at-open.html
[3] https://mayaposch.blogspot.com/2020/03/the-fickle-world-of-software-development.html
[4] https://github.com/MayaPosch/NyanSD
[5] https://mayaposch.wordpress.com/2020/07/26/easy-network-service-discovery-with-nyansd/
[6] https://bugreports.qt.io/browse/QTBUG-83372
[7] https://github.com/MayaPosch/NymphCast-MediaServer

Categories: nymphcast

Easy network service discovery with NyanSD

July 26, 2020

In the process of developing an open alternative to ChromeCast called NymphCast [1], I found myself having to deal with DNS-SD (DNS service discovery) and mDNS [2]. This was rather frustrating, if only because one cannot simply add a standard mDNS client to a cross-platform C++ application, nor is setting up an mDNS record for a cross-platform service (daemon) an easy task: the Linux world mostly uses Avahi, MacOS uses Bonjour, and Windows also kinda-sorta-somewhat uses Bonjour if it has been set up and configured by the user or a third-party application.

As all that I wanted for NymphCast was to have an easy way to discover NymphCast receivers (services) running on the local network from a NymphCast client, this all turned out to be a bit of a tragedy, with the resulting solution only really working when running the server and client on Linux. This was clearly sub-optimal, and made me face the options of fighting some more with existing mDNS solutions, implement my own mDNS server and client, or to write something from scratch.

As mDNS (and thus DNS-SD) is a rather complex protocol, and it isn’t something which I feel a desperate need to work with when it comes to network service discovery of custom services, I decided to implement a light-weight protocol and reference implementation called ‘NyanSD’, for ‘Nyanko Service Discovery’ [3].

NyanSD is a simple binary protocol that uses a UDP broadcast socket on the client and UDP listening sockets on the server side. The client sends out a broadcast query which can optionally request responses matching a specific service name and/or network protocol (TCP/UDP). The server registers one or more services, which could be running on the local system, or somewhere else. This way the server acts more as a registry, allowing one to also specify services which do not necessarily run on the same LAN.
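
To give a feel for the mechanism, the sketch below shows what a broadcast-based discovery client can look like using plain POSIX sockets. The port number and query payload here are placeholders; the real NyanSD protocol uses its own binary message format.

// Sketch of UDP broadcast service discovery with POSIX sockets. The port and
// payload are placeholders; NyanSD itself uses its own binary protocol.
#include <arpa/inet.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>
#include <iostream>
#include <string>

int main() {
	int sock = socket(AF_INET, SOCK_DGRAM, 0);
	int enable = 1;
	setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &enable, sizeof(enable));
	
	// Send the query to the broadcast address on a (hypothetical) discovery port.
	sockaddr_in dest = { };
	dest.sin_family = AF_INET;
	dest.sin_port = htons(4004);
	dest.sin_addr.s_addr = htonl(INADDR_BROADCAST);
	std::string query = "DISCOVER nymphcast";
	sendto(sock, query.data(), query.size(), 0, (sockaddr*) &dest, sizeof(dest));
	
	// Collect responses from listening servers for up to two seconds.
	timeval tv = { 2, 0 };
	setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
	char buf[512];
	sockaddr_in from = { };
	socklen_t fromLen = sizeof(from);
	ssize_t n;
	while ((n = recvfrom(sock, buf, sizeof(buf), 0, (sockaddr*) &from, &fromLen)) > 0) {
		std::cout << "Service at " << inet_ntoa(from.sin_addr) << ": "
					<< std::string(buf, n) << std::endl;
	}
	
	close(sock);
	return 0;
}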

The way that I envisioned NyanSD originally was merely as an integrated solution within NymphCast, so that the NymphCast server can advertise itself on its UDP port, while accepting service requests on its TCP port. As I put the finishing touches on this, it hit me that I could easily make a full-blown daemon/service solution out of it as well. With the NyanSD functionality implemented in a single header and source file, it was fairly easy to create a server that would read in service files from a standard location (/etc/nyansd/services on Linux/BSD/MacOS, %ProgramData%\NyanSD\services on Windows). This also allowed me to implement my first-ever Windows service, which was definitely educational.

Over the coming time I’ll be integrating NyanSD into NymphCast and likely discarding the dodgy mDNS/DNS-SD attempt. It will be interesting to see whether I or others will find a use for the NyanSD server. While I think it would be a more elegant solution than the current mess with mDNS/DNS-SD and UPnP network discovery, some may disagree with this notion. I’m definitely looking forward to discussing the merits and potential improvements of NyanSD.

Maya

[1] https://github.com/MayaPosch/NymphCast
[2] https://en.wikipedia.org/wiki/Zero-configuration_networking#DNS-based_service_discovery
[3] https://github.com/MayaPosch/NyanSD

Keeping history alive with a 1959 FACOM 128B relay-based computer

August 4, 2019

Back in the 1950s, the competition was between vacuum tube (valve) based computers and their relay-based brethren. Whereas the former type was theoretically faster, vacuum tubes suffered from reliability issues, which meant that relay-based computers would be used alongside tube-based ones. Not surprisingly, Fujitsu also designed a number of such electro-mechanical computers back then. More surprisingly, they are still keeping a FACOM 128B in tip-top shape.

At Fujitsu, known in the 1950s as Fuji Tsushinki Manufacturing Corporation, Ikeda Toshio was involved in the design of first the FACOM 100, completed in 1954, followed by the FACOM 128A in 1956. The 128B was a 1958 upgrade of the 128A based on user experiences. Fujitsu installed a FACOM 128B at their own offices in 1959 to assist with projects ranging from the design of camera lenses to the NAMC YS-11 passenger plane, as well as calculation services.

As a successor in a long line of electro-mechanical computers (including the US’s 1944 Harvard Mark I), the FACOM 128B’s performance was pretty much as good as it was going to get with relays. Its ratings were listed as 0.1-0.2 seconds for addition and subtraction, 0.1-0.35 seconds for multiplication, with operations involving complex numbers and logarithms taking on the order of seconds. Maybe not amazing by today’s (or 1970s) standards, but back then the point was to massively and consistently outperform human computers, with (ideally) unfailing accuracy.

Today, this same FACOM 128B can be found at the Toshio Ikeda Memorial Hall at Fujitsu’s Numazu Plant, where it’s lovingly maintained by the 49-year-old engineer Tadao Hamada. As the leader of Fujitsu’s 2006 project to pass down historically relevant technology, his job is basically to keep this relay-based computer working the way it has since it was installed in 1959.


Parsing command line arguments in C++

March 17, 2019

One of the things which has frustrated me since I first started programming is the difficulty of using the command line arguments provided to one’s application. Every one of us is aware of the standard formulation of the main function:

int main(int argc, char** argv);

Here argc is the number of arguments (separated by spaces), including the name of the application binary itself. Then argv is an array of C-style strings, each containing an argument. This leads to the most commonly used style of categorising the arguments:

app.exe -h --long argument_text

What annoyed me all these years is not having a built-in way to parse command line arguments in C++. Sure, there’s the getopt [2] way if one uses Linux or a similar OS. There are a range of argument parser libs or APIs in frameworks, such as gflags [3], Boost Program Options [4], POCO [5], Qt [6] and many others.

What these do not provide is a simple, zero-dependency way to add argument parsing to C++, while also being as uncomplicated as possible. This led me to put together a simple command line argument parsing class, which does exactly what I desire of such an API, without any complications.

Meet Sarge [1] and its integration test application:

#include "../src/sarge.h"

#include <iostream>


int main(int argc, char** argv) {
	Sarge sarge;
	
	sarge.setArgument("h", "help", "Get help.", false);
	sarge.setArgument("k", "kittens", "K is for kittens. Everyone needs kittens in their life.", true);
	sarge.setDescription("Sarge command line argument parsing testing app. For demonstration purposes and testing.");
	sarge.setUsage("sarge_test <options>");
	
	if (!sarge.parseArguments(argc, argv)) {
		std::cerr << "Couldn't parse arguments..." << std::endl;
		return 1;
	}
	
	std::cout << "Number of flags found: " << sarge.flagCount() << std::endl;
	
	if (sarge.exists("help")) {
		sarge.printHelp();
	}
	else {
		std::cout << "No help requested..." << std::endl;
	}
	
	std::string kittens;
	if (sarge.getFlag("kittens", kittens)) {
		std::cout << "Got kittens: " << kittens << std::endl;
	}
	
	return 0;
}

Here one can see most of the Sarge API: the setting of the arguments we are looking for, followed by the application description and usage text, which will be printed if the user requests the help view, or if our code decides to print it in the case of missing options or similar.

The Sarge class implementation itself is very basic, using nothing but STL features, specifically the vector, map, memory, iostream and string headers, in (as of writing) 136 lines of code.

When asked to parse the command line arguments, it will scan the argument list (argv) for known flags, flags which require a value, and unknown flags. It'll detect unknown flags and missing values, while allowing for short options (single-character) to be chained together.
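
As a simplified illustration of the chained short-option case, not Sarge’s actual parsing code, a token like ‘-hk’ can be expanded into its individual flags before look-up:

// Simplified sketch of splitting a chained short-option token such as "-hk"
// into individual single-character flags. Not the actual Sarge implementation.
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

std::vector<std::string> expandShortOptions(const std::string& token) {
	std::vector<std::string> flags;
	if (token.size() > 1 && token[0] == '-' && token[1] != '-') {
		for (std::size_t i = 1; i < token.size(); ++i) {
			flags.push_back(std::string(1, token[i]));
		}
	}
	
	return flags;
}

int main() {
	for (const std::string& f : expandShortOptions("-hk")) {
		std::cout << "flag: " << f << std::endl;	// Prints "h", then "k".
	}
	
	return 0;
}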

I'll be using Sarge for my own projects from now on, making additions and tweaks as I see fit. Feel free to have a look and poke at the project as well, and let me know your thoughts.

Maya

[1] https://github.com/MayaPosch/Sarge
[2] https://en.wikipedia.org/wiki/Getopt
[3] https://github.com/gflags/gflags
[4] http://www.boost.org/doc/libs/1_64_0/doc/html/program_options.html
[5] https://pocoproject.org/docs/Poco.Util.OptionProcessor.html
[6] http://qt.io

Categories: C++, Projects

Reviewing dual-layer PCBWay PCBs

March 11, 2019

This review is an addendum to the first part in the Greentropia Base Board article series [1]. Here we have a look at the PCB ordering options, process and product delivered by PCBWay and conclude with impressions of the Greentropia Base realized with these PCBs.

Much to the delight of professional hardware developers and hobbyists alike, prices for dual layer FR4 PCBs have come down to a point where shipping from Asia has become the major cost factor. An online price comparison [2] brings up the usual suspects, with new and lesser-known PCB manufacturers added to the mix.

In this competitive environment, reputation is just as important as consistently high quality and great service. Thus PCBWay [3] reached out to us to talk about their PCB manufacturing process and products by providing free PCBs, which we accepted as an opportunity to fast-lane the Greentropia Base board [4], a primary building block of the ongoing Greentropia indoor farming project [5].

Ordering the PCBs

PCB specifications guide the design process and show up again when ordering the actual PCBs. They are at the beginning and the end of the board design process – hopefully without escalation to smaller drill sizes, trace widths and layer count.

The manufacturing capabilities [6] are obviously just bounds for the values selected in a definitive set of design rules, leaving room for a trade-off between design challenges and manufacturing cost. Sometimes relaxing the minimum trace width and spacing from 5/5mil (0.125 mm) to 6/6mil (0.15 mm) can make a noticeable difference in PCB cost. And then again, switching from 0.3 mm to 0.25 mm minimum drill size can make fan-out and routing in tight spaces happen, albeit at a certain price.

Logically we will need to look at the price tag of standard and extended manufacturing capabilities. The following picture displays pricing as of the writing of this article:

pcbway_order_spec

For some options the pricing is very attractive; most notably, an array of solder mask colours is available at no additional charge. With the RoHS and REACH directives in place, however, it remains to be seen whether lead-free hot air surface levelling (HASL) will become the new standard at no added cost.

Luckily for our project we do not need to stray far from the well-trodden path and just opt for the lead-free finish on a blue 1.6mm PCB.

The ordering process is hassle-free and provides frequent status updates:

pcbway_order_progress

A month after our order, an online Gerber viewer [7] was introduced to help designers quickly verify their Gerber output before uploading it for the order. It must be noted, however, that this online feature is at an early stage; it is expected to gain layer de-duplication, automatic and consistent colour assignment, appropriate z-order, and better rendering speed in the future.

pcbway_gerber_viewer

Gerbv [8] is a viable alternative which also provides last-minute editing capabilities (e.g. deleting a stray silkscreen element).

Visual inspection

PCBs were received within one week after ordering, packaged in a vacuum sealed bag and delivered in a cardboard box with foam sheets for shock protection. One extra PCB was also included in the shipment, which is nice to have.

The boards present with cleanly machined edges, well-aligned drill pattern and stop masks on both sides and without scratches or defects. The silkscreen has good coverage and high resolution. Adhesion of stop mask and silkscreen printing are excellent. The lead-free HASL finish is glossy and flat, and while we couldn’t put it to the test with this layout, the TSSOP footprint results suggest no issues with TSSOP, TQFP and BGA components down to 0.5mm pitch.

The board identifier is thankfully hidden underneath an SOIC component in the final product. Pads show the expected probe marks from the e-test, without these affecting the final reflow result; no probe damage to the pads is evident.

Realising the project

We conclude with some impressions of the assembled PCBs, which we will use in the following articles to build an automated watering system.

Here we see our signature paws with the 2 mm wide capacitor C15 next to them for scale. The pitch of the vertical header is 2.54 mm. Tenting of the vias is also consistent and smooth.

greentropia_base_mask_quality

Good mask alignment and print quality.

The next picture shows successful reflow of components of different size and thermal mass after a lead-free reflow cycle in a convection oven. As the PCBs were properly sealed and fresh, no issues with delamination occurred.

greentropia_base_different_size_reflow_result

DC-DC section reflow result.

The reflow result with the lead-free HASL PCB and the stencil ordered along with it is also quite promising. No solder bridges were observed despite the lack of mask webbing, which is likely due to our mask relief settings and minimum webbing width. Very thin webbing can be destroyed during HASL, so if the additional safety of the 0.15 to 0.2 mm between the pads is needed, it’s worth checking back with the manufacturer.

greentropia_base_tssop_hasl_result

TSSOP reflow result.

Testing of the 5V to 12V boost converter showed that it works without issues, and initial testing of the ADC was also promising. As we continue to test the boards over the coming time we’ll find out whether there are truly zero issues, but so far it appears that everything is working as it should.

Maya

[1] https://mayaposch.wordpress.com/2019/03/06/keeping-plants-happy-with-the-greentropia-base-board-part-1/
[2] https://pcbshopper.com/
[3] https://www.pcbway.com/
[4] https://github.com/MayaPosch/Greentropia_Base
[5] http://www.nyantronics.com/greentropia.php
[6] https://www.pcbway.com/capabilities.html
[7] https://www.pcbway.com/project/OnlineGerberViewer.html
[8] http://gerbv.sourceforge.net/

Keeping plants happy with the Greentropia Base board – Part 1

March 6, 2019

Last year I got started on an automatic plant watering project, with the goal of a completely stand-alone, self-sufficient solution. It should be capable of not only monitoring the level of moisture in the soil, but also of controlling a pump that would add water to the soil when needed.

Later iterations of this basic design added a scale to measure the level in the water reservoir, as well as a multi-colour LED to be used as a system indicator and for more decorative purposes. This design was then developed further for my third book, which was released [1][2][3] in February of this year. In chapter 5 of that book it is featured as an example project, using the BMaC [4] firmware for the ESP8266 microcontroller.

That’s where the project remained for a while: even though a PCB design (the Greentropia [5] base board) had been created that would accommodate the project’s complete functionality on a single board, the effort and costs associated with turning it into a physical product kept me from just pushing the button on ordering the PCBs and components.

Thus the board remained just a digital render:

iop_plant_base_002

When I was suddenly contacted by a representative from PCBWay [6] with an offer to have free PCBs made in exchange for a review of the finished board, it became all too easy to finally take the step of having the board produced for real.

After some last-minute, frantic validation of the design and board layout by yours truly and a good friend, the Gerber files were submitted to PCBWay. We used the Gerber viewer in KiCad to check the files prior to submitting them. Later I learned that PCBWay also offers an online Gerber viewer [7]. We did not use that one, but it’s important to use a Gerber viewer before one submits a design, to be sure that the resulting PCB will look and function the way it should.

After a couple of days of PCB production and shipping from China to Germany, the boards arrived:

IMG_20190102_161941

Top side:

IMG_20190102_162412

Bottom side:

IMG_20190102_162432

All boards looked pretty good, with sharp silkscreen features and the solder mask well aligned with the pads. We compared them with another Nyantronics PCB that we have been working on for a while now, that one made by JLCPCB, which gives a good comparison of the blue solder mask used by each manufacturer:

IMG_20190102_165819

Which colour you prefer is a personal choice, of course. Personally I like the more deep-blue colour of the JLCPCB board, but the PCBWay blue isn’t half bad either. The real concern is of course whether or not the PCB does what it’s supposed to, which is what we’d find out once we assembled the boards.

For this we used a professional reflow oven, courtesy of the local university:

IMG_0947IMG_0968

This resulted in the following boards, after a few through-hole components were added by hand:

IMG_20190203_041937IMG_20190203_041718IMG_20190203_042545
img_20190305_211945

Each of these boards has sockets for a NodeMCU board, which contains an ESP-12E or 12F module with the ESP8266 microcontroller. This provides the ability to control the pump output and SPI bus, as well as read out the HX711-based scale interface and soil sensor.

Microscope images of the finished boards were also made and can be found in this addendum article: https://mayaposch.wordpress.com/2019/03/11/reviewing-dual-layer-pcbway-pcbs/

In the next parts we will wrap up the remaining development of the hardware, and conclude with the development of the firmware for this board.

Maya

[1] https://www.amazon.com/Hands-Embedded-Programming-versatile-solutions-dp-1788629302/dp/1788629302/
[2] https://www.packtpub.com/application-development/hands-embedded-programming-c17
[3] https://www.amazon.de/Hands-Embedded-Programming-versatile-solutions/dp/1788629302/
[4] https://github.com/MayaPosch/BMaC
[5] http://nyantronics.com/greentropia.php
[6] http://www.pcbway.com/
[7] https://www.pcbway.com/project/OnlineGerberViewer

Designing an RC debounce circuit

June 26, 2018

While working on a project earlier this year which involved monitoring the state of a number of switches, I had to find a way to deal with the mechanical bouncing of these switches. Despite my initial assumption that detailed information on this would be easy to find, I failed to find any clear guides or tutorials, at least as far as hardware-based debouncing methods went.

Though I was aware of software-based switch debounce algorithms, I decided against using those, on account of them adding complexity to the code, taking away system timers from the pool and the extra burden this would put on testing and debugging the software design. Instead I opted to use an RC circuit-based solution. Without easy tutorials being available, in the end I simply copied the complete design from someone else, because it seemed to work for that purpose.

 

The RC circuit

When debouncing switches in hardware, it matters which type of switch we are debouncing. The switches which I had to debounce for the project were SPST (Single-Pole, Single-Throw, with one output), meaning one has just a single signal wire to work with. This means that the delay created by an RC network charging or discharging is used to smooth out the erratic signal from a mechanical switch opening and closing.

With an SPDT (Single-Pole, Double-Throw) switch, one can use the same RC circuit, use an AND gate-based debounce circuit (not covered in this article), or use a hardware-based timer circuit.

The RC debounce circuit we’ll be looking at in this article is the following:

The way that this circuit works is that the capacitor (C1) is charged over R1 whenever the switch is in the open position, using the diode (D1) to bypass R2. This results in a logical ‘1’ being achieved after the delay created by the resistance value of R1.

When the switch closes, it discharges C1 over R2, with the latter’s value determining the delay, resulting in a logical ‘0’ being achieved. The output of the RC circuit is connected to U1, which is an inverse Schmitt trigger. This creates the expected logical ‘1’ when the switch is closed, and ‘0’ when it has been opened.

In addition to this, the Schmitt trigger (a CD40106 hex inverting Schmitt trigger IC) also adds hysteresis, essentially giving its input two trigger points, while its output connects to the input pin of our processor. This smooths out any remaining ripple by not switching to the opposite value immediately, but only after the relevant trigger point has been crossed. This effectively creates a dead zone between the logical values, in which any analogue noise has no effect.

 

Understanding the circuit

When I decided to use this project as a practical example for my upcoming book on embedded C++ development [1], I realised that I needed to dive a bit deeper into the how and why of these circuits, especially on how to calculate the appropriate values for the RC circuit.

Fundamental to RC networks is the RC time constant, or τ (tau), which is defined as:

\tau = RC

This time constant is defined in seconds, with one τ being the time it takes for the capacitor to charge up to 63.2%, and 2τ to 86%. For discharging the capacitor, 1τ would discharge it to 37%, and to 13.5% after 2τ. This shows that a capacitor is never fully charged or discharged; the process simply slows down. Of relevance for us here is that 1τ roughly corresponds to the charge level required to reach the opposite logical output level, and thus to the effective delay we get for a specific RC value.

In addition, we can calculate the voltage of the capacitor at any given time, when charging and discharging respectively:

V(t) = V_s (1 - e^{-t/RC})

V(t) = V_s e^{-t/RC}

Here t is the elapsed time in seconds, V_s the source voltage and e the mathematical constant (approximately 2.71828), also known as Euler’s number.

 

Running the numbers

For the earlier given circuit diagram, we can take its values and use the RC time constant to calculate the delay we achieve and thus what length of switch bounce we can compensate for. We’ll first look at the charging time (51k Ohm, 1 uF), then the discharging time (22k Ohm, 1 uF):

0.051 = 51000 \cdot 0.000001

0.022 = 22000 \cdot 0.000001

With the used RC values we achieve 51 milliseconds for charging (switch opening) and 22 milliseconds for discharging (switch closing). As 20 ms is a common bounce time for mechanical switches, the values used seem reasonable. For any practical application we would, however, need to use the actual bounce time of the switches in question to pick the appropriate values.
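
These figures are easy to check programmatically. The sketch below computes τ for both paths and, using the charging equation, the time needed to reach an assumed threshold of 63% of Vs; that threshold is an assumption standing in for the Schmitt trigger’s actual switching point, which depends on the IC and the supply voltage.

// Quick check of the RC debounce timing using the time constant and the
// charging equation. The 63% threshold is an assumption, standing in for the
// Schmitt trigger's actual switching point.
#include <cmath>
#include <iostream>

int main() {
	const double C = 1.0e-6;			// 1 uF.
	const double rCharge = 51.0e3;		// 51 kOhm (R1, switch open).
	const double rDischarge = 22.0e3;	// 22 kOhm (R2, switch closed).
	
	std::cout << "tau (charge):    " << rCharge * C * 1000.0 << " ms" << std::endl;
	std::cout << "tau (discharge): " << rDischarge * C * 1000.0 << " ms" << std::endl;
	
	// Time to charge to a given fraction of the supply voltage: t = -RC * ln(1 - f).
	const double threshold = 0.63;
	double t = -rCharge * C * std::log(1.0 - threshold);
	std::cout << "Time to reach 63% of Vs: " << t * 1000.0 << " ms" << std::endl;
	return 0;
}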

 

Conclusion

When it all comes down to it, designing an RC debounce circuit isn’t incredibly complex once one understands the principles and physics behind it. Using the RC time constant, it is a matter of picking an appropriate capacitor value, then sizing the resistors to reach the required charge and discharge times. The used Schmitt trigger IC isn’t terribly crucial, and can even be omitted in favour of an SoC’s built-in hysteresis.

The project which led me to research this topic resulted in me designing an entire debounce HAT [2] for the Raspberry Pi series of Single Board Computers (SBC):

This design uses all six Schmitt triggers in the CD40106 IC, allowing for up to six switches or equivalent to be connected. The integrated EEPROM allows the board to be automatically configured by the OS installed on the SBC by reading out the GPIO pins it is connected to, setting the appropriate direction and modes.

Naturally the RC values for this design can be altered at will to fit the requirements, so long as they fit the 0805 footprint.

Maya

[1] https://www.packtpub.com/application-development/hands-embedded-programming-c17
[2] https://github.com/MayaPosch/DebounceHat

NymphRPC: my take on remote procedure call libraries

January 29, 2018

Recently I open-sourced NymphRPC [1], which is a Remote Procedure Call (RPC) library I have been working on for the past months. In this article I would like to explain exactly why I felt it to be necessary to unleash yet another RPC library onto the world.

These past years I have had to deal quite a lot with a particular RPC library (Etch [2][3]) due to a commercial project. Etch is now a defunct project, but it spent some time languishing as an Apache project after Cisco open-sourced it in 2011, and it got picked up by BMW as an internal protocol for their infotainment systems [4].

During the course of this aforementioned commercial project it quickly became clear to me that the Etch C library which I was using had lots of issues, including stability and general usability issues (like the calling of abort() without recovery option when any internal assert failed). As the project progressed, I found myself faced with the choice to either debug this existing library, or reimplement it.

At this point the C-based library was around 45,000 lines of code (LoC), with countless concurrency-related and other issues which made Valgrind spit out very long log files, and which proved to be virtually impossible to diagnose and fix. Many attempts resulted in the runtime becoming more unstable in other places.

Thus it was that I made the decision to reimplement the Etch protocol from scratch in C++. Even though there was already an official C++ runtime, it was still in Beta and it too suffered from stability issues. After dismissing it as an option, I faced the next problem: the undocumented Etch protocol. Beyond a high-level overview of runtime concepts, there was virtually no documentation for Etch or its protocol.

Reimplementation

Fast-forward a few months and I had reverse-engineered the Etch protocol using countless Wireshark traces and implemented it in a light-weight C++-based runtime of around 2,000 LoC. Foregoing the ‘official’ runtime architecture, I had elected to model a basic message serialisation/deserialisation flow architecture instead. Another big change was foregoing the domain specific language (DSL) that Etch uses to define the available methods.

The latter choice was primarily to avoid the complexity that comes with having a DSL and compiler architecture which has to generate functioning code that then has to be compiled into the project in question. In the case of a medium-sized Etch-based project, this auto-generated code ended up adding another 15,000 LoC to the project. With my runtime, functions are defined in code and added to the runtime on start-up.

In the end this new runtime I wrote performed much better (faster, lower RAM usage) than the original runtime, but it left me wondering whether there was a better RPC library out there. Projects I looked at included Apache Thrift [5] and Google Protocol Buffers [6].

Both sadly are also quite similar to Etch, in that they follow the same path of a DSL (IDL) and auto-generated code for clients and servers. Using them is still fairly involved and cumbersome. Probably rpclib [7] comes closest, but it’s still very new and has made a lot of design choices which do not appeal to me, including the lack of any type of parameter validation for methods being called.

NymphRPC

Design choices I made in NymphRPC include such things as an extremely compact binary protocol (about 2x more compact than the Etch protocol), while still allowing for a wide range of types. I also added dynamic callbacks (settable by the client). To save one the trouble of defining each RPC method in both the client and the server, the client instead downloads the server’s API upon connecting to it.
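
To give a feel for what defining methods in code rather than in a DSL looks like, here is a purely hypothetical sketch; the class and method names are illustrative only and do not reflect NymphRPC’s actual API.

// Hypothetical sketch of code-defined RPC methods with a queryable method list;
// names and signatures are illustrative only, not NymphRPC's actual API.
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

using RpcHandler = std::function<std::string(const std::vector<std::string>&)>;

class RpcRegistry {
	std::map<std::string, RpcHandler> methods;
	
public:
	// Methods are registered in code on start-up, no generated stubs required.
	void registerMethod(const std::string& name, RpcHandler handler) {
		methods[name] = std::move(handler);
	}
	
	// A client that downloads this list on connecting knows which methods exist.
	std::vector<std::string> listMethods() const {
		std::vector<std::string> names;
		for (const auto& m : methods) { names.push_back(m.first); }
		return names;
	}
	
	std::string call(const std::string& name, const std::vector<std::string>& args) {
		return methods.at(name)(args);
	}
};

int main() {
	RpcRegistry registry;
	registry.registerMethod("echo", [](const std::vector<std::string>& args) {
		return args.empty() ? std::string() : args[0];
	});
	
	for (const std::string& name : registry.listMethods()) {
		std::cout << "exported: " << name << std::endl;
	}
	
	std::cout << registry.call("echo", { "hello" }) << std::endl;
	return 0;
}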

At this point NymphRPC is being used for my file revision control project, FLR [8], as the communication fabric between clients and server.

Performance and future

Even though the network code in NymphRPC is pretty robust and essentially the same as what currently runs on thousands of customer systems around the world – as a result of this project that originally inspired the development of NymphRPC – it is still very much under development.

The primary focus during the development of NymphRPC has been on features and stability. The next steps will be to expand upon those features, especially more robust validation and ease of exception handling, and to optimise the existing code.

Over the coming time I’ll be benchmarking [9] NymphRPC to see how it compares to other libraries, and optimise any bottlenecks that show up. Until then I welcome anyone who wishes to play around with NymphRPC (and FLR) and provide feedback 🙂

Maya

[1] https://github.com/MayaPosch/NymphRPC
[2] https://etch.apache.org/
[3] https://en.wikipedia.org/wiki/Etch_%28protocol%29
[4] http://www.bmw-carit.de/open-source/etch.php
[5] https://en.wikipedia.org/wiki/Apache_Thrift
[6] https://en.wikipedia.org/wiki/Protocol_Buffers
[7] https://github.com/rpclib/rpclib
[8] https://github.com/MayaPosch/FLR
[9] http://szelei.me/rpc-benchmark-part1/

Categories: C++, Networking, NymphRPC, Protocols, RPC

MQTTCute: a new MQTT desktop client

January 2, 2018

At the beginning of 2017 I was first introduced to the world of MQTT as part of a building monitoring and control project, and while this was generally a positive experience, I felt rather frustrated with one area of this ecosystem: the lack of proper MQTT clients, whether mobile or desktop. The custom binary protocol that was being used to communicate over MQTT with the sensor and control nodes also made those existing clients rather useless.

I would regularly have to resort to using Wireshark/tcpdump to check the MQTT traffic on TCP level, or dump the messages received into a file and open it with a hex editor, just so that I could inspect the payloads being sent by the nodes and services. This was annoying enough, and even more annoying was that the system was intended to be fully AES-encrypted, with only mosquitto_pub and mosquitto_sub actually supporting TLS certificates.

As a result I have had this urge to write my own MQTT client that would actually work in this scenario. Courtesy of getting laid off just before Christmas, I had some additional time to work on this new project. After about a week of work, I released the 0.1 Alpha version today on my GitHub account [1].

MQTTCute screenshot

Called MQTTCute, it’s written in C++ and Qt 5, and uses the Mosquitto client library for MQTT communication. Not surprisingly, it shares a fair bit of code with the Command & Control client for the BMaC system [2], which I also developed during the last year. I’ll be writing more on the BMaC project over the coming time.

With this first version of the MQTTCute client all basic functionality is present: connecting to an MQTT broker, publishing on and subscribing to topics, along with being able to publish binary messages and see received messages both in their text and hexadecimal formats. Since an MDI interface is used, it should be possible to keep track of a large number of topics without too much trouble.
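
The hexadecimal view essentially boils down to formatting the raw payload bytes received in the message callback. A minimal sketch of that idea using the libmosquitto C API, and not MQTTCute’s actual code, could look like the following, with the broker address and topic as placeholders:

// Minimal sketch: subscribe with libmosquitto and print each payload as hex.
// Illustrative only, not MQTTCute's actual code; broker and topic are placeholders.
#include <mosquitto.h>
#include <cstdio>

void onMessage(struct mosquitto* mosq, void* obj, const struct mosquitto_message* msg) {
	printf("Topic: %s, %d bytes:\n", msg->topic, msg->payloadlen);
	const unsigned char* bytes = static_cast<const unsigned char*>(msg->payload);
	for (int i = 0; i < msg->payloadlen; ++i) {
		printf("%02X ", bytes[i]);	// Hexadecimal view of the binary payload.
	}
	
	printf("\n");
}

int main() {
	mosquitto_lib_init();
	struct mosquitto* mosq = mosquitto_new("hexdump-client", true, nullptr);
	mosquitto_message_callback_set(mosq, onMessage);
	
	if (mosquitto_connect(mosq, "localhost", 1883, 60) != MOSQ_ERR_SUCCESS) {
		return 1;
	}
	
	mosquitto_subscribe(mosq, nullptr, "test/topic", 0);
	mosquitto_loop_forever(mosq, -1, 1);
	
	mosquitto_destroy(mosq);
	mosquitto_lib_cleanup();
	return 0;
}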

I’m hoping for feedback on this MQTT client, but regardless I’ll be implementing new features to improve the workflow and general options with the client. Hopefully someone beyond just myself will find it useful 🙂

Maya

[1] https://github.com/MayaPosch/MQTTCute
[2] https://github.com/MayaPosch/BMaC

Categories: C++, MQTT