Parsing command line arguments in C++

March 17, 2019

One of the things that has frustrated me since I first started programming is the difficulty of using the command line arguments provided to one's application. Every one of us is aware of the standard formulation of the main function:

int main(int argc, char** argv);

Here argc is the number of arguments (as split on spaces by the shell), including the name of the application binary itself, and argv is an array of C-style strings, each containing one argument. This leads to the most commonly used style of passing arguments:

app.exe -h --long argument_text
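To make the roles of argc and argv concrete, here is a minimal illustrative program (not part of Sarge) that simply echoes its arguments. Invoked as above, argv[0] would hold app.exe and argv[1] would hold -h:

#include <iostream>

int main(int argc, char** argv) {
	for (int i = 0; i < argc; ++i) {
		std::cout << "argv[" << i << "]: " << argv[i] << std::endl;
	}
	
	return 0;
}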

What annoyed me all these years is not having a built-in way to parse command line arguments in C++. Sure, there's the getopt [2] way if one uses Linux or a similar OS, and there is a range of argument parsing libraries and framework APIs, such as gflags [3], Boost Program Options [4], POCO [5], Qt [6] and many others.

What these do not provide is a simple, zero-dependency way to add argument parsing to a C++ application. This led me to put together a simple command line argument parsing class which does exactly what I desire of such an API, without any complications.

Meet Sarge [1] and its integration test application:

#include "../src/sarge.h"

#include <iostream>


int main(int argc, char** argv) {
	Sarge sarge;
	
	sarge.setArgument("h", "help", "Get help.", false);
	sarge.setArgument("k", "kittens", "K is for kittens. Everyone needs kittens in their life.", true);
	sarge.setDescription("Sarge command line argument parsing testing app. For demonstration purposes and testing.");
	sarge.setUsage("sarge_test ");
	
	if (!sarge.parseArguments(argc, argv)) {
		std::cerr << "Couldn't parse arguments..." << std::endl;
		return 1;
	}
	
	std::cout << "Number of flags found: " << sarge.flagCount() << std::endl;
	
	if (sarge.exists("help")) {
		sarge.printHelp();
	}
	else {
		std::cout << "No help requested..." << std::endl;
	}
	
	std::string kittens;
	if (sarge.getFlag("kittens", kittens)) {
		std::cout << "Got kittens: " << kittens << std::endl;
	}
	
	return 0;
}

Here one can see most of the Sarge API: first the arguments we are looking for are set, followed by the application description and usage text, which will be printed if the user requests the help view, or if our code decides to print it, for example when required options are missing.

The Sarge class implementation itself is very basic, using nothing but the standard library (specifically the vector, map, memory, iostream and string headers), weighing in at 136 lines of code as of this writing.

When asked to parse the command line arguments, it scans the argument list (argv) for known flags and for flags which require a value. It detects unknown flags and missing values, while allowing single-character (short) options to be chained together.
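To give an idea of the general approach, here is a simplified sketch of such a scan. This is my paraphrase for illustration, not Sarge's actual implementation, and it omits the chaining of short options for brevity:

#include <iostream>
#include <map>
#include <string>

// Sketch: 'known' maps each flag name to whether it requires a value;
// parsed flags and their values end up in the 'values' map.
bool parseArguments(int argc, char** argv,
					const std::map<std::string, bool>& known,
					std::map<std::string, std::string>& values) {
	for (int i = 1; i < argc; ++i) {
		std::string arg = argv[i];
		if (arg.rfind("--", 0) == 0) 		{ arg.erase(0, 2); }	// Long option.
		else if (arg.rfind("-", 0) == 0) 	{ arg.erase(0, 1); }	// Short option.
		
		auto it = known.find(arg);
		if (it == known.end()) {
			std::cerr << "Unknown flag: " << arg << std::endl;
			return false;
		}
		
		if (it->second) {	// This flag requires a value: consume the next argument.
			if (++i >= argc) {
				std::cerr << "Missing value for flag: " << arg << std::endl;
				return false;
			}
			values[arg] = argv[i];
		}
		else {
			values[arg] = "";
		}
	}
	
	return true;
}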

I'll be using Sarge for my own projects from now on, making additions and tweaks as I see fit. Feel free to have a look and poke at the project as well, and let me know your thoughts.

Maya

[1] https://github.com/MayaPosch/Sarge
[2] https://en.wikipedia.org/wiki/Getopt
[3] https://github.com/gflags/gflags
[4] http://www.boost.org/doc/libs/1_64_0/doc/html/program_options.html
[5] https://pocoproject.org/docs/Poco.Util.OptionProcessor.html
[6] http://qt.io


Reviewing dual-layer PCBWay PCBs

March 11, 2019

This review is an addendum to the first part in the Greentropia Base Board article series [1]. Here we have a look at the PCB ordering options, process and product delivered by PCBWay and conclude with impressions of the Greentropia Base realized with these PCBs.

Much to the delight of professional hardware developers and hobbyists alike, prices for dual layer FR4 PCBs have come down to a point where shipping from Asia has become the major cost factor. An online price comparison [2] brings up the usual suspects, with new and lesser-known PCB manufacturers added to the mix.

In this competitive environment, reputation is just as important as consistently high quality and great service. PCBWay [3] reached out to us with an offer to demonstrate their PCB manufacturing process and products by providing free PCBs, which we accepted as an opportunity to fast-track the Greentropia Base board [4], a primary building block of the ongoing Greentropia indoor farming project [5].

Ordering the PCBs

PCB specifications guide the design process and show up again when ordering the actual PCBs. They stand at the beginning and the end of the board design process – hopefully without having to escalate to smaller drill sizes, narrower traces or a higher layer count along the way.

The manufacturing capabilities [6] are obviously just bounds for the values selected in a definitive set of design rules, leaving room for a trade-off between design challenges and manufacturing cost. Sometimes relaxing the minimum trace width and spacing from 5/5 mil (0.127 mm) to 6/6 mil (0.15 mm) can make a noticeable difference in PCB cost. And then again, switching from a 0.3 mm to a 0.25 mm minimum drill size can make fan-out and routing in tight spaces possible, albeit at a certain price.

Logically we will need to look at the price tag of standard and extended manufacturing capabilities. The following picture displays pricing as of the writing of this article:

pcbway_order_spec

For some options the pricing is very attractive. Most notably, an array of attractive colours is available at no additional charge. With the RoHS and REACH directives in place, however, it remains to be seen whether lead-free hot air surface levelling (HASL) will become the new standard at no added cost.

Luckily for our project we do not need to stray far from the well-trodden path and just opt for the lead-free finish on a blue 1.6mm PCB.

The ordering process is hassle-free and provides frequent status updates:

pcbway_order_progress

A month after our order, an online Gerber viewer [7] was introduced to help designers quickly verify their Gerber output before uploading it for the order. It must be noted, however, that this online feature is at an early stage; layer de-duplication, automatic and consistent colour assignment, appropriate z-order and better rendering speed are expected in the future.

pcbway_gerber_viewer

Gerbv [8] is a viable alternative which also provides last-minute editing capabilities (e.g. deleting a stray silkscreen element).

Visual inspection

PCBs were received within one week after ordering, packaged in a vacuum sealed bag and delivered in a cardboard box with foam sheets for shock protection. One extra PCB was also included in the shipment, which is nice to have.

The boards present with cleanly machined edges, well-aligned drill pattern and stop masks on both sides and without scratches or defects. The silkscreen has good coverage and high resolution. Adhesion of stop mask and silkscreen printing are excellent. The lead-free HASL finish is glossy and flat, and while we couldn’t put it to the test with this layout, the TSSOP footprint results suggest no issues with TSSOP, TQFP and BGA components down to 0.5mm pitch.

The board identifier is thankfully hidden underneath an SOIC component in the final product. Pads show the expected probe marks from the e-test, without this affecting the final reflow result; no probe damage to the pads is evident.

Realising the project

We conclude with some impressions of the assembled PCBs, which we will use in the following articles to build an automated watering system.

Here we see our signature paws with the 2 mm wide capacitor C15 next to them for scale. The pitch of the vertical header is 2.54 mm. Tenting of the vias is also consistent and smooth.

greentropia_base_mask_quality

Good mask alignment and print quality.

The next picture shows successful reflow of components of different sizes and thermal mass after a lead-free reflow cycle in a convection oven. As the PCBs were properly sealed and fresh, no issues with delamination occurred.

greentropia_base_different_size_reflow_result

DC-DC section reflow result.

The reflow result with the lead-free HASL PCB and the stencil ordered along with it is also quite promising. No solder bridges were observed despite the lack of solder mask webbing between the pads, which is likely due to our mask relief settings and minimum webbing width. Very thin webbing can be destroyed during HASL, so if the additional safety of webbing in the 0.15 to 0.2 mm between the pads is needed, it's worth checking back with the manufacturer.

greentropia_base_tssop_hasl_result

TSSOP reflow result.

The 5V to 12V boost converter worked without issues during testing, and initial testing of the ADC was also promising. As we continue to test the boards over the coming time we'll find out whether any issues remain, but so far everything appears to be working as it should.

Maya

[1] https://mayaposch.wordpress.com/2019/03/06/keeping-plants-happy-with-the-greentropia-base-board-part-1/
[2] https://pcbshopper.com/
[3] https://www.pcbway.com/
[4] https://github.com/MayaPosch/Greentropia_Base
[5] http://www.nyantronics.com/greentropia.php
[6] https://www.pcbway.com/capabilities.html
[7] https://www.pcbway.com/project/OnlineGerberViewer.html
[8] http://gerbv.sourceforge.net/

Keeping plants happy with the Greentropia Base board – Part 1

March 6, 2019

Last year I got started on an automatic plant watering project, with the goal of a completely stand-alone, self-sufficient solution: one capable of not only monitoring the level of moisture in the soil, but also of controlling a pump that adds water to the soil when needed.

Later iterations of this basic design added a scale to measure the level of the water reservoir, as well as a multi-colour LED to be used as a system indicator and for more decorative purposes. This design was developed further for my third book, which was released [1][2][3] in February of this year. In chapter 5 of that book it features as an example project, using the BMaC [4] firmware for the ESP8266 microcontroller.

That's where the project remained for a while: even though a PCB design (the Greentropia [5] base board) had been created that would accommodate the project's complete functionality on a single board, the effort and costs associated with turning it into a physical product kept me from simply pushing the button on ordering the PCBs and components.

Thus the board remained just a digital render:

iop_plant_base_002

When I was suddenly contacted by a representative of PCBWay [6] with an offer to have free PCBs made in exchange for a review of the finished board, it became all too easy to finally take the step of having the board produced for real.

After some last-minute, frantic validation of the design and board layout by yours truly and a good friend, the Gerber files were submitted to PCBWay. We used the Gerber viewer in KiCad to check the files prior to submitting them. Later I learned that PCBWay also offers an online Gerber viewer [7]. We did not use that one, but it's important to use some Gerber viewer before submitting a design, to be sure that the resulting PCB will look and function the way it should.

After a couple of days of PCB production and shipping from China to Germany, the boards arrived:

IMG_20190102_161941

Top side:

IMG_20190102_162412

Bottom side:

IMG_20190102_162432

All boards looked good, with sharp silkscreen features and the solder mask well aligned with the pads. We compared them with another Nyantronics PCB which we have been working on for a while now, manufactured by JLCPCB, which makes for a good comparison of the blue solder mask each manufacturer uses:

IMG_20190102_165819

Which colour you prefer is a personal choice, of course. Personally I like the deeper blue of the JLCPCB board, but the PCBWay blue isn't half bad either. The real concern is of course whether or not the PCB does what it's supposed to do, which is what we'd find out once we assembled the boards.

For this we used a professional reflow oven, courtesy of the local university:

IMG_0947IMG_0968

This resulted in the following boards, after a few through-hole components were added by hand:

IMG_20190203_041937IMG_20190203_041718IMG_20190203_042545
img_20190305_211945

Each of these boards has sockets for a NodeMCU board, which contains an ESP-12E or 12F module with the ESP8266 microcontroller. This provides the ability to control the pump output and SPI bus, as well as read out the HX711-based scale interface and soil sensor.

Microscope images of the finished boards were also made and can be found in this addendum article: https://mayaposch.wordpress.com/2019/03/11/reviewing-dual-layer-pcbway-pcbs/

In the next parts we will wrap up the remaining development of the hardware, and conclude with the development of the firmware for this board.

Maya

[1] https://www.amazon.com/Hands-Embedded-Programming-versatile-solutions-dp-1788629302/dp/1788629302/
[2] https://www.packtpub.com/application-development/hands-embedded-programming-c17
[3] https://www.amazon.de/Hands-Embedded-Programming-versatile-solutions/dp/1788629302/
[4] https://github.com/MayaPosch/BMaC
[5] http://nyantronics.com/greentropia.php
[6] http://www.pcbway.com/
[7] https://www.pcbway.com/project/OnlineGerberViewer

Designing an RC debounce circuit

June 26, 2018

While working on a project earlier this year which involved monitoring the state of a number of switches, I had to find a way to deal with the mechanical bouncing of these switches. Despite my initial assumption that detailed information on this would be easy to find, I failed to locate any clear guides or tutorials, at least as far as hardware-based debouncing methods went.

Though I was aware of software-based switch debounce algorithms, I decided against using those, on account of the complexity they add to the code, the system timers they take from the pool, and the extra burden this would put on testing and debugging the software design. Instead I opted for an RC circuit-based solution. With no easy tutorials available, in the end I simply copied a complete design from someone else, since it seemed to work for the purpose.


The RC circuit

When debouncing switches in hardware, it matters which type of switch we are debouncing. The switches I had to debounce for the project were SPST (Single-Pole, Single-Throw: one output), meaning one has just a single signal wire to work with. Here, the delay created by an RC network charging or discharging is used to smooth out the erratic signal of a mechanical switch opening and closing.

With an SPDT (Single-Pole, Double-Throw) switch, one can use the same RC circuit, use an AND gate-based debounce circuit (not covered in this article), or use a hardware-based timer circuit.

The RC debounce circuit we’ll be looking at in this article is the following:

The way this circuit works is that the capacitor (C1) is charged via R1 whenever the switch is in the open position, with the diode (D1) bypassing R2. This results in a logical ‘1’ after the delay determined by the values of R1 and C1.

When the switch closes, C1 discharges through R2, with the latter's value determining the delay, resulting in a logical ‘0’. The output of the RC circuit is connected to U1, an inverting Schmitt trigger. This produces the expected logical ‘1’ when the switch is closed, and ‘0’ when it has been opened.

In addition, the Schmitt trigger (a CD40106 hex inverting Schmitt trigger IC) adds hysteresis: its input has two trigger points rather than one, which smooths out any remaining ripple, as the output (which we connect to the input pin of our processor) does not switch to the opposite value immediately, but only once the relevant trigger point is reached. This effectively creates a dead zone between the logical values in which analogue noise has no effect.
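This behaviour is easy to model in a few lines of code. The following sketch uses assumed threshold voltages (roughly the typical datasheet values for a CD40106 at 5 V; check the actual part) to show how the two trigger points create the dead zone:

// Software model of an inverting Schmitt trigger input. The output only
// changes once the input voltage crosses the upper or lower threshold;
// in between (the dead zone) the previous state is held.
struct SchmittInverter {
	double vtHigh = 2.9;	// Assumed positive-going threshold (V).
	double vtLow = 1.9;		// Assumed negative-going threshold (V).
	bool inputHigh = false;	// Last resolved input state.
	
	bool output(double vin) {
		if (!inputHigh && vin > vtHigh) { inputHigh = true; }
		else if (inputHigh && vin < vtLow) { inputHigh = false; }
		return !inputHigh;	// Inverting output.
	}
};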


Understanding the circuit

When I decided to use this project as a practical example for my upcoming book on embedded C++ development [1], I realised that I needed to dive a bit deeper into the how and why of these circuits, especially on how to calculate the appropriate values for the RC circuit.

Fundamental to RC networks is the RC time constant, or τ (tau), which is defined as:

\tau = RC

This time constant is defined in seconds, with one τ being the time it takes for the capacitor to charge to 63.2%, and 2τ to 86.5%. For discharging, the capacitor is down to 36.8% after 1τ, and to 13.5% after 2τ. This shows that a capacitor is never fully charged or discharged; the process simply slows down ever further. Of relevance for us here is that 1τ roughly corresponds to the charge level required to cross over to the opposite logical output level, and thus to the effective delay we get for a specific RC value.

In addition, we can calculate the voltage of the capacitor at any given time, when charging and discharging respectively:

V_c(t) = { V_s (1 - e^{ -t/RC }) }

V_c(t) = { V_s e^{ -t/RC } }

Here t is the elapsed time in seconds, V_s the source voltage and e the mathematical constant (approximately 2.71828), also known as Euler’s number.


Running the numbers

For the circuit diagram given earlier, we can take its values and use the RC time constant to calculate the delay we achieve, and thus what length of switch bounce we can compensate for. We'll first look at the charging time (51 kΩ, 1 µF), then the discharging time (22 kΩ, 1 µF):

\tau_{charge} = { 51000 \cdot 0.000001 } = 0.051\ s

\tau_{discharge} = { 22000 \cdot 0.000001 } = 0.022\ s

With the RC values used we achieve 51 milliseconds for charging (switch opening) and 22 milliseconds for discharging (switch closing). As 20 ms is a common bounce time for mechanical switches, these values seem reasonable. For any practical application, however, we would need to use the actual bounce time of the switches in question to pick appropriate values.
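We can also invert the charging formula to get the time at which the capacitor crosses a given trigger level: t = -RC \cdot ln(1 - V/V_s). Here is a small stand-alone check, using an assumed CD40106 positive-going threshold of 2.9 V at a 5 V supply (a typical datasheet value, not a measured one):

#include <cmath>
#include <iostream>

int main() {
	const double Vs = 5.0;		// Supply voltage (V).
	const double Vtrig = 2.9;	// Assumed positive-going threshold of the CD40106 (V).
	const double R = 51000.0;	// Charge resistor R1 (Ohm).
	const double C = 0.000001;	// Capacitor C1 (F).
	
	// Invert V_c(t) = Vs * (1 - e^(-t/RC)) for t.
	double t = -R * C * std::log(1.0 - (Vtrig / Vs));
	std::cout << "Time to cross the trigger level: " << (t * 1000.0) << " ms" << std::endl;
	
	return 0;
}

With these values this comes out to roughly 44 ms, comfortably above the 20 ms bounce time mentioned above.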


Conclusion

When it all comes down to it, designing an RC debounce circuit isn't incredibly complex once one understands the principles and physics behind it. Using the RC time constant, it is a matter of picking an appropriate capacitor value, then sizing the resistors to reach the required charge and discharge times. The exact Schmitt trigger IC used isn't terribly crucial, and it can even be omitted in favour of an SoC's built-in input hysteresis, where available.

The project which led me to research this topic resulted in me designing an entire debounce HAT [2] for the Raspberry Pi series of Single Board Computers (SBCs):

This design uses all six Schmitt triggers in the CD40106 IC, allowing for up to six switches or equivalent to be connected. The integrated EEPROM allows the board to be automatically configured by the OS installed on the SBC by reading out the GPIO pins it is connected to, setting the appropriate direction and modes.

Naturally the RC values for this design can be altered at will to fit the requirements, so long as they fit the 0805 footprint.

Maya

[1] https://www.packtpub.com/application-development/hands-embedded-programming-c17
[2] https://github.com/MayaPosch/DebounceHat

NymphRPC: my take on remote procedure call libraries

January 29, 2018

Recently I open-sourced NymphRPC [1], which is a Remote Procedure Call (RPC) library I have been working on for the past months. In this article I would like to explain exactly why I felt it to be necessary to unleash yet another RPC library onto the world.

Over the past years I have had to deal quite a lot with a particular RPC library (Etch [2][3]) due to a commercial project. Etch is now defunct, but it spent some time languishing as an Apache project after Cisco open-sourced it in 2011, and it was picked up by BMW as an internal protocol for their infotainment systems [4].

During the course of this commercial project it quickly became clear to me that the Etch C library which I was using had lots of issues, including stability and general usability problems (such as calling abort() with no option of recovery whenever any internal assert failed). As the project progressed, I found myself faced with the choice to either debug this existing library, or reimplement it.

At this point the C-based library was around 45,000 lines of code (LoC), with countless concurrency-related and other issues which made Valgrind spit out very long log files and which proved virtually impossible to diagnose and fix. Many fix attempts merely made the runtime more unstable in other places.

Thus it was that I made the decision to reimplement the Etch protocol from scratch in C++. There was already an official C++ runtime, but it was still in beta and it too suffered from stability issues. After dismissing it as an option, I was faced with the next problem: the Etch protocol was undocumented. Beyond a high-level overview of runtime concepts, there was virtually no documentation for Etch or its protocol.

Reimplementation

Fast-forward a few months, and I had reverse-engineered the Etch protocol using countless Wireshark traces and implemented it in a light-weight C++ runtime of around 2,000 LoC. Foregoing the ‘official’ runtime architecture, I had elected to model a basic message serialisation/deserialisation flow instead. Another big change was foregoing the domain-specific language (DSL) which Etch uses to define the available methods.

The latter choice was primarily to avoid the complexity that comes with a DSL and compiler architecture which has to generate functioning code that then has to be compiled into the project in question. In the case of a medium-sized Etch-based project, this auto-generated code ended up adding another 15,000 LoC to the project. With my runtime, functions are instead defined in code and registered with the runtime on start-up.
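To illustrate the idea of defining methods in code rather than via a DSL, here is a deliberately generic sketch. None of these names are taken from NymphRPC's actual API; it only shows the registration-at-start-up concept:

#include <functional>
#include <map>
#include <string>
#include <vector>

// Illustrative only: a stand-in value type and a registry mapping method
// names to handlers, filled in at start-up instead of being generated.
struct Value { std::string data; };
using Method = std::function<Value(const std::vector<Value>&)>;

class Runtime {
	std::map<std::string, Method> methods;
public:
	void registerMethod(const std::string& name, Method m) {
		methods[name] = m;
	}
	Value call(const std::string& name, const std::vector<Value>& args) {
		return methods.at(name)(args);	// Dispatch an incoming RPC by name.
	}
};

int main() {
	Runtime rt;
	rt.registerMethod("echo", [](const std::vector<Value>& args) {
		return args.empty() ? Value{} : args.front();
	});
	rt.call("echo", { Value{ "hello" } });
	return 0;
}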

In the end this new runtime I wrote performed much better (faster, lower RAM usage) than the original runtime, but it left me wondering whether there was a better RPC library out there. Projects I looked at included Apache Thrift [5] and Google Protocol Buffers [6].

Both are sadly quite similar to Etch in that they follow the same path of a DSL (an interface definition language) with auto-generated code for clients and servers. Using them is still fairly involved and cumbersome. rpclib [7] probably comes closest, but it's still very new and has made a number of design choices which do not appeal to me, including the lack of any type of parameter validation for the methods being called.

NymphRPC

Design choices I made in NymphRPC include an extremely compact binary protocol (about twice as compact as the Etch protocol) which still allows for a wide range of types, as well as dynamic callbacks (settable by the client). To save one the trouble of defining each RPC method in both the client and the server, the client instead downloads the server's API upon connecting to it.

At this point NymphRPC is being used for my file revision control project, FLR [8], as the communication fabric between clients and server.

Performance and future

Even though the network code in NymphRPC is pretty robust and essentially the same as what currently runs on thousands of customer systems around the world – as a result of this project that originally inspired the development of NymphRPC – it is still very much under development.

The primary focus during the development of NymphRPC has been on features and stability. The next steps will be to expand upon those features, especially more robust validation and ease of exception handling, and to optimise the existing code.

Over the coming time I'll be benchmarking [9] NymphRPC to see how it compares to other libraries, and optimise any bottlenecks that show up. Until then I welcome anyone who wishes to play around with NymphRPC (and FLR) and provide feedback 🙂

Maya

[1] https://github.com/MayaPosch/NymphRPC
[2] https://etch.apache.org/
[3] https://en.wikipedia.org/wiki/Etch_%28protocol%29
[4] http://www.bmw-carit.de/open-source/etch.php
[5] https://en.wikipedia.org/wiki/Apache_Thrift
[6] https://en.wikipedia.org/wiki/Protocol_Buffers
[7] https://github.com/rpclib/rpclib
[8] https://github.com/MayaPosch/FLR
[9] http://szelei.me/rpc-benchmark-part1/


MQTTCute: a new MQTT desktop client

January 2, 2018

At the beginning of 2017 I was first introduced to the world of MQTT as part of a building monitoring and control project, and while this was generally a positive experience, I felt rather frustrated with one area of this ecosystem: the lack of proper MQTT clients, whether mobile or desktop. The custom binary protocol used to communicate over MQTT with the sensor and control nodes also made the existing clients rather useless.

I would regularly have to resort to using Wireshark or tcpdump to check the MQTT traffic at the TCP level, or dump the received messages into a file and open it in a hex editor, just so that I could inspect the payloads being sent by the nodes and services. This was annoying enough, and even more annoying was that the system was intended to be fully AES-encrypted, with only mosquitto_pub and mosquitto_sub actually supporting TLS certificates.

As a result I have had this urge to write my own MQTT client that would actually work in this scenario. Courtesy of getting laid off just before Christmas, I had some additional time to work on this new project. After about a week of work, I released the 0.1 Alpha version today on my GitHub account [1].

MQTTCute screenshot

Called MQTTCute, it's written in C++ with Qt 5, using the Mosquitto client library for MQTT communication. Not surprisingly, it shares a fair bit of code with the Command & Control client for the BMaC system [2], which I also developed during the last year. I'll be writing more on the BMaC project in the coming time.

With this first version of the MQTTCute client all basic functionality is present: connecting to an MQTT broker, publishing on and subscribing to topics, along with being able to publish binary messages and see received messages both in their text and hexadecimal formats. Since an MDI interface is used, it should be possible to keep track of a large number of topics without too much trouble.
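As an aside, rendering a binary payload as hexadecimal is simple enough. A minimal sketch of the idea (illustrative only, not MQTTCute's actual code) might look like this:

#include <iomanip>
#include <sstream>
#include <string>

// Render a binary MQTT payload as a space-separated hex string.
std::string toHex(const std::string& payload) {
	std::ostringstream oss;
	for (unsigned char c : payload) {
		oss << std::hex << std::setw(2) << std::setfill('0')
			<< static_cast<int>(c) << ' ';
	}
	return oss.str();
}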

I’m hoping for feedback on this MQTT client, but regardless I’ll be implementing new features to improve the workflow and general options with the client. Hopefully someone beyond just myself will be finding it useful 🙂

Maya

[1] https://github.com/MayaPosch/MQTTCute
[2] https://github.com/MayaPosch/BMaC


On the merits of comment-driven development

April 9, 2017

The ‘correct’ ways to develop software are almost too many to count. While some methods are all-encompassing for a (multi-person) project, others apply mostly to the engineering of the code itself. This article is about a method in the latter category.

All-encompassing methods that have become popular in recent years include ‘Waterfall’ and ‘Agile’. For ensuring code quality, so-called test-driven development (TDD) is often enforced.

Personally, I have been doing software development since I was about 7 years old, starting with QBasic (on MS-DOS) and trying out many languages before settling on C++ and VHDL as my preferred languages. What I have come to appreciate over these decades is a) starting with a plan, and b) writing out the code flow in comments before committing any lines of code to the structure.

Obvious things

I hope that I do not have to explain here why starting off with a set of requirements and a worked-out design is a good idea. Even though software is often easier to fix afterwards than a hardware-based design, the cost of a late refactoring can be higher than one may be willing – or can afford – to pay. And sometimes one’s software project is part of a satellite around a distant planet.

The same is true of documenting APIs, classes, methods and general application structure, as well as protocols. Not doing this seems like a brilliant idea until you're the person cursing at some predecessor who figured that anyone could work out what they were thinking when they wrote that code.

Comment-driven development

As I mentioned earlier, I prefer to put down comments first, before I start writing code. This is of course not true for trivial code or standard routines, but it definitely is for everything else. The main benefit for me is that it allows me to organise my thoughts and consider alternative approaches before committing myself to a certain architecture.

Recently I became aware of this style of developing being called ‘comment-driven development’, or CDD. While some seem to take this style as a bit of a joke, more and more people are taking it seriously these days.

When I look back at my old code from years ago, I really appreciate the comment blocks in which I penned down my thoughts and considerations. Instead of reading the code itself, I can read these comments and use the code merely for illustrative purposes. This to me is another major benefit of CDD: it makes the source code truly self-documenting.

The steps of CDD can be summed as follows:

  1. Take the worked out requirements and design documents.
  2. Implement basic application structure.
  3. Fill in the skeleton classes and functions with comment blocks describing intent and considerations.
  4. Find any collisions and issues with assumptions made in one comment relative to another.
  5. Go back and fix the issues in the design and/or architecture which caused these results.
  6. Repeat steps 4 through 5 until everything looks right.
  7. Start implementing the first code blocks.
  8. When finding issues with the commentary, return to step 4.
  9. Perform tests (unit, integration, etc.). If an error in the commentary text is found, go back to step 4. Code implementation errors are okay.
  10. Final integration, validation, documentation and delivery.

What is very valuable about CDD is that it forces one to consider the validity of the assumptions made, or more simply put: whether one truly wants to write the code one was thinking of writing, or should reconsider some or all aspects of it. It also forms a valuable bridge between requirements and design documents on the one hand, and source code, tests and documentation on the other.

CDD doesn't supersede or seek to compete with Waterfall, Agile, TDD or the like, but instead complements any development process by providing an up-to-date view of the current status of a design and its implementation. When combined with a revision control system such as SVN or Git, one can thus track changes to the design.

It's also possible to add tags to the comments to indicate questions or known defects. Common here are the ‘TODO:’ and ‘FIXME:’ strings, which some editors parse and display in an overview. Using such a system, it's immediately clear at a glance which issues and questions exist.
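As an illustration of what step 3 of the list above can look like in practice, here is a hypothetical skeleton (not taken from any particular project), with intent, considerations and tags in place before any code is written:

// Hypothetical CDD skeleton: comments first, code later.
class SensorPoller {
public:
	// Reads all registered sensors and publishes their values.
	// Considerations: polling is sequential for now; if a sensor stalls,
	// the whole cycle stalls. TODO: decide on a per-sensor timeout.
	// Assumption: the sensor list does not change while polling. FIXME:
	// verify this against the configuration-reload design.
	void pollAll() {
		// 1. Iterate over the sensor list.
		// 2. For each sensor, read the current value.
		// 3. Publish the value, tagged with sensor ID and timestamp.
	}
};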

Considerations

One thing which I often hear about comments is that one should not use them at all: that the ‘working documentation’ for code is contained in revision control commit messages and kin, and that comments are always out of date anyway.

The thing there is that even the best commit messages I have seen cannot provide the in-context level of detail which comments can. Commit messages also rarely contain questions, remarks and the like, and where they do, it's not easy to get a central overview of outstanding issues in an editor, even with access to the repository.

Out-of-date comments are merely a sign of a lack of discipline. CDD isn't unlike TDD in that if one doesn't maintain the comments or tests, the whole system stops working. As the two systems are complementary, this isn't too surprising.

A good reason to complement TDD with CDD is that it can drastically reduce the number of test cases. By strategically testing only specific cases which came forward as being ‘important’ during the CDD phase, only a limited number of unit tests are required, with integration testing sufficient for further cases. CDD improves test planning.

Writing documentation is made infinitely easier with CDD, as all one has to do is to take the commentary in each source and header file and turn it into a more standard documentation format. Accuracy and completeness are improved.

Final thoughts

The above are mostly just my own thoughts and experiences on the subject of CDD. I do not claim that this is the be-all and end-all of CDD, just that it's the form of CDD which I have used for years and which works really well for me.

I welcome constructive feedback and thoughts on the topic of CDD and other ways in which it can improve or support a project. If CDD is an integral part of your personal or even professional projects I would like to hear about it as well 🙂

Maya