Sometimes you just want a single way to build a software project, regardless of the platform or build tool you are using. The promise of CMake is that this should be possible, but in practice, it doesn’t always seem that way. One area where this becomes particularly apparent is scripted builds, especially for things like continuous integration systems, automated testing processes and so on. Since each platform typically has its own commonly used build tool, and developers tend to be more familiar with that tool than with CMake, the tendency is to invoke that tool directly in scripts. Unfortunately, this means such scripts end up handling each platform’s build tool separately. But it doesn’t have to be that way. This article addresses this and a few other smaller details associated with setting up a platform-independent scripted CMake build.
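The core idea is that CMake can drive the underlying build tool itself via `cmake --build`. A minimal sketch of a wrapper (the function name and directory arguments are illustrative; the `-S`/`-B` options require CMake 3.13 or later):

```shell
# Configure and build with CMake driving the native build tool, so the
# script is identical on every platform and for every generator.
build_project() {
    cmake -S "$1" -B "$2" &&
    cmake --build "$2" --config Release
}

# Example invocation (source dir, build dir):
#   build_project . build
```

Because `cmake --build` hides the generator behind a uniform interface, the script never needs to know whether the build uses Makefiles, Ninja or Visual Studio; single-configuration generators simply ignore the `--config` option.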
Updated December 2018: Parts of this article have been reworked to account for improvements made with the CMake 3.13.0 release. Key updates are noted within the article.
In all but trivial CMake projects, it is common to find targets built from a large number of source files. These files may be distributed across various subdirectories, which may themselves be nested multiple levels deep. In such projects, traditional approaches usually either list all source files at the top-most level or build up the list of source files in a variable and pass that to add_executable(), etc. CMake 3.1 introduced the target_sources() command, which provides the missing piece among the various target_... commands. While the CMake documentation succinctly describes what target_sources() does, it fails to highlight just how useful the command is and why it promotes better CMake projects:
- It can lead to cleaner and more concise CMakeLists.txt project files.
- Dependency information can be specified closer to where the actual dependencies exist in the directory hierarchy.
- Source files gain the ability to become part of a target’s interface.
- Source files can be added to third party project targets without having to modify the third party project files.
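As a brief sketch of the first two points (target and file names here are invented, not from any real project), a subdirectory can contribute its own sources to a target defined elsewhere:

```cmake
#---- Top-level CMakeLists.txt ----
cmake_minimum_required(VERSION 3.1)
project(myApp)
add_executable(myApp main.cpp)
add_subdirectory(foo)

#---- foo/CMakeLists.txt ----
# This directory lists its own sources, right next to the files themselves.
# Before CMake 3.13, paths given to target_sources() from a subdirectory had
# to be made absolute; from 3.13, relative paths are handled as expected.
target_sources(myApp PRIVATE
    ${CMAKE_CURRENT_SOURCE_DIR}/foo.cpp
    ${CMAKE_CURRENT_SOURCE_DIR}/foo_p.h
)
```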
Let’s explore your understanding of member function overloading. For a given class, how many different non-template overloads can you define for a given function where the function takes no arguments? Putting aside exception specifications (since allowing them would make the answer to this essentially infinite), let’s make this a multiple choice, pick your answer from: 1, 2, 4 or 8. I’ll even provide a clue that the return types of the functions don’t matter. For extra points, how many of the function overloads are likely to be useful?
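For readers who want to check their answer before reading on (spoiler ahead; class and member names are invented), the candidates can be enumerated directly: the four cv-qualifier combinations, each further qualified as lvalue or rvalue:

```cpp
#include <string>

// 4 cv-qualifier variants x 2 ref-qualifiers = 8 overloads, all taking
// no arguments. Return types play no part in overload resolution.
struct Widget {
    std::string f() &                 { return "non-const lvalue"; }
    std::string f() &&                { return "non-const rvalue"; }
    std::string f() const &           { return "const lvalue"; }
    std::string f() const &&          { return "const rvalue"; }
    std::string f() volatile &        { return "volatile lvalue"; }
    std::string f() volatile &&       { return "volatile rvalue"; }
    std::string f() const volatile &  { return "const volatile lvalue"; }
    std::string f() const volatile && { return "const volatile rvalue"; }
};
```

Note that the ref-qualifiers are essential here: a ref-qualified overload cannot coexist with a non-ref-qualified one for the same parameter list, so reaching the maximum means ref-qualifying all of them.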
I recently came across an interesting use of std::move which looked something like the following:
auto items = std::move(m_items);
for (auto item : items)
{
    // Do things which may add new items to m_items
}
The intent of the code was that every time the member function processItems() was called, it would perform some operation on each item held in the member variable m_items. Each processed item should be removed from m_items. The operation to be performed might generate new items, which would be added to m_items, so care had to be taken over how to iterate over the set of items.
To ensure robust iteration over the items to be processed, the code transfers the contents of m_items to a local object and then iterates over that local object. Thus, if new items are created during processing, they would be added to m_items and the container being iterated over (items) would not be affected. All good, right? Well, probably yes, but by no means is it guaranteed.
CMake/CPack does a pretty good job of making it relatively easy to create a basic Windows installer. Sometimes, however, it trips you up when you want to do something seemingly common. One such example is creating Start Menu shortcuts for an executable where you also want to pass it some command line arguments. Surprisingly, CMake/CPack doesn’t give you a simple or generic way to do this. It provides very basic functionality via the CPACK_PACKAGE_EXECUTABLES variable in the CPack module, but that’s just a simple mapping of executable to menu shortcut name with no opportunity to provide command line arguments.
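For NSIS-based installers, one commonly used escape hatch is the CPACK_NSIS_CREATE_ICONS_EXTRA variable, which injects raw NSIS commands into the generated installer script. A hedged sketch (the executable name, shortcut label and command line argument are all invented):

```cmake
# Inject a raw NSIS CreateShortCut command so arguments can be passed.
# The doubled-up backslashes survive both CMake and NSIS escaping.
set(CPACK_NSIS_CREATE_ICONS_EXTRA
    "CreateShortCut '$SMPROGRAMS\\\\$STARTMENU_FOLDER\\\\My App.lnk' '$INSTDIR\\\\bin\\\\myApp.exe' '--some-option'"
)
# Pair it with a matching delete so uninstall cleans up the shortcut.
set(CPACK_NSIS_DELETE_ICONS_EXTRA
    "Delete '$SMPROGRAMS\\\\$START_MENU\\\\My App.lnk'"
)
```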
UPDATED December 2015:
Since the original article was written, gtest and gmock have been merged and moved into a single repository on GitHub under the name GoogleTest. I’ve updated the content here to reflect the changes, and the article now also covers both gtest and gmock. I’ve also revised the general purpose implementation to make it more flexible, expanded its documentation and made it available on GitHub under an MIT license. I hope you find it useful.
UPDATED September 2019:
The generalised implementation was extended further and became the FetchContent module, which was added in CMake 3.11. The module documentation uses GoogleTest in some of its examples.
Using gtest/gmock with CMake is awesome. Not so awesome is when you don’t have a pre-built gtest/gmock available to use. This article demonstrates a convenient way to add them with automated source download and have them build directly as part of your project using add_subdirectory(). Unlike other common approaches, no manual information has to be provided other than the package to download. The approach is general enough to be applied to any CMake-based external project, not just gtest/gmock.
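Using the FetchContent module mentioned in the update above, the core of the technique looks roughly like this (the GIT_TAG shown is just an example; pin whichever release you need):

```cmake
include(FetchContent)
FetchContent_Declare(
    googletest
    GIT_REPOSITORY https://github.com/google/googletest.git
    GIT_TAG        release-1.8.1
)

# CMake 3.11 pattern; from 3.14, a single FetchContent_MakeAvailable(googletest)
# call collapses these steps.
FetchContent_GetProperties(googletest)
if(NOT googletest_POPULATED)
    FetchContent_Populate(googletest)
    add_subdirectory(${googletest_SOURCE_DIR} ${googletest_BINARY_DIR})
endif()
```

After this, the gtest/gmock targets are ordinary targets of your own build and can be linked against directly.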
In a previous article, the OnLeavingScope class was presented as a technique for robustly and concisely handling scenarios involving multi-step setup, run and cleanup stages. It focused on ease of learning and wide applicability rather than performance, so while the implementation was complete, it was not necessarily optimal. This article picks up where the previous article left off and deals with some of the more advanced aspects to provide some improvements.
A common sequence of steps we mortal software developers frequently find ourselves implementing goes something like this:
1. Perform some sort of setup or acquire some sort of resource.
2. Carry out some arbitrary sequence of actions.
3. Tear down things we set up or release resources we acquired in step 1.
There are well-known patterns for implementing this scenario robustly, but when the setup phase involves multiple sub-steps, each of which can fail individually, things get more complicated. This article presents a concise, self-documenting and robust way to handle these more complicated cases. A follow-up article will extend this further to improve some performance characteristics, ending up with a lot in common with the ScopeGuard11 pattern described in various places online.
The multi-step setup problem
Conceptually, the problem we want to solve can be described as follows:
1. For each setup sub-step:
   - Perform the sub-step.
   - If the sub-step fails, stop and release/clean up after all previous setup sub-steps.
2. Carry out some arbitrary sequence of actions.
3. Tear down things we set up or release resources we acquired in all sub-steps of step 1.
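The shape of the solution can be sketched as follows. This is a deliberately minimal scope-guard (names and details are illustrative, not the article's actual OnLeavingScope implementation): each successful sub-step immediately arms its own cleanup, so an early return or exception unwinds exactly the sub-steps completed so far.

```cpp
#include <cstdio>
#include <utility>

// Minimal scope-guard: runs the stored callable when the scope is left,
// whether by normal flow, early return or exception.
template <typename Func>
class OnLeavingScope
{
public:
    explicit OnLeavingScope(Func f) : m_func(std::move(f)) {}
    ~OnLeavingScope() { m_func(); }

    OnLeavingScope(const OnLeavingScope&) = delete;
    OnLeavingScope& operator=(const OnLeavingScope&) = delete;

private:
    Func m_func;
};

int runJob()
{
    // Sub-step 1: acquire, then immediately arm its cleanup (CTAD, C++17).
    std::puts("acquire A");
    OnLeavingScope cleanupA([] { std::puts("release A"); });

    // Sub-step 2: if this throws or a later check returns early,
    // only cleanupA (and anything armed before it) runs.
    std::puts("acquire B");
    OnLeavingScope cleanupB([] { std::puts("release B"); });

    // Carry out the arbitrary sequence of actions here...
    return 0;  // cleanups run in reverse order: B, then A
}
```

The guards are destroyed in reverse order of construction, which gives the teardown ordering step 3 requires for free.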
A common situation facing many projects is how to incorporate large binary assets into the main source code and its build process. Examples of such assets include firmware binaries for embedded products, videos, user manuals, test data and so on. These binary assets often have their own workflow for managing source materials, change history and building the binaries. This article demonstrates an approach to handling this situation with CMake builds.
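One generic way to pull such an asset into a CMake build (not necessarily the article's exact approach; the URL is a placeholder and the hash must be filled in with the real asset's checksum):

```cmake
# Fetch a pre-built binary asset at configure time and verify its integrity.
set(asset_url "https://example.com/releases/firmware-1.2.3.bin")
file(DOWNLOAD
    "${asset_url}"
    "${CMAKE_BINARY_DIR}/assets/firmware.bin"
    EXPECTED_HASH SHA256=<hash-of-the-asset>  # replace with the real hash
    SHOW_PROGRESS
)

# Ship the verified asset alongside the application.
install(FILES "${CMAKE_BINARY_DIR}/assets/firmware.bin" DESTINATION data)
```

Keeping the asset out of the source repository and verifying it by hash lets the binary follow its own release workflow while the main build still pins an exact version.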
Updated June 2020
With the constant evolution of C++, build systems have had to deal with the complication of selecting the relevant compiler and linker flags. If your project targets multiple platforms and compilers, this can be a headache to set up. Happily, with features added in CMake 3.1, it is trivial to handle this in a generic way.
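As an illustration of how little is needed (the target and source names are invented), a few target properties let CMake choose the correct flags for whichever compiler is in use:

```cmake
cmake_minimum_required(VERSION 3.1)
project(example CXX)

add_executable(myApp main.cpp)

# Ask for C++11 generically; CMake translates this into the appropriate
# compiler flag (e.g. -std=c++11) and errors out if the compiler can't comply.
set_target_properties(myApp PROPERTIES
    CXX_STANDARD          11
    CXX_STANDARD_REQUIRED YES
    CXX_EXTENSIONS        NO
)
```

Turning off CXX_EXTENSIONS requests the strict standard (e.g. `-std=c++11` rather than `-std=gnu++11`), which helps keep the code portable across compilers.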