3 Types of Macros That Improve C++ Code
Macros are bad. It’s a well-known fact: they’re vestiges from the past that really, really don’t fit well with the ever-growing modernity of C++.
Well, except the macros that are good, that is.
There is a rule that says that every rule has its exceptions. It implies that this rule itself has exceptions too, which means that there exists a rule somewhere, that doesn’t have exceptions. But that rule isn’t “don’t use macros”.
Indeed, even if a lot of macros end up making the code confusing, some macros constitute an improvement to the code, and can make it more expressive and still correct.
The worst macro in the world: max
Why are macros bad, to begin with? Indeed, Effective C++’s Item 2 recommends staying away from #define directives, and shows how other basic features of C++ can do the same job, only better.
Macros have indeed a lot of issues. One of them is that they don’t have scope. This means that if a file, say aHeader.hpp, declares a #define directive, then the rest of that file, along with every line of every other file that includes aHeader.hpp, directly or indirectly, is impacted by this #define. Whether they like it or not.
And that’s a big impact, since that macro is going to change their code. If it says #define A B for example, then the preprocessor will replace every A by a B in those files, even if they only remotely #include the culprit aHeader.hpp. Sometimes the designer of a .cpp is not even aware that they include aHeader.hpp, or even who this aHeader.hpp is to begin with.
Contrary to a function, an object or a type, you can’t confine a macro to a class or a namespace. As long as you #include it, all your As become Bs.
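To picture the situation, here is a hypothetical sketch (the file names and the macro come from the paragraphs above, the rest is made up for illustration):

// aHeader.hpp
#define A B

// someFile.cpp, possibly far away in the code base
#include "aHeader.hpp"  // maybe pulled in indirectly, through other headers

namespace myModule
{
    int A = 0;  // the preprocessor turns this into: int B = 0;
}               // the namespace offers no protection against the macro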
Another issue comes from the fact that macros operate at the level of the text of the source code, which means that they’re oblivious to the semantics of the code they’re operating on. To illustrate, consider the example of the max macro:
#define max(a,b) (a < b) ? b : a
This looks like it gives the bigger of two values. In a lot of cases it does, like in the following code:
int x = 42;
int y = 43;
int z = max(x, y);

std::cout << x << '\n' << y << '\n' << z << '\n';
The code outputs:
42
43
43
But consider this slightly modified version of the code:
int x = 42;
int y = 43;
int z = max(++x, ++y);

std::cout << x << '\n' << y << '\n' << z << '\n';
Even if this is questionable code, the result we’d expect is x being 43, y being 44 and z being 44. But instead this program outputs this:
43
45
45
And it makes sense when we think about what the preprocessor is doing: replacing text. The expanded version of the macro is then:
int x = 42;
int y = 43;
int z = (++x < ++y) ? ++y : ++x;

std::cout << x << '\n' << y << '\n' << z << '\n';
The bigger value, here y, gets incremented twice.
The text replacement, combined with a poor integration with C++ features, makes a dangerous mix. In this case, if you #include another header that defines a max function (not a macro), you won’t be able to call it. Indeed, the preprocessor will silently replace the function calls with the expansion of the macro.
Such macros create bugs. And macros have other issues, such as being hard to step through in a debugger.
So if macros have so many problems, in which cases do they bring enough value to outweigh their risks and improve the code as a result?
Useful macro #1: The macro that bridges a gap between two C++ features
C++ is a pretty rich language, and its features suffice to write a lot of applications. But in some advanced designs, two parts of the code won’t connect together seamlessly.
One of those cases is described in Chapter 10 of Modern C++ Design (my all-time favourite C++ book), where Andrei Alexandrescu uses a policy-based design to implement the design pattern Visitor.
He writes:
“We need a way to implement Accept in the library and to inject this function into the application’s DocElement hierarchy. Alas, C++ has no such direct mechanism. There are workarounds that use virtual inheritance, but they are less than stellar and have non-negligible costs. We have to resort to a macro and require each class in the visitable hierarchy to use that macro inside the class definition.
Using macros, with all the clumsiness they bring, is not an easy decision to make, but any other solution does not add much commodity, at considerable expense in time and space. Because C++ programmers are known to be practical people, efficiency is reason enough for relying on macros from time to time instead of using esoteric but ineffective techniques.”
But then, how to keep control when there are macros around our code? The author carries on with a piece of advice to limit the risks associated with macros:
“The single most important rule in defining a macro is to let it do as little as possible by itself and to forward to a “real” entity (function, class) as quickly as possible. We define the macro for visitable classes as follows:
#define DEFINE_VISITABLE() \
    virtual ReturnType Accept(BaseVisitor& guest) \
    { return AcceptImpl(*this, guest); }
I like how he underlines that we need to stay “practical”. My understanding of this is that we shouldn’t follow rules blindly. By learning the rationale behind rules, we get to know the pros and cons of keeping them, and in which situation it makes sense to bend them or even break them.
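To make the quoted macro a bit more concrete, here is a stripped-down, hypothetical sketch of how a class in the visitable hierarchy would use it. The real machinery in Modern C++ Design is richer than this; the point is only that the macro does almost nothing and immediately forwards to a “real” function:

#include <iostream>

class BaseVisitor
{
public:
    virtual ~BaseVisitor() = default;
};

using ReturnType = void;

// The "real" entity the macro forwards to (simplified for the sketch).
template<typename Visited>
ReturnType AcceptImpl(Visited&, BaseVisitor&)
{
    std::cout << "dispatching to the visitor\n";
}

#define DEFINE_VISITABLE() \
    virtual ReturnType Accept(BaseVisitor& guest) \
    { return AcceptImpl(*this, guest); }

class DocElement
{
public:
    virtual ~DocElement() = default;
    DEFINE_VISITABLE()
};

class Paragraph : public DocElement
{
public:
    DEFINE_VISITABLE()  // injects Accept into this class of the hierarchy
};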
Useful macro #2: The macro that shortens a redundant expression
There are at least two cases in modern C++ where you type something twice in the code, and where it would be more pleasant, both for the writer and for the readers of the code, if the expression could be written just once, more concisely. Macros can help in those cases.
FWD
The first one is Vittorio Romeo’s FWD macro. In template code, since C++11, we often use std::forward to pass values along without losing the fact that they are l-value or r-value references:
template<typename MyType, typename MyOtherType>
void f(MyType&& myValue, MyOtherType&& myOtherValue)
{
    g(std::forward<MyType>(myValue), std::forward<MyOtherType>(myOtherValue));
}
The && in this template code means that the values can be l-value or r-value references, depending on whether the values they bind to are l-values or r-values. std::forward allows us to pass this information on to g.
But it’s a lot of code to express that: it’s annoying to type every time, and it takes up some space when reading.
Vittorio proposes to use the following macro:
#define FWD(...) ::std::forward<decltype(__VA_ARGS__)>(__VA_ARGS__)
Here is how the previous code looks with it:
template<typename MyType, typename MyOtherType>
void f(MyType&& myValue, MyOtherType&& myOtherValue)
{
    g(FWD(myValue), FWD(myOtherValue));
}
The macro made the code easier to type and read.
noexcept(noexcept(
Another case where you type the same thing twice is in the noexcept specifier. You can tack on the noexcept specifier at the end of a function prototype if that function will not throw an exception (why it’s a good idea to do this is beyond the scope of this article, and you can read all about it in Item 14 of Effective Modern C++).
Basically, if you declare a function with the noexcept specifier, it means that the function will not throw an exception, period:
void f() noexcept; // no exceptions, period.
But sometimes it’s not all black or white, and the function can guarantee not to throw exceptions only if a certain condition is met:
void f() noexcept(condition); // no exceptions if condition is met.
A typical condition is that another expression (for instance, one that f uses) is itself noexcept. For that, we can use the noexcept operator: for example, noexcept(T{}) returns true if the expression T{} is itself noexcept.
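As a hypothetical illustration of the noexcept operator on its own (the Safe and Risky types below are made up for the example):

#include <stdexcept>

struct Safe
{
    Safe() noexcept {}  // promises not to throw
};

struct Risky
{
    Risky() { throw std::runtime_error{"boom"}; }  // not declared noexcept
};

// The noexcept operator only inspects declarations, it never runs the expression:
static_assert(noexcept(Safe{}), "Safe{} cannot throw");
static_assert(!noexcept(Risky{}), "Risky{} may throw");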
Combining the noexcept specifier with the noexcept operator gives:
void f() noexcept(noexcept(T{})); // no exceptions if T{} doesn't throw.
It makes sense when you break it down but, as a whole, noexcept(noexcept(T{})) has a funny look. You may be totally fine and used to it. Or maybe you’d rather have the code be a little more explicit, and a macro can then change that expression. The SFME project uses noexcept_if for example (and one of its authors told me he saw it in Vittorio’s work), and I suppose we could also call it noexcept_like:
#define noexcept_like(expression) noexcept(noexcept(expression))
which transforms our code this way:
void f() noexcept_like(T{}); // no exceptions if T{} doesn't throw.
How to go about that one is in part a matter of taste.
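To see both macros of this section at work together, here is a hypothetical factory function (makeT is a made-up name for illustration): it forwards its arguments to T’s constructor and is noexcept exactly when that construction is noexcept.

#include <utility>

#define FWD(...) ::std::forward<decltype(__VA_ARGS__)>(__VA_ARGS__)
#define noexcept_like(expression) noexcept(noexcept(expression))

// Hypothetical factory: noexcept whenever constructing T from the
// forwarded arguments cannot throw.
template<typename T, typename... Args>
T makeT(Args&&... args) noexcept_like(T(FWD(args)...))
{
    return T(FWD(args)...);
}

With it, makeT<int>(0) is noexcept, while something like makeT<std::vector<int>>(3, 42) is not, because constructing the vector may allocate and throw.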
Useful macro #3: the macro that brings low-level polymorphism
Yes, macros can be used for polymorphism. But for a very special type of polymorphism: the one that is resolved at pre-processing time, which happens even before compile time. So the input to resolve that type of polymorphism must be there before compile time.
How does this work? You define compilation parameters that start with -D, and you can test for the existence of those parameters with #ifdef directives in the code. Depending on their existence, you can use different #defines to give different meanings to an expression in the code.
There are at least two types of information you can pass on to your program this way:
- the type of OS (UNIX vs Windows), which allows code that makes system calls to remain portable
- the version of C++ available (C++98, C++03, C++11, C++14, C++17, etc.).
Making the code aware of the version of C++ is useful in library code that is designed to be used in different projects. It gives the library code the flexibility to write modern and efficient implementations if they are available, and fall back on less modern features if the programming environment is still catching up to a recent version of C++.
In libraries that use advanced features of C++, it also makes sense to pass information about the compiler itself and its version, if the library has to work around certain compiler bugs. This is a common practice in Boost, for example.
Either way, for environment- or language-related directives, you want to keep this kind of check at the lowest level possible, deeply encapsulated inside implementation code. And you want the vast majority of your code base to be portable and independent of any given environment.
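For instance, a hypothetical low-level header could hide an OS check behind a small portable function (portableSleepMs is a made-up name; the rest of the code base only ever sees it, never the #ifdef):

#ifdef _WIN32
    #include <windows.h>
    inline void portableSleepMs(unsigned milliseconds)
    {
        Sleep(milliseconds);  // Windows API, takes milliseconds
    }
#else
    #include <unistd.h>
    inline void portableSleepMs(unsigned milliseconds)
    {
        usleep(milliseconds * 1000);  // POSIX call, takes microseconds
    }
#endif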
The world of macros
Note that even if the three types of macros above bring value, they still have no scope. One way to mitigate the risk of invoking them by accident is to give them names that you’re unlikely to use inadvertently. In particular, max is a bad name in this regard, while BOOST_NO_CXX11_NUMERIC_LIMITS is much less likely to be used by someone unaware of its existence.
If you want to go further with macros, you can enter a whole language of its own. For example, you can check out the chapters on the preprocessor in C++ Template Metaprogramming, or the Boost Preprocessor library.
It’s a wild place, to be trodden with caution, but knowing of its existence and the sort of creatures that live in there can only make you a more seasoned explorer of the world of C++.
And for everyday code, the 3 above types of macros can be useful to make the code more expressive, and still correct.