
Value Semantics

C++ is an old language. Many aspects of our programming styles have become habits that we no longer think about much. In this blog post, I'm going to address one major issue that has resulted from such habits and get you prepared for the bright new world of C++2b. So, without further ado, let's dive in!

Identifying a Problem

There is a problem that we don't want to face. It's called reference semantics. Reference semantics means that a variable can be referenced from one or more places in code, like roads converging on a city center. Let's address the elephant in the room: the global variable.

You probably already know that the global variable is a big no-no in the C++ community, because predicting the state of a global variable that is both writable and readable is borderline impossible. If the code is concurrent, synchronizing all the reads and writes to make it work becomes a big pain.

Thus, global variables are frowned upon. Though we've become pretty good at replacing them with other things, the issue still exists with references. The global variable is, at its core, a reference to memory that is globally allocated. It can be addressed from any point in code, adding roads to the crossroads of reference semantics. It's the same as a reference or a pointer to something, just on a larger scale.

So, what does this cost us? As programs grow, they become more coupled and fragile, accumulating technical debt. Removing a global variable from a program is hard, and the same goes for anything that is shared or pointed to in a non-unique manner.

We are not going to tackle all the problems that reference semantics causes today. Instead, we'll focus on function calls.

Call Safe, Not Fast.

Why focus on functions? Because we need to specify the guarantees that we, as programmers, give to our library users.

First, let’s define some common value semantics rules for easier understanding.

Those rules are:

  1. Regularity: the state of an object is equivalent to its copy.
  2. Independence: the state of an object remains constant within the local scope and can be changed only by the local scope.

Those give us the following conditions:

  • Values can't be changed by operations on other values.
  • Writing to a value does not affect other values.

Let’s look at an example:

void add(int& a, const int& b)
{
    a += b;
}

void add2(int& a, const int& b)
{
    add(a, b);
    add(a, b);
}

int main()
{
    int x = 10;
    add2(x, x);
    return x;
}

Even if it is somewhat artificial, can you guess what the output will be? If you said 40, congratulations!

But why is it 40? add adds b to a, so the first time the result should be 20 and, the second time, 30! Well, we passed x as the const reference as well, and const only guarantees that the function won't modify the value through that reference; it gives no guarantee that the value won't change through some other path. So const& is not const enough.

Even if we carefully design and document the usage of the function, our mental model tends to ignore the possibility of overlapping (aliased) values. Documentation often doesn't help with that either, since we assume the library user already knows about it.

The solution is to pass the ints by value and return the result by value, thus fulfilling the value semantics rules.
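
Here is a minimal sketch of the same example rewritten with value semantics. The parameters are now copies, so they can no longer alias, and the result matches the intuitive expectation:

int add(int a, int b) { return a + b; }

int add2(int a, int b)
{
    a = add(a, b);
    a = add(a, b);
    return a;
}

int main()
{
    int x = 10;
    x = add2(x, x); // always 30, no matter how the caller's variables overlap
    return x;
}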

But what about larger types? For those, one solution is to wrap them in a unique pointer and return it; move semantics helps us in that regard. The shared pointer is another way to ensure lifetime, but because C++ does not have a borrow checker, concurrent use of shared pointers comes with its own pitfalls.
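
As a sketch of the unique-pointer approach (the LargeData type and the normalize function here are made up for illustration), ownership moves into the function and back out again:

#include <memory>
#include <vector>

struct LargeData { std::vector<int> values; };

// The function is the sole owner of the data while it runs, so no other
// code can observe or mutate the object in the meantime.
std::unique_ptr<LargeData> normalize(std::unique_ptr<LargeData> data)
{
    for (int& v : data->values)
        v /= 2;
    return data; // ownership moves back to the caller
}

int main()
{
    auto data = std::make_unique<LargeData>(LargeData{{2, 4, 6}});
    data = normalize(std::move(data));
}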

Accelerating!

We’ve looked at the problems with data coherency. Now, let’s talk about lifetimes. Assume we have a function:

void f(const std::vector<int>& vec, class SomeClass& to_fill);

We know that the data doesn't overlap and that filling “to_fill” never modifies the vec under any circumstances. But still, there is a phantom menace lurking around. Let's define a term for function execution: a call gives a strong guarantee of execution if the chosen execution scenario guarantees the lifetimes of its arguments and of the function's internal state. Basically, it's a way of saying that the current execution will not result in UB under any conditions.

int main()
{
    SomeClass c;
    std::vector<int> vec{1, 2, 3, 4};
    f(vec, c);
}

This call is strong because the lifetimes of the objects extend beyond the function call.

But now, what if the function “f” is asynchronous, either a regular async function or an eager coroutine?

Task f(const std::vector<int>& vec, class SomeClass& to_fill)
{
    co_await resume_on_threadpool();
    // Do some expensive calculations
}

Now the caller gets control back as soon as the coroutine suspends on resume_on_threadpool(). If the caller doesn't wait for the result, all of its objects can be destroyed before the execution is finished. Suddenly, references and reference semantics become a big evil: because we can't guarantee the lifetimes of the objects, the call becomes weak and, if it is not executed within a synchronous context or not co_awaited upon, it may, and probably will, result in UB.
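
To make the failure mode concrete, here is a sketch of a fire-and-forget call of the coroutine above (assuming its Task type does not block in its destructor):

int main()
{
    SomeClass c;
    std::vector<int> vec{1, 2, 3, 4};
    f(vec, c); // returns as soon as the coroutine suspends on the threadpool
    // main exits here: vec and c are destroyed while the coroutine may still
    // be running, so its references dangle and we get UB
}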

This is where value semantics comes to the rescue at last! We can change the const vector& parameter to a plain vector and move the value in.

But what should we do with the class here? If we assume it comes from another part of the program that has it instantiated, our best bet is to use smart pointers.

Task<std::unique_ptr<SomeClass>> f(
    std::vector<int> vec,
    std::unique_ptr<SomeClass> to_fill
);

Now the execution is guaranteed to be safe.
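
A minimal sketch of the body, reusing the Task and resume_on_threadpool() from the earlier snippets (fill_from() is a hypothetical member of SomeClass, here only for illustration):

Task<std::unique_ptr<SomeClass>> f(std::vector<int> vec,
                                   std::unique_ptr<SomeClass> to_fill)
{
    co_await resume_on_threadpool();
    // vec and to_fill live in the coroutine frame, so they outlive every
    // suspension point no matter what the caller does in the meantime
    to_fill->fill_from(vec);      // hypothetical expensive calculation
    co_return std::move(to_fill); // hand ownership back to the awaiter
}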

Conclusion

Value semantics is a great tool for ensuring the stability of your code. Use it sparingly, though, because C++ eagerly copies everything you pass into functions by value, sometimes resulting in performance loss.

Because asynchronous programming is coming towards us at an astonishing speed with coroutines, we need to consider the correct behavior and the lifetimes of arguments in many more use cases.

Thank you for reading. There is much more I could share, but this would quickly become an unusually long blog if I were to share it all here. Stay tuned for more advanced C++ in the future.


2 thoughts on “Value Semantics”

  1. I totally agree with the general message of the article, but I think the part about coroutines is technically incorrect. Whether resume_on_threadpool() will be executed immediately during the initial call to f() is determined by Task's promise class, and when coroutines are used as async functions (as opposed to generators) it's more convenient to have them lazy: the function suspends immediately after constructing the promise object, without executing a single line of the function body itself, and execution (usually) begins when the returned Task object is co_awaited (so that the continuation is already available).

    Therefore I would argue that the example with chaining coroutines is perfectly correct:

    Task g()
    {
        SomeClass c;
        std::vector<int> vec{1, 2, 3, 4};
        co_await f(vec, c);
    }

    Eager tasks (where the function body starts executing immediately) require quite tricky locking because, as in this example, an eagerly started function can complete very early, and some memory may be released or destroyed even before the call to f() returns. Having eager tasks in C++ would violate the zero-overhead principle.

    1. Hello, Yuri, thank you for the comment.
      Your argument is correct. The implementation was picked up from the Windows tasks provided with the Windows Runtime (CPPWinRT). Those tasks are eager; this allows for late picking, which results in a significant performance boost.

      Lazy tasks are more modest. Because co_await tends to freeze the executing thread and wait for completion, their usage is limited in multithreaded scenarios. In my opinion, because eager coroutines are allowed and quite successfully implemented, their use needs to be addressed, and that is where the difficulty with input parameters arises. Regular async functions tend to suffer from the same problems.

      Simplified example from my recent work:

      ver::Action Initialize()
      {
          auto g = gfx.Initialize();
          auto a = audio.Initialize("music.ogg"); // g and a are already executing in parallel on background threads
          co_await winrt::when_all(g, a); // we need them to be finished before swapchain creation
          co_await gfx.CreateSwapchain(hwnd); // gfx and audio are completely initialized by now
      }
      Of course, there is a lot more to coroutines; this will be addressed further in KDAB's Introduction to Structured Concurrency training.

      Sincerely,
      Ilya “Agrael” Doroshenko
