
How to outperform std::vector in 1 easy step

Everyone who's familiar with C++ knows that you should avoid resizing a std::vector inside a loop wherever possible. The reasoning's pretty obvious: each time the vector fills up, its storage is reallocated at (typically) double the size and every element has to be copied across, and that's a costly operation. Have you ever wondered why it's so costly though?

It's tempting to assume that because implementations of the STL have been around for so long, they must be pretty efficient. It turns out that's a bad assumption, because the problem in this case is the standard itself: specifically, the allocator interface.

The allocator interface provides two methods that obtain and release memory:

  allocate — allocates uninitialized storage (public member function)
  deallocate — deallocates storage (public member function)

(taken from the std::allocator reference page on cppreference.com).

What's missing is a way of growing an existing memory allocation in place. In C this is provided by the realloc function, but there's no equivalent in the std::allocator interface. To see why this is costly, here's pseudocode for doubling the capacity of a vector using just allocate() and deallocate():

  1. Allocate a new block of uninitialised memory, twice as large as the old block.
  2. Copy the old values to the new memory block.
  3. Destroy the values in the old memory block.
  4. Deallocate the old memory block.

For comparison, here's pseudocode for the same thing if we have a reallocate() function with the same semantics as C's realloc():

  1. Grow the existing memory block.

...and that's it. We don't need to do any copying, so there are no copy constructor invocations and no destructor invocations either.
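Here's a sketch of the realloc()-based version, with the caveat that it's only safe for trivially copyable element types (grow_by_realloc is an illustrative name, not part of any standard interface):

```cpp
#include <cstddef>
#include <cstdlib>
#include <type_traits>

// Sketch of growth via realloc(): the block is extended in place when the
// allocator can manage it, and only copied (by realloc itself, as raw bytes)
// when it can't. That raw byte copy is why T must be trivially copyable.
template <typename T>
bool grow_by_realloc(T*& data, std::size_t& capacity) {
    static_assert(std::is_trivially_copyable<T>::value,
                  "realloc-based growth requires trivially copyable elements");
    std::size_t new_capacity = capacity ? capacity * 2 : 1;
    T* new_data = static_cast<T*>(std::realloc(data, new_capacity * sizeof(T)));
    if (!new_data) return false;  // allocation failed; old block still valid
    data = new_data;
    capacity = new_capacity;
    return true;
}
```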

One of the gotchas with realloc() is that it will move the underlying memory block if it can't be grown in place. This means any pointers into the vector, including pointers between elements in the same vector, will become invalid. In practice this makes no difference: vectors already have this limitation anyway.
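It's easy to see std::vector's existing invalidation rule in action: force a reallocation and the buffer address changes out from under any saved pointers. A small sketch:

```cpp
#include <cstddef>
#include <vector>

// Grows a vector past its current capacity and reports whether the
// underlying buffer moved. In practice it always does, because the new
// block is allocated while the old one is still alive.
bool buffer_moved_on_growth() {
    std::vector<int> v(4, 0);
    const int* before = v.data();
    const std::size_t cap = v.capacity();
    while (v.capacity() == cap)
        v.push_back(0);  // eventually forces a reallocation
    // `before` now dangles; we only compare the address, never dereference it.
    return v.data() != before;
}
```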

I wrote some test code to see how much of a difference this made. I made a template class called realloc_vector with the same semantics as a std::vector, but which calls realloc under the hood. The test code called push_back 10 million times in a row, first on a std::vector<size_t>, then on a realloc_vector<size_t>. I repeated the test with vectors of std::string as well, to check the performance when there are constructors and destructors which need to be called. Compiled with g++ -O3 on 64-bit Linux, the results were:

push 10,000,000 size_ts:
std::vector<size_t>: 121.866 ms
realloc_vector<size_t>: 52.483 ms

push 10,000,000 copies of "Hello world" as a std::string:
std::vector<std::string>: 501.651 ms
realloc_vector<std::string>: 154.316 ms

The realloc_vector was roughly 2.3 times faster for the size_ts and 3.3 times faster for the strings. This is an extremely artificial test though; in practice, if you know the size in advance you would reserve space in the vector up front to minimize the number of reallocations. When I modified the test to do this, the times for both vector classes were identical.
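For reference, the reserve() approach looks like this (make_filled is just an illustrative helper):

```cpp
#include <cstddef>
#include <vector>

// Fills a vector with n values, paying for at most one allocation up front.
std::vector<std::size_t> make_filled(std::size_t n) {
    std::vector<std::size_t> v;
    v.reserve(n);  // one allocation; push_back never needs to grow the buffer
    for (std::size_t i = 0; i < n; ++i)
        v.push_back(i);
    return v;
}
```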

So there you have it. That's how you can outperform std::vector in one easy step: change it to use realloc(), so that it grows its memory allocation in place where possible.


  1. I'm working on the same problem! Is realloc_vector available?

    1. Hi & thanks for your interest. The code was pretty trivial so I didn't bother to keep it around when I got a new computer. Sorry!

      I'm not sure I'd actually recommend using a realloc_vector in production code anyway. It's not a drop-in replacement for a std::vector because it's only guaranteed to work correctly for vectors of POD objects. It might work for other things (like in the std::string example I gave) but it could also cause some surprising bugs.

      In practice I think it's wiser to use a std::vector and reserve() memory up front, using an estimate of the final size if you don't know it exactly up front.

