Standard C++ Library Changes in Visual C++ 2010

28 Sep 2010
Reveals the important changes in STL.

The STL continues...

Prerequisites

  • Understanding of C++0x standard concepts, viz. the auto keyword, lambda expressions, and r-value references.
  • Reasonably good understanding of STL. Knowledge of a couple of container classes and of iterators is sufficient.
  • You must have the Visual C++ 2010 compiler, or another compiler that supports the new C++ and the updated STL (I don't have references to other compatible compilers; if you do, let me know).

This article explains the changes in the STL library. Like the C++ language changes explained in this article, the STL changes come on top of TR1. Briefly, the following are the advancements in the Standard Template Library:

Constant iterators

First things first: a constant iterator is not the same as a const iterator. A constant iterator is a const_iterator. When the keyword const is applied to an iterator (or any other variable), that variable itself cannot be changed afterwards. A const_iterator (constant iterator), on the other hand, does not allow modification of the referenced element of the container. You may want to re-read this paragraph after glancing at the following code:

using namespace std;
vector<int> IntVector;

...
 
//  Normal iterator
vector<int>::iterator   iter_nc = IntVector.begin();
*iter_nc = 100;  // Valid
iter_nc = IntVector.end(); // Valid

// Constant Iterator
vector<int>::const_iterator   iter_c = IntVector.begin();
*iter_c = 100; // INVALID!
iter_c = IntVector.end(); // Valid

// The 'const' iterator
const vector<int>::iterator   iter_const = IntVector.begin();
*iter_const = 100; // Valid
iter_const = IntVector.end(); // Invalid (Why? Learn C++!)

// The 'const' Constant Iterator
const vector<int>::const_iterator   iter_const_c = IntVector.begin();
*iter_const_c = 100; // Invalid!
iter_const_c = IntVector.end(); // Invalid

So, what's new in this?

Well, new member functions that explicitly return constant iterators (i.e., const_iterator), irrespective of the context in which they are called. Before getting to them, let me recollect how the existing iterator accessor methods (like begin and end) decide whether to return a normal or a constant iterator:

  • If the container is constant, or the accessor is called inside a const method of the class, a const_iterator is returned.
  • If the variable on the left side is of type const_iterator, the normal iterator is returned, but is then converted to const_iterator.
  • Otherwise - the container is non-const, and either no variable is declared or the variable on the left is a normal iterator - the normal iterator is returned.

Note: Most iterator accessor methods, like begin and end, are overloaded to return constant or non-constant iterators.
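For instance, here is a small sketch of how these rules play out (the Print function is just an illustration):

void Print(const vector<int>& Values)   // the container is const here
{
    // begin() on a const container returns const_iterator
    vector<int>::const_iterator it = Values.begin();
    // *it = 5; // Would NOT compile - the referenced element is read-only
}

vector<int> IntVector(3, 7);

// Non-const container: the normal iterator overload is chosen,
// then converted because the target variable is a const_iterator
vector<int>::const_iterator iter_conv = IntVector.begin();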

Thus, by just looking at one line, you cannot determine if a normal or constant iterator would be returned. Neither can you explicitly ask for a constant iterator:

IntVector.begin();

The new methods are:

Accessor Method Meaning
cbegin Returns a const_iterator to the first element in the container.
cend Returns a const_iterator to one past the last element in the container.
crbegin Returns a const_reverse_iterator positioned at the first element in reverse iteration; essentially, a constant iterator to the last element.
crend Returns a const_reverse_iterator to one past the last element in reverse iteration; logically, the position before the first element in the container.

All the methods are declared const in the respective container classes.

What's the whole point of introducing all these methods?

Good question, and one you should ask! If, one way or another, you can already control which type of iterator you get, you probably don't need these new methods.

Well, the answer is: the revamped auto keyword! If you have read about the new meaning of the auto keyword, you know that you can avoid typing the type of variable being declared. The compiler can deduce the type from the expression on the right. Now when you just make a call to:

auto iter = IntVector.begin();

You cannot tell (by looking at the code) if a normal or constant iterator is being retrieved. Thus, to explicitly ask for constant iterator (const_iterator), you can use:

auto iter = IntVector.cbegin(); // cbegin
// The return type is:  vector<int>::const_iterator

To iterate through the vector in read-only mode, you can code:

for(auto iter = IntVector.cbegin(); iter != IntVector.cend(); ++iter) { }

The constant accessor methods simply call the corresponding non-constant method (no guarantee - that is what I saw in the source code). Thus, it is perfectly valid to compare against cend instead of end, and vice versa (disclaimer attached!).

What about sets and maps?

For all set classes (set, multiset, unordered_set, and unordered_multiset), the iterator is always constant. That means the methods that retrieve begin or end (any variant), as well as find, return a constant iterator. The seemingly non-constant methods like begin and find still report their return type as iterator, rather than const_iterator, but that iterator behaves as a constant one.

Consider the following code:

set<int> is;
is.insert(10);

*is.begin() = 120;

wcout << is.count(10) << ", " << is.count(120);

It inserts 10 into the set, and then tries to modify the element at the top of the set. Since only one element was inserted, the begin call returns an iterator to that same element, and the code simply modifies it. On VC9 and earlier compilers, this compiles successfully, and the value is changed to 120.

Beginning with the VC10 compiler, the modifying statement no longer compiles, and the compiler complains:

error C3892: 'std::**::begin' : you cannot assign to a variable that is const

For map, key modification isn't allowed, but value modification is; that behavior is unchanged. Here is a code example:

map<int, int> imap; 

imap.insert( make_pair( 1, 1028 ));
imap.insert( make_pair( 2, 2048 ));

imap.find(1)->second = 1024; // Compiles
imap.find(2)->first = 4;     // Error

imap[2] = 2000;

imap.begin()->first = 10;    // Error
imap.begin()->second = 10;   // Compiles

imap.cbegin()->second = 10;
// Error on VC10, since the iterator is constant. N/A for VC9

The array class

The array class is a container that stores its elements in a fixed-size array. The size is given as a template argument, along with the data type, and must be a compile-time constant (like ordinary C/C++ arrays). Apart from changing the size of the container (i.e., the number of elements), which it cannot do, it supports the other standard operations - iterating, random access, swapping, assigning, etc.

  • Header: <array>
  • Namespace: tr1. But 'using tr1::array' is implied by the header, so qualifying with std alone is sufficient.

An example:

array<int, 64> IntArray;

The first template argument is the data type, and the second is a compile-time integer constant. The size of IntArray cannot be changed later. You can, of course, use data types other than int or the basic types. Depending on the operations being performed, a copy constructor, assignment operator, or comparison operator may be required for a custom data type. For a custom data type (class, struct), the default constructor must be accessible.
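For example, a minimal sketch with a custom type (the Point struct is just an illustration, not from the library):

struct Point
{
    int x, y;
    Point() : x(0), y(0) { }   // an accessible default constructor is required
};

array<Point, 8> Points;        // eight default-constructed Points
Points[0].x = 10;
Points[0].y = 20;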

This class was introduced to integrate seamlessly with the other STL components. For example, if you use vector or list, you call the begin/end methods to access elements. If you later decide to make that container a fixed-size ordinary array (a C/C++ array), your code would fail to compile and would need changes. By using the array class, you get the flexibility of STL and the performance of ordinary arrays.

With the latest compiler support (TR1 addition), you can also initialize an array as:

array<int, 4> ByteUnits= {1024, 2048, 4096, 8192}; 

When you omit one or more elements from the initializer list, they are set to zero. When you do not initialize any element of the array, all elements are left uninitialized, and thus have undefined values (the standard C/C++ rule!).
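A quick sketch of both cases (the variable names are mine):

array<int, 4> Partial = {1024, 2048};   // elements 2 and 3 are set to zero
array<int, 4> NoInit;                   // all four elements are uninitialized

wcout << Partial[2] << ", " << Partial[3];   // 0, 0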

Following STL standard methods are supported by the array class:

  • at, operator [] - Return a reference to the element at the specified position.
  • front, back - Return references to the first and last elements, respectively.
  • begin, cbegin, rbegin, crbegin - Return an iterator to the first element (the reverse variants return a reverse iterator, which starts at the last element).
  • end, cend, rend, crend - Return an iterator to one past the last element (the reverse variants, to the position before the first element).
  • empty - Determines if the container is empty. It returns true only if the array size is zero.
  • size, max_size - The size of the array object. Always the same, since it is the compile-time size of the array.
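For example, a quick sketch exercising a few of these accessors (values chosen arbitrarily):

array<int, 4> Units = {1, 2, 4, 8};

wcout << Units.front() << "\t"   // 1
      << Units.back()  << "\t"   // 8
      << Units.at(2)   << "\t"   // 4
      << Units.size();           // 4 - always the compile-time size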

Methods that need refined discussion:

  • array::assign and array::fill
  • These methods are identical; their task is to fill every element of the array with the given value, replacing whatever values the array object held. For example:

    array<int,10> MyArray;
    MyArray.fill(40); // or 'assign'
    
    for_each(MyArray.cbegin(), MyArray.cend(), 
          [](int n) { std::wcout << n << "\t";} );

    This would set all ten elements with value 40, and then display all the values on the console.

  • array::data
  • This method would return the address of the first element of the array. For native arrays (say, int_array[N]), this method is the same as the expression &int_array[0] or int_array. This method is given for direct pointer manipulation. Both const and non-const methods are available. For example:

    int* pArray = MyArray.data();
    
    // Or
    
    auto pArray = MyArray.data();
  • array::swap(array&) and swap(array&, array&)
  • Swap the contents of two array objects of exactly the same size and type. The first version, a non-static member function, takes another array object as its argument and exchanges contents with it. The second version is a global function that takes two array objects (of exactly the same type!) and exchanges their contents. Examples:

    typedef array<int,10> MyArrayType;
     
    MyArrayType Array1 = {1,2,4,8,16};
    MyArrayType Array2;
    Array2.assign(64);
    
    Array1.swap(Array2);
    // Or - swap(Array1, Array2);
    
    // Array1 - 64, 64...64
    // Array2 - 1,2,4,....0

    If you attempt to swap arrays of dissimilar kinds, the compiler would complain. For example:

    array<int,5>   IntArrayOf5 = {1,2,3,4,5};
    array<int,4>    IntArrayOf4 = {10,20,40};
    array<float,4>  FloatArrayOf4; 
    
    IntArrayOf5.swap(IntArrayOf4); // ERROR! 
    swap(IntArrayOf4, FloatArrayOf4); // ERROR!
  • All six comparison (relational) operators
  • The operators ==, !=, <, >, <=, and >= are defined as global functions for comparing two array objects of exactly the same type; they return true if the condition is met, otherwise false. For example:

    typedef array<int,10> MyArrayType;
    MyArrayType Array1 = {1,2,4,8,16};
    MyArrayType Array2;
    Array2.assign(64);
    
    if (Array2 == Array1)
        wcout << "Same";
    else
        wcout << "Not Same";

    which would display "Not Same". See the documentation, and experiment yourself with how the less-than and greater-than operators behave.

The tuple class

Any regular STL programmer knows of the struct named pair, which can hold two elements of any data types. Besides its use in maps, it is also handy for packing two elements together instead of defining a struct for them. The first and second member variables hold those packed values. Just to recollect, and for the sake of the discussion that follows:

pair<string,int>  NameAndAge;
NameAndAge.first = "Intel";
NameAndAge.second = 40;

Programmers often typedef them, so that variables can be declared easily and pairs can be passed elegantly to functions.
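For instance, a minimal sketch (NameAge and Greet are made-up names):

typedef pair<string, int> NameAge;

void Greet(const NameAge& Person)
{
    wcout << Person.first.c_str() << " is " << Person.second << " years old.";
}

NameAge Someone("Ada", 36);
Greet(Someone);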

What if you need to pack more than two elements?

Well, in that case, you would probably define a struct, or nest multiple pairs. The former approach doesn't integrate seamlessly with STL, and the latter does not allow elegant and easy coding.

Here is a boon for you: the tuple class. The tuple class allows 2 to 10 elements to be put together. All the types can be different, and depending on the desired operations, special methods of your class may be demanded. "How" the tuple class pulls off its template overloads is beyond my comprehension; I will discuss "what" can be achieved and "where" it can be used.

  • Header: <tuple>
  • Namespace: tr1. 'using tr1::tuple' is implied under std.

Let's start with the examples:

Declaration of a two-element tuple:

tuple<int,int> TwoElements;

Initialization at construction:

tuple<int,int> TwoElement (400, 800);

Initialization of all elements (if being initialized) is mandatory. The following is an error:

tuple<int,int> TwoElement (400); // Second initializer missing

The error would not be straightforward, but you can understand that initialization of all elements is required!

Initialization after construction is not as simple - though not as elusive either. If you remember, there is a make_pair helper for pair construction/initialization. You got it right: the helper function make_tuple is there for you. If you have read, or will be reading, about the Concurrency Runtime in VC++ 2010, you will find that there are other new make_ functions in the C++ library as well.

Anyway, here is how you initialize a tuple object:

TwoElement = make_tuple(400, 800);

Like the tuple template class, make_tuple is also overloaded to take 2 to 10 parameters. It is interesting to know that make_tuple utilizes R-Value reference concepts of the C++0x language, and the perfect forwarding feature of STL. The former is discussed in this article, and the latter will be discussed later. In short, they allow the same object to be re-used instead of calling copy constructor and assignment operators, thus improving overall performance.

You can use make_tuple along with the auto keyword to smartly define a tuple whose type would be deduced by the compiler:

auto Elements = make_tuple(20, 'X', 3.14159);
// Eq: tuple<int, char, double> Elements(20, 'X', 3.14159);

Moving ahead: those were a few ways to initialize a tuple object. Now the obvious question is, "How do I access the elements of a tuple?".

For a pair object, we can simply use its templatized member variables: first and second. But, since the tuple class does not define any named member variables as such, we need to use the helper function get to access those unnamed variables. Example:

tuple<string, int, char> NameAgeGender("Gandhi", 52, 'M');
int nAge = get<1>(NameAgeGender);

The explicit template parameter, which would be an integer (specifically, size_t), specifies the zero-based index. The index must be within the range of the tuple's number of template arguments. In this case, it must be 0, 1, or 2. In the above example, the get function is used to retrieve the second element of the tuple object. The second element is an integer, which is being stored in the nAge variable.

If you specify an out-of-range index to the get function, the compiler will not be happy; the index must be a compile-time constant and within range. The return type, too, is deduced at compile time; you cannot fool the compiler by:

int nAge = get<0>(NameAgeGender);
// ERROR - cannot convert from 'string' to 'int'

This also helps in finding the possible code flaws:

char cGender = get<1>(NameAgeGender); // Level 4 C4244 warning - 'int' to 'char'
// Should be get<2>

More importantly, it allows you to use the auto keyword!

auto sName = get<0>(NameAgeGender); // Type deduction - 'string'

Since, for non-const tuple objects, the return type is a reference to the desired tuple element, you can also change the element's value:

// Modify Age
get<1>(NameAgeGender) = 79;

Of course, you can put it in a reference variable and later modify it. The auto keyword is also available:

auto& rnAge = get<1>(NameAgeGender);

rnAge = 79;

To give the get function its full credit, I must mention that it can also be used with the array class:

array<char, 5> Vowels = {'A','E','I','o','U'};
 
char c = get<1>(Vowels); // 'E'
get<3>(Vowels) = 'O';  // Modify
get<10>(Vowels);  // ERROR - Out of range!

The tie function

This function is logically the inverse of the make_tuple function. With the tie function, you can initialize a set of variables from a tuple object. An example first:

tuple<string, int, char> NameAgeGender("Gandhi", 52, 'M');

string sName;
int nAge;
char cSex;
  
tie(sName, nAge, cSex) = NameAgeGender;

This would set these three variables to "Gandhi", 52, and 'M', respectively. My inquisitiveness has shown me that tie can also be used in place of make_tuple; but the reverse is not true.

tuple<string, int, char> NameAgeGender;
NameAgeGender = tie("Gates", 46, 'M'); // make_tuple
 
string sName;  int nAge;   char cSex;
make_tuple(sName, nAge, cSex) = NameAgeGender; // NO error. See below.

The tie function takes its arguments by reference and returns a tuple of references; make_tuple logically creates a new object, taking and returning non-references. The last line compiles, but does not give the expected results, since all three parameters are passed by value and the assignment modifies only a temporary tuple. I recommend you use the tie and make_tuple helper functions for what they were designed for.

The six relational operators

Like array, all six relational operators are also defined for the tuple class. All of the overloaded operators demand that both operands (tuple objects) be of exactly the same type! Following C/C++ semantics, the functions return true or false as the result.
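For example, the comparison proceeds element by element, starting from the first (a small sketch with made-up values):

tuple<int, char> A(1, 'a');
tuple<int, char> B(1, 'b');

bool bEqual = (A == B);   // false - the second elements differ
bool bLess  = (A < B);    // true  - first elements equal, 'a' < 'b'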

New functions in <algorithm>

The following functions are new to the <algorithm> header as part of the TR1 additions:

  1. all_of
  2. any_of
  3. none_of
  4. find_if_not
  5. copy_if
  6. partition_copy
  7. is_partitioned
  8. partition_point
  9. minmax_element
  10. is_sorted
  11. is_sorted_until
  12. is_heap
  13. is_heap_until
  14. move

A basic understanding of how the algorithm functions work is expected from the reader - for example, how count, count_if, or find_if work. In short, they operate on the given iterator range(s) and return the desired result; count_if, for instance, counts the elements in a given range (specified by iterators) that match a specified criterion. Most functions do not change the given ranges; a few modify one of the given ranges (like copy, remove_copy), and a few modify the given range in place (for_each, fill, generate).
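As a quick refresher, here is a small count_if sketch (the values are arbitrary):

vector<int> Values;
Values.push_back(3);
Values.push_back(8);
Values.push_back(10);

// Count the even numbers - the range itself is not modified
auto nEven = count_if(Values.begin(), Values.end(),
    [](int n) { return n % 2 == 0; });   // 2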

  • all_of function
  • The function all_of tests all elements in the given range and returns true only if all elements match the given predicate. If any element fails to match the predicate, the function aborts further search and returns false. Example:

    vector<int> IntVector(10);
    
    fill(IntVector.begin(), IntVector.end(), 64);
    // IntVector[1]=3;
    
    bool bAllAre4 = all_of(IntVector.cbegin(), IntVector.cend(),
        [](int nValue) { return nValue % 2 == 0; });

    The code above fills all 10 elements of the vector with the value 64. It then calls the all_of function to test whether all the elements are even numbers. The predicate is provided via a C++0x lambda; it could equally be a function pointer or a function object (functor). Here, it returns true, since all elements are even. If you uncomment the commented line, the return value would be false.

    For this function, and all the forthcoming functions, it is important to note that the given predicate is invoked for each element in turn, and the behavior depends on what the predicate returns on that particular call. Let me explicate through an example:

    int nCheck = 2;
    bool bAllAre4 = all_of(IntVector.cbegin(), IntVector.cend(),
        [&nCheck](int nValue) { return nValue % nCheck++ == 0; });

    With a stateful lambda, the meaning of the predicate changes on every call: the first call tests with modulo 2, the next with modulo 3, and so on. Thus, you can see that the given predicate is invoked on each iteration, carrying its state along. This behavior may well be what you desire - but do make a note of it!

  • any_of function
  • You can guess the behavior of this function: it tests whether any element matches the given predicate and, if so, returns true. If no element matches the given predicate, it returns false. It is worth noting that the function exits as soon as it finds an item matching the predicate. This function is not quite the same as find: the find/find_if functions return an iterator, whereas any_of returns a bool. Many times, we just need to know whether an item exists in a container or not, and writing 'iter != container.end()' is cumbersome. So, here is a boon for you:

    array<int, 5> IntArray = {1,2,3,4,5};
    
    bool bDoes3Exists = any_of(IntArray.begin(), IntArray.end(),
       [](int nValue) { return nValue == 3; });

    It would return true, since the number 3 exists in the container. To find whether any even/odd number exists in the container, change the predicate appropriately. Here is another example, which looks for a country name in a given list of strings:

    list<string> Countries;
    
    Countries.push_back("Japan"); 
    Countries.push_back("India");
    Countries.push_back("Singapore");
    Countries.push_back("USA");
    
    if ( any_of(Countries.begin(), Countries.end(),
         [](const string& sName) { return sName.compare("Singapore")==0; } )
       )
       {
           wcout << "Country exists";
       }

    If you want to know which element has matched the criteria, you need to use the find or find_if function. Unfortunately, the any_of function doesn't support searching with a given value; it needs a function/lambda!

  • none_of function
  • The none_of function returns true if no element matches the given predicate. Semantically, it is the same as !any_of: it returns true only if all elements fail to match the criterion (i.e., the predicate always returns false). If any match succeeds, it returns false. Example:

    array<int, 5> IntArray = {1,8,64,128,1024};
    //IntArray[2] = 8192;
    
    auto bLessThan2K = none_of(IntArray.cbegin(), IntArray.cend(),
        [](int nValue) { return nValue>=2048;});

    The variable bLessThan2K would be true, since no item is 2048 or greater. If you uncomment the second line, the return value of none_of would be false, since it finds an element that matches the predicate. You can achieve the same result by replacing 'none_of' with '!any_of'.

  • find_if_not function
  • Similar to the find/find_if functions, it returns an iterator of the appropriate type, but it locates the first item which does not match the given predicate. For example, you might want to find a person who is not married. Since there can be several marital statuses, it is easier to look for someone who is not married than to enumerate every status other than 'married'. Let's see an example:

    using namespace std;
     
    // Tuple of Name, Age, Sex, Marital Status
    typedef tuple<string, int, char, char> Person;
    
    vector<Person> People;
    
    People.push_back(make_tuple("Adam", 21, 'M', 'M'));
    People.push_back(make_tuple("Eve", 19, 'F', 'M'));
    People.push_back(make_tuple("Natasha", 23, 'F', 'C')); // Committed 
    People.push_back(make_tuple("Enrique", 32, 'M', 'U'));
    People.push_back(make_tuple("James", 38, 'M', 'D')); 
    
    auto iter = find_if_not(People.cbegin(), People.cend(), [](const Person& person)
    {
        // Lambdas don't capture the namespace (hence std::)
        return std::get<3>(person) == 'M';
    });
    
    if(iter == People.end())
    {
        wcout << "No one found!";
    }
    else
    {
        wcout << "Found a person!\n" <<
            "Name: " << get<0>(*iter).c_str() // I hate std::string
            << "\nAge: " << get<1>(*iter);
    }
    
    // Output:
    //   Name: Natasha,  Age: 23

    The code above demands that you revisit the tuple! The type of iter is vector<Person>::const_iterator, shortened via the auto keyword.

  • copy_if function
  • This function demands some attention and explanation. First, the root function: copy, which copies an input range into an output range. Since this is a copy function, and not an insert, the destination range must already have enough space. Example:

    vector<int> IntVector(10);
    array<int, 10> IntArray = {1,2,3,4,5,6,7,8,9,10};
    
    copy(IntArray.cbegin(), IntArray.cend(), IntVector.begin());

    If the destination is not large enough, the Debug version will assert. To get insertion instead, you can use the back_inserter helper:

    // Starts from IntVector.end
    copy(IntArray.cbegin(), IntArray.cend(), back_inserter(IntVector));

    Moving ahead to a relevant discussion: the remove_copy function copies into another container/range only the elements that do not match the given value. Further, remove_copy_if copies based on a given predicate. The "remove" in the function name does not mean that something is removed from the original range; rather, the elements matching the value/predicate are removed (excluded) from the copy. The requirement on the target range is the same as for the copy function. Here is an example of both:

    vector<int> IntVector; // Size = 0
    array<int, 10> IntArray = {1, 2, 3, 4, -1, 6, 7, -1, 8, -1}; 
    
    // Copy all excluding -1
    remove_copy(IntArray.cbegin(), IntArray.cend(), back_inserter(IntVector), -1);
    
    wcout << "Size: " << IntVector.size(); // 7
     
    ...
    
    IntVector.clear(); // size  = 0
     
    // Copies ONLY elements that are NOT even
    remove_copy_if(IntArray.cbegin(), IntArray.cend(), back_inserter(IntVector),
        [](int nValue) { return nValue % 2 == 0;});
    
    wcout << "Size: " << IntVector.size(); // 6

    Now, if we need to copy only even numbers, we just need to change the expression in lambda as:

    return nValue % 2 != 0;

    You see: to express what you need, you have to phrase it in "don't need" form. If the condition involves multiple clauses (ANDs, ORs), it becomes harder to write and understand. Yes, the solution is to negate (!) the whole condition, but that approach still forces you to write in "don't need" form, and it is elusive to understand.

    The copy_if function provides a clean and direct approach; you can call it a copy function with a condition test. The following is the code for the "I need evens only" requirement:

    copy_if(IntArray.cbegin(), IntArray.cend(), back_inserter(IntVector),
          [](int nValue) { return nValue % 2 == 0;} ); // Positive condition

    Let's have another example, involving the Person tuple. In a vector of Persons, we need to copy only married males. We can write:

    typedef tuple<string, int, char, char> Person;
    vector<Person> OriginalPeople, RequiredPeople;
    
    OriginalPeople.push_back(make_tuple("Adam", 21, 'M', 'M'));
    OriginalPeople.push_back(make_tuple("Eve", 19, 'F', 'M'));
    OriginalPeople.push_back(make_tuple("Natasha", 23, 'F', 'C')); // Committed 
    OriginalPeople.push_back(make_tuple("Enrique", 32, 'M', 'M')); // Now married!
    OriginalPeople.push_back(make_tuple("James", 38, 'M', 'C')); 
    
    copy_if(OriginalPeople.cbegin(), OriginalPeople.cend(),
            back_inserter(RequiredPeople), // Copy here 
          [](const Person& person_obj) -> bool // No simple return, -> required!
          {
              if ( std::get<2>(person_obj) == 'M' && 
                   std::get<3>(person_obj) == 'M' )
              {
                   return true;
              }
              else
              {
                   return false;
              } 
    
           } // Lambda end
    );
     
    // RequiredPeople would have 2 elements

    Note: As of this writing, the MSDN documentation of copy_if stands incorrect. It says the function takes three arguments, but it actually takes four.

  • partition_copy function
  • The partition_copy function splits the input range between two output ranges, depending on a condition: if the condition is true, the element is copied into the first output range; if false, into the second. Here is an example which generates a few random numbers, and then copies the even numbers and the odd numbers into different vectors:

    vector<int> IntVectorEven, IntVectorOdd;
    array<int, 50> IntArray;
     
    // Generate random numbers
    generate(IntArray.begin(), IntArray.end(), rand);
    
    partition_copy(IntArray.cbegin(), IntArray.cend(), 
              back_inserter(IntVectorEven), back_inserter(IntVectorOdd),
              [](int nValue) { return nValue%2==0;});
     
    // IntVectorEven.size() + IntVectorOdd.size() would be: IntArray.size() !

    In the Person vector, let's divide people based on gender:

    typedef tuple<string, int, char, char> Person;
    
    vector<Person> OriginalPeople, RequiredPeople;
    list<Person> Males, Females;
    
    /**  All 5 people inserted same as above **/
    
    partition_copy(OriginalPeople.cbegin(), OriginalPeople.cend(),
          back_inserter(Males), back_inserter(Females),
          [](const Person& objPerson) 
      { 
          return std::get<2>(objPerson) == 'M';
      });
    
    // Males.size = 3, Females.size = 2
  • is_partitioned function
  • This function tests whether the given range is partitioned according to the given condition. For example, an array of 10 elements may hold a mix of negative and zero/positive numbers. Further, assume all the negatives are placed before the positives. In that case, the range is said to be partitioned. The function is_partitioned returns true if the range is properly partitioned; otherwise, it returns false.

    First, the call to is_partitioned:

    bool bPartitioned = is_partitioned(IntArray.cbegin(), IntArray.cend(), 
          [](int nValue){return nValue < 0;} );

    And here are several different sets of values placed into IntArray, along with the outcome of the partition test for each:

    // Each of the following initializations was tried one at a time:
    array<int, 10> IntArray =
         {-1, -3, -4, -5, 0, 1, 2, 3, 4, 5}; // TRUE
    //   {-1, 1, 3, 4, 0, 1, 2, 3, 4, 5}; // TRUE 
    //   {1, 3, 4, 0, -12, -2, -5, 3, 4, 5}; // FALSE
    //   {0, 1, 3, 4, 0, 1, 2, 3, 4, 5}; // TRUE - Only one partition exists
    //   {-1, -3, -4, -5, -1, -21, -22, -3, -4, -5}; // TRUE
    //   {5, 4, 3, 1, 0, -1, -2, -4, -192, -129}; // FALSE - Partitioned, but not as expected

    Note that a partitioned range is not the same as a sorted range. With the following call, we can test whether IntArray is partitioned on an even/odd basis:

    array<int, 10> IntArray = {2, 4, 8, 64, 256, 1024, 9, 17, 29, 47};
    
    bool bPartitioned = is_partitioned(IntArray.cbegin(), IntArray.cend(), 
         [](int nValue){return nValue%2 == 0;} );
    // Returns true, since it is logically partitioned!
  • partition_point function
  • The documentation describes it differently, but from experimenting and looking at the source code, I found that, for a partitioned range, it gives the same result as calling find_if_not. Thus, I will not write more about it until I have the correct information about this function; a small sketch is shown below.
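    Assuming a range that is already partitioned on the same predicate (the values are arbitrary):

    array<int, 10> IntArray = {-1, -3, -4, -5, 0, 1, 2, 3, 4, 5};
    
    // For a partitioned range, the result matches find_if_not
    auto iter = partition_point(IntArray.cbegin(), IntArray.cend(),
        [](int nValue) { return nValue < 0; });
    
    wcout << *iter;   // 0 - the first element for which the predicate fails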

  • minmax_element function
  • The minmax_element function combines the work performed by the min_element and max_element functions. A recap for you: min_element finds the smallest element in a given range, and max_element finds the largest. Since they return an iterator, and not a value, an iterator to the first relevant occurrence is returned.

    minmax_element returns a pair of type pair<iterator,iterator>, the first being an iterator to the smallest element and the second an iterator to the largest. Other than saving a second function call, it also performs better than the two combined: it finds both values in roughly N * 1.5 comparisons, as opposed to the N * 2 comparisons the two separate calls would take (N being the number of elements in the range).

    For example:

    array<int, 10> IntArray = {2, 4, 8, 64, 256, 1024, 9, 17, 29, 47};
    
    // It returns pair<X,Y>
    auto minmax_pair = minmax_element(IntArray.cbegin(), IntArray.cend() );
     
    // Since the pair holds iterators, they must be indirected to get value
    wcout << "\nSmallest:" << (*minmax_pair.first) << 
           "\tLargest: "<< (*minmax_pair.second);

    All three functions are overloaded to take 2 or 3 arguments; the 3-argument versions take a predicate to test with, and the predicate should logically implement a less-than comparison. A small sketch with a custom comparer follows.
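    The following sketch compares by absolute value (abs comes from <cstdlib>; the values are arbitrary):

    array<int, 6> Values = {-8, 3, -2, 7, -5, 1};
    
    auto pair_abs = minmax_element(Values.cbegin(), Values.cend(),
        [](int nLeft, int nRight) { return abs(nLeft) < abs(nRight); });
    
    wcout << *pair_abs.first << "\t"    // 1  - smallest by absolute value
          << *pair_abs.second;          // -8 - largest by absolute value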

  • is_sorted function
  • As the name suggests, the is_sorted function determines whether the given range is sorted in ascending order (tested via the less-than operator), or as specified by the predicate function. If the range is sorted, it returns true; otherwise, false. The following example is clear-cut, and needs no further explanation.

    array<int, 10> IntArray = {2, 4, 8, 64, 256, 1024, 9, 17, 29, 47};
    
    bool bIsSorted = is_sorted(IntArray.cbegin(), IntArray.cend());
    // Would be false
    
    sort(IntArray.begin(), IntArray.end());
    
    bIsSorted = is_sorted(IntArray.cbegin(), IntArray.cend());
    // Now, it would return true

    If we need to test whether a vector of Person tuples is sorted, we should pass a binary predicate as the comparer: tuple's built-in operator < compares all the elements lexicographically (starting with the name), not just the field we care about. In the following example, our "is the vector sorted" test is based on the person's age.

    typedef tuple<string, int, char, char> Person;
    vector<Person> OriginalPeople;
    
    OriginalPeople.push_back(make_tuple("Eve", 19, 'F', 'M'));
    OriginalPeople.push_back(make_tuple("Adam", 21, 'M', 'M'));
    OriginalPeople.push_back(make_tuple("Natasha", 23, 'F', 'C'));
    OriginalPeople.push_back(make_tuple("Enrique", 32, 'M', 'M'));
    OriginalPeople.push_back(make_tuple("James", 38, 'M', 'C')); 
    
    bIsSorted = is_sorted(OriginalPeople.cbegin(), OriginalPeople.cend(),
        [](const Person& left_person, const Person& right_person)
        {
            // The second element is age
            return std::get<1>(left_person) < std::get<1>(right_person);
        } );
    // The return value would be true, since elements are inserted in age ascending order

    In case you have some other type of object (like struct, class, pointers), you must provide the appropriate comparer as predicate. Since this article is about new concepts in STL, I am not attempting to give standard examples.

  • is_sorted_until function
  • The big brother of the is_sorted function: is_sorted_until is what is_sorted calls internally; the actual task is performed by is_sorted_until.

    What's the difference, you might ask! Only that is_sorted returns a bool, whereas is_sorted_until returns the iterator at which the sorting test fails. For a range [First, Last), if the sorting test succeeds, it returns Last (i.e., the end of the given range); otherwise, it returns the iterator to the first element where the sorting test failed. For example:

    array<int, 10> IntArray = {2, 4, 8, 64, 256, 1024, 9, 17, 29, 47};
    array<int,10>::const_iterator iter; // You see how convenient 'auto' is!
    
    iter = is_sorted_until(IntArray.cbegin(), IntArray.cend());
    
    if(iter != IntArray.cend())
    {
    // Would refer to 9, the first element that breaks the ascending order
        wcout << "Sorting fails at: " << *iter << 
                 " which is " << (iter - IntArray.cbegin()) << 
                 " items away from begin.";
    }
    else
    {
        wcout << "Array is sorted!";
    }

    The predicate version (which takes three arguments) is the same as is_sorted, and its comparer requirements are also the same. Remember, is_sorted calls is_sorted_until:

    // Simplified
    bool is_sorted(iterator first, iterator last)
    {
       return is_sorted_until(first, last) == last;
    }
About STL heaps

Frankly, I did not know what an STL heap was. A search did not give me many results, so I assume it is little known among STL programmers. After studying and practically trying heaps, I found that heaps are not useful for most program designs. Still, for completeness (and for elaborating the is_heap and is_heap_until functions), I need to explicate what a heap is, how it is created, and what possible uses it has.

In STL parlance, a heap is not a data structure by itself - there is no STL class dedicated to it. An STL heap works on top of a random access container (like deque, vector, array, or native arrays). There is a set of functions for turning a random access range into a heap.

Note: A heap is not the same as heap-memory. A heap, in Computer Science, is a data structure!

In the heap data structure, the largest element always remains on top. The rest of the elements are placed so that they form a (specialized) binary tree - a binary heap. Unlike the node-based binary tree data structure, which requires pointers to left and right nodes to hold the data, a heap is generally implemented within an array. Also, a heap is not sorted (when iterated in root-left-right order); the only rule is that each node is larger than its children (the same rule applying to all underlying nodes). Before I discuss more, let's see a binary heap example:

{ 500, 40, 300 };

Which is a valid binary heap:

  500
40  300

Let's have more elements in the binary heap:

{ 4096, 512, 1024, 16, 32, 128, 4};

Which can be represented as:

      4096
   512   1024
 16  32  128  4

And finally a tree, with 10 elements, which would form a non-strict binary tree:

        10
     9      7
   8   5  6   3 
 1  4 2

Thus, you see that a binary heap has the following properties:

  • Largest element is always on top (which is a max-heap; min-heap would have the smallest on top). For each level of tree, the same rule applies.
  • The elements are not sorted, though they may appear to be. Neither is it a binary search tree.
  • For 2^N - 1 elements, the binary tree is completely filled. Otherwise, the items on the lowest level are placed toward the left of the tree.
  • Heaps can be used to implement a priority queue (in reality, priority_queue is based on heaps and the STL functions given below). I am unaware of other useful implementations.

Creation of heap, and other heap operations

For any randomly accessed container, the STL function make_heap can be used to turn it into a heap. For example:

// Or ordinary array, vector, deque...
array<int,10> IntArray = {1,2,3,4,5,6,7,8,9,10};

make_heap(IntArray.begin(), IntArray.end()); // {10,9,7,8,5,6,3,1,4,2}

This makes IntArray a heap (exactly the one shown in the last binary heap above). Further, pop_heap can be used to retrieve the largest element from the heap (and logically remove it). The pop_heap function takes the largest element, puts it at the end of the range, and re-forms the heap from the remaining elements (excluding the item just removed). Example:

// Continuing with heap created above...
pop_heap(IntArray.begin(), IntArray.end());

wcout << "Largest: " << IntArray[9]; // Displays 10
 
// Heap now: {9,8,7,4,5,6,3,1,2,10}

It is important to know that the heap itself now consists of only nine elements. It is the programmer's responsibility to remove the popped item from the container. Since IntArray is a fixed-size array, no item can actually be removed; instead, we can keep a counter and use IntArray.end() - nCounter for subsequent heap operations. For vector and deque, the pop_back method can be used.

To insert an item into a heap, the push_heap function can be used. Before that, the actual element must be pushed into the relevant container. For example:

vector<int> IntVector;

IntVector.push_back(20);
IntVector.push_back(40);
IntVector.push_back(80);
IntVector.push_back(120);
IntVector.push_back(180);

make_heap(IntVector.begin(), IntVector.end());
// {180,120,80,20,40}

// Let's display largest item from heap
wcout << IntVector.front() << endl;

// Remove it from heap, AND from container
pop_heap(IntVector.begin(), IntVector.end());
IntVector.pop_back();

// Put new item into container
IntVector.push_back(128);

// And let the 'heap' know it, so that it can re-form the heap
push_heap(IntVector.begin(), IntVector.end()); // {128,120,80,20,40}

It's up to you how you manipulate heaps - whether you pop the heap first and then read the last element, or read the front element first and then pop the heap (followed, in both cases, by removing the actual element from the container).

The last function, mentioned just for completeness, is sort_heap, which sorts the heap. After the heap has been sorted, it is simply a sorted range, and is no longer a heap.
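A minimal sketch, continuing with the IntVector heap from the example above:

sort_heap(IntVector.begin(), IntVector.end());
// IntVector is now {20, 40, 80, 120, 128} - a sorted range, no longer a heap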

In a nutshell, about STL heaps:

  • Based on random access container.
  • Call make_heap to initially convert a range into a heap.
  • Access front-element to get the largest item in a container.
  • Call pop_heap to move the largest element to the end of the range and re-form the heap from the rest; then call the appropriate container method to remove that element physically from the container.
  • To insert, push the new item into the container first, then call push_heap to re-form the heap.
  • Convert the heap to sorted range using the sort_heap function.

Back on track! Let's discuss the new STL functions for heaps.

  • is_heap function
  • This function determines whether the given range is a valid heap. The return value is true if the range is a valid heap; otherwise, false. For example:

    array<int,10> IntArray = {1,2,3,4,5,6,7,8,9,10};
    make_heap(IntArray.begin(), IntArray.end());
    
    bool bIsHeap = is_heap(IntArray.cbegin(), IntArray.cend()); // true
     
    // Logically remove the largest element from the heap..
    pop_heap(IntArray.begin(), IntArray.end());
    bIsHeap = is_heap(IntArray.cbegin(), IntArray.cend()); 
    // Would be false since the largest item moved to the end...
    
     
    // The following would return true 
    bIsHeap = is_heap(IntArray.cbegin(), IntArray.cend() - 1); // (exclude last)

    Just like the is_sorted function, another overload of this function also takes a binary predicate as comparer. The heap should have been created using a similar predicate (i.e., via three argument versions of make_heap, push_heap, and pop_heap).

  • is_heap_until function
  • Similar to the is_sorted and is_sorted_until relationship, this function returns an iterator instead of bool. The returned value can be used to test where the heap-test failed. The is_heap function calls this function, same as is_sorted calls is_sorted_until.

    This function also takes a binary predicate, and it is required that the heap should be created with a similar binary predicate.
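    For example, a small sketch (the array and the deliberate corruption are my own):

    array<int,10> IntArray = {1,2,3,4,5,6,7,8,9,10};
    make_heap(IntArray.begin(), IntArray.end());
    
    IntArray[3] = 500;   // deliberately break the heap property
    
    auto iter = is_heap_until(IntArray.cbegin(), IntArray.cend());
    
    if (iter != IntArray.cend())
        wcout << "Heap property fails " << (iter - IntArray.cbegin())
              << " items away from begin.";   // points at the modified element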

  • The move function
  • Ah! The C++0x move semantics, STL's perfect forwarding, and the variants of the std::move function... Since move has variants in multiple headers, and discussing it also requires covering how STL utilizes r-value references for performance improvements, all of this deserves a separate section. Thus, I will cover these topics later in this article.

    In short, move functions call the move constructor and move assignment operator, wherever applicable and possible, to move one resource to another, without creating new copy and deleting the old copy.

The <random> header

Most C++ programmers do not like the rand function and have to fiddle around to get a good random number generator with large variation. Well, the STL folks have listened and prepared a brand new header for us: <random>. This header provides dozens of classes for obtaining random numbers of different data types: bool, integer, and float/double!

Before I attempt to explain the concepts about this header, which would probably twiddle your brainbox, let me start with a naive example:

using namespace std;

default_random_engine def_rand;

int nRandom;

for (int nIter = 0; nIter < 20; nIter++)
{
   nRandom = def_rand();
   wcout << nRandom << "\t"; 
}

As you can see, it generates 20 random numbers and displays them on screen. The type default_random_engine, which is actually a typedef for another class, generates the random numbers for you; operator() returns the next random number. The code above would produce something like:

-795755684      581869302       -404620562      -708632711      545404204
-133711905      -372047867      949333985       -1579004998     1323567403
418932835       -1944672731     1196140740      809094426       -1946129057
-30574576       -182506777      -15198492       -150802599      -138749190

It is better than the rand function:

  • The values returned are in the range of signed long (4-byte signed integer).
  • Both positive and negative values are returned.

Now, let's not further disgrace our old friend rand. The <random> header has much more for you, which cannot be compared with the standard random generator functions.

If you run the code above multiple times, it gives the same set of results. To get different values, you need to use the seed method. Though the seed method has three variants, I exemplify the one which takes an integer:

// Putting it before the loop.
def_rand.seed(45001); // Takes 'unsigned long', must not be (less than) zero

This would now generate a different set of numbers. Yes, with the same seed value, the program would still generate the same set of random numbers. Not to worry, the header now has much to offer.

And now the concepts and terms!

The following are logical divisions of classes in the <random> header:

  • Random Number Generator is an object that produces random numbers. Simply put, an instance variable (def_rand for the above example). This has a specific meaning since it can be combined with the class types mentioned below.
  • An Engine is the generator class, which actually produces random numbers. An engine can be a Simple Engine, which generates random numbers by itself; or it can be a Compound Engine, which utilizes one or more other simple engines to generate the final random number. Furthermore, an engine can be combined with a Distribution. (The default_random_engine, is one type of engine.)
  • A Distribution is a class which shapes the raw values according to a particular probability distribution. A distribution utilizes an engine to generate the underlying random values.

All engines, except random_device, generate pseudo-random values - which means those engines produce the same results if unseeded, or if seeded with the same seed value. Thus, it may be necessary to use the random_device class to seed the appropriate engine with a random value. An example:

random_device rnd_device; 

def_rand.seed(rnd_device());

Which causes the def_rand (i.e., default_random_engine) to produce a different set of random values on each run. More about this class soon.

For progressive continuation, allow me to defer the class classifications and present more examples. Often, instead of arbitrary random values, you need values within a given range. For example, you can take the result modulo 100 (plus one, if you need) to get values within a range:

for (int nIter = 0; nIter < 20; nIter++)
{
   nRandom = def_rand() % 100; // +1
   wcout << nRandom << "\t"; 
}

which would produce values only in the range 0 to 99. This is where the distribution classes come in! For this scenario, you can use the uniform_int_distribution class. The following code generates random values in the 1 to 100 range, different on each run:

default_random_engine   def_rand; 
uniform_int_distribution<int>  uni_dist(1,100); // Range

int nRandom;

random_device  rnd_device; 

def_rand.seed(rnd_device()); // Good random seed

for (int nIter = 0; nIter < 20; nIter++)
{
   nRandom = uni_dist(def_rand); // Distribution utilizes the Engine
   wcout << nRandom << "\t";
}

uniform_int_distribution is one of many distributions. On construction, I have set it up to emit only values from 1 to 100. The random_device class is used to seed the def_rand object, so that it produces a different sequence of values on each run. Further, with the expression uni_dist(def_rand), the engine is combined with the distribution; the distribution calls the operator() of the given engine.

On multiple runs, this code would produce something like:

/* Run 1 */
36      91      61      97      17      11      91      45      88      56
5       96      100     1       25      26      27      69      4       90

/* Run 2 */
14      51      98      90      59      77      9       16      17      4
43      60      99      92      7       13      12      66      20      87

To get a firm understanding, play with the given code the way you please!

Let's have another example, where you would ask for true or false generation, and you would demand 70% probability for true. For this, we can use the class named bernoulli_distribution:

bernoulli_distribution ber_dist(0.7);
... 

// In loop:
nRandom = ber_dist(def_rand);

Which would produce something like:

1       1       1       1       1       0       1       1       0       1
1       1       0       0       1       0       1       1       1       1

Don't count! A probability is a probability, not a necessity. If you need 70% false probability, just change 0.7 to 0.3!

Okay. Until now, you have met two simple engines - the default_random_engine typedef and the random_device class - and two distribution classes: uniform_int_distribution and bernoulli_distribution (ignore the mathematical meaning!). You can see that the Engine actually generates the pseudo-random value, and the Distribution filters it appropriately.

Properties of Engine classes

Every engine has the following properties:

Property Meaning
Argument Type
  • The class template argument type(s). There can be one, two, or none, depending on the input and/or result types taken by the engine's class template.
  • If none, the engine class is not a template class, and result_type is defined explicitly by the class (like random_device, which has int as the result_type).
result_type
  • The data type returned by the class' operator(). This is the first template argument, if any.
  • This type is returned by the min and max methods of the class.
  • One version of seed takes this type as its argument.
min and max methods
  • Returns the minimum and maximum value that can be returned by operator() of this engine class.
  • These values may be predefined, or are dependent on the template arguments given.
seed method
  • Seeds the engine for the next random values.
  • The seed method can take integer/real value to seed (actual type depends on the engine class).
  • It also takes the seed_seq class (described later).
  • Another overload also takes functor/lambda, which is called multiple times to generate a seed.
  • Not available for the random_device class.
operator()
  • The operator which returns the next random value.
  • The return type is result_type.
  • Returns in range between min and max.
discard method
  • Calls operator() the given number of times, effectively discarding the set of the generated random values.

I did not want to write so much theory and reference material, but I had to, since there is no good, understandable text available as of now. So bear with me! The tables and text shown may seem alien now, but you will need to refer to them again, and they will prove beneficial.

Anyway, let me put some code excerpt to appease you:

random_device::result_type nMin, nMax, nRandom;
random_device rnd_device;

// Both methods are static, for all engine classes
// Thus object is not required.
nMin = random_device::min();
nMax = rnd_device.max();

 
// Operator ()
nRandom = rnd_device();

... 
  
default_random_engine def_eng;
def_eng.seed(1000);

// Discards next 10 random generations, 
// same as calling op() 10 times
def_eng.discard(10); 

...
 
// One of compound class. Takes template parameters.
// The data-type (__int64) defines the result_type
// The numeric argument (32) defines the min, max values
typedef independent_bits_engine<random_device, 32, __int64> T32Bits;
T32Bits ib32; 

T32Bits::result_type rt; // rt is '__int64'
 
// Again, min and max are static methods, but template 
// redefines them each time a class-template is instantiated. 
rt = ib32.max(); // 2^32 - 1  (4294967295)
rt = T32Bits::min(); // 0

rt = ib32(); // random value 

The example shown above is only meant to illustrate the Engine properties; not all of the classes shown have been discussed. It also exemplifies a compound engine class, which is yet to be elaborated.

For all engine classes and for all distribution classes, the result type and template argument type can be one of the following (these are not typedefs or enums, but just template parameter placeholders in the documentation/header).

  • IntType indicates class takes either signed or unsigned type.
  • UIntType states that class takes unsigned integral type. For both types, integral would mean bool to __int64 as base integer type.
  • RealType indicates float or double type.
Properties of Distribution classes

There are more distribution classes (around 20) than engine classes (around 8). Therefore, unfortunately for readers, there are more properties applicable to the distribution classes. The properties that are the same as for the engine classes are clubbed together in the first row of the following table:

Property Meaning

Argument Type
Result Type
min and max,
operator()

  • All distribution classes have these properties/methods.
  • The seed method is not applicable for distributions.
  • The operator() takes engine, which would generate the pseudo-random value.
input_type
  • The type that engine's operator() should return.
  • The compiler would convert types; otherwise it would give a warning/error.
  • This type is either predefined by the distribution class, or specified by the template argument.
operator()
  • Takes an engine object as argument.
  • Calls operator() on that engine
  • Distributes (filters) the returned value as per distribution, and returns value to caller.
  • Another overload takes param_type as the argument.
reset()
  • Resets any cached values, so that the next call to operator() does not depend on the previously obtained values.
param_type
  • Internal type, which is beyond the discussion.
Pre-Condition
  • Precondition as specified by the distribution that must be met, while constructing the object.
  • The debug build simply calls the assert function, if pre-condition is not met.
  • An example would be 0.0 to 0.5 as precondition, for the distribution argument (recollect 0.7 ?).
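As a small sketch of a few of these properties, using the uniform_int_distribution object from the earlier example:

uniform_int_distribution<int>  uni_dist(1, 100);
default_random_engine def_rand;

wcout << uni_dist.min() << "\t" << uni_dist.max();   // 1   100

int nRandom = uni_dist(def_rand);   // operator() takes the engine

uni_dist.reset();   // forget any cached values before the next call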

Okay, let me try to explicate things by giving more examples and their outputs. I will not explore all the classes in the <random> header, just a few important/interesting ones. At the end of this section, there is (or would be) a table of all the classes and their salient features. Believe me, gathering the relevant information, producing good examples, and writing it all down is far more difficult than reading and understanding it!

Let's use the class that implements the oldest and best-known random number generator: the Linear Congruential Generator. This algorithm is used by many runtimes, including for the rand function. [You need not understand the following.] The LCG algorithm generates the next number as defined by the following formula:

X[i+1] = (A * X[i] + C) % M;  //  x(i+1) = (a * x(i) + C) mod m

Where A, C, and M are integral constants; the constants A and C must be less than M. The engine class (a Simple Engine) for this is linear_congruential_engine. It is a class template whose first argument takes a UIntType, and whose remaining arguments take unsigned integer constants reflecting A, C, and M. The prototype of the class template is:

template <class UIntType, 
          UIntType A, UIntType C, UIntType M> 
class linear_congruential_engine;

And this is how we can instantiate and use it (don't ask the meaningfulness of constants!):

linear_congruential_engine<int, 12, 16, 8000> lce;
int nRandom;

nRandom = lce(); // Returns 28
nRandom = lce(); // Returns 352 

As pointed out in Properties of Engine classes above, this is one of the engine classes which takes the data-type and that data type becomes the result_type. Therefore, result_type for this instance (lce), is int. This is one of the Simple Engines.

We have used the default engine, named default_random_engine, which is actually type-defined with mt19937:

typedef mt19937 default_random_engine;

What's that? Well, mt19937 is type-defined with the template instance of mersenne_twister. Another type-definition for 64-bit Mersenne Twister is expressed as mt19937_64, which is the template instance of the class mersenne_twister_engine with a 64-bit data type (result type).

typedef mersenne_twister<unsigned long, 32, 624, 397, 31,
        0x9908b0df, 11, 7, 0x9d2c5680, 15, 0xefc60000, 18>
        mt19937;

typedef mersenne_twister_engine<_ULonglong, 64, 312, 156, 31,
        0xb5026f5aa96619e9ULL, 29,
        0x5555555555555555ULL, 17,
        0x71d67fffeda60000ULL, 37,
        0xfff7eee000000000ULL, 43,
        6364136223846793005ULL>
        mt19937_64;

Don't get it? Don't bother! I just mentioned it for completeness, since default_random_engine is used in this article, and you'd probably use it for quick code. Also, the MSDN documentation and examples use mt19937 frequently. mersenne_twister(_engine) is, again, a Simple Engine.

Note: All VC10 engine classes end with _engine. If not, they are not new to VC10.

Now, let me explain a few Compound Engines, the engines that take another engine to act upon.

If you remember the common engine method named discard, you will recollect that it effectively discards a given number of upcoming random values (it does so by calling operator() on the same engine). What if you need an engine that repeatedly hands out a few random values and then skips the next several? For this, you can use the Compound Engine named discard_block_engine. This class template takes another engine and some integer constants, and using those, it shells out random values. Here is an example:

default_random_engine def_rand;
discard_block_engine<default_random_engine, 5, 2> dbe; // SEE DESCRIPTION BELOW

for(int nIter = 0; nIter <20; nIter++)
{
    wcout << def_rand() % 100 << "\t";
}

wcout << "===============\n";

for(int nIter = 0; nIter < 20; nIter++)
{
    wcout << dbe() % 100 << "\t";
}

The object dbe, of type discard_block_engine, is instantiated taking default_random_engine as its parent engine. The random values are generated by the parent engine only; discard_block_engine merely discards some of them and returns the rest, depending on the template parameters. The second template argument specifies the total block size (P), and the third specifies the used block size (R).

The engine returns the first R (2, as per the code above) random values of each block, and then ignores (discards) the remaining P - R (3) values. For example, if the original sequence were 1 to 20, it would produce:

1, 2, 6, 7, 11, 12, 16, 17...

For this program, which is based on default_random_engine (Mersenne Twister), the result would be as follows:

12      2       34      85      4       91      29      85      98      3
35      65      40      26      39      20      19      4       97      6
===============
12      2       91      29      35      65      20      19      9       9
26      36      12      9       95      53      44      6       76      45

The modulo with 100 is done just to produce small numbers, so that it can be analyzed easily.
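Conceptually, the compound engine does something like the following - a simplified sketch of the idea only, not the actual library implementation (P and R correspond to the second and third template arguments above):

// Hand out R values from the base engine, then silently throw away P - R.
template <class Engine, int P, int R>
class discard_block_sketch
{
    Engine eng_;
    int used_;                               // values handed out in the current block
public:
    discard_block_sketch() : used_(0) {}

    typename Engine::result_type operator()()
    {
        if (used_ == R)                      // used part of the block exhausted...
        {
            for (int i = 0; i < P - R; ++i)
                eng_();                      // ...discard the rest of the block
            used_ = 0;
        }
        ++used_;
        return eng_();                       // value from the 'used' part of the block
    }
};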

Well... the random discussion is getting longer and longer, so I am postponing further discussion of the remaining classes. If time allows, I will definitely do more research and update the information here.

Here is a tabular summary of all classes in the <random> header.

Engines

Class Name Simple or Compound
discard_block_engine Compound
independent_bits_engine Compound
linear_congruential_engine Simple
mersenne_twister_engine Simple
random_device Simple
shuffle_order_engine Compound
subtract_with_carry_engine Simple
xor_combine Compound

Distributions

Class Name Implements
bernoulli_distribution Bernoulli Distribution
binomial_distribution Binomial Distribution
cauchy_distribution Cauchy Distribution
chi_squared_distribution Chi-Square Distribution
discrete_distribution Discrete Distribution (user-supplied weights)
exponential_distribution Exponential Distribution
extreme_value_distribution Extreme Value Distribution
fisher_f_distribution Fisher F Distribution
gamma_distribution Gamma Distribution
geometric_distribution Geometric Distribution
lognormal_distribution Log-normal Distribution
negative_binomial_distribution Negative Binomial Distribution
normal_distribution Normal Distribution
piecewise_constant_distribution Piecewise Constant Distribution
piecewise_linear_distribution Piecewise Linear Distribution
poisson_distribution Poisson Distribution
student_t_distribution Student's T Distribution
uniform_int_distribution Uniform Integer Distribution
uniform_real_distribution Uniform Real Distribution
weibull_distribution Weibull Distribution

Improvement in sets

First things first, about sets. In STL, a set is an associative container. It holds only unique values, discarding any duplicate insertions, and it stores them in ordered form: when you iterate over the set, the values come out in ascending order (with the default comparison function). The following example inserts some random values, and then displays them:

// Codename: SET1.CPP
set<int> us;

mt19937 rnd; // mersenne!

uniform_int_distribution<> uid(1,100); // defaults to <int>

for(int n=0;n<100;n++)
   us.insert( uid(rnd) );// Random generation, distribution

// wcout << "Elements: " << us.size() << "==>\n";

for_each(us.begin(), us.end(),[](int n)
{
   std::wcout << n << "\t";
});

Since the code above generates only pseudo-random values (the same sequence on every run), it will always contain the following 60 values:

1       4       5       6       10      11      12      13      14      15
16      18      19      22      23      28      30      31      32      37
39      40      41      43      44      45      46      48      49      50
51      55      64      65      66      67      68      69      70      71
73      75      76      77      80      81      82      83      84      85
88      91      92      93      94      96      97      98      99      100

Note: If you did not read about the <random> header, or are not interested in it, you can replace the random-value insertion with rand. The set size may then differ!

In the same family, the class multiset allows non-unique elements to be put into a container, in sorted order. For example:

// Codename: SET2.CPP
multiset<int> ms;

for(int n=0;n<20;n++)
    ms.insert( n % 4);

for(auto iter = ms.cbegin(); iter != ms.cend(); ++iter )
    wcout << *iter << "\t";

Would produce the following output:

0       0       0       0       0       1       1       1       1       1
2       2       2       2       2       3       3       3       3       3

This is because a multiset allows multiple items to have the same value.

Advancements in sets

  • Inclusion of constant iterators, already elaborated above.
  • Emplace functions, and move-semantics (will be described later).
  • New classes, named unordered_set and unordered_multiset, and their new special methods.

You see that a set always keeps its elements in sorted order. Thus, insertion is relatively costly (approximately O(log n)) - and, as the Big-O notation suggests, the insertion time grows as the number of elements grows. In short: a set keeps only unique elements, in sorted order, and insertion, deletion, and searching all take O(log n) time.

With TR1, a few compilers, including Visual C++, added an unordered set named unordered_set, which keeps only unique elements, but not in any specific order. It puts elements in a set of buckets with the help of a hash function.

Header for unordered sets: <unordered_set>

Note: Before TR1, the unorderedness of sets was available through the hash_set and hash_multiset classes. These classes, however, did not have all the features of the new classes.

Okay, before something gets more complicated, let's change our set to unordered_set (in code file SET1.CPP):

#include <unordered_set>

....

// This is in code-named SET1.CPP, as given above
unordered_set<int> us;

And see the output:

18      82      14      91      28      92      84      77      13      97
23      64      31      10      55      83      19      100     32      96
80      16      73      98      99      75      11      49      81      94
30      15      65      1       43      76      12      88      51      66
68      4       37      85      22      5       69      40      48      71
6       70      50      67      44      39      41      46      45      93

The same collection of values, no duplicates - but not in any particular sequence. The logical question to ask is: "Is unordered_set faster than set?" Well, yes! The insertion time is almost the same (expert comments welcome), but unordered_set locates an element faster than set does. See this article for performance benchmarking. The following code inserts elements into a set and then searches for half of them:

/*unordered_*/ set<int> the_set;
const int nSize = 5000000;

DWORD nStart = GetTickCount(); // GetTickCount requires <windows.h>
DWORD nEnd;

// Timing for insertion
for(int n=0; n<nSize; n++)
    the_set.insert(n);

nEnd = GetTickCount(); // Please ignore the 15ms timing precision!
wcout << (nEnd - nStart) << " ms\n";

// Timing for search
nStart = GetTickCount();

for(int n=nSize; n>0; n -= 2) // Alternate
{
   the_set.find(n);
}

nEnd = GetTickCount();
wcout << (nEnd - nStart) << " ms\n";

Which, on my machine, results in displaying:

1123 ms
250 ms

And when the set is changed to unordered_set, the result is:

1139ms
140ms

Which loudly declares unordered_set the winner of the searching competition.

Similarly, behavior-wise, unordered_multiset relates to multiset the same way unordered_set relates to set. Changing the container to unordered_multiset in SET1.CPP results in the following output:

18      18      82      14      91      28      28      92      92      84
13      13      13      97      97      97      23      64      64      31
31      10      10      55      55      19      19      100     100     100
32      32      96      96      96      96      80      80      80      80
80      80      16      73      98      99      75      75      11      49
49      49      81      81      94      30      15      15      1       43
43      76      76      12      88      88      51      66      66      68
4       4       4       4       37      85      22      22      69      40
40      48      71      71      5       83      83      70      77      77
50      67      44      39      6       41      46      45      65      93

These are a total of 100 numbers (and not just 60), since multisets allow non-unique values. You also see that unlike multiset, which stores elements in sorted order, unordered_multiset does not sort them. But a close look at the numbers shown above (which is just a begin-to-end iteration) reveals that equal numbers are grouped together. This is where buckets and hash functions come into play. Remember, buckets and hash functions also play a role in unordered sets, though that has not been explained yet.

Likewise, when you change multiset to unordered_multiset in SET2.CPP, it also places the numbers into buckets. Since the numbers were few and already inserted in order (0 to 3), the output does not differ (see SET2.CPP's output). But for other values and insertion orders, multiset and unordered_multiset would definitely differ, since the unordered container does not keep elements sorted.

Unordered sets and maps use buckets and a hash function to store elements. There is a set of member functions to see how they are performing: how many buckets there are, how many elements each bucket holds, and the load factor of the container. You can also ask the container to rearrange its buckets (re-hash), and it is even possible to use your own function as the hash function. Since these concepts are common to unordered sets and unordered maps, I explicate them after the unordered maps discussion below.

Improvements in maps

For the reader who has no idea, or only a foggy idea, about STL maps, here is a brief. A map stores a key and its value, and the key must be unique. (With a set, the key is the value, i.e., both are the same.) The value can be duplicated, but the key cannot. In a students collection, the roll number can act as the key (roll numbers are unique), and the student's name can be the value. The following example clears out some of the fog:

map<int, string> Roll2Name;

Roll2Name.insert(make_pair(12, "Canon"));
Roll2Name.insert(make_pair(13, "Sony"));
Roll2Name.insert(make_pair(14, "Nikon"));
Roll2Name.insert(make_pair(15, "Olympus"));

// Get name of student having roll-no 14
string sName = Roll2Name[14]; // Same as Roll2Name.find(14)->second;

wcout << "Name: " << sName.c_str();

An element of the map is of type pair<Key, Value>. Thus, for the above code, it is the pair <int,string>. The find method returns an iterator of the pair<K,V> type.
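Since find returns such an iterator, it should be compared against end() before dereferencing. A minimal sketch, reusing the Roll2Name map above (roll number 16 is just an illustrative missing key):

auto it = Roll2Name.find(16);              // roll-no 16 is not present
if (it == Roll2Name.end())
    wcout << "\nRoll 16 not found";
else
    wcout << "\nName: " << it->second.c_str();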

Similarly, multimap allows non-unique keys to be stored into a map. Both map types store keys in sorted order. Examples below would illustrate more.

Advancements in maps

  • Inclusion of constant iterators.
  • Emplace functions and move-semantics, which would be described later.
  • Member function map::at, which behaves the same as map::operator[] for element access - but, unlike operator[], it does not insert an element when the key is not found (see the short sketch after this list).
  • New classes, named unordered_map and unordered_multimap, and their new special operations.
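To illustrate the at versus operator[] difference mentioned above, here is a minimal sketch (assuming <map>, <string>, <stdexcept>, and <iostream>; the map is the Roll2Name example, and the missing roll numbers 20 and 21 are purely illustrative):

string s1 = Roll2Name[20];                 // key 20 not found: an empty string is
                                           // silently inserted, and the map grows by one
try
{
    string s2 = Roll2Name.at(21);          // key 21 not found: nothing is inserted...
}
catch (const out_of_range&)                // ...instead, out_of_range is thrown
{
    wcout << "\nNo such roll number";
}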

Analogous to changes in ordered sets to unordered sets, STL now facilitates unordered maps and unordered multimaps, through the unordered_map and unordered_multimap classes, respectively.

Header required for unordered maps: <unordered_map>

Before TR1, the unorderedness of maps was available through the hash_map and hash_multimap classes, though they did not have all the features of the new classes.

Let's have another example, which I would use further to explain different types of maps:

// NOTE: This example is just for illustration,
// The squares would not correspond to their actual sqrts.

map<int, int> Squares;
int e;
for(int n=2; n<100; n++)
{
   e = (n % 10)+1;
   Squares.insert(make_pair(e, n*n));
}

for (auto iter = Squares.cbegin(); iter != Squares.cend(); ++iter)
{
     wcout << "\nNumber: " << iter->first <<
              "\tSquare: " << iter->second;
}

Since the keys (e) are limited to 1 through 10, subsequent insert calls with an existing key simply fail and leave the stored value untouched (insert never overwrites an existing key), so only the first square inserted for each key is kept. Thus, the output is as follows:

Number: 1       Square: 100
Number: 2       Square: 121
Number: 3       Square: 4
Number: 4       Square: 9
Number: 5       Square: 16
Number: 6       Square: 25
Number: 7       Square: 36
Number: 8       Square: 49
Number: 9       Square: 64
Number: 10      Square: 81
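You can verify this yourself: map::insert returns a pair<iterator, bool>, and the bool is false when the key already exists. A minimal check against the Squares map above (the value 999 is just an illustrative dummy):

auto result = Squares.insert(make_pair(1, 999));           // key 1 already holds 100

wcout << "\nInserted? " << (result.second ? "yes" : "no")  // prints "no"
      << "\tStored value: " << result.first->second;       // still 100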

And, if I change the type of square from map to multimap, the output would be like:

...
Number: 8       Square: 7569
Number: 8       Square: 9409
Number: 9       Square: 64
Number: 9       Square: 324
...
Number: 9       Square: 7744
Number: 9       Square: 9604
Number: 10      Square: 81
Number: 10      Square: 361
...

Why? Just like multiset, which allows duplicate values to be inserted, multimap allows duplicate keys (each with its own value) to be inserted.

Now, coming to the new classes. When I change the type from map to unordered_map, the collection would change as shown below:

Number: 3       Square: 4
Number: 4       Square: 9
Number: 5       Square: 16
Number: 6       Square: 25
Number: 7       Square: 36
Number: 8       Square: 49
Number: 1       Square: 100
Number: 9       Square: 64
Number: 10      Square: 81
Number: 2       Square: 121

The keys, as you can see, are now unordered, since unordered_map does not care about the ordering of keys. It uses buckets and a hash function to place elements, which I elaborate on shortly.

And, finally, when I change the data-type of Square to unordered_multimap, the output is:

...
Number: 8       Square: 7569
Number: 8       Square: 9409
Number: 1       Square: 100
Number: 1       Square: 400
...
Number: 1       Square: 8100
Number: 9       Square: 64
Number: 9       Square: 324
...
Number: 9       Square: 9604
Number: 10      Square: 81
...
Number: 10      Square: 9801
Number: 2       Square: 121
Number: 2       Square: 441
Number: 2       Square: 961
...

Just like unordered_multiset's scatteredness, the elements are placed into different buckets, so iterating through them does not yield sorted order. Note, however, that the values of equal keys are kept together (expert comment expected).

Okay, let's have another code that would show us the performance statistics of all four map classes:

map<int, double> IntDouble;
const int nSize = 2500000;

DWORD nStart = GetTickCount();
DWORD nEnd;

// Insert
for (int n=0; n<nSize; n++)
{
   IntDouble.insert( make_pair (n, n * 2.5));
}

nEnd = GetTickCount();
wcout << (nEnd - nStart )<< " ms\n";

nStart = GetTickCount();

// Alternate Search
for(int n= nSize; n>0 ; n -= 2)
   IntDouble.find(n);

nEnd = GetTickCount();
wcout << (nEnd - nStart )<< " ms\n";

The following table shows the outcome:

Container / Timing (in ms) Insertion Searching
map 468 171
multimap 468 110
unordered_map 749 78
unordered_multimap 749 62

The timings shown are only indicative. Over multiple runs, map and multimap always behaved as shown. unordered_map, however, often matched unordered_multimap's search speed, though it never surpassed it.

Since this article is not an introduction to maps, I have covered them only briefly. For multimaps, lower_bound and upper_bound should be used to iterate over the elements of a given key, since a multimap stores more than one value under a key. The find method returns an iterator to an element with the given key (the first one with that key, in this implementation), and the count method tells you how many elements share a given key.
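For instance, a minimal sketch against the multimap version of the Squares example above:

typedef multimap<int, int>::const_iterator MMIter;

wcout << "\nKey 9 has " << Squares.count(9) << " values:\n";

MMIter lo = Squares.lower_bound(9);   // first element with key 9
MMIter hi = Squares.upper_bound(9);   // one past the last element with key 9

for (MMIter it = lo; it != hi; ++it)
    wcout << it->second << "\t";      // all squares stored under key 9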

Hash, Buckets, and More...

This subsection explores the buckets, hash functions, and comparison functions applicable to all four unordered containers. These concepts are similar for all of them; where they differ, I will explicitly mention it. Let's start with simple operations, and then dive into the details.

The following typedef and variable will be used throughout this subsection; I will mention it whenever the typedef changes.

typedef unordered_set<int> Container;
Container cont;

Let's see the default count of buckets:

wcout << cont.bucket_count() << "\n";

This returns 8 for all four types of unordered containers (MS implementation; others may vary), meaning there are a total of eight buckets in which elements would be placed. And how many elements may each bucket hold, on average? We can use max_load_factor to find out:

wcout << cont.max_load_factor() << "\n"; // Returns 1.0

When we add elements into the container, the statistics change:

for (int n=0; n<10; ++n)
{
   cont.insert(n);
}

wcout << cont.bucket_count() << "\n"; // Returns 64
wcout << cont.max_load_factor() << "\n"; // Returns 1

This means there are now 64 buckets, and each bucket is still expected to hold only 1 element on average. If we had inserted only 8 (or fewer) elements, the bucket count would have remained 8. Likewise, when we insert 65 or more elements, the bucket count changes to 512, and so on. Since there can be fewer elements than buckets, not all buckets are filled (irrespective of the bucket count); a few buckets may have zero elements.

A few questions should arise:

  • How do you change the bucket size?
  • How to know which bucket has how many elements in it?
  • What is the average number of elements in all buckets (known as load-factor)?
  • What is the situation of all buckets (i.e., hash-table) after elements are deleted and inserted?

The manageability of buckets depends on the load factor. There is another overload of max_load_factor that sets the maximum load factor, i.e., the maximum average number of elements per bucket. Thus, specifying a larger value simply means fewer buckets, since each bucket can now hold more values. For example:

cont.max_load_factor(500);

asks the hash table to keep, at most, an average of 500 elements per bucket. Thus, when you insert 500 or fewer elements, only one bucket would be utilized - no! Remember, the load factor is just the average number of elements per bucket. You are only suggesting that the container let each bucket hold more (or fewer) elements. An example:

cont.max_load_factor(100);

for (int n=0;n<150;++n) // 150 elements
{
   cont.insert(n);
}

wcout << cont.bucket_count() << "\n";
wcout << cont.max_load_factor() << "\n";
wcout << cont.max_bucket_count() << "\n";
wcout << cont.load_factor() << "\n";

Which would give the following output:

8
100
8
18.75

The bucket count is 8, since this minimum number of buckets (each with an average capacity of 100) is enough to carry all 150 elements. If you insert 800 or more elements, the number of buckets rises to 64 (re-read the logic above). The following code shows how many elements each bucket has:

for (size_t n = 0; n < cont.bucket_count(); ++n)
{
    wcout << "Bucket " << n <<
             " has " << cont.bucket_size(n) << " elements\n";
}

Which would display something like:

Bucket 0 has 19 elements
Bucket 1 has 18 elements
Bucket 2 has 18 elements
Bucket 3 has 19 elements
Bucket 4 has 19 elements
Bucket 5 has 19 elements
Bucket 6 has 19 elements
Bucket 7 has 19 elements

You see that the buckets are optimally filled up. Interestingly, with the same 150 values, all four unordered containers place the elements the same way.

You might want to know which elements a given bucket keeps. Well, that's easy - you just need to use local iterators, a type of iterator unique to the unordered containers. They are obtained through overloads of begin and end that take a bucket number. There are no corresponding cbegin/cend overloads; use const_local_iterator for read-only access. The following code iterates through bucket 0:

// Valid only if Container represents set
// otherwise l_iter would be pair<K,V>

Container::const_local_iterator l_iter;
for(l_iter = cont.begin(0); l_iter != cont.end(0); ++l_iter)
   wcout << *l_iter << "\t";

And this would render the following text:

144    136     128     120     112     104     96      88      80      72
64      56      48      40      32      24      16      8       0

Since the hashing scheme is the same, all four containers would depict the same values in the same sequence.

One last thing in the same vein is to find out which bucket a particular element goes into. This is achieved through the bucket method. It returns the zero-based position of the bucket in which the element is (or would be) placed. Yes, using the hash function, it returns the correct bucket number even for a non-existent element! For example, with the output shown in the comment:

wcout << "Element 144 is in: " << cont.bucket(144)
      <<"\nNon-existent 1131 is/would-be in: " << cont.bucket(1131);

/* Element 144 is in: 0
Non-existent 1131 is/would-be in: 5 */

By setting maximum load factors and inserting elements, you have seen that the bucket count increases appropriately. What if you want to set the bucket count yourself? Instead of letting it grow on its own from 8, 64, 512, 4096..., you might want to give it a hint. You can do so by calling the rehash method. It can be called at any time: before inserting elements, after all elements have been inserted, in the midst of inserts, or after elements have been removed. The rehash method always rebuilds the hash table. It starts from the minimum size of 8 and doubles it until your request is satisfied. For instance, if you specify 10, you get 16 buckets; if you specify 100, you get 128. Remember, the max-load-factor still remains in effect.

Thus, when you call:

cont.max_load_factor(100);
cont.rehash(20);

for (int n=0;n<2500;++n)
{  cont.insert( n ); }

Each bucket would hold at most an average of 100 elements; there would be a total of 32 buckets, with around 78 elements in each. When you add 5000 elements, the load factor is exceeded (more than 100 per bucket on average), and thus the bucket count increases to 256.

Sounds confusing? Remember: when the load factor exceeds its limit, the hash table adds more buckets, growing the bucket count by 8x. That means, for the above example, 32 * 8 = 256. Had you specified 10 (rounded up to 16) as the bucket count, the growth would go 128, 1024, 8192, and so on. It all depends on the given load factor.
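Finally, as mentioned earlier, you can also supply your own hash function. A minimal sketch (the functor name MyHash and its deliberately poor modulo-based hash are purely illustrative; requires <unordered_set> and <iostream>):

struct MyHash
{
    size_t operator()(int n) const
    {
        return n % 10;                 // only 10 distinct hash values - a poor hash!
    }
};

unordered_set<int, MyHash> my_cont;    // hash type passed as the second template argument

for (int n = 0; n < 100; ++n)
    my_cont.insert(n);

// Elements now land in buckets as decided by MyHash.
wcout << my_cont.bucket_count() << " buckets; element 42 is in bucket "
      << my_cont.bucket(42) << "\n";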

More to go

...

Change history

  • Sep. 11, 2010 - Published first version. Constant Iterators, array, and tuple explained.
  • Sep. 12, 2010 - Eight functions of <algorithm> elaborated.
  • Sep. 13, 2010 - is_sorted and is_sorted_until illustrated.
  • Sep. 16, 2010 - Heaps, is_heap, and is_heap_until explicated.
  • Sep. 19, 2010 - <random> header, and its few classes.
  • Sep. 22, 2010 - More explication of the <random> header.
  • Sep. 25, 2010 - Unordered sets and maps section; maps/sets included in the Constant Iterator section.
  • Sep. 28, 2010 - Local links to sections; about rehash stuff.
  • The remaining concepts are in the pipeline to be added. Stay tuned!
