
Technical Guide on C

11 Feb 2015 · CPOL
A beginner's guide emphasizing the critical features of the C language.

Intended Audience

Readers of this document should have a basic knowledge of C programming language concepts. Interested candidates can visit and follow the C Programming Language Associate (CLA) certified associate online training program run by www.cppinstitute.org.

Background / History

The C language, created by Dennis Ritchie at AT&T Bell Labs, has had several releases, so first of all let’s be aware of the growth of this language before moving ahead. The very first widely used version, described by Kernighan and Ritchie, is known as K&R C. Later, in 1989, the American National Standards Institute (ANSI) standardized the language, producing what is known as ANSI C or C89. This version of C was later revised by ISO, leading to the C99 release. C99 introduced new features such as inline functions, variable-length arrays, flexible array members, several new data types, improved support for floating-point, variadic macros, and support for one-line comments. Starting in 2007 there were further amendments to C99, which ISO eventually published as the current standard, C11. This version introduced various new features to the C language and its library, including type-generic macros, anonymous structures, improved Unicode support, atomic operations, multi-threading, and bounds-checked functions.

One might think of C as an obsolete language for programming complex applications; however, per the programming language ratings published by TIOBE, the software quality company (http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html), C still stands in first position among all existing high-level languages today.

Pointers

Among all the features of the C language, pointers, memory management (be it heap or stack), and library and linking strategies (static and shared) are the most important topics. Anyone who wants to be an expert in this language should hold a strong command over the concept of pointers. To understand pointers, one can start with the precise definition given in section 5.1 of “The C Programming Language” by Kernighan and Ritchie: a pointer is a group of cells (often two or four) that can hold an address. Keeping this initial understanding in mind, we can go a bit deeper into the concept of pointers. Let’s take a simple example, as below:

C++
char *cPointer;

This is a simple pointer variable of type char *: a set of cells named cPointer, capable of holding the address of a memory block, which in turn is supposed to hold data of type char. Can we call this declaration complete? No. For other types of variables, a simple declaration with no initialization would suffice, but a pointer should always be initialized to a valid address, or at the very least to NULL; otherwise it is treated as a wild pointer. If we continue from this declaration and unintentionally attempt to dereference or use the pointer, then whatever garbage address cPointer happens to hold will be accessed. If cPointer holds leftover data from a previous use of that memory, that old (to be more precise, junk) data is treated as a valid address by the current process, which tries to pass control to, or fetch data from, that address in the act of dereferencing cPointer. Typically this ends in a segmentation fault, a memory access violation: the current process is trying to access memory blocks outside its range. This error leads the operating system to terminate the process involuntarily. Now consider a process that was holding critical system resources while it was up and running; due to the abrupt termination, those resources are never released back to the resource pool, starving other processes in the system that are contending for them. Such an uninitialized, non-static pointer is known in the C world as a wild pointer.

So let’s complete the above instruction:

C++
// Always initialize a non-static pointer to NULL before use.
char *cPointer = NULL;

At this point, cPointer is treated as a null pointer; the macro NULL is defined as zero (a null pointer constant) in several standard headers, such as stddef.h and stdio.h, which means cPointer is now in a known, stable state.

There is another side to this concept. Let’s tweak the example a little more and try to assign a valid address to cPointer, as below:

C++
char cVariable;
cPointer = &cVariable;

There is nothing wrong with this set of instructions. However, one might accidentally store data of type int or double at the address where cPointer is pointing, instead of a char. What impact would that have? cPointer is restricted to point to, and recognize, only as many bytes as the size of a char. Traditionally, the size of one char is one byte, whereas int and double are larger. No matter what data is stored in the adjacent bytes, cPointer will only scope itself to one byte of memory, i.e. cVariable. Writing a larger type there should ideally lead to a memory access violation, but in general a program doesn’t fail early enough to expose such an issue at runtime. Instead, it corrupts the data of other variables occupying those adjacent memory blocks, ultimately resulting in what I call “impact errors”, which can lead a programmer down incorrect tracks while resolving unforeseen errors. Now this is where pointers start playing with us joyfully.
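The size mismatch described above can be made concrete with a minimal sketch (the helper names are invented for illustration): a char pointer “scopes” exactly one byte, while int and double occupy more, so writing them through it would spill into adjacent memory.

```c
#include <stddef.h>

/* A char pointer can safely address only sizeof(char) == 1 byte. */
size_t char_scope(void)
{
    char cVariable = 'A';
    char *cPointer = &cVariable;
    return sizeof(*cPointer);           /* always 1 */
}

/* int and double are wider than char on every common platform,
 * so storing them through a char* overruns the one reserved byte. */
int int_is_wider(void)    { return sizeof(int)    > sizeof(char); }
int double_is_wider(void) { return sizeof(double) > sizeof(char); }
```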

Let’s say the above instructions are enclosed in a function, as shown in the code snippet below.

Here, when execution control leaves the function’s scope, the local cVariable’s memory also goes out of scope, turning cPointer into a dangling pointer. Since cPointer still points to the same old memory location and is not reset to NULL, that memory may later be allocated to another variable in the program, leading cPointer to produce unpredictable and severe results whenever it is dereferenced beyond the function’s scope.

C++
#include <stdio.h>

char *cPointer = NULL;

void FunctionA( )
{
    char cVariable = 0;
    cPointer = &cVariable;

    /* Pointer operations here */    
}

int main()
{
    FunctionA( );

    /***
      * If cPointer is de-referenced here, 
      * it may lead to serious consequences.
      */

    return 0;
}

It is advisable to make a habit of practicing various examples exploring both secure and fatal usage of C pointers. This is the only way one can learn to predict the mysterious behaviour of pointers in complex C applications. Programmers should be able to distinguish root errors (bugs arising from incorrect or careless usage of a pointer) from impact errors (bugs introduced into other instructions by the occurrence of a root error) when detecting pointer issues, as it saves a lot of time at first sight instead of leading you down wrong tracks or into dead ends.

Arrays are considered a far safer and more reliable way of storing data than pointers, which, on the other hand, leave programmers with the hectic and delicate job of memory assignment, utilization, and release. In my opinion, arrays are “compile-time pointers” (or, in legal terms, constant pointers): when an array variable is declared, the compiler finalizes its address, its amount of memory, and its lifetime. That is not the case with true pointers, since the programmer can change their address, size, and lifetime at any point in time. This could be stated as the precise, semantic difference between an array and a pointer. For a true programmer, the pointer is a precious gift of the C language if utilized sensibly and carefully. Nonetheless, programmers can combine both features for secure memory management implementations.
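The “compile-time pointer” distinction can be sketched with sizeof (the function names are illustrative): an array carries its full extent, while a pointer is just an address-sized object, even when it points at the very same array.

```c
#include <stddef.h>

/* sizeof an array yields its whole extent, fixed by the compiler. */
size_t array_extent(void)
{
    char buffer[64];
    return sizeof(buffer);      /* 64: the entire array */
}

/* sizeof a pointer yields only the size of an address. */
size_t pointer_extent(void)
{
    char buffer[64];
    char *p = buffer;           /* the array "decays" to a pointer here */
    (void)buffer;
    return sizeof(p);           /* e.g. 8 on a 64-bit platform */
}
```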

Memory Management

To facilitate data-centric applications with easy organization and storage of application data, C provides a construct that can collectively bundle different types of data in one packet, called a structure. Unions derive from the same idea, but better utilize memory by sharing it among the data members of a union. C structures, when used in connection with pointers and a dynamic data structure called the linked list, facilitate on-the-fly (in-memory) storage of, and navigation through, data records. A linked list is a data structure consisting of a group of nodes, each comprising data and a reference to the next node. Both structures and unions are treated as user-defined data types in C. One can depict the differences between arrays and linked lists in C based upon two critical factors: the size of the data structure and the storage layout of the data it holds. As explained earlier, even a minor mistake in pointer usage can lead to serious issues, so a linked list implementation using pointers needs special attention. Imagine a train as a linked list consisting of a number of bogies, where each bogie is represented as a node in the list. Passengers in the train have differing age, sex, reservation number, etc., which would be collectively compiled into a structure. The nodes of the linked list are sequentially linked using pointers; every node consists of a structure (of passenger data) and a reference to the next node (a pointer to the next bogie of the train). Unless the reference in the current node is initialized with the valid address of the next node, that reference is a wild pointer. Before going further with this example, we will have to brush up our knowledge of memory operations, as linked list implementation involves dynamic memory (the heap).
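The train analogy above can be sketched as a minimal node type and insert helper (the struct fields and function names are invented for this example; real record layouts will differ):

```c
#include <stdlib.h>

/* A hypothetical passenger record: different data types in one packet. */
struct Passenger {
    int  age;
    char sex;
    int  reservationNo;
};

/* One "bogie" of the train: passenger data plus a link to the next node.
 * An uninitialized `next` is exactly the wild pointer the text warns about. */
struct Node {
    struct Passenger data;
    struct Node *next;
};

/* Prepend a node; returns the new head, or NULL if allocation fails. */
struct Node *push_front(struct Node *head, struct Passenger p)
{
    struct Node *node = malloc(sizeof *node);
    if (node == NULL)
        return NULL;            /* allocation failed; list unchanged */
    node->data = p;
    node->next = head;          /* link to the rest of the train */
    return node;
}
```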

Memory can be divided into two primary categories: static memory and dynamic memory. Static memory is managed by the compiler and the operating system during the compilation and execution phases of C applications. In contrast, C programmers are entrusted with managing some part of system memory on their own, which is called dynamic memory (the heap). Programmers using heap memory are solely responsible for its management, since C does not provide built-in automatic garbage collection. This is where programmers come across memory-related issues such as memory leaks, memory corruption, segmentation faults, etc. Failure to free an allocated set of memory after its use introduces a memory leak, leaving that memory unusable for the lifetime of the process and reducing the total memory available to it. If such a leak is repeated at a large scale in a long-running C application, at some point the available memory will be exhausted and the application, and possibly the whole system, will grind to a halt. That is how severely a memory leak can affect a system.
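A minimal sketch of the leak pattern versus its correct counterpart (function names are illustrative):

```c
#include <stdlib.h>

/* Leaky sketch: the only pointer to the allocation is lost on return,
 * so the 1024 bytes can never be freed for the life of the process. */
void leaky(void)
{
    char *buf = malloc(1024);
    (void)buf;
    /* missing free(buf) */
}

/* Correct counterpart: every successful malloc is paired with a free. */
int tidy(void)
{
    char *buf = malloc(1024);
    if (buf == NULL)
        return -1;              /* allocation failed */
    /* ... use buf ... */
    free(buf);                  /* returned to the heap's free pool */
    return 0;
}
```

Tools such as Valgrind can detect the leaky( ) pattern at runtime.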

Memory corruption is another serious issue, with a similar impact on the system. Trying to use a wild or a dangling pointer, as well as trying to free an already released pointer (a double free), leads to memory corruption.
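One common defensive idiom against the dangling pointer and double free problems is to reset a pointer immediately after freeing it, since free(NULL) is defined by the standard to be a no-op. A minimal sketch (the helper name is invented):

```c
#include <stdlib.h>

/* Free through a pointer-to-pointer and reset the caller's pointer,
 * so it can neither dangle nor be double-freed by accident. */
void safe_release(char **pp)
{
    if (pp != NULL) {
        free(*pp);      /* free(NULL) is a harmless no-op */
        *pp = NULL;
    }
}
```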

The different segments of a C executable are text/code segment, initialized data segment, and BSS segment. Executable instructions reside in text/code segment. Initialized global and static variables go directly in initialized data segment. BSS (Block Started by Symbol) segment only holds statistics of un-initialized global variables i.e. the amount of space that BSS will require at runtime.

Local variables (whether initialized or not) don’t go into the executable but are created at runtime. When the operating system takes up a program executable for execution, it maps the different sections of the executable to system memory segments and additionally starts using stack and heap memory as and when needed while the program executes. One can refer to the section “Segments” and “Figure 6.2: How the segments of an executable are laid out in memory” in the book “Expert C Programming” by Peter van der Linden for more details on this.

The stack memory segment fills from the highest memory address and grows downwards in the process address space. For example, whenever a function is invoked, the return address of the next instruction is pushed on the stack, followed by the function’s parameters, saved registers, and the locals and temporaries used while the function executes. Since this segment follows LIFO (last-in-first-out) order for reserving memory blocks, it is named the stack segment. Freeing a block of stack is nothing more than adjusting the stack pointer. This segment brings to mind recursive functions, which utilize the stack segment for nested function calls. This is why I keep mentioning that the limit on recursion depends entirely on the size of the stack allocated to the particular process running the recursive function. If a recursive function recurses too deeply, the stack overflows, leading to abrupt process termination.
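A minimal sketch of that cost (the function name is invented): every pending call below holds one stack frame, so the reachable recursion depth is bounded by the process stack size.

```c
/* Sum 1..n recursively; each level of depth costs one stack frame. */
unsigned long depth_sum(unsigned long n)
{
    if (n == 0)
        return 0;                   /* base case: stop growing the stack */
    return n + depth_sum(n - 1);    /* one extra frame per level */
}
```

Called with a sufficiently large n (millions, on a default 8 MB Linux stack), this function crashes the process with a stack overflow rather than returning.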

Unlike the stack and other segments, the heap is extended in the process address space only when there is a requirement for more memory at runtime, so that memory blocks can be reserved from the heap whenever the executing process needs them. With the heap, it is the programmer who is solely responsible for reserving memory blocks and, as soon as the job is done, for releasing that particular set of blocks back to the pool of available heap memory. Due to its simple stack-pointer navigation through memory blocks, the stack is considered faster than the heap, which requires much more complex bookkeeping for assigning and releasing memory blocks. However, the heap segment has the advantage of run-time extensibility over the fixed-size stack allocation of a process.

Now that we have a fair understanding of dynamic memory, let’s get back to the linked list example. Unlike a real-life train, which has a sequential set of bogies, in the linked list version the bogies may occupy random memory blocks across the heap. The only way of maintaining sequence among these bogies is the reference pointers; if one of them is corrupted with a wrong address, the remaining bogies of the train are lost in the ocean of the heap segment, thereby introducing memory leaks.

As mentioned earlier, memory blocks on the stack are of fixed size, whereas any number of memory blocks can be reserved, at runtime, from the heap segment. Let’s try to understand how such memory availability is made possible under the heap. Unlike the way other segments are arranged as contiguous memory blocks, heap memory is a collection of the currently available memory blocks across the entire pool. The job of the heap manager is to efficiently manage this memory within the address space of a process. Whenever a set of memory blocks is reserved using malloc( ) and related dynamic allocation calls, the size of the process address space is increased correspondingly. In contrast, whenever this reserved memory is released using free( ), the process address space doesn’t necessarily shrink back. Typically, heap memory resides adjacent to (and as part of) the data segment, immediately above the BSS area, and keeps growing upward, in the direction opposite to the stack. Heavy use of the heap can also lead to a critical memory management issue called memory fragmentation. This can be explained with an example. Suppose the heap has 20 memory blocks, of which the first 5 are reserved for allocation A, and the next 2 are being utilized by allocation B. Some time later, if A releases its 5 blocks back to the heap, a total of 18 blocks are available. However, if a new request C demands 15 contiguous blocks, the heap is unable to provide them, as the 18 available blocks are spread across two fragments of 5 and 13. So unless B releases the 2 blocks residing in between the two fragments, the allocator cannot satisfy C’s request. This is just a small view of the big picture, in which the heap must keep track of all such fragments of available memory and update these statistics whenever memory is reserved or released.

I will end our discussion of memory management here, since it is a huge topic and, from a C language perspective, the required knowledge has been shared.

Library and Linking Strategies

Let’s talk about the compile and link phases a C program goes through before it is transformed into a binary or library file. For this document, we will stick to the GNU Compiler Collection (GCC) only. Although GCC targets Linux and Unix platforms, Windows programmers can install tools such as MinGW to work with GCC on Windows systems. Typically, any C program has to undergo a compilation phase to produce an object file. If a programmer is interested in exposing a set of reusable C functions for use by other programmers, such a C program can be compiled with the “-c” (hyphen followed by lowercase c) option, which instructs the compiler to stop after producing a relocatable object file, without linking. Omitting this option leads the compiler to attempt to generate an executable, in which case, if the main( ) function is not defined anywhere in the program, the linker throws an error.

Speaking of libraries in C, there are two kinds that can be generated: static and shared. Whenever a program is compiled to produce a .o file, that file can be linked into your application so that it can invoke the library functions. But what if there are dozens of such .o files that need to be linked while building the application executable? One cannot simply list and link all these files every time the application binary is built. A better option is to collect and bind the .o files into an archive file, called a static library, which alone can be linked to the application. You can compare such a library with a real-world library of books, where all the books in the library room are the individual object files. So what is the difference between static and shared, and why is there a need for another kind of library, the shared library? The same set of object files can be combined to form a shared library using the platform-specific “-shared” option which, when used with Linux compilers, generates a shared library with the .so extension.

The basic difference between these two kinds of libraries lies in their names. Static libraries, when linked in the application build phase, become part of the executable. It is like embedding a copy of the library functions (those invoked by the main program) within the text/code segment of the executable, thus increasing the size of the binary file. Shared libraries, in contrast, are linked dynamically at run-time without embedding their functions within the executable. On systems running dozens or hundreds of processes, code reuse at link-time solves only part of the bigger problem. With the memory management implementations of modern operating systems, it is also possible to share code at run-time: the code is loaded into physical memory only once and reused by multiple processes via virtual memory. Libraries of this kind are called shared libraries.

Furthermore, there are two ways of dynamically linking a shared library. First, during the compilation or build phase of the application, the shared library must be available and is tied to the executable; the executable is assured of the library’s availability and location at build time, but the library is not made part of the executable. The other way is to inform the executable that the library will be loaded and linked at execution time, i.e. at run-time, without any beforehand information about the library’s availability or location.

In brief, static libraries are link-edited and then loaded to run, whereas shared libraries are link-edited, loaded, and run-time linked to run. At execution, before main( ) is called, the runtime loader brings the shared objects into the process address space. It doesn’t resolve external function calls until a call is actually made, so there is no penalty for linking against a library you may never call. Dynamic linking is the more modern approach and has the advantage of much smaller executable size. Multiple executables, when linked dynamically to a shared library, can share a single copy of that library at run-time.

Typically, the path of the libraries your program links against can be specified using the “-L” (hyphen followed by capital letter L) option, followed by the names of the libraries given with the “-l” (hyphen followed by small letter l) option, as depicted in the following example. Note that if the -L option is not given on the linker command line, the linker looks for the libraries in its default search paths; at run-time, the dynamic loader additionally searches the paths listed in the LD_LIBRARY_PATH environment variable.

This is not all there is to libraries and linking; this discussion should only be considered starter information. There are various other aspects to this topic, with a wide range of usage and significance, which are not required for this document and hence not covered here.

File Handling

With the help of the basic C file handling operations such as fopen, fread, fwrite, and fclose, programmers can build database applications. But have you ever wondered how these operations are supported and what backbone structure is defined to provide the file handling pointers? Whenever we start writing file handling code, the very first line we type is:

C++
FILE *fPointer;

This means fPointer is a pointer variable of type FILE, which is in fact a typedef for an implementation-defined structure (named _iobuf in some implementations), declared in the stdio.h header file. Whenever fopen is invoked successfully, this structure is filled with the data required to service subsequent file operations. Remember, fopen is a C library function which internally executes the system’s true open call. Similarly, the other file handling operations are serviced at the system level. Each process dealing with file handling operations has its own file descriptor table, and whenever a new file is created or an existing file is opened, an entry with that file’s information is added to this table. Correspondingly, whenever a file is closed, the file descriptor table is updated by removing that particular entry. Typically, as soon as a new process is launched and its file descriptor table is created, the system generates three entries in this table automatically, as below:

  1. Standard Input (STDIN_FILENO), which is represented with a value of 0 and accepts data from keyboard.
  2. Standard Output (STDOUT_FILENO), represented by a value of 1 and sends data to screen.
  3. And lastly, Standard Error (STDERR_FILENO) reflects a value of 2 and sends data to screen which can be redirected to log file too.

If the current process invokes the fork( ) system call, thereby creating a child process, a copy of this table is passed to the new process. At a high level, the kernel manages per-process open file information using three different tables: the process table, the file table, and the v-node/i-node table.
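A minimal round-trip through the fopen/fclose interface described above (the function name and path are invented for this sketch): write one record to a file, then read it back and verify it.

```c
#include <stdio.h>
#include <string.h>

/* Write a record, close (flushing and dropping the fd table entry),
 * then reopen and read it back. Returns 0 on success, -1 on failure. */
int roundtrip(const char *path)
{
    FILE *fPointer = fopen(path, "w");
    if (fPointer == NULL)
        return -1;
    fputs("record-1\n", fPointer);
    fclose(fPointer);

    char line[32] = {0};
    fPointer = fopen(path, "r");
    if (fPointer == NULL)
        return -1;
    if (fgets(line, sizeof line, fPointer) == NULL) {
        fclose(fPointer);
        return -1;
    }
    fclose(fPointer);
    return strcmp(line, "record-1\n") == 0 ? 0 : -1;
}
```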

Strings

Strings in C are arrays of characters ending with a null character (‘\0’). In the case of string constants, the compiler automatically appends the null character at the end. There are various library functions provided in C for string handling, such as strcpy, strcat, strstr, strtok, etc., all of which assume that strings are terminated by a null character. We can use both arrays and pointers to work with strings in C. One peculiar issue when filling a character array element by element is that it doesn’t automatically end with the null terminator (‘\0’), which is why most of the string library functions then fail, leading to buffer overflows or even segmentation faults. This is because C does not perform automatic boundary checks on the contents of variables. I regard all these string functions as blind workers in memory which, if not directed consciously, will introduce severe errors and may even force the system to terminate the process at runtime.
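A minimal sketch of the missing-terminator pitfall (the function name is invented): the terminator must be stored explicitly before strlen and friends are safe to call.

```c
#include <string.h>

/* Filling a char array element by element adds no '\0' automatically. */
size_t safe_length(void)
{
    char text[3];
    text[0] = 'H';
    text[1] = 'i';
    /* Without the next line, strlen(text) would read past the array:
     * undefined behavior, and the origin of many buffer overflows. */
    text[2] = '\0';
    return strlen(text);    /* well-defined: 2 */
}
```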

There is also another family of functions that can be used to work on strings in C programs: memset, memcpy, memchr, memcmp, etc. One might be interested in the differences between these two sets of functions, as both serve similar purposes. As an instance, let’s compare memcpy with strcpy. The library function strcpy copies data from a source location to a destination until it detects a null terminator in the source string, and it doesn’t worry about the number of characters it is copying, even if there is no space left at the destination. In contrast, memcpy copies exactly the number of bytes given as its third parameter, irrespective of null terminators: even if it encounters a null character while copying, it doesn’t stop and continues until the given number of bytes has been copied. With memset, one can easily flush or initialize a prescribed number of memory blocks with any character, as required. Generally, we use memset to initialize a recently allocated set of memory blocks with the null character, so as to clean out the old junk data that the memory was holding previously. I prefer this second set of functions, as it gives a precise level of transparency when working with memory blocks while operating on strings in C programs.
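The strcpy/memcpy difference can be demonstrated with a buffer containing an embedded null character (the function name is invented for this sketch):

```c
#include <string.h>

/* strcpy stops at the first '\0'; memcpy honors only the byte count. */
int compare_copies(void)
{
    char src[6] = {'a', 'b', '\0', 'c', 'd', '\0'};
    char via_strcpy[6] = {0};
    char via_memcpy[6] = {0};

    strcpy(via_strcpy, src);        /* copies "ab" plus its terminator */
    memcpy(via_memcpy, src, 6);     /* copies all 6 bytes regardless */

    /* Byte 3 stays zero after strcpy, but holds 'c' after memcpy. */
    return (via_strcpy[3] == 0) && (via_memcpy[3] == 'c');
}
```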

Preprocessors

The name itself suggests that this is a pre-processing step before the program enters the compilation phase. Instructions that start with the # character are recognized and attended to by this program before it submits the code to the compiler. So why is this step needed, instead of letting the compiler do the job on its own?

One of the first reasons is code maintainability (which cannot be achieved as easily at compilation time): if a certain value needs to be used consistently throughout the code, a #define macro does this work efficiently, facilitating changes in only one place instead of replicating them wherever the macro is expanded in the code.

The second and major use of the preprocessor is to bring portability to your C program. The conditional directives add platform-specific flexibility: one can write sets of instructions that are specific to a platform and need to be attended to only when compilation is driven on that particular platform. You may think of the preprocessor as a filtering tool which, based upon the platform, decides which lines of code are to be attended to and which are to be skipped before the program reaches the compiler.
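A minimal sketch of such platform filtering (the macro and function names are invented; _WIN32 and __linux__ are the conventional predefined macros): only one branch survives preprocessing, and the compiler never sees the others.

```c
/* Select a platform label before compilation proper begins. */
#if defined(_WIN32)
#define PLATFORM "windows"
#elif defined(__linux__)
#define PLATFORM "linux"
#else
#define PLATFORM "other"
#endif

const char *platform_name(void)
{
    return PLATFORM;    /* expands to exactly one of the strings above */
}
```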

Another key function of the preprocessor is file inclusion, whereby program declarations can be isolated from the current code file and pulled in with a single #include directive. A second way of utilizing file inclusion is to include a standard or user-defined header file, which declares a set of C functions, in the existing code file, thus achieving code reusability. One typical example is #include <stdio.h>, which, when included in our source file, allows the programmer to use the standard I/O stream functions.

The #pragma directive is an extension point in Standard C which enables an implementation to add new preprocessor functionality or accept implementation-defined information. #error is another directive: it produces a compile-time error message that includes its argument tokens, which are subject to macro expansion, and immediately terminates the compilation.

Operators and Order of Evaluation

There are many operators that C has provided to carry out arithmetic, logical, relational, and many more types of operations. These operators have been given certain priority levels so as to ensure unambiguous evaluation of these operators when two or more operators are used in a single instruction.

We know that there is a difference between assignment (=) and equal to (==) comparison operators; however one can inadvertently write an assignment operation instead of comparison, such as –

C++
if(a = b)
  executeMe();
else
  endProgram();

Here, instead of comparing a’s value with b, a is assigned the value of b, and that value is compared against 0. So if the value assigned to a is non-zero, the condition passes and the executeMe( ) function is executed, which the programmer never intended. In similar fashion, some of the logical and bitwise operators can also be confused inadvertently, i.e. & versus && and | versus ||.
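For contrast, a minimal sketch of the intended comparison (the function name is invented, echoing the snippet above); many compilers warn about `if (a = b)` when invoked with -Wall:

```c
/* With == the branch depends on the values, not on an accidental
 * assignment; = here would assign b to a and test the result. */
int branch_taken(int a, int b)
{
    if (a == b)
        return 1;   /* the executeMe() path */
    return 0;       /* the endProgram() path */
}
```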

Along with the precedence of operators, one needs to be aware of the associativity factor, because the compiler is programmed to group certain sets of operators in a particular direction. Since parentheses have the highest precedence of all operators, the compiler internally divides an expression containing multiple operators into sub-expressions, as if parenthesized according to precedence and associativity, and then evaluates those sub-expressions independently of one another. Only after evaluating the individual sub-expressions, each yielding an intermediate value, does it carry out the final evaluation on those values. One more factor the compiler keeps watching for, while evaluating the individual sub-expressions as well as at the last stage, is type conversion, i.e. promotion or truncation whenever the two operands of an operator have different data types.

Let’s say, we have a simple expression as below –

C++
int a=10, c; 
float b=2.3;

c = a * 10 / b - 2 * a + 3;

So, how does the compiler act on such an expression? As I said, it uses a divide and conquer rule, i.e. the expression is divided into smaller sub-expressions as below:

C++
c = ((((a * 10) / b) - (2 * a)) + 3);

As shown here, all operators in this expression happen to have left-to-right associativity, so the parentheses are placed accordingly. The multiplication and division operators share the highest precedence here and are grouped left to right: first (a * 10), then its division by b, then the second multiplication (2 * a), and only then the subtraction and the addition. As mentioned earlier, the compiler also keeps track of type conversions. Variable ‘b’ in this expression is a float, so in the division, the int result of (a * 10) is promoted to float before the division takes place, yielding a float value of about 43.48. Meanwhile, the expression (2 * a) results in 20. Now that the left operand of the subtraction is a float, the value 20 is promoted to float too, giving 23.48. The final addition likewise promotes the value 3 to float, resulting in 26.48. The destination variable ‘c’, in contrast, can store only an integer value, so the compiler converts (truncates) the right-hand side, storing the value 26 in ‘c’.
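The worked example above can be wrapped in a small function (the name is invented) so the claimed result can be checked directly:

```c
/* Reproduces the article's expression: 100/2.3 - 20 + 3 = 26.478...,
 * truncated to 26 on assignment to the int variable c. */
int evaluate_expression(void)
{
    int a = 10, c;
    float b = 2.3f;

    c = a * 10 / b - 2 * a + 3;
    return c;
}
```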

By now, I hope you have cleared your head of the rough assumptions and clouded concepts of operator evaluation you may have been following, and grasped the compiler’s precise behaviour and basic rules for resolving operator expressions.

Processes

How are processes created? And what is the difference between a process and a program? A binary file, i.e. an executable, is treated as a program until it enters the execution phase. A process, on the other hand, is a logical term for “a program under execution”. So how does the operating system deal with processes? When we execute a binary file, we instruct the system to pick up the binary from the specified location on disk, map the corresponding segments of the binary to segments in primary memory, look up the initial entry point in the text segment, and start executing the instructions in that segment while utilizing the other segments as and when needed. As mentioned in the memory management section, these segments constitute a process address space, and as soon as the instructions in the text segment are executing, the program has turned into a live process attended to by the operating system.

What if the programmer wants to create a process, instead of the system doing this job on his behalf? Here comes the most common system call, fork( ), which clones the existing process and gives birth to a child process. Using this system call, the programmer can assign any job to the child process while asking the parent process (the process which created the child) to continue executing another set of instructions. The newly created process gets an exact copy of the parent’s process address space. Every process has a unique identifier, called the process id, and a process control block, which corresponds to an entry in the kernel’s process table. What if one wants the existing process to execute a new program on disk without creating a new process? This is achieved using the exec( ) family of system calls, whose different flavours allow programmers to replace the existing program’s execution with a new one.
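A POSIX sketch combining the two calls (the function name is invented, and /bin/true is assumed to exist, as it does on typical Linux systems): the child replaces its image with a new program via execl, and the parent collects the exit status.

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* fork a child that execs /bin/true; return the child's exit status,
 * or -1 on any failure. */
int spawn_true(void)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;                          /* fork failed */

    if (pid == 0) {                         /* child: replace image */
        execl("/bin/true", "true", (char *)NULL);
        _exit(127);                         /* reached only if execl failed */
    }

    int status = 0;                         /* parent: wait for the child */
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```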

So, is there a process which runs the operating system itself? Yes: whenever the system boots, init is the first process to run (at least in Unix/Linux environments), on top of which all other system and application processes are spawned, executed, monitored, terminated, interrupted, and killed. In a parent-child chain of processes, if a parent dies before its child, init becomes the direct ancestor of that orphaned child process.

Unless you know about processes and their related facets, understanding the functional life of your C program, and the system’s treatment of it, will be very difficult when you try to chase a system-level bug.

References

[1] The C Programming Language, Second Edition (by Brian W. Kernighan and Dennis M. Ritchie)

[2] Expert C Programming, Deep C Secrets (By Peter Van Der Linden)

[3] C A Reference Manual, Fifth Edition (By Samuel P. Harbison III and Guy L. Steele Jr.)

[4] C Traps and Pitfalls, a white paper by Andrew Koenig.

 

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)