
A Graphical Documentation System for C/C++ projects

A concept-tool to create interactive documentations for C/C++ projects

Image 1


Abstract

This article presents a new approach to creating software documentation and describes the structure, functionality and internals of the gds application, which was created to implement the concepts exposed here. Please note that the application itself is not intended to be a commercial product or a company tool (although it may serve as such), but rather a conceptual implementation of the ideas that follow.

The role of software documentation

Software documentation is usually made up of one or more textual and graphical documents that explain how the software is structured and how it works. There are many types of software documentation and even more methodologies for creating it (e.g. the Unified Process, Tropos - agent oriented, etc.), and most of them are standardized collections of steps, refinements and frameworks. Without diving into any specific documentation system, it's obvious that each one has pros and cons depending on the task it's used for. Following a software documentation methodology means planning a number of initial steps and then proceeding to design the application's components, which will eventually be translated into actual code. Although modeling languages, graphs and diagrams can also be used to describe the resulting structure of a previously written code project, the standard modeling techniques recommend a series of steps and practices to be followed whenever a new software project is started. In particular, modeling a large piece of software from the ground up is a complex task that requires a deep knowledge of the underlying architecture, the API and the features available.

Key factors in understanding code

Having a perfect knowledge of a language doesn't mean you can always easily grasp the meaning or concepts expressed within a text (or a generic graphical representation) that uses that language - and this isn't true just for programming languages - so it is perfectly normal to spend time on a source code file to understand what that code actually does and to "mentally link" those operations within the complex view of a greater application. Describing a software's behavior and structure should help other people who are supposed to work with your code to understand how you structured it (how many modules, whether you followed a pattern, etc.) and what the purpose of a specific part of your project is. Extensively commenting a piece of code is good programming practice, although sometimes it may not be enough to completely replace a proper documentation. If you really want other people to understand your code, your primary goal is to give a complete insight into what your application does and how it works. A software company that hires a new programmer and puts him to work on a specific part of a greater application is interested in providing him with as much information as possible on how that module works, what it is supposed to do and (possibly) why it was designed that way. The sooner the programmer grasps the code and gets familiar with it, the sooner he will be fully operative on that code and capable of modifying or extending it.

The ability to explain a concept clearly and as effectively as possible is a personal skill and varies from person to person. However, there are practices and techniques which can greatly simplify the understanding of a concept. First of all: the level of complexity. If a module is very complex (i.e. it is made up of many other functions/modules or performs a great variety of tightly interconnected operations) it might be difficult to describe in a formal documentation. In general, every complex element can be split into a number of smaller parts. Let's take for instance a simple program which asks for a matrix and returns the square of each of the matrix's elements

C++
#include <iostream>
#include <cstring> // for memcpy
using namespace std;
class myMatrix
{
public:
    int rows, columns;
    double *values;
    myMatrix(int height, int width)
    {
        if (height == 0 || width == 0)
            throw "Matrix constructor has 0 size";
        rows = height;
        columns = width;
        values = new double[rows*columns];
    }
    ~myMatrix()
    {
        delete [] values;
    }
    // Allows matrix1 = matrix2
    myMatrix& operator= (myMatrix const& m)
    {
        if(m.rows != this->rows || m.columns != this->columns)
            throw "Size mismatch";
        // Copy the source matrix's values into this matrix
        memcpy(this->values, m.values, this->rows*this->columns*sizeof(double));
        return *this;
    }
    
    // Allows both matrix(3,3) = value and value = matrix(3,3)
    double& operator() (int row, int column)
    {
        if (row < 0 || column < 0 || row >= this->rows || column >= this->columns)
            throw "Index out of range";
        return values[row*columns+column];
    }
};
int main(int argc, char* argv[])
{
    int dimX, dimY;
    cout << "Enter the matrix's X dimension: ";
    cin >> dimX;
    cout << "Enter the matrix's Y dimension: ";
    cin >> dimY;
    myMatrix m_newMatrix(dimY,dimX);
    // Insert all the matrix data
    for(int j=0; j<dimY; j++)
    {
        for(int i=0; i<dimX; i++)
        {
            cout << "Enter the (" << i << ";" << j << ")th element: " << endl;
            cin >> m_newMatrix(j,i);
        }
    }
    // Calculate the square of each number
    for(int j=0; j<dimY; j++)
    {
        for(int i=0; i<dimX; i++)
        {
            m_newMatrix(j,i) = m_newMatrix(j,i)*m_newMatrix(j,i);
        }
    }
    // Print out the new matrix
    cout << endl;
    for(int j=0; j<dimY; j++)
    {
        for(int i=0; i<dimX; i++)
        {
            cout << m_newMatrix(j,i) << "\t";
        }
        cout << endl;
    }
    return 0;
}

There are various levels of detail at which this simple program's tasks and modus operandi could be explained. Just like higher level languages abstract more than lower level ones, a first, high level may explain the program's purpose in a simple and concise way. A second level may expand on the first one and give insights into how the program is structured. An additional third level may continue the second level's work and further expand each description. This process can continue until the application's code is reached, which acts as a final level complete with every piece of information needed. The code provides the greatest level of detail but is more complex than the other levels.

The above program might have a level structure like the following

Image 2

In this case three levels (plus the code level) were chosen to represent the same information at higher and lower (richer) levels of detail, but the number of levels could have been greater. The concept of "greater detail" is a fundamental one in almost every software documentation and in every software design process.

Another concept that needs to be taken into account when getting started with new software code is the context a chunk of code is inserted into. Most of the time spent searching for the specific part of the code where the program performs certain operations is needed by the programmer to build a "mental map" in which each block is categorized and its role in the overall architecture is well-defined.

Finally, the execution order of the program's blocks isn't always obvious, especially when dealing with heavily multi-threaded code. Sometimes only a careful reading leads to an understanding of the synchronization mechanisms of the threads involved.

A new approach to software documentation: interactivity

Using a graphical and interactive approach to software documentation is a relatively new concept. Since a concrete example is worth a thousand words, in this section a small Qt C++ program will be presented along with its associated interactive documentation. The entire package (program sources + documentation directory) can be downloaded via the link at the top of this page.

The program we are going to examine with the help of an interactive documentation is a simple one: a basic linear function drawer in a restricted Cartesian graph area

Image 3

Since this is a sample (and simple) application, just a basic drawing feature has been implemented, with code that definitely isn't brilliant in terms of error handling and modularity. Although its code is not hard to understand by reading it in full, if the application had been a more complex one a programmer would have spent a considerable amount of time trying to understand the structure, all the data types and their roles, the execution flow (as already said, multithreading can hinder this process) and the overall cooperation between the various modules.

The following video is a showcase of how the gds software produces an interactive documentation for the simple graph application

Image 4 Youtube Demo Video of the GDS Application for the sample Cartesian Graph App

A closer look at the gds concept application

GDS stands for "Graphical Documentation System" and it's an experimental concept application designed to provide an interactive and extremely intuitive overview of an unfamiliar software codebase. If used properly, gds allows a programmer to create a highly detailed documentation of his code for others to use and understand.

The concepts presented a few sections above guided the design and implementation of the gds app. In this section the application's usage is briefly presented; afterwards its structure and code organization will be described. Note that gds uses OpenGL rendering and requires OpenGL 3.3 or higher to run properly. It also needs the Microsoft Visual C++ 2010 Redistributable x86 package installed (you can download it for free from here).
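Since the widget depends on an OpenGL 3.3 context, the sketch below shows one way such a requirement could be verified at runtime (for example from inside initializeGL(), once a context is current); this is an illustrative check, not the one gds itself performs:

#include <QGLWidget>
#include <QMessageBox>
#include <cstdio>

// Illustrative check: parse the "major.minor ..." version string reported by the driver
void checkOpenGLVersion()
{
    const char *version = reinterpret_cast<const char*>(glGetString(GL_VERSION));
    int major = 0, minor = 0;
    if(!version || sscanf(version, "%d.%d", &major, &minor) != 2
        || major < 3 || (major == 3 && minor < 3))
    {
        QMessageBox::critical(NULL, "gds",
            "OpenGL 3.3 or higher is required to run this application.");
    }
}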

The application has two main operative modes:

  • View Mode - this mode provides a virtual tour of the three-level documentation and is recommended for first-time code users
  • Edit Mode - this mode allows the user to create a new documentation (if the documentation directory where all the database files are stored isn't present) or to edit an existing one

The user is prompted for a choice when the application starts

Image 5

The view mode is quite intuitive: there's a code pane (not visible in level one, which everyone should be able to understand), a central diagram pane and a right documentation pane. There's also a navigation pane that allows three actions

  • Zoom out to the previous level (level 1 is the furthest a user can zoom out)
  • Zoom in on the selected node (level 3 is the furthest a user can zoom in)
  • Select the next block - this is useful for navigating the code and getting a precise picture of the order in which things happen in the code logic. The block order can be set in edit mode (we'll see how shortly).

The following screens show the gds app in view mode at levels 1, 2 and 3 respectively

Image 6

Image 7

Image 8

The edit mode allows a programmer to create a new documentation or modify an existing one. When there's no documentation (i.e. there's no gdsdata directory in the application's path) no graph is available and gds tries to recreate it. This could mean that the documentation has been moved elsewhere (and gds can't find it) or that no documentation has been written yet.

The following is the gds app in edit mode with no documentation found

Image 9

With the "Add Child Block" button nodes can be added (or root nodes if there is no graph) to the documentation along with a label (a block name), an index and the documentation. The index field is used by the view mode to navigate through blocks in the right order. If this index is set duplicate or wrong, the view mode will simply navigate through the wrong order. In each level it is possible to delete an element (whatever it is: root/child/parent) or swap its content with another one.

Pressing the "Next Level" button while having one node selected will cause the gds to create a sub-level for that node: that means the block needs additional details on how it works. The gds automatically saves modified nodes when navigating through levels or closing the application.

The code pane on the left is visible only in levels 2 and 3 and lets a user select a code file and highlight lines in it

Image 10

A level 2 block may not have a code file associated with it, hence the "Clear" button. Notice that gds is meant to reside in a fixed location inside your code project's root directory: every path to a code file is stored relative to the gds directory. There are simple correction algorithms to retrieve the right line of code if it has been moved; however, gds should be used to document files that are meant to be released and "ready". Obviously a documentation file might also be deleted, in which case the associated node would show the "Documentation file not found" error and allow the user to define a new documentation file. As already stated, this is an experimental concept application meant to convey a new documentation method; a commercial tool embracing this philosophy should integrate these functionalities into a proper version control system (which would also keep track of changed and moved files).

Since gds has been conceived as an easy-to-use application, there's nothing else you need to know to use it as a normal user.

The following sections describe the programming logic behind gds in greater detail, so they are mainly targeted at a programming audience or at anyone interested in modifying gds (gds is open source).

An OpenGL graphic widget

The following section requires some basic OpenGL programming knowledge - reader be advised.

The QGLDiagramWidget class is the main widget of the entire gds application: it's the central pane which displays the 3D graph and allows the tree diagram to be rendered. Since the application uses the Qt libraries, the widget is a subclass of the QGLWidget class, which provides the facilities to draw OpenGL graphics through three main virtual functions that can be reimplemented (a minimal subclass sketch follows the list):

  • paintGL() - this is the function where the openGL scene is rendered and where most of the widget's code resides
  • resizeGL() - called whenever the widget is resized
  • initializeGL() - sets up an openGL rendering context, called once before paintGL or resizeGL
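For readers unfamiliar with QGLWidget, the following is a minimal sketch of such a subclass (not the actual QGLDiagramWidget code, which has a far richer paintGL()/paintEvent() implementation) just to show where each virtual function fits:

#include <QGLWidget>

class MinimalGLWidget : public QGLWidget
{
public:
    explicit MinimalGLWidget(QWidget *parent = 0) : QGLWidget(parent) {}

protected:
    // Called once, before the first paintGL() or resizeGL()
    void initializeGL()
    {
        glClearColor(0.2f, 0.0f, 0.6f, 1.0f); // the same dark blue background used by gds
        glEnable(GL_DEPTH_TEST);
    }
    // Called whenever the widget is resized
    void resizeGL(int width, int height)
    {
        glViewport(0, 0, width, height);
    }
    // Called whenever the scene needs to be redrawn
    void paintGL()
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... render the diagram here ...
    }
};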

The diagram widget also uses overpainting (see the Qt documentation for more information), which basically means that the block name is painted over the OpenGL-rendered scene. The code that triggers a repaint and then performs the overpainting is the following

 // Paint event, it's called every time the widget needs to be redrawn and
// allows overpainting over the GL rendered scene
void QGLDiagramWidget::paintEvent(QPaintEvent *event)
{
    makeCurrent();
    // QPainter initialization changes a lot of stuff in the openGL context
    glPushAttrib(GL_ALL_ATTRIB_BITS);
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    // Calls base class which calls initializeGL ONCE and then paintGL each time it's needed
    QGLWidget::paintEvent(event);
    // Don't paint anything if the data isn't ready yet
    if(dataDisplacementComplete)
    {
        // Time for overpainting
        QPainter painter( this );
        painter.setPen(QPen(Qt::white));
        painter.setFont(QFont("Arial", 10, QFont::Bold));
        // Draw the text on the rendered screen
        // -----> x
        // |
        // |
        // | y
        // v
        // Use the bounding box features to create a perfect bounding rectangle to include all the necessary text
        QRectF rect(QPointF(10,this->height()-25),QPointF(this->width()-10,this->height()));
        QRectF neededRect = painter.boundingRect(rect, Qt::TextWordWrap, "Current Block: " + m_selectedItem->m_label);
        if(neededRect.bottom() > this->height())
        {
            qreal neededSpace = qAbs(neededRect.bottom() - this->height());
            neededRect.setTop(neededRect.top()-neededSpace-10);
        }
        painter.drawText(neededRect, Qt::TextWordWrap , "Current Block: " + m_selectedItem->m_label);
        painter.end();
    }
    // Actually draw the scene, double rendering
    swapBuffers();
    // Restore previous values
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glPopAttrib();
    if(!m_readyToDraw) // If there's still someone waiting to send data to us, awake him
    {
        m_readyToDraw = true;
        qWarning() << "GLWidget ready to paint data";
        if(m_associatedWindowRepaintScheduled)
        {
            qWarning() << "m_associatedWindowRepaintScheduled is set";
            if(m_gdsEditMode)
            {
                qWarning() << "Calling the deferred painting method now..";
                ((MainWindowEditMode*)m_referringWindow)->deferredPaintNow();
            }
            else
            {
                qWarning() << "Calling the deferred painting method now..";
                ((MainWindowViewMode*)m_referringWindow)->deferredPaintNow();
            }
        }
    }
} 

The code is extensively commented, but a few words are worth spending on it since they may give useful insight into what's going on.

The QGLDiagramWidget uses double buffering, which means the scene rendered to the OpenGL context isn't shown until a swapBuffers() call is made. This prevents flickering between color picking passes (we'll explain this shortly) and animated transitions.
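Qt's GL widgets are double buffered by default, and the overpainting technique usually also disables the automatic buffer swap so that the widget decides when the back buffer is shown (the paintEvent() above calls swapBuffers() itself). A hypothetical sketch of that construction-time setup follows - the actual QGLDiagramWidget constructor may differ:

// Hypothetical constructor body for a double-buffered, manually swapped GL widget
QGLDiagramWidget::QGLDiagramWidget(QWidget *parent)
    : QGLWidget(QGLFormat(QGL::DoubleBuffer | QGL::DepthBuffer), parent)
{
    // paintEvent() calls swapBuffers() after the overpainting is done,
    // so the automatic swap normally performed after paintGL() is disabled
    setAutoBufferSwap(false);
}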

The initializeGL() function takes care of initializing all the resources needed by the OpenGL scene, i.e. VBOs (Vertex Buffer Objects: buffers that store the data of the elements to be drawn, such as vertices, UV texture coordinates, normals and indices), textures and shaders.

// Load, compile and link two shader programs ready to be bound, one with the normal
// gradient, the other with the selected gradient
loadShadersFromResources("VertexShader1.vert", "FragmentShader1.frag", &ShaderProgramNormal);
loadShadersFromResources("VertexShader2Picking.vert", "FragmentShader2Picking.frag", &ShaderProgramPicking);
// Clear-up VBOs (if VBO don't exist, this simply ignores them)
freeBlockBuffers();
// Initialize VBOs for rounded blocks
initBlockBuffers();
// Clear-up texture buffers (if they don't exist, this simply ignores them)
freeBlockTextures();
// Initialize texture buffers for rounded blocks
initBlockTextures();

The GL widget uses a simple 3D model whose vertices, UV texture coordinates, normals and indices are stored in the "roundedRectangle.h" file. The widget renders it and applies a Phong lighting model through the compiled shader programs (there are two pairs of vertex and fragment shaders: the first pair is used to draw elements normally, the second pair is used to render the color picking scene and to perform simple operations such as drawing the connectors).

The vertex shader used to normally render objects is the following

 #version 330 core
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 aVertexPosition;
layout(location = 1) in vec2 aTextureCoord;
layout(location = 2) in vec3 aVertexNormal;
// Values that stay constant for the whole mesh.
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
uniform mat3 uNMatrix;
out vec2 vTextureCoord;
out vec3 vTransformedNormal;
out vec4 vPosition;
void main()
{    
    // Pass along the position of the vertex (used to calculate point-to-vertex light direction),
    // no perspective here since we need absolute position (we used absolute position for the light point too)
    vPosition = uMVMatrix * vec4(aVertexPosition, 1.0);
    // Set the complete (Perspective*model*view) position of the vertex
    gl_Position =  uPMatrix * vPosition;
    
    // Save the uv attributes
    vTextureCoord = aTextureCoord;
    
    // Pass along the modified normal matrix * vertex normal (the uNMatrix is
    // necessary otherwise normals would point in a wrong direction and
    // they would not be modulo 1 vectors), this matrix ensures direction and
    // modulo 1 preservation while converting their coords to absolute coordinates
    vTransformedNormal = uNMatrix * aVertexNormal;
} 

There are attributes used to receive the model's vertices, UV texture coordinates and normal versors, and uniforms to receive the perspective matrix, the modelview matrix (composed of the model matrix - set to the element's position - and the view matrix - set by default to look at the root element but changeable with the keyboard's directional arrows) and a normal matrix needed to preserve the direction and unit length of the normal vectors (if you're interested in why a special normal matrix needs to be passed to the shader to adjust lighting, take a look at Eric Lengyel's "Mathematics for 3D Game Programming and Computer Graphics"). The vertex shader basically just calculates the new vertex position and passes it along with the UV coordinates and the transformed normal to the fragment shader.
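On the C++ side, Qt can derive this normal matrix directly from the modelview matrix: QMatrix4x4::normalMatrix() returns the transposed inverse of the top-left 3x3 block. The following is a sketch of how the uNMatrix uniform could be filled before drawing (it reuses the variable names of the snippets above; the actual gds code may differ slightly):

// Sketch: upload the normal matrix derived from the modelview matrix
QMatrix4x4 gl_modelView = gl_view * gl_model;
QMatrix3x3 normalMatrix = gl_modelView.normalMatrix();

// Convert from qreal (double on non-ARM architectures) to float for OpenGL
float nm[9];
for(int i=0; i<9; i++)
    nm[i] = (float)normalMatrix.data()[i];

GLuint uNMatrix = glGetUniformLocation(currentShaderProgram->programId(), "uNMatrix");
glUniformMatrix3fv(uNMatrix, 1, GL_FALSE, &nm[0]);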

The fragment shader takes care of calculating the light direction (these shaders use per-fragment lighting coming from a point light) and the light weighting vector that will be used to weight the light's color components. Finally it samples the texture (using the UV texture coordinates) and applies the light weighting.

#version 330 core
// u,v values
in vec2 vTextureCoord;
in vec3 vTransformedNormal;
in vec4 vPosition;
// Values that stay constant for the whole mesh.
uniform sampler2D myTextureSampler;
uniform vec3 uAmbientColor;
uniform vec3 uPointLightingLocation;
uniform vec3 uPointLightingColor;
void main()
{
    vec3 lightWeighting;
    // Get the light direction vector
    vec3 lightDirection = normalize(uPointLightingLocation - vPosition.xyz);
    // Simple dot product between normal and light direction (cannot be lower than zero - no light)
    float directionalLightWeighting = max(dot(normalize(vTransformedNormal), lightDirection), 0.0);
    // Use the phong model
    lightWeighting = uAmbientColor + uPointLightingColor * directionalLightWeighting;
    
    // Weight the texture color with the light weight
    vec4 fragmentColor;
    fragmentColor = texture2D(myTextureSampler, vec2(vTextureCoord.s, vTextureCoord.t));
    
    gl_FragColor = vec4(fragmentColor.rgb * lightWeighting, fragmentColor.a);
} 

The paintGL() function is where most of the graphics work is done. After initializing the viewport and several other default values (e.g. glClearColor) the function can switch between two modes:

  • A color picking one
  • The normal rendering one

Color picking is a graphics technique often used with OpenGL to identify the object clicked in a scene. It is a more recent technique than SELECT picking and integrates perfectly with programmable pipelines (SELECT picking, on the other hand, relies on the fixed pipeline).

Basically each object is stored as a "dataToDraw" object and is granted a unique color

 // Initialize static variable to the first color available
unsigned char dataToDraw::gColorID[3] = {0, 0, 0};
// Set a static dark blue background (51;0;123)
float QGLDiagramWidget::m_backgroundColor[3] = {0.2f, 0.0f, 0.6f};
// Constructor to initialize the unique color
dataToDraw::dataToDraw()
{
    m_colorID[0] = gColorID[0];
    m_colorID[1] = gColorID[1];
    m_colorID[2] = gColorID[2];
    gColorID[0]++;
    if(gColorID[0] == 0) // unsigned char wrapped around past 255
    {
        gColorID[1]++;
        if(gColorID[1] == 0) // wrapped around again
        {
            gColorID[2]++;
        }
    }
    // Background color is reserved, so don't assign it
    if(gColorID[0] == (QGLDiagramWidget::m_backgroundColor[0]*255.0f)
            && gColorID[1] == (QGLDiagramWidget::m_backgroundColor[1]*255.0f)
                               && gColorID[2] == (QGLDiagramWidget::m_backgroundColor[2]*255.0f))
    {
        // Next time we would have picked right this color, change it
        gColorID[0]++;
    }
} 

When the user clicks on an object the mouse position is recorded and, since raw OpenGL doesn't recognize objects as entities, the entire scene is rendered with each mesh drawn in its unique color. Afterwards the pixel where the mouse was clicked is read back from the framebuffer and its color is compared against each object's color to identify the object the user clicked on.

The code snippet that performs this work is the following

if(m_pickingRunning)
   {
       //**************************************************************************************************//
       //////////////////////////////////////////////////////////////////////////////////////////////////////
       //                      Picking running, load just what's needed to do it                           //
       //////////////////////////////////////////////////////////////////////////////////////////////////////
       //**************************************************************************************************//
       ////////////////////////////////////////////////////////////////////////////////////////////////////
       //  Phase 1: prepare all shaders uniforms, attribute arrays and data to draw rounded rectangles   //
       ////////////////////////////////////////////////////////////////////////////////////////////////////
       // Do the colorpicking first
       currentShaderProgram = ShaderProgramPicking;
       if(!currentShaderProgram->bind())
       {
           qWarning() << "Shader Program Binding Error" << currentShaderProgram->log();
       }
       // Picking mode, simple shaders
       uMVMatrix = glGetUniformLocation(currentShaderProgram->programId(), "uMVMatrix");
       uPMatrix = glGetUniformLocation(currentShaderProgram->programId(), "uPMatrix");
       // Send our transformation to the currently bound shader,
       // in the right uniform
       float gl_temp_data[16];
       for(int i=0; i<16; i++)
       {
           // Needed to convert from double (on non-ARM architectures qreal are double)
           // to float
           gl_temp_data[i]=(float)gl_projection.data()[i];
       }
       glUniformMatrix4fv(uPMatrix, 1, GL_FALSE, &gl_temp_data[0]);
       gl_modelView = gl_view * gl_model;
       // Bind the array buffer to the one with our data
       glBindBuffer(GL_ARRAY_BUFFER, blockVertexBuffer);
       // Give the vertices to the shader at attribute 0
       glEnableVertexAttribArray(0);
       // Fill the attribute array with our vertices data
       glVertexAttribPointer(
                   0,                                  // attribute. Must match the layout in the shader.
                   3,                                  // size
                   GL_FLOAT,                           // type
                   GL_FALSE,                           // normalized?
                   sizeof (struct vertex_struct),      // stride
                   (void*)0                            // array buffer offset
                   );
       // Bind the element array indices buffer
       glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, blockIndicesBuffer);
       // First draw all normal elements, so bind texture normal
       glDisable(GL_TEXTURE_2D);
       glDisable(GL_FOG);
       glDisable(GL_LIGHTING);

       /////////////////////////////////////////////////////////////////////////
       // Phase 2: iterate through all rounded blocks to draw and set for each one its transformation matrix
       /////////////////////////////////////////////////////////////////////////

       // **************
       // WARNING: DISPLACEMENTS ARE ASSUMED TO BE CALCULATED BY NOW, IF NOT THE RENDERING MIGHT CRASH
       // **************
       // Adjust the view by recalling how it was displaced (by the user with the mouse maybe) last time
       adjustView();
       // Draw the precalculated-displacement block elements
       postOrderDrawBlocks(m_diagramData, gl_view, gl_model, uMVMatrix, TextureID);
       // Save the view for the next passing
       gl_previousUserView = gl_view;

       // At this point the picking should be ready (all drawn into the backbuffer)
       // At the end of paintGL swapBuffers is automatically called
       // Picking still running, identify the object the mouse was pressed on
       // Get color information from frame buffer
       unsigned char pixel[3];
       glReadPixels(m_mouseClickPoint.x(), m_mouseClickPoint.y(), 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);
       // If the background was clicked, do nothing
       if(pixel[0] == (unsigned char)(m_backgroundColor[0]*255.0f)
               && pixel[1] == (unsigned char)(m_backgroundColor[1]*255.0f)
               && pixel[2] == (unsigned char)(m_backgroundColor[2]*255.0f))
       {
           m_goToSelectedRunning = false;
           qWarning() << "background selected..";
           m_pickingRunning = false; // Color picking is over
           this->setFocus(); // The event filter will take care of the keyboard hook
       }
       else
       {
           // Something was actually clicked!
           // Now our picked screen pixel color is stored in pixel[3]
           // so we search through our object list looking for the object that was selected
           QVector<dataToDraw*>::iterator itr = m_diagramDataVector.begin();
           while(itr != m_diagramDataVector.end())
           {
               if((*itr)->m_colorID[0] == pixel[0] && (*itr)->m_colorID[1] == pixel[1] && (*itr)->m_colorID[2] == pixel[2])
               {
                   // Flag object as selected
                   // qWarning() << "SELECTED ELEMENT: " << (*itr)->m_label;
                   // Save the selected element
                   m_selectedItem = (*itr);
                   this->setFocus();
                   m_pickingRunning = false; // Color picking is over
                   // Signal our referring class that the selection has changed

The selected element's pointer is then passed back to the associated window (view mode or edit mode) to signal that the user has clicked another element (or the same one) on the graph.

Drawing the tree isn't trivial if you have no experience with binary and n-ary trees. The algorithm that performs the tree displacement is implemented by the following functions

// This function is fundamental, calculates the displacement of each element of the tree
// to show a "nice" n-ary tree on the screen
void QGLDiagramWidget::calculateDisplacement()
{
    // Let's find the tree's maximum depth
    long depthMax = findMaximumTreeDepth(m_diagramData);
    // and set up the various depth levels
    m_depthIntervals.resize(depthMax+1);
    // Now traverse the tree and update displacements, add every found element to the m_diagramDataVector too for an easy access
    postOrderTraversal(m_diagramData);
    // Data is ready to be painted
    dataDisplacementComplete = true;
    if(!m_swapInProgress)
        repaint();
}
void QGLDiagramWidget::postOrderTraversal(dataToDraw *tree)
{
    // Traverse each children of this node
    for(int i=0; i<tree->m_nextItems.size(); i++)
    {
        postOrderTraversal(tree->m_nextItems[i]);
    }
    if(tree->m_nextItems.size() == 0)
    {
        // I found a leaf, displacement-update
        // Y are easy: take this leaf's depth and put it on its Y coord * Y_space_between_blocks
        tree->m_Ydisp = - (tree->m_depth * MINSPACE_BLOCKS_Y);
        // X are harder, I need to put myself at least min_space_between_blocks_X away
        tree->m_Xdisp = m_maximumXreached + MINSPACE_BLOCKS_X;
        // Let's add this leaf to the respective depth list, this will be useful for its parent node
        m_depthIntervals[tree->m_depth].m_allElementsForThisLevel.append(tree->m_Xdisp);
        // Update the maximumXreached with my value
        if(tree->m_Xdisp > m_maximumXreached)
            m_maximumXreached = tree->m_Xdisp;
    }
    else
    {
        // Parent node, displacement-update
        // Y are easy: take this leaf's depth and put it on its Y coord * Y_space_between_blocks
        tree->m_Ydisp = - (tree->m_depth * MINSPACE_BLOCKS_Y);
        // X are harder, I need to put the parent in the middle of all its children
        if(tree->m_nextItems.size() == 1)
        {
            // Just one child, no need for a middle calculation, let's just take the children's X coord
            tree->m_Xdisp = tree->m_nextItems[0]->m_Xdisp;
        }
        else
        {
            // Find minimum and maximum for all this node's children (in the X axis)
            qSort(m_depthIntervals[tree->m_depth+1].m_allElementsForThisLevel.begin(), m_depthIntervals[tree->m_depth+1].m_allElementsForThisLevel.end());
            long min = m_depthIntervals[tree->m_depth+1].m_allElementsForThisLevel[0];
            long max = m_depthIntervals[tree->m_depth+1].m_allElementsForThisLevel[m_depthIntervals[tree->m_depth+1].m_allElementsForThisLevel.size()-1];
            // Let's put this node in the exact middle
            tree->m_Xdisp = (max+min)/2;
        }
        // Let's add this node to the respective depth list, this will be useful for its parent node
        m_depthIntervals[tree->m_depth].m_allElementsForThisLevel.append(tree->m_Xdisp);
        // Update the maximumXreached with my value (if necessary)
        if(tree->m_Xdisp > m_maximumXreached)
            m_maximumXreached = tree->m_Xdisp;
        // Delete the sublevel under my depth, my children are done and they won't be bothering other nodes
        m_depthIntervals[tree->m_depth+1].m_allElementsForThisLevel.clear();
    }
    // However add this node to the m_diagramDataVector
    m_diagramDataVector.append(tree);
}
long QGLDiagramWidget::findMaximumTreeDepth(dataToDraw *tree)
{
    // Simply recurse in post-order inside the tree to find the maximum depth value
    if(tree->m_nextItems.size() == 0)
        return 0;
    else
    {
        int maximumSubTreeDepth = 0;
        for(int i=0; i<tree->m_nextItems.size(); i++)
        {
            long subTreeDepth = findMaximumTreeDepth(tree->m_nextItems[i]);
            if(subTreeDepth > maximumSubTreeDepth)
                maximumSubTreeDepth = subTreeDepth;
        }
        return maximumSubTreeDepth+1; // Plus this node
    }
} 

The steps are:

  1. Explore the memory-stored tree in post-order (children first -> parents afterwards)
  2. Use the depth information for the Y coordinate and the children's positions for the X coordinate

Image 11

3. The father is always centered between its children

The result is a proper tree displacement (a screenshot of the application at an early stage)

Image 12

There are other things that could be said about the GLWidget, but what has just been mentioned is more than enough to understand the code.

Edit and View mode QMainWindow(s)

Another big unit of the project is the edit mode window, mainly because of the number of controls and widgets it incorporates. The code is highly commented here too, so we'll just focus on the parts that are relevant to a complete comprehension. The view window code is rather similar, although there are a great number of small differences that would make a unified refactoring a living hell (that's why two separate classes have been created).

By default the edit mode window's constructor starts in level-one mode. Each level is identified by an enum value and each object (i.e. each block) has a dbDataStructure associated with it. The core structure declarations can be found in the "gdsdbreader.h" header

// This file contains core structures, classes and types for the entire gds app
// WARNING: DO NOT MODIFY UNLESS STRICTLY NECESSARY
#include <QDir>
#include "diagramwidget/qgldiagramwidget.h"
#define GDS_DIR "gdsdata"
enum level {LEVEL_ONE, LEVEL_TWO, LEVEL_THREE};
// The internal structure of the db to store information about each node (each level)
// this will be serialized before being written to file
class dbDataStructure
{
public:
    QString label;
    quint32 depth;
    quint32 userIndex;
    QByteArray data;    // This is COMPRESSED data, optimize ram and disk space, is decompressed
                        // just when needed (to display the comments)
    // The following ID is used to create second-third level files
    quint64 uniqueID;
    // All the next items linked to this one
    QVector<dbDataStructure*> nextItems;
    // Corresponding indices vector (used to store data)
    QVector<quint32> nextItemsIndices;
    // The father element (or NULL if it's root)
    dbDataStructure* father;
    // Corresponding indices vector (used to store data)
    quint32 fatherIndex;
    bool noFatherRoot; // Used to tell if this node is the root (so hasn't a father)
    // These fields will be useful for levels 2 and 3
    QString fileName; // Relative filename for the associated code file
    QByteArray firstLineData; // Compressed first line data, this will be used with the line number to retrieve info
    QVector<quint32> linesNumbers; // First and next lines (next are relative to the first) numbers
    // -- Generic system data not to be stored on disk
    void *glPointer; // GL pointer
    // These operator overrides prevent the glPointer and other non-disk-necessary data serialization
    friend QDataStream& operator<<(QDataStream& stream, const dbDataStructure& myclass)
    // Notice: this function has to be "friend" because it cannot be a member function, member functions
    // have an additional parameter "this" which isn't in the argument list of an operator overload. A friend
    // function has full access to private data of the class without having the "this" argument
    {
        // Don't write glPointer and every pointer-dependent structure
        return stream << myclass.label << myclass.depth << myclass.userIndex << qCompress(myclass.data)
                         << myclass.uniqueID << myclass.nextItemsIndices << myclass.fatherIndex << myclass.noFatherRoot
                            << myclass.fileName << qCompress(myclass.firstLineData) << myclass.linesNumbers;
    }
    friend QDataStream& operator>>(QDataStream& stream, dbDataStructure& myclass)
    {
        //Don't read it, either
        stream >> myclass.label >> myclass.depth >> myclass.userIndex >> myclass.data
                      >> myclass.uniqueID >> myclass.nextItemsIndices >> myclass.fatherIndex >> myclass.noFatherRoot
                         >> myclass.fileName >> myclass.firstLineData >> myclass.linesNumbers;
        myclass.data = qUncompress(myclass.data);
        myclass.firstLineData = qUncompress(myclass.firstLineData);
        return stream;
    }
}; 

The structure provides fields to store each object's data (label, user index, unique index used to build the documentation structure, compressed rich text data, etc.) along with data that is not meant to be stored on disk; that's why there are two stream operator overrides that take care of what should be written to disk and what should not.
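Thanks to these overloads, persisting a node boils down to opening a QDataStream on a file. The snippet below is a minimal sketch of how a single node could be saved and loaded (the real gds loading/saving code also rebuilds the tree links through the stored indices):

#include <QFile>
#include <QDataStream>
#include "gdsdbreader.h"

// Sketch: write one node to disk and read it back using the stream operators above
bool saveNode(const QString &path, const dbDataStructure &node)
{
    QFile file(path);
    if(!file.open(QIODevice::WriteOnly))
        return false;
    QDataStream out(&file);
    out << node;    // the friend operator<< compresses the data fields
    return true;
}

bool loadNode(const QString &path, dbDataStructure &node)
{
    QFile file(path);
    if(!file.open(QIODevice::ReadOnly))
        return false;
    QDataStream in(&file);
    in >> node;     // the friend operator>> decompresses the data fields
    return true;
}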

Three of the main functions of this unit are

void MainWindowEditMode::tryToLoadLevelDb(level lvl, bool returnToElement)
void MainWindowEditMode::saveCurrentLevelDb()
void MainWindowEditMode::saveEverythingOnThePanesToMemory() 

Their code is quite large but they perform roughly 70% of the work of the gds storage system.

The tryToLoadLevelDb() function takes care of loading the database files from the gds default directory (defined in "gdsdbreader.h") depending on the level we want to explore. The "returnToElement" parameter specifies whether the function should select the previously zoomed element when returning from a deeper level.

The saveCurrentLevelDb() and saveEverythingOnThePanesToMemory() functions respectively save all the item data to disk and to memory (by re-constructing an updated version of the dbDataStructure tree).

All the elements are stored in a dynamic QVector<dbDataStructure*> vector

// All elements for the current active graph (and relative GL pointers)
    QVector<dbDataStructure*> m_currentGraphElements;
    // The selected element index for the current active graph (this is updated by the openGL widget through a function)
    dbDataStructure* m_selectedElement;
    // These pointers help in finding/creating the next database file while browsing zoom levels
    quint64 m_currentLevelOneID;
    quint64 m_currentLevelTwoID; 

The vector stores just the pointers to the elements; the connections between them (parent->children) are stored in their dbDataStructure objects.

The two variables m_currentLevelOneID and m_currentLevelTwoID are used to keep track of the element currently zoomed into at the first and second level (the third level doesn't have an additional zoom property).

Both edit and view windows use Qt's ui templates (similar to Visual Studio's DLGTEMPLATEEX wysiwyg editor) managed by Qt Creator.

The rich text area on the right is a textEditorWin object, which in turn is a subclass of QMainWindow. This is necessary to add toolbars, actions and complex controls around the base widget - a plain QTextEdit rich text editor. The code is rather straightforward and, except for a number of small changes, resembles the rich text editor example of the Qt SDK, so we won't bother describing it further.
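The reason for deriving from QMainWindow instead of from QTextEdit directly is that QMainWindow already knows how to host toolbars and actions around a central widget. The following is a minimal sketch of that arrangement (not the actual textEditorWin code):

#include <QMainWindow>
#include <QTextEdit>
#include <QToolBar>
#include <QAction>
#include <QFont>

// Sketch: a rich text editor window with a formatting toolbar
class MiniTextEditorWin : public QMainWindow
{
    Q_OBJECT
public:
    explicit MiniTextEditorWin(QWidget *parent = 0) : QMainWindow(parent)
    {
        m_edit = new QTextEdit(this);
        m_edit->setAcceptRichText(true);
        setCentralWidget(m_edit);                   // the editor fills the window

        QToolBar *toolbar = addToolBar("Format");   // toolbars are a QMainWindow feature
        QAction *boldAction = toolbar->addAction("Bold");
        boldAction->setCheckable(true);
        connect(boldAction, SIGNAL(toggled(bool)), this, SLOT(setBold(bool)));
    }

private slots:
    void setBold(bool enabled)
    {
        m_edit->setFontWeight(enabled ? QFont::Bold : QFont::Normal);
    }

private:
    QTextEdit *m_edit;
};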

The code area (for both the edit and view windows) is a CodeEditorWidget (a QTextEdit subclass) with a CppHighlighter (a QSyntaxHighlighter subclass) object attached to its document() and configured with a standard C/C++ syntax highlighting scheme. Along with the initialization settings, a system of signals and slots (a Qt exclusive) provides a convenient way to link a line counter widget to the scrollbar events

// Initialization settings
    setReadOnly(true);
    setAcceptRichText(false);
    setLineWrapMode(QTextEdit::NoWrap);
    // Signal to redraw the line counter when our text has changed
    connect(this, SIGNAL(textChanged()), this, SLOT(updateFriendLineCounter()));
    // And to scroll too
    QScrollBar *scroll1 = this->verticalScrollBar();
    QScrollBar *scroll2 = m_lineCounter->verticalScrollBar();
    connect((const QObject*)scroll1, SIGNAL(valueChanged(int)), (const QObject*)scroll2, SLOT(setValue(int)));
    connect(this, SIGNAL(updateScrollBarValueChanged(int)), (const QObject*)scroll2, SLOT(setValue(int))); 

The mouseReleaseEvent() override takes care of intercepting the block (the equivalent of a line in a plain-text context) the user clicked on (in edit mode) in order to highlight a specific line of code, whose number will be stored in the

QVector<quint32> m_selectedLines 

vector. Each node's associated code (if any) is stored in the following fields

// These fields will be useful for levels 2 and 3
    QString fileName; // Relative filename for the associated code file
    QByteArray firstLineData; // Compressed first line data, this will be used with the line number to retrieve info
    QVector<quint32> linesNumbers; // First and next lines (next are relative to the first) numbers 

It should be noted that each code block is identified by its (compressed) first line data and the numbers of the following lines (lines after the first have their numbers stored relative to the first). This is a simple approach to cope with the lack of a proper version control system, which would instead check for differences and try to merge versions. Inserting marker comment tags in the code could have been another solution, but since we believe the code shouldn't be messed with, the above approach was chosen.
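A hypothetical sketch of the kind of correction such a scheme allows is shown below: if the stored absolute line number no longer matches, the file can be scanned for the saved first-line text and the stored numbers corrected accordingly. This function is purely illustrative (it assumes 0-based line numbers and an already decompressed first line) and is not the actual gds algorithm:

#include <QFile>
#include <QTextStream>
#include <QStringList>
#include <QVector>

// Illustrative only: find where a documented block starts after the file has shifted.
// firstLine is the stored (decompressed) first-line text, lines[0] the old absolute
// line number, lines[1..n] the offsets relative to it.
QVector<quint32> relocateBlock(const QString &codeFilePath,
                               const QString &firstLine,
                               const QVector<quint32> &lines)
{
    QFile file(codeFilePath);
    if(!file.open(QIODevice::ReadOnly | QIODevice::Text) || lines.isEmpty())
        return lines;

    QTextStream in(&file);
    QStringList content;
    while(!in.atEnd())
        content.append(in.readLine());

    // Prefer the stored position; fall back to searching for the first-line text
    quint32 first = lines[0];
    if(first >= (quint32)content.size() || content[first].trimmed() != firstLine.trimmed())
    {
        int found = -1;
        for(int i=0; i<content.size(); i++)
        {
            if(content[i].trimmed() == firstLine.trimmed())
            {
                found = i;
                break;
            }
        }
        if(found == -1)
            return lines;   // give up and keep the stored numbers
        first = (quint32)found;
    }

    QVector<quint32> corrected = lines;
    corrected[0] = first;   // the relative offsets in lines[1..n] remain valid as they are
    return corrected;
}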

A tricky part exclusive to the main edit window is the following:

// This method is called by the openGL widget every time the selection is changed on the graph
void *MainWindowEditMode::GLWidgetNotifySelectionChanged(void *m_newSelection)
{
    // First save the right/left pane data for the old selected element
    if(!m_lastSelectedHasBeenDeleted)
    {
        qWarning() << "GLWidgetNotifySelectionChanged.saveEverythingOnThePanesToMemory()";
        saveEverythingOnThePanesToMemory();
    }
    if(m_swapRunning)
    {
        // There's a swap running, swap the selected element with the new selected element
        GLDiagramWidget->m_swapInProgress = true;
        dbDataStructure *m_newSelectedElement;
        // Select our new element
        long m_newSelectionIndex = -1;
        long m_selectedElementIndex = -1;
        int foundBoth = 0;
        for(int i=0; i<m_currentGraphElements.size(); i++)
        {
            if(foundBoth == 2)
                break;
            if(m_currentGraphElements[i]->glPointer == m_newSelection)
            {
                m_newSelectedElement = m_currentGraphElements[i];
                m_newSelectionIndex = i;
                foundBoth++;
            }
            if(m_currentGraphElements[i]->glPointer == m_selectedElement->glPointer)
            {
                m_selectedElementIndex = i;
                foundBoth++;
            }
        }
        // Swap these two structure's data
        QByteArray m_temp = m_newSelectedElement->data;
        m_newSelectedElement->data = m_selectedElement->data;
        m_selectedElement->data = m_temp;
        QString m_temp2 = m_newSelectedElement->fileName;
        m_newSelectedElement->fileName = m_selectedElement->fileName;
        m_selectedElement->fileName = m_temp2;
        m_temp2 = m_newSelectedElement->label;
        m_newSelectedElement->label = m_selectedElement->label;
        m_selectedElement->label = m_temp2;
        long m_temp3 = m_newSelectedElement->userIndex;
        m_newSelectedElement->userIndex = m_selectedElement->userIndex;
        m_selectedElement->userIndex = m_temp3;
        m_temp = m_newSelectedElement->firstLineData;
        m_newSelectedElement->firstLineData = m_selectedElement->firstLineData;
        m_selectedElement->firstLineData = m_temp;
        QVector<quint32> m_temp4 = m_newSelectedElement->linesNumbers;
        m_newSelectedElement->linesNumbers = m_selectedElement->linesNumbers;
        m_selectedElement->linesNumbers = m_temp4;

        // Avoid a recursive this-method recalling when selected element changes: set swapInProgress and avoid repainting
        GLDiagramWidget->clearGraphData();
        updateGLGraph();
        // Data insertion ended, calculate elements displacement and start drawing data
        GLDiagramWidget->calculateDisplacement();
        GLDiagramWidget->m_swapInProgress = false;
    }
    else
    {
        // Select our new element
        for(int i=0; i<m_currentGraphElements.size(); i++)
        {
            if(m_currentGraphElements[i]->glPointer == m_newSelection)
            {
                m_selectedElement = m_currentGraphElements[i];
                break;
            }
        }
    }
    qWarning() << "New element selected: " + m_selectedElement->label;
    // Load the selected element data in the panes, but first clear them
    qWarning() << "GLWidgetNotifySelectionChanged.clearAllPanes() and loadSelectedElementDataInPanes()";
    clearAllPanes();
    loadSelectedElementDataInPanes();
    if(m_swapRunning)
    {
        m_swapRunning = false;
        ui->swapBtn->toggle();
        // Return to the painting widget the new element to be selected (data has been modified)
        return m_selectedElement->glPointer;
    }
    else
        return NULL;
} 

When an object is selected on the OpenGL diagram (color picking mode - GLWidget), the widget notifies its parent window (edit or view mode) that the selection has changed. The edit window, however, provides an additional functionality: element swapping. When the user presses the "Swap Element" button (which is a toggle button), the system records the currently selected item as the first swap item. When the user then selects another element, it is marked as the second swap item and the swap begins. Since the GLWidget simply ignores all this, the swap logic is handled by the edit window itself, and that's exactly what happens in the function above. If swap mode isn't active, the selected graph element is looked up among the dbDataStructure objects, the previously selected element's data is saved to memory and the panes are reloaded with the newly selected element's data.

Other tricky functions:

void MainWindowEditMode::on_deleteSelectedElementBtn_clicked() 

this slot handles the "Delete Selected Element" action in three different ways:

  • If the selected element is the root, the whole graph is deleted
  • If the selected element isn't the root and has no children, it is deleted
  • If the selected element isn't the root but has children, the user is asked whether the system should delete the children along with their parent or reassign the children to their parent's parent through the pointer system
void MainWindowEditMode::on_addChildBlockBtn_clicked() 

this slot handles the "Add Children Block" action. If there's a variable called m_firstTimeGraphInCurrentLevel set, the graph is empty and a root element must be created (no father), otherwise a child element is created and the selected element is set as father.

Finally, the GLWidget provides functions to control the drawing process without the hassle of dealing with painting events

//// This method is called by the openGL widget when it's ready to draw and if there's a scheduled painting pending
void MainWindowViewMode::deferredPaintNow()
{
    // Sets the swapping value to prevent screen flickering (it disables repaint events)
    bool oldValue = GLDiagramWidget->m_swapInProgress;
    GLDiagramWidget->m_swapInProgress = true;
    GLDiagramWidget->clearGraphData();
    updateGLGraph();
    // Data insertion ended, calculate elements displacement and start drawing data
    GLDiagramWidget->calculateDisplacement();
    // Restore the swapping value to its previous
    GLDiagramWidget->m_swapInProgress = oldValue;
    GLDiagramWidget->changeSelectedElement(m_selectedElement->glPointer);
    // We selected an element for the first time (the graph has been loaded), we need to recharge this item's data
    // Load the selected element data in the panes
    loadSelectedElementDataInPanes();
} 

First a call to clearGraphData() is made, which clears the block vectors in the GLWidget and requests a repaint. Then calculateDisplacement() is called to perform the post-order traversal and the displacement calculation. Finally changeSelectedElement() is called (if the element to be selected is different from the root) to select another element which, in turn, will instruct the painting function to use a different gradient texture to render the selected element.

All the connections between elements are automatically created in the GLWidget's drawConnectionLinesBetweenBlocks() function, so there's no need for the main windows to call it explicitly

void QGLDiagramWidget::drawConnectionLinesBetweenBlocks()
{
    // This function is going to draw simple 2D lines with the programmable pipeline
    // The picking shaders are simple enough to let us draw a colored line, we'll use them
    ShaderProgramPicking->bind();
    // NOTICE: since each element's model matrix will be multiplied by the vertex inserted in the vertex array
    // to the shader, this uMVMatrix is actually going to be filled with JUST the VIEW matrix. The result will be
    // the same to the shader
    GLuint uMVMatrix = glGetUniformLocation(ShaderProgramPicking->programId(), "uMVMatrix");
    GLuint uPMatrix = glGetUniformLocation(ShaderProgramPicking->programId(), "uPMatrix");
    // Send our transformation to the currently bound shader,
    // in the right uniform
    float gl_temp_data[16];
    for(int i=0; i<16; i++)
    {
        // Needed to convert from double (on non-ARM architectures qreal are double)
        // to float
        gl_temp_data[i]=(float)gl_projection.data()[i];
    }
    glUniformMatrix4fv(uPMatrix, 1, GL_FALSE, &gl_temp_data[0]);
    for(int i=0; i<16; i++)
    {
        // Needed to convert from double (on non-ARM architectures qreal are double)
        // to float
        gl_temp_data[i]=(float)gl_view.data()[i]; // AGAIN: just the view matrix in the uMVMatrix, the result will be the same
    }
    // Set a color for the lines
    glUniformMatrix4fv(uMVMatrix, 1, GL_FALSE, &gl_temp_data[0]);
    GLuint uPickingColor = glGetUniformLocation(ShaderProgramPicking->programId(), "uPickingColor");
    glUniform3f(uPickingColor, 1.0f,0.0f,0.0f);
    // If there's just one element (root and no connections), exit
    if(m_diagramDataVector.size() == 0 || m_diagramDataVector.size() == 1)
        return;
    // Scroll the diagramDataVector and create the connections for each element
    QVector<dataToDraw*>::iterator itr = m_diagramDataVector.begin();
    // Create a structure to contain all the points for all the lines
    struct Point
    {
        float x,y,z;
        Point(float x,float y,float z)
                        : x(x), y(y), z(z)
        {}
    };
    // This will contain all the point-pairs to draw lines
    std::vector<Point> vertexData;
    while(itr != m_diagramDataVector.end())
    {
        // Set the origin coords (this element's coords)
        QVector3D baseOrig(0.0,0.0,0.0);
        // Adjust them by porting them in world coordinates (*model matrix)
        QMatrix4x4 modelOrigin = gl_model;
        modelOrigin.translate((qreal)(-(*itr)->m_Xdisp),(qreal)((*itr)->m_Ydisp),0.0);
        baseOrig = modelOrigin * baseOrig;
        // Get each children of this node (if any)
        for(int i=0; i< (*itr)->m_nextItems.size(); i++)
        {
            dataToDraw* m_temp = (*itr)->m_nextItems[i];
            // Create destination coords
            QVector3D baseDest(0.0, 0.0, 0.0);
            // Adjust the destination coords by porting them in world coordinates (*model matrix)
            QMatrix4x4 modelDest = gl_model;
            modelDest.translate((qreal)(-m_temp->m_Xdisp),(qreal)(m_temp->m_Ydisp),0.0);
            baseDest = modelDest * baseDest;
            // Add the pair (origin;destination) to the vector
            vertexData.push_back( Point((float)baseOrig.x(), (float)baseOrig.y(), (float)baseOrig.z()) );
            vertexData.push_back( Point((float)baseDest.x(), (float)baseDest.y(), (float)baseDest.z()) );
        }
        itr++;
    }
    // We have everything we need to draw all the lines
    GLuint vao, vbo; // VBO is just a memory buffer, VAO describes HOW the data should be interpreted in the VBO
    // Generate and bind the VAO
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    // Generate and bind the buffer object
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // Fill VBO with data
    size_t numVerts = vertexData.size();
    glBufferData(GL_ARRAY_BUFFER,                           // Select the array buffer on which to operate
                 sizeof(Point)*numVerts,                    // The total size of the VBO
                 &vertexData[0],                            // The initial data of the VBO
                 GL_STATIC_DRAW);                           // STATIC_DRAW mode
    // set up generic attrib pointers
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0,                                // Attribute 0 in the shader
                          3,                                // Each vertex has 3 components: x,y,z
                          GL_FLOAT,                         // Each component is a float
                          GL_FALSE,                         // No normalization
                          sizeof(Point),                    // Since it's a struct, x,y,z are defined first, then the constructor
                          (char*)0 + 0*sizeof(GLfloat));    // No initial offset to the data

    // Call the shader to render the lines
    glDrawArrays(GL_LINES, 0, numVerts);
    // "unbind" VAO and VBO
    glBindVertexArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
} 

Connectors are simply drawn by binding the basic color picking shaders (no textures or lighting calculations) and setting up a VAO (Vertex Array Object) and a VBO (Vertex Buffer Object) to store the vertices to be connected with lines. The role of the VBO is to hold the raw memory needed for the operation (which will be carried out by the associated shaders), while the VAO specifies how the data in the VBO should be interpreted. These are, however, basic OpenGL operations.

Conclusions

This article's goal was to present a new software documentation approach implemented through an experimental concept application - gds. Software engineering methodologies are relatively new compared to those of other engineering fields, so there may well be a lot of improvements and changes in the future.

To be completely honest, this work also helped me learn OpenGL and strengthen my Qt knowledge, besides realizing an old idea I'd been thinking about for a long time.


License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)