
The Console Reinvented

26 Mar 2009
A multi-view console written in C# and DirectX.

[Screenshot: the console in action (2.jpg)]


Introduction

The venerable console (also known as a terminal) has been with us for quite some time, and it is never going away – that's why Visual Studio offers a console project template for every language. The console itself, however, is a relic: it's slow and ugly, not to mention inflexible and practically impossible to extend. This article describes my attempt to make a better one.

To compile the examples in this article, you need Visual Studio 2008 and the DirectX SDK. To run it, you need to make sure your version of DirectX supports the MDX extensions. If not, download the latest version.

Problem Statement

In terms of functional requirements, I wanted my console to have the following traits:

  • Speed – the current console is atrociously slow, and sometimes I need to write lots of data to the console quickly, so fast rendering is essential.
  • Multiple sources – it's nice that the Windows console is thread-safe, but it still writes everything to one list. I want to be able to write to different locations of the console (e.g., in a multi-column layout) from different sources.
  • Customization and rudimentary typography – the console is not a desktop publishing system, but I still want basic options like being able to choose fonts and colours.

And here are the implementation traits:

  • Thread-safety – just imagine how bad life would be if the original console wasn't thread-safe. I want to keep this.
  • Hardware acceleration – since speed is so important, I want graphics hardware acceleration. No GDI allowed.
  • Flexibility – I want an architecture that allows easy creation and composition of console elements.

Technology

When choosing what to code in, I had two requirements. One was to have hardware acceleration, which meant that I had to choose an API that allowed low-level interaction with the graphics card and permitted me to render nice 2D images. In today's world, this is a fairly binary choice (i.e., DirectX vs. OpenGL).

The other requirement was that it would be easy to use from .NET – possibly usable to supplant the typical Console.Write/Line calls by simply injecting a using statement somewhere.

The intersection of these two requirements resulted in me choosing C# as the programming language (why deny yourself?) and Managed DirectX as the API to program with. Now, Managed DirectX is, for most people, an utterly dead technology, but my choices were somewhat restricted – I didn't want to have to work with XNA, and my experiments with SlimDX[1] didn't even get me past the device-creation stage.

Consequently, I went with what is (or used to be) called MDX[2], which stands for Managed DirectX. MDX is essentially a set of .NET wrappers around the DirectX API. It's no longer supported by Microsoft, but it works and is part of the DirectX runtime distribution. With hindsight, I can say I've had no problems with it, probably because I didn't use it for anything advanced.

Architecture

My take on the architecture was driven by my annoyance that the Windows console has just one buffer and just one place to write text to. I wanted something more flexible that would let me, say, present the real-time output of ten different analyses, each writing to a different part of the same console. Consequently, I came up with three concepts – buffer, viewport, and console:

[Figure: the architecture – buffer, viewport, and console (1.jpg)]

Buffer

The console buffer is just some memory storing text, right? So in theory, it would be very simple to code – just make a char[,] and you're done. Well, in reality, there are a couple of problems.

I wanted word wrapping in the console to make textual output somewhat neater. So, I implemented code that checks whether the provided text fits on a line and tries to break it at word boundaries if it doesn't. Of course, there are edge cases, such as a line consisting only of spaces (e.g., when the user wants to use spaces as padding); in that case, I leave the line as it is. A minimal sketch of the idea is shown below.
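
The following is a minimal, illustrative sketch of this kind of word wrapping, not the actual Buffer code – the method name Wrap and its exact edge-case handling are my own assumptions (it needs System and System.Collections.Generic):

static IEnumerable<string> Wrap(string text, int width)
{
  while (text.Length > width)
  {
    string head = text.Substring(0, width);
    if (head.Trim().Length == 0)
    {
      // a line of padding spaces: leave it be, no word breaking
      yield return head;
      text = text.Substring(width);
      continue;
    }
    int cut = head.LastIndexOf(' ');
    if (cut <= 0)
      cut = width;                        // no space to break at: hard cut
    yield return text.Substring(0, cut);
    text = text.Substring(cut).TrimStart(' ');
  }
  if (text.Length > 0)
    yield return text;
}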

Another problem with the buffer is overflow, i.e., what happens when you run out of lines. Rather than reposition every remaining line when a new one pushes the oldest one out, I came up with the idea of an insertion point that tracks the starting line of the buffer. When an overflow occurs, the new line simply overwrites the oldest line, and reading the buffer becomes one loop around it, starting and ending at the insertion point. A sketch of this scheme follows.
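
Here is a minimal sketch of that circular-buffer scheme, assuming a char[,] backing store; the class and member names are illustrative, not the actual Buffer class:

class CircularTextBuffer
{
  private readonly char[,] data;
  private readonly int width, height;
  private int nextLine;                        // physical row the next line goes to
  private bool wrapped;                        // has the buffer overflowed yet?

  public CircularTextBuffer(int width, int height)
  {
    this.width = width;
    this.height = height;
    data = new char[height, width];
  }

  // the insertion point: the physical row holding the oldest line
  public int StartLine
  {
    get { return wrapped ? nextLine : 0; }
  }

  public void AddLine(string line)
  {
    // overwrite the oldest line instead of repositioning every other line
    for (int x = 0; x < width; ++x)
      data[nextLine, x] = x < line.Length ? line[x] : ' ';
    nextLine = (nextLine + 1) % height;
    if (nextLine == 0)
      wrapped = true;
  }

  // physical indexer; callers (such as the Viewport below) add StartLine
  // themselves and wrap around with a modulo
  public char this[int x, int physicalRow]
  {
    get { return data[physicalRow, x]; }
  }
}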

Viewport

As the eyes are said to be a window into the soul, so is a Viewport a window into the buffer – it can either show a portion of the buffer (starting from any legal x-y buffer location), or all of it. Given a buffer, it provides an indexer property to get a character in the viewport's co-ordinates:

public char this[int x, int y]
{
  get
  {
    // translate viewport co-ordinates into buffer co-ordinates; the modulo
    // wraps around the circular buffer, which starts at Buffer.StartLine
    return Buffer[x + bufferLocation.X, 
      (Buffer.StartLine + y + bufferLocation.Y) % Buffer.Size.Height];
  }
  protected internal set
  {
    Buffer[x + bufferLocation.X, 
      (Buffer.StartLine + y + bufferLocation.Y) % Buffer.Size.Height] = value;
  }
}
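
For example, assuming a viewport whose bufferLocation is (0, 0), reading its top-left character goes straight to the buffer's logical first line, wherever StartLine currently points:

char topLeft = viewport[0, 0];  // equivalent to Buffer[0, Buffer.StartLine]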

Of course, our console doesn't use the viewport directly – there is another layer of indirection present, in which a character is turned into a texture. Let's take a look at how this is implemented.

Texture Manager

The TextureManager class creates textures for each of the letters[3], based on the parameters specified. The parameters include the font and the foreground and background colours. Each combination of these forms a preset. Presets are stored in the following field:

private readonly List<Texture[]> presets = new List<Texture[]>();

As you can see, each preset is identified by its index in the list, and within a preset we effectively have a char-to-Texture mapping. Presets are created with the AddPreset() method, which uses GDI to render each character onto a DirectX texture:

public short AddPreset(Font font, Color bgColor, Color fgColor)
{
  // create textures for each individual character
  var textures = new Texture[charCount];
  using (var bmp = new Bitmap(charSize.Width, charSize.Height, 
                              PixelFormat.Format32bppArgb))
  using (Graphics g = Graphics.FromImage(bmp))
  using (Brush brush = new SolidBrush(fgColor))
    for (int i = 0; i < charCount; ++i)
    {
      g.TextRenderingHint = TextRenderingHint.ClearTypeGridFit;
      g.Clear(bgColor);
      g.DrawString(string.Format("{0}", (char) i), font, brush, 0.0f, 0.0f);
      Texture t = Texture.FromBitmap(device, bmp, 0, Pool.Managed);
      Debug.Assert(t != null);
      textures[i] = t;
    }
  presets.Add(textures);
  return (short) (presets.Count - 1);
}

Since the user isn't likely to switch presets often, we keep the current preset in a property rather than passing it as an indexer parameter. Thus, to get a texture from the texture manager, we can use the following indexer:

public Texture this[int letter]
{
  get { return presets[CurrentPreset][letter]; }
}

You'll notice that the letter parameter is an int – array indices are ints, and since a char converts implicitly to an int, you can simply pass a char to the texture manager, e.g., var myTex = myTexMgr['a'];.
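
Putting the two together, creating and selecting a preset might look like the snippet below. This is only an illustration: the TextureManager constructor argument and the writable CurrentPreset property are my assumptions about the API, and the font and colours are arbitrary.

var texMgr = new TextureManager(device);               // assumed constructor
short greenOnBlack = texMgr.AddPreset(
  new Font("Courier New", 10f), Color.Black, Color.LightGreen);
texMgr.CurrentPreset = greenOnBlack;                   // presets are switched rarely
Texture glyph = texMgr['A'];                           // char converts to int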

Console

We are now getting to the 'core' of our console architecture – the Console class itself! This class basically creates the console window and drives the rendering of the separate viewports. I won't go through the whole process of creating an MDX window, but will rather show a few interesting features as well as the core rendering mechanism.

The first problem I encountered was with device creation. How do you create a Direct3D device that is compatible with the end user's hardware? Well, my take on it is to try several options: fastest first, slowest last:

// here are a couple of pairs to try
var pairsToTry = new[]
{
  new Pair<DeviceType, CreateFlags>(DeviceType.Hardware, 
                       CreateFlags.HardwareVertexProcessing),
  new Pair<DeviceType, CreateFlags>(DeviceType.Hardware, 
                       CreateFlags.SoftwareVertexProcessing),
  new Pair<DeviceType, CreateFlags>(DeviceType.Software, 
                       CreateFlags.SoftwareVertexProcessing),
  new Pair<DeviceType, CreateFlags>(DeviceType.Reference, 
                       CreateFlags.SoftwareVertexProcessing),
};
for (int i = 0; i < pairsToTry.Length; i++)
{
  Pair<DeviceType, CreateFlags> p = pairsToTry[i];
  try
  {
    device = new Device(0, p.First, this, p.Second, pp);
    break;
  }
  catch
  {
    continue;
  }
}
if (device == null)
  throw new ApplicationException("Could not create device.");

This approach allowed me to fall back to software mode if I had to, but I quickly realized that, even on the fastest machines, software mode is so slow it is practically unusable. I guess this restricts the usability of this console to machines with hardware acceleration.

Having created my device, I went on to try and set device states so that my letters wouldn't be scaled, i.e., so that a 10×14 texture would be rendered on a 10×14 rectangle without distortion:

private void ResetDeviceStates()
{
  device.RenderState.CullMode = Cull.None;
  device.RenderState.Lighting = false;
  device.RenderState.ZBufferEnable = false;
  // 0 corresponds to no filtering, so glyph textures are not smoothed
  // when drawn at their native size
  device.SetSamplerState(0, SamplerStageStates.MinFilter, 0);
  device.SetSamplerState(0, SamplerStageStates.MagFilter, 0);
}

This didn't quite work, however – my rendered textures still seemed 'off' somehow. In fact, they were off by exactly 0.5 pixels both vertically and horizontally, because I forgot to compensate for the pixel-texel mismatch[4]. Thus, the code had to be revised to shift each vertex co-ordinate by exactly half a pixel:

private void OnCreateVertexBuffer(object sender, EventArgs e)
{
  // co-ordinates are explicitly shifted by half a pixel each way
  // to compensate for pixel/texel mismatch
  var v = (CustomVertex.PositionTextured[]) vb.Lock(0, 0);
  v[0].X = 0.0f + 0.5f;
  v[0].Y = 0.0f + 0.5f;
  v[0].Z = 0.0f;
  v[0].Tu = 0.0f;
  v[0].Tv = 1.0f;
  ⋮
}

I went through a lot of pain getting the matrices to project things as they should, i.e., mapping the console's 2D plane onto 3D correctly. In the end, I used the following (somewhat obvious, in hindsight) code to project my console correctly:

private void SetupMatrices()
{
  device.Transform.World = Matrix.Identity;
  device.Transform.View = Matrix.LookAtLH(
    new Vector3(0.0f, 3.0f*distanceToObject, -5.0f*distanceToObject),
    new Vector3(0.0f, 0.0f, 0.0f),
    new Vector3(0.0f, 1.0f, 0.0f));
  device.Transform.Projection = Matrix.OrthoRH(
    ClientSize.Width, ClientSize.Height, -100.0f, 100.0f);
}

When it came to actually rendering the console, I implemented the following algorithm:

  • Go through each of the console's X and Y co-ordinates.
  • At each location, go through each of the viewports; if a viewport needs to render something at this location, set its texture and draw it.
  • After each character, translate the view matrix by the width of one character.
  • After each row of characters, translate the view matrix one line down and back to the left edge of the console.

Once again, I won't present the code here; however, I want to show the line setting the texture to be rendered:

device.SetTexture(0, TexManager[v[x - v.ScreenLocation.X, y - v.ScreenLocation.Y]]);

Here, you can see the interplay between the texture manager and the viewport (the v variable). Essentially, we get the right character from the viewport, then pass it to the texture manager, which yields the corresponding Texture object.
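
To tie the algorithm and that line together, here is a rough sketch of what the nested loop might look like. It is not the actual Console.Render() implementation: Viewports, Contains(), DrawCharacterQuad(), consoleSize, and charSize are placeholder names, and the translation signs depend on the co-ordinate conventions set up in SetupMatrices().

for (int y = 0; y < consoleSize.Height; ++y)
{
  for (int x = 0; x < consoleSize.Width; ++x)
  {
    foreach (Viewport v in Viewports)
    {
      if (!v.Contains(x, y))
        continue;                            // this viewport does not cover (x, y)
      device.SetTexture(0, TexManager[v[x - v.ScreenLocation.X, y - v.ScreenLocation.Y]]);
      DrawCharacterQuad();                   // draw the textured rectangle
    }
    // move one character width to the right
    device.Transform.View *= Matrix.Translation(charSize.Width, 0f, 0f);
  }
  // next row: back to the left edge and one line down
  device.Transform.View *= Matrix.Translation(-consoleSize.Width * charSize.Width,
                                              -charSize.Height, 0f);
}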

Usage

The simplest use of the console is as follows:

using (Console c = Console.NewConsole(30, 20))
{
  c.Show();
  // your code here
  while (c.Created)
  {
    c.Render();
    Application.DoEvents();
  }
}

A more complex example is provided in the source code.
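
Purely as an illustration of the multi-source idea, using two viewports from two background threads could look something like the sketch below. The AddViewport() and WriteLine() members are hypothetical names invented for this sketch; the real API is the one used in the example in the source download.

using (Console c = Console.NewConsole(80, 25))
{
  c.Show();
  Viewport left  = c.AddViewport(0, 0, 40, 25);     // hypothetical: left column
  Viewport right = c.AddViewport(40, 0, 40, 25);    // hypothetical: right column

  // two independent sources writing to their own parts of the same console
  ThreadPool.QueueUserWorkItem(o => left.WriteLine("analysis #1 output"));
  ThreadPool.QueueUserWorkItem(o => right.WriteLine("analysis #2 output"));

  while (c.Created)
  {
    c.Render();
    Application.DoEvents();
  }
}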

Known Bugs

While programming the console, I tried desperately to find authoritative information on how to handle device resets. However, every resource on the net offered only a small, often useless piece of the puzzle, so I must admit that I have failed to write code that handles them correctly[5]. Consequently, if you run the console in windowed mode and resize the window, the glyphs will become garbled. At the time of writing, I have no clue how to fix it.

Conclusion

What I have shown here is only the tip of the iceberg in terms of what can be done with a console. Being DirectX-driven, you could add lots of animation and additional features to the console's API without severely taxing the CPU. However, the functionality presented here is enough for my need to redirect several long-running analytical sources to a single console.

References

  1. SlimDX is another managed DirectX framework. More information can be found at http://code.google.com/p/slimdx/.
  2. It is still called that, but so is Microsoft's OLAP query language (see, e.g., the Wikipedia entry), which might cause some confusion.
  3. As you may have guessed, the console won't work efficiently with extended character sets (e.g., Japanese) because there would be too many textures to create. This is a design decision – I'm only interested in Latin characters, so it's not a problem for me.
  4. This mismatch is caused by the fact that texture co-ordinates really do point to texel locations, so if you place a texel (texture pixel) at (0, 0), you are effectively placing its centre at that location rather than its top-left corner.
  5. But I suspect that most MDX developers have failed in this respect, too.

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
