This code is not actively maintained. It's simply too much work to keep the entire series up to date. If you want the latest GFX, get it from the following article:
GFX Forever: The Complete Guide to GFX for IoT
Introduction
The next article in the series is here.
GFX is a sprawling work, with a lot of features exposed behind an API that's at times superficially simple, but very deep. However, in order to facilitate its many features while providing reasonable performance, GFX was designed using a different programming paradigm than most graphics libraries.
Most graphics libraries for C++ are object oriented. GFX has objects, but they aren't of core importance. GFX exposes its functionality using an API that's based around generic programming. Fortunately, you don't have to be an expert at it to use GFX, but it does have a little bit of an upfront learning curve associated with it. This article is provided as the first part of a series that hopes to facilitate mastery of GFX, from the basics to the advanced.
Building this Mess
You'll need Visual Studio Code with the Platform IO extension installed. You'll need an ESP32 with a connected ILI9341 LCD display. It is possible to modify the code to use a different driver if you really want to.
I recommend the Espressif ESP-WROVER-KIT development board which has an integrated ILI9341 display and several other pre-wired peripherals, plus an integrated debugger and a superior USB to serial bridge with faster upload speeds. They can be harder to find than a standard ESP32 devboard, but I found them at JAMECO and Mouser for about $40 USD. They're well worth the investment if you do ESP32 development. The integrated debugger, though very slow compared to a PC, is faster than you can get with an external JTAG probe attached to a standard WROVER devboard.
Most of you, however, will be using the generic esp32 configuration. At the bottom of the screen, in the blue bar of VS Code, there is a configuration switcher. It should be set to Default to start with, but you can change it by clicking on it. A list of both configurations will drop down from the top of the screen. From there, you can choose whichever setup you have.
In order to wire all this up, refer to wiring_guide.txt, which has display wirings for SPI displays. Keep in mind some display vendors name their pins with non-standard names. For example, on some displays MOSI might be labelled as DIN or A0. You may have to do some Googling to find out the particulars for your device.
Note: The Platform IO IDE is kind of cantankerous sometimes. The first time you open the project, you'll probably need to go to the Platform IO icon on the left side - it looks like an alien. Click it to open up the sidebar, and look under Quick Access|Miscellaneous for Platform IO Core CLI. Click it, and when you get a prompt, type pio run to force it to download the necessary components and build. You shouldn't need to do this again unless you start getting errors while trying to build. Also, for some reason, whenever you switch configurations, you have to go and refresh (the little circle-arrow next to "PROJECT TASKS") before it will take.
Conceptualizing this Mess
For a complete treatment of GFX from high level to code, see this linked article, which serves as its primary documentation. Here, we'll be drilling down and focusing on the most basic of drawing primitives, but adding alpha blending and an offscreen frame buffer to spice things up a bit and keep it interesting.
Pixelriffic!
GFX has a novel way of representing pixels and colors. They are of an arbitrarily defined color model (RGB, YUV, grayscale, etc.), an arbitrarily defined bit depth/resolution (1-bit, 16-bit, 24-bit, etc.), and an arbitrarily defined number of named channels, which are related to and define the color model.
Essentially, different media have different formats. A JPEG represents its pixels in 24-bit Y'CbCr BT.601 format, while a typical color IoT display device is 16-bit RGB (or even 18-bit RGB represented in 24 bits with padding), and some devices even use indexed color with a palette/CLUT. In addition, some media supports an alpha channel with the capability of representing semi-transparent colors.
All of this is dizzying to say the least. How do you begin to manage it?
For starters, GFX seamlessly converts between pixel formats while doing alpha blending when necessary so that you typically don't have to concern yourself with converting to and from different formats explicitly.
Secondly, the pixel<> template provides a rich API with only a few core members you have to worry about. A pixel also provides a rich template interface that allows you to specify the details of each of its channels, from which it calculates all of the rest of the information at compile time. You don't usually have to define pixels explicitly this way, because they'll either come to you predeclared through some draw target's pixel_type member, or, when you can't do that, there are wrappers for declaring common pixel formats very simply, as summarized below.
Consider these examples: To declare a 32-bit RGBA pixel like the one used in .NET, you simply use rgba_pixel<32>. To declare a 16-bit RGB pixel like those used with many IoT displays, use rgb_pixel<16>. To declare an 8-bit grayscale pixel, use gsc_pixel<8>. To declare a monochrome pixel, you can use gsc_pixel<1>. To declare a pixel in JPEG's Y'CbCr format, you can use ycbcr_pixel<24>.
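Collected as code, those declarations look like this (the alias names here are just for illustration):

// common pixel formats via the declaration wrappers
using net_pixel  = rgba_pixel<32>;   // 32-bit RGBA, like .NET uses
using lcd_pixel  = rgb_pixel<16>;    // 16-bit RGB, common on IoT displays
using gray_pixel = gsc_pixel<8>;     // 8-bit grayscale
using mono_pixel = gsc_pixel<1>;     // monochrome
using jpeg_pixel = ycbcr_pixel<24>;  // JPEG's Y'CbCr format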
Next, pixels are always represented by the concept of channels. Channels are named and indexed values that correspond to the color and display information for a pixel, as well as its binary layout. For example, a 16-bit pixel declared with rgb_pixel<16> will have three color channels - R, G and B. R is 5 bits, G is 6 bits, and B is 5 bits. Green takes the remaining bit because most pixel formats assign any extra bits to green, due to the fact that our eyes discern green better than other colors. Different bit depths on those channels yield different ranges for the values of the channel. For example, R and B, having 5 bits each, have an effective range of 0-31, while G, having 6 bits, has an effective range of 0-63. These are details you don't really have to worry about though, because you can always get the value as a floating point number scaled between 0 and 1.
You can access channels by name or index. Getting a channel is basically the_pixel.channel<{name or index}>() for the integer, or the_pixel.channelr<{name or index}>() for the scaled floating point number (real number) value. Setting them is similar: the_pixel.channel<{name or index}>({new value}) for the integer, or the_pixel.channelr<{name or index}>({new value}) for the scaled floating point number value. You can also get the pixel's whole value as a word containing all of the channel data by using the_pixel.value(), and you set it using the_pixel.value({new value}). The machine order word can be accessed using the_pixel.native_value. Don't worry, we'll see it all in action a bit later on.
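For example, channel access looks like this - a minimal sketch using the members just described:

rgb_pixel<16> px;
px.channel<channel_name::R>(31);    // set red to its max (5 bits: 0-31)
px.channelr<channel_name::G>(0.5);  // set green to half, as a real number from 0 to 1
auto b = px.channel<2>();           // read the blue channel by index
auto word = px.value();             // the whole pixel as one big endian word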
Any time a pixel has an alpha channel (channel_name::A), GFX will attempt to blend its color with whatever color is beneath it, based on the value of the alpha channel. For example, if the alpha channel is 0.75 and the draw color is red, red will be mixed/blended with whatever color is underneath it at a ratio of 3/4 favoring red, and 1/4 whatever is underneath.
It would be a shame to have to manually declare all of your colors by setting each channel of the pixel every time you need white, for example. Fortunately, GFX provides all of the standard X11 named colors as predefined colors in any pixel format you require. Don't ask how this magic works yet. It's cool, but also a tangent we don't need right now. The bottom line is that you can write using my_colors = color<rgb_pixel<16>>; and then use my_colors::antique_white or my_colors::steel_blue wherever you need them. If you need the 16-bit value in big endian format, you can do my_colors::white.value(), which would yield 0xFFFF.
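Putting that together:

using my_colors = color<rgb_pixel<16>>;
auto bg = my_colors::steel_blue;        // steel blue as a 16-bit RGB pixel
uint16_t w = my_colors::white.value();  // 0xFFFF, big endian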
There is also a rich API for determining which channels a pixel has, how many there are, what order they are in, and even comparing two pixel types to see if they share a color model. Most of that you'll never need, so we won't be covering it here. You'd use it if you want to extend GFX to be able to convert between additional color models and RGB. This rabbit hole goes deep, indeed.
Most of the drawing methods take a pixel that indicates the color to use for the drawing operation. For example, draw::line<>() takes a pixel as the color with which to draw the line. It doesn't matter what kind of pixel you feed to the draw:: methods. They'll consume anything and do the necessary magic to make it work. For example, if you pass a 32-bit RGBA pixel to a 16-bit RGB bitmap, the pixel will automatically be downsampled to 16-bit and alpha blended with the underlying pixel in the bitmap. It will even convert between certain common color models like Y'UV, grayscale and RGB. This is usually how you'll facilitate format conversion and alpha blending.
Pixels are a deceptively powerful little tool. The above may make them seem complicated. The truth is, they really are. However, again you don't need to bother with most of that complexity until you need it, which is unlikely for day to day use of GFX. When we get to the code, you'll see that using pixels is pretty straightforward.
Take Your Position
Location is everything in real estate, including screen real estate. We need ways to specify the location and often the dimensions of our draw operations. The positioning API provides all the tools you should ever need for this.
Points
A point is simply a 2D coordinate. It consists of an x and a y value, and depending on the type of point, those values may be signed or unsigned. There are some members for offsetting a point, and for seeing if a point intersects with another point. Also, like most location objects, points may be cast between signed and unsigned versions. You'll usually use point16 for the unsigned version, which uses 16-bit unsigned integers for the coordinates, or spoint16, which uses signed 16-bit integers. Any manipulation methods for points, except those ending in _inplace, return a new point.
Sizes
A size indicates the dimensions of something in 2D space. It consists of a width and a height member, plus a member for getting a bounding rectangle based on the size, and members for casting to and from signed (ssize16) and unsigned (size16) versions. Any manipulation methods for sizes, except those ending in _inplace, return a new size.
Rectangles
Rectangles are the real workhorse of the location faculties. They consist of two 2D coordinates represented by x1, y1, x2 and y2. Rectangles provide a battery of methods for retrieving information and manipulating them, including centering, inflating, flipping, normalization, and more. You'll usually use rect16 for the unsigned version or srect16 for the signed version. As with points and sizes, all of the manipulation methods except those ending in _inplace return a new rectangle.
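A quick sketch of the positioning types together - the inflate() and offset_inplace() signatures here are assumptions based on the descriptions above:

spoint16 pt(10, 20);               // a signed 2D coordinate
ssize16 sz(100, 50);               // a width and height
srect16 r(pt, sz);                 // a rectangle from a point and a size
srect16 bigger = r.inflate(2, 2);  // manipulation methods return a new rectangle...
r.offset_inplace(5, 5);            // ...unless they end in _inplace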
Paths
Paths specify a series of connected line segments represented by points. They expose begin(), operator[] and size(), similar to the STL containers. They are a bit of an outlier in terms of how they operate, due to requiring an external buffer/array of points to be passed in. The reason for this is to avoid unnecessary banging on the heap. GFX is generally loath to do implicit heap allocations, which is part of why bitmaps and paths take pointers to external buffers rather than creating their own. As such, the only manipulation methods are _inplace, and there is no facility for automatically creating a new path from an existing path. They are very much unlike the other positioning objects in that regard. Usually to use one, you create a set of points (in clockwise order if making a polygon) and then call offset_inplace() on it to move it where you need it. You'll usually use spath16, since that's what the drawing operations take.
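In practice, that looks something like this minimal sketch:

spoint16 pts[] = {           // a clockwise triangle
    spoint16(0, 10),
    spoint16(5, 0),
    spoint16(10, 10)
};
spath16 tri(3, pts);         // wraps the external array - no heap allocation
tri.offset_inplace(100, 50); // move the whole path to where it's needed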
Right on Target
Draw targets are sources or destinations that can be used for drawing operations. Sources are things that can be read from, and destinations are things that can be written to. Some things are both. Draw targets are things like devices (such as an LCD display) or a bitmap. All drawing operations require a draw target in the form of a draw destination. Some also require a second draw target - a draw source.
It's a Draw!
Using the above tools, we can define and draw points, lines and shapes pretty much anywhere. To do this, we use the draw class, which essentially takes the form of draw::{object}(destination, {position}, {pixel/color}, {other options}...).
{object} indicates what type of object we're going to draw, like arc or filled_rectangle.
{position} is the coordinates of the draw operation, and is usually an srect16, but may be an spoint16 or an spath16, depending on {object}.
{pixel/color} indicates the color to use for the draw operation. Any format of pixel will be accepted, and the necessary conversions will take place. Alpha channels are respected, but blending requires a draw destination that is also a draw source - in other words, one that supports being read from. It's recommended to use a bitmap as the destination when you need alpha blending, because the number of reads and writes involved makes blending extremely slow over something like an SPI bus. Therefore, alpha blending directly to a display is not recommended. In the future, GFX will automatically use a temporary intermediary bitmap to facilitate faster blending, but for now, avoid alpha blending directly on a display.
{other options} is zero or more arguments and depends on {object}.
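Putting the pattern together, a couple of examples - a sketch, assuming the lcd draw destination and lcd_color alias we set up later in this article:

draw::line(lcd, srect16(0, 0, 100, 100), lcd_color::red);  // {object}=line, {position}=an srect16
draw::filled_ellipse(lcd, srect16(spoint16(10, 10), ssize16(50, 30)), lcd_color::blue);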
And Now For Something Completely Different
As a sort of sidebar here, I am going to cover double buffering, since we use it in the demo. Double buffering prevents the appearance of "tearing" when drawing to a display, and is a common technique when doing animation. Tearing causes the display to appear to flicker as the animations are being drawn. Unfortunately, while using double buffering solves this, it requires keeping an offscreen bitmap that holds an entire frame of display pixels. That's 150kB @ 320x240x16bpp, which may not seem like much, but it is asking quite a lot to try to find a contiguous free block of memory of that size on a little IoT system.
To solve this issue of no contiguous memory block on the heap, we simply use non-contiguous memory, which is to say we use several blocks of memory and present them as a single bitmap. This is done with the large_bitmap<> class, which we use in the demo. We declare it such that each line of the large bitmap is a regular bitmap the size of a single line. The large bitmap manages those 240 320x1 (in this case) bitmaps to provide a single seamless draw target.
Double buffering solves two other problems for us as well. The first is that currently you cannot read from an ILI9341, though this will change in the future. Due to the fact that you can't read from it, you also can't properly alpha blend on it. Even if you could, it would be terribly slow due to all of the bus traffic it's forced to generate. By drawing to our offscreen buffer we regain alpha blending capability, since bitmaps are readable. We also sidestep the performance issue of trying to alpha blend over the bus. Instead, we send the offscreen bitmap to the screen periodically, which happens relatively quickly, especially compared to the alternative.
It should be noted that while large_bitmap<> improved performance in this case, it is significantly less performant than a native bitmap, because it can't be directly blitted, nor does it implement copy_from<>() or copy_to<>(), at least in the current version of GFX. It also can't be transferred asynchronously, even if the destination supports it, because it's not a "real" bitmap. Still, using it here gives us a big win.
Coding this Mess
If the above seems complicated, the actual drawing code is pretty simple. However, here we're going to divide it into the ESP32 specific parts, and the GFX parts so that you can understand the relationship, and which part of the code is directly portable to other platforms.
Setting Up The ILI9341 Display
ESP32 Specific
Installing the Foundation
From the top:
#include "spi_master.hpp"
#include "ili9341.hpp"
using namespace espidf;
The ILI9341 operates on an SPI bus, and the demo is configured to use the standard pins for the HSPI bus: MOSI is 23, MISO is 19, and SCLK is 18. I like to use #defines for these to make them easy to modify. Note that this step is required to use any and all SPI based devices, regardless of which one. If you have multiple devices on the HSPI bus, you only need this code once, but it must come before any devices are initialized:
spi_master spi_host(nullptr, LCD_HOST, PIN_NUM_CLK, PIN_NUM_MISO, PIN_NUM_MOSI, GPIO_NUM_NC, GPIO_NUM_NC, 4104, DMA_CHAN);
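For reference, the #defines that call assumes look something like this - a sketch only; LCD_HOST and DMA_CHAN depend on your setup, and the pins should match your wiring:

#define LCD_HOST     HSPI_HOST   // assumption: the HSPI host
#define DMA_CHAN     2           // assumption: whichever DMA channel you've reserved
#define PIN_NUM_MISO GPIO_NUM_19
#define PIN_NUM_MOSI GPIO_NUM_23
#define PIN_NUM_CLK  GPIO_NUM_18
#define PIN_NUM_CS   GPIO_NUM_5  // display pins, covered below
#define PIN_NUM_DC   GPIO_NUM_2
#define PIN_NUM_RST  GPIO_NUM_4
#define PIN_NUM_BCKL GPIO_NUM_15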
I literally copy and paste that stuff from above into new projects. It's always the same for the same HSPI bus, unless you're using custom pins.
Now we can configure the driver. Unlike the above, the driver's configuration is specified using template arguments. This is actually more efficient, unless you have multiple ILI9341 displays attached.
We're using a CS pin of 5. As for the remainder, DC is 2, RST is 4, and the backlight is pin 15.
using lcd_type = ili9341<LCD_HOST, PIN_NUM_CS, PIN_NUM_DC, PIN_NUM_RST, PIN_NUM_BCKL>;
lcd_type lcd;
Congratulations! You've now created a draw destination called lcd that represents your display. All of that was specific to the ESP32 and not actually part of GFX, although the driver is GFX aware and has a dependency on it.
GFX Specific
Setting the Table
The rest of what we're doing is platform agnostic, and is dependent on the GFX library but not any particular driver code.
...
#include "gfx_cpp14.hpp"
using namespace gfx;
...
Now let's start using it by declaring the following right below the declarations above:
using lcd_color = color<typename lcd_type::pixel_type>;
The purpose of this is to make it easy to select compatible colors for the LCD. While you can pass pixels of any format to the drawing functions, using native pixels is much more efficient because no conversion is necessary. See how we're passing in the lcd_type's pixel_type? That's so the color enumeration will give us a color represented by a pixel in the appropriate format. All draw targets expose a pixel_type, which indicates the pixel format they natively support.
Now we can do:
typename lcd_type::pixel_type px = lcd_color::yellow;
Or even more simply, if you're one of the people who doesn't mind auto:
auto px = lcd_color::yellow;
Enter the Frame Buffer, Stage Left
Anyway, now that we've covered setting up your headers, drivers and colors we can configure the frame buffer. Keep in mind most of the time you won't be using a frame buffer at all. It's useful for certain things, but unless you're making games and things, you really don't want to "spend" the significant amount of memory it takes to carry one. Here, we do need one for the reasons I outlined earlier, so let's set it up:
using fb_type = large_bitmap<typename lcd_type::pixel_type>;
fb_type fb(lcd.dimensions(),1);
Like with lcd_type, we declare a type for our frame buffer as well. Pretty much any time you introduce a new type of draw target, you'll want to declare an alias for that type. The large_bitmap<> takes a pixel type as its single template argument. The pixel type dictates the in-memory layout of the bitmap, and also dictates the size of the memory needed to hold it. Monochrome bitmaps, for example, pack 8 pixels into one byte, but our lcd_type::pixel_type is 16-bit RGB, which requires 2 bytes per pixel, so it needs 16 times the memory of monochrome. Obviously we want color, and in this case, we want it to be the same color model and resolution as the display. That's why we used lcd_type::pixel_type above.
Once we've declared the type, we create an instance of that type, passing the desired dimensions, which are the same as our lcd's, as well as 1, which indicates how many lines are in each segment of the bitmap. Here, we use one line per segment, which requires 240 segments of 320x1x16bpp, meaning 640 bytes per segment. That should be easy to allocate on the heap, and indeed it works fine when we initialize it. If it didn't, it wouldn't throw an exception, because GFX is exception free. Instead, initialized() would return false, and any attempt to use it would return gfx_result::out_of_memory.
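If you want to check for that explicitly, it looks something like this sketch:

if(!fb.initialized()) {
    // allocation failed - GFX doesn't throw, so we check and bail here
    printf("frame buffer: out of memory\r\n");
    while(true); // hypothetical error handling - do whatever suits your app
}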
Anyway, once it's created, we just draw to fb instead of the lcd. When we're finished drawing, we send the whole fb to the lcd all at once with this code:

draw::bitmap(lcd,(srect16)lcd.bounds(),fb,fb.bounds());

There you can see we're drawing the entire fb onto lcd.
Note that we had to cast lcd's bounding rectangle to a signed version, srect16. Draw methods take signed values for their destination positioning so that it's possible to draw partially offscreen. However, it doesn't make sense for bounding rectangles to be signed, nor for the source rectangle to be signed. A simple cast fixes the "polarity mismatch" above.
You also might have noticed our destination comes first. It works like memcpy() in that way, but it also makes more sense, since all draw methods take a draw destination as their first argument, but not all draw methods take a draw source. Most don't, in fact.
From here on in the demo, we'll be drawing to fb.
Breaking Out the Crayons
...or pixels, in this case. Let's start with something simple:
// clear the frame buffer, then draw a checkerboard of 16x16 white squares
draw::filled_rectangle(fb,(srect16)fb.bounds(),lcd_color::black);
for(int y = 0; y < lcd.dimensions().height; y += 16) {
    for(int x = 0; x < lcd.dimensions().width; x += 16) {
        // skip every other cell to create the checkerboard pattern
        if(0 != ((x + y) % 32)) {
            draw::filled_rectangle(fb,
                srect16(
                    spoint16(x, y),
                    ssize16(16, 16)),
                lcd_color::white);
        }
    }
}
Here, we fill the entire frame buffer with lcd_color::black. Now you can see where our earlier color<> alias comes in handy. Then we loop through, and every other block of 16 pixels, we draw a white square that's 16x16.
That's a nice little pattern to show off alpha blending on top of, but we'll get there.
... And Doing Something Other than Eating Them
Let's make a random color. We'll do this by setting the individual channels of a pixel to random values.
rgba_pixel<32> px;
// each channel of a 32-bit RGBA pixel is 8 bits, so 0-255
px.channel<channel_name::R>(rand()%256);
px.channel<channel_name::G>(rand()%256);
px.channel<channel_name::B>(rand()%256);
// keep alpha between 164 and 255 so the color stays mostly opaque
px.channel<channel_name::A>((256-92)+(rand()%92));
There's another, slightly faster way to do this if we want to completely randomize every channel:
px.native_value = (unsigned int)rand();
Whatever you do, that gets you some random values for your color.
Notice we have that alpha channel, as indicated by channel_name::A. The fact that we do means that alpha blending is in play. Now when we use px, we're essentially drawing with a semi-transparent crayon, meaning the underlying colors will bleed through when we draw. The amount that bleeds through depends on the value of the alpha channel; higher values are more opaque. This works as long as the draw destination supports reading (is also a draw source, in other words). fb does support reading, but lcd does not. We're drawing to fb, so everything is copacetic.
Most of our drawing functions for primitives, like ellipse<>(), take the same arguments as filled_rectangle<>().
The only interesting thing in the demo code is that we're randomizing it:
draw::filled_ellipse(fb,
srect16(
rand()%lcd.dimensions().width,
rand()%lcd.dimensions().height,
rand()%lcd.dimensions().width,
rand()%lcd.dimensions().height),
px);
That will draw a filled ellipse of random location and bounds, with the random semi-transparent color that we created prior.
Since the other methods are the same, let's move on to something more interesting.
Several Valid Points Make An Argument Polygon
Polygons are described using paths, which we briefly touched on. Paths, again, are a series of points. As a rule, we want to declare our polygon points in a clockwise fashion. Usually, we'll make our path, and then we'll move the whole thing by offsetting it until it is in the correct location:
// random scale factors between 0.0 and 1.0
const float scalex = (rand()%101)/100.0;
const float scaley = (rand()%101)/100.0;
const uint16_t w = lcd.dimensions().width,
    h = lcd.dimensions().height;
// a clockwise triangle spanning the screen, scaled down
spoint16 path_points[] = {
    spoint16(0,scaley*h),
    spoint16(scalex*w/2,0),
    spoint16(scalex*w,scaley*h)
};
spath16 poly_path(3,path_points);
// move it to a random location, then draw it
poly_path.offset_inplace(rand()%w,rand()%h);
draw::filled_polygon(fb,poly_path,px);
Here, we're drawing a triangle that would take up the entire screen, except we're scaling it down by a random amount. We then use offset_inplace() with random amounts to position the triangle somewhere on the screen. Note that we probably should have allowed the offsets to go negative so the triangle could draw off the top and left edges, but for the demo, I wanted to keep things as simple as possible.
Reduce, Reuse and Recycle Your Code
You probably will want to create methods that do some sort of drawing for you. In the demo, we have draw_happy_face(), which, surprisingly, draws a happy face:
template<typename Destination>
void draw_happy_face(Destination& bmp,float alpha,const srect16& bounds)
Notice it's a template function. We didn't strictly need that for the demo. We could have declared it like this:

void draw_happy_face(large_bitmap<typename lcd_type::pixel_type>& bmp ...

However, if we did that, we couldn't pass lcd to it. It would only take one type of draw destination - a large bitmap with 16-bit RGB color. Declaring a template that takes a Destination and then taking Destination& destination as an argument allows your routine to work on any draw destination. Similarly, if you need to take a draw source, do the same thing with a Source parameter.
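As a sketch of the pattern (draw_crosshair is a hypothetical helper, not part of the demo):

// works on any draw destination: the lcd, a bitmap, a large_bitmap, etc.
template<typename Destination>
void draw_crosshair(Destination& destination, spoint16 center,
                    typename Destination::pixel_type color) {
    // two short lines crossing at center
    draw::line(destination, srect16(center.x - 5, center.y, center.x + 5, center.y), color);
    draw::line(destination, srect16(center.x, center.y - 5, center.x, center.y + 5), color);
}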
Wrapping Up
The demo code demonstrates the techniques outlined above. You'll note the frame rate of the demo is abysmal. That's the price of alpha blending: every pixel must be read, blended, and then rewritten, which leaves any bulk pixel-moving code out of play and falls back on the slowest possible way to both read and write a draw target. Without some sort of hardware acceleration, there's little to be done. It is possible to blend using SIMD instructions over a source, but not with this library, since all of that is very platform specific.
Despite the slowness of it, the concepts are there, and you'll note that if you remove the alpha blending, things speed up quite a bit.
Hopefully, this leaves you with a better idea of how to use GFX. Enjoy!
History
- 3rd June, 2021 - Initial submission