
Stateful Screen Rendering on Little Devices

How to make very responsive, flicker-free interactive screens without using a lot of memory
Making responsive, rich graphics applications on IoT devices can be challenging. Here are some techniques that make it possible.

Introduction

Bear with me, as the problem we're ultimately solving is quite involved. It's the house that Jack built, so even describing the issue takes some doing.

Espressif sort of backed me into a corner with their latest Arduino framework implementations and with limitations in the ESP-IDF on the ESP32 line of MCUs. Essentially, you can only read back from the display's frame buffer in very limited scenarios, and even then often with complications.

This means that things like anti-aliased TrueType fonts are not supported when drawing directly to the screen. The bottom line is that in order to alpha-blend or anti-alias drawing operations, you must draw to a bitmap and then spit that bitmap to the display.

This is fine for small drawing operations, but it creates issues when building an entire screen this way because you usually won't have enough RAM available to keep the entire screen in a bitmap.

However, what if we could store our drawing operations and then redraw parts of the screen as they change? We wouldn't need a bitmap for the entire frame buffer, because we could redraw any portion of the screen on demand. That means we can draw the screen in segments.

Doing things this way, you never need to read back from the frame buffer. Anti-aliasing and alpha-blending are supported because you'll always be drawing to an in-memory bitmap that covers a subset of the display area, and you can redraw the items underneath the area you're blending on demand.

By creating a widget/control library, you can solve these issues.

Prerequisites

I've used an ESP Display S3 Parallel w/ Touch by Makerfabs. You can use a different unit, but you'll have to update substantial portions of the code to match your display and touch driver.

You'll need Visual Studio Code with the PlatformIO extension installed.

Understanding this Mess

The big deal here is rendering the controls. Each control has an on_paint() callback that draws the control's visual elements to a draw destination in htcw_gfx with a particular clipping rectangle. The control can use this information to render itself. We'll be rendering to a bitmap.

Each screen handles its own rendering and contains a list of control<> pointers. You create controls and then register them with the screen, which stores them in z-order, back to front.

We keep track of areas of the screen<> that are dirty using dirty rectangles. A control<> has an invalidate() method that adds its bounding area to the dirty rectangle list. Any time a control's state changes, it will typically invalidate itself, forcing an eventual redraw of all or part of the control.
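
A control's property setters typically follow the same pattern as the bounds() setter you'll see later: change the state, then invalidate. Here's a rough sketch of that pattern; the text() setter and m_text member are illustrative, not the library's actual label code:

C++
// A sketch of the invalidate-on-change pattern (illustrative only; this is
// not the library's actual label implementation).
void text(const char* value) {
    m_text = value;      // update the control's state (hypothetical member)
    this->invalidate();  // add this control's bounds to the screen's
                         // dirty rectangle list so it's redrawn on update()
}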

As we render, we traverse the dirty rectangle list. For each dirty rectangle, we compute the size of the bitmap we need, constrained by the size of the available write buffer. That may be smaller than the dirty rectangle, so we render the dirty rectangle in sub-rectangles, top to bottom.
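
To get a concrete feel for the numbers, here's a worked example with made-up figures (they're not taken from the demo's configuration):

C++
// Hypothetical numbers: a 32 KB transfer buffer, a dirty rectangle
// 320 pixels wide, and a 16-bit RGB565 display (2 bytes per pixel).
constexpr unsigned buffer_size    = 32 * 1024;                    // bytes available per write buffer
constexpr unsigned bytes_per_line = 320 * 2;                      // width * bytes per pixel = 640
constexpr unsigned lines_per_pass = buffer_size / bytes_per_line; // 51 lines fit in one pass
// A dirty rectangle 240 lines tall is therefore rendered in 5 passes,
// top to bottom, with the final pass cropped to the remaining 36 lines.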

To render a sub-rectangle, we enumerate the screen's controls and find the ones that intersect it. For each of those, we create a translation window over the bitmap that shifts the drawing coordinates so that the control's draws always start at (0,0). We also compute a clipping rectangle that the control's on_paint() routine can use to determine which parts of itself need redrawing. When we've rendered all the controls that intersect the sub-rectangle, we send that sub-rectangle's bitmap to the display.
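
Here's that translation with hypothetical numbers run through the same rectangle math the render loop uses (assuming the htcw_gfx headers for srect16):

C++
// Hypothetical numbers, not the demo's; assumes the htcw_gfx headers.
srect16 bounds(25,25,224,124);   // the control's position on the screen
srect16 subrect(0,100,319,150);  // the sub-rectangle currently being rendered

// where the control lands inside the sub-rectangle's bitmap
// (its (0,0) maps to column 25, row -75; rows above the bitmap are clipped)
srect16 surface_rect = bounds.offset(-subrect.x1,-subrect.y1);   // (25,-75)-(224,24)

// the part of the control that actually needs repainting, in the
// control's own coordinates
srect16 surface_clip = bounds.crop(subrect)                      // (25,100)-(224,124)
                             .offset(-bounds.x1,-bounds.y1);     // (0,75)-(199,99)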

Note that this routine can use two buffers. If it does, each time it sends a bitmap to the display, it switches the active buffer it's drawing into. That way, DMA-capable systems can transfer in the background while the routine continues drawing, for maximum performance.
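
The flush itself is handled by a callback that the application supplies (you'll see it registered as uix_flush in the next section). Here's a rough sketch of the shape of such a callback; lcd_blit() stands in for whatever your panel driver actually provides, and the exact parameter types come from your screen<> instantiation, so treat this as an illustration rather than the demo's actual code:

C++
// lcd_blit() is a hypothetical stand-in for your panel driver's blit
// routine; it copies `pixels` to the rectangle (x1,y1)-(x2,y2) on the panel.
extern void lcd_blit(int x1,int y1,int x2,int y2,const void* pixels);

// A sketch of a flush callback for a 16-bit RGB565 screen. The screen hands
// us the sub-rectangle's top-left position on the display and the bitmap
// holding its freshly rendered pixels.
static void uix_flush(gfx::point16 location,
                      gfx::bitmap<gfx::rgb_pixel<16>>& bmp,
                      void* state) {
    const int x2 = location.x + bmp.dimensions().width - 1;
    const int y2 = location.y + bmp.dimensions().height - 1;
    // On a DMA-capable system, this would start an asynchronous transfer and
    // notify the screen when it completes so the write buffer can be reused.
    lcd_blit(location.x, location.y, x2, y2, bmp.begin());
}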

Using this Mess

If you've ever used the WinForms designer in .NET, the setup code for the controls will probably look somewhat familiar: it resembles the InitializeComponent() routine in designer-generated code. Basically, you create the control, set a bunch of control properties, and then add it to the controls collection. We'll be doing the same thing here. Unfortunately, we don't have a visual designer at this time, so it's all manual.

Let's take a look at the control and screen initialization:

C++
// set the screen
main_screen.background_color(color_t::white);
main_screen.on_flush_callback(uix_flush,nullptr);
main_screen.on_touch_callback(uix_touch,nullptr);

// set the label
test_label.bounds(srect16(spoint16(10,10),ssize16(200,60)));
test_label.text_color(color32_t::blue);
test_label.text_open_font(&text_font);
test_label.text_line_height(50);
test_label.text_justify(uix_justify::center);
test_label.round_ratio(NAN);
test_label.padding({8,8});
test_label.text("Hello");    
main_screen.register_control(test_label);

// set the button
test_button.bounds(srect16(spoint16(25,25),ssize16(200,100)));
// we'll do alpha blending, so
// set the opacity to 50%
auto bg = color32_t::light_blue;
bg.channelr<channel_name::A>(.5);
test_button.background_color(bg,true);
test_button.border_color(color32_t::black);
test_button.text_color(color32_t::black);
test_button.text_open_font(&text_font);
test_button.text_line_height(25);
test_button.text_justify(uix_justify::center);
test_button.round_ratio(NAN);
test_button.padding({8,8});
test_button.text("Released");
test_button.on_pressed_changed_callback(
    [](bool pressed,void* state) {
        test_button.text(pressed?"Pressed":"Released");
    });
main_screen.register_control(test_button);

There's nothing here that's too complicated; there's just a lot of it. The most complicated thing we do is alpha-blend the background of the button by setting the A/alpha channel to 50%.

In loop(), we simply call main_screen.update() to give it a chance to render and process touch.
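
A minimal sketch of what that looks like (the screen and control setup shown above would live in setup()):

C++
void loop() {
    // renders any dirty regions and polls/dispatches touch input
    main_screen.update();
}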

Coding this Mess

Now let's get to the fun stuff and dive into how it all works.

First, we'll cover the control framework in /lib/htcw_uix/include/uix_core.hpp:

C++
template<typename PixelType,typename PaletteType = gfx::palette<PixelType,PixelType>>
class control {
public:
    using type = control;
    using pixel_type = PixelType;
    using palette_type = PaletteType;
    using control_surface_type = control_surface<pixel_type,palette_type>;
private:        
    srect16 m_bounds;
    const PaletteType* m_palette;
    bool m_visible;
    invalidation_tracker& m_parent;
    control(const control& rhs)=delete;
    control& operator=(const control& rhs)=delete;
    
protected:
    control(invalidation_tracker& parent, const palette_type* palette = nullptr) : 
            m_bounds({0,0,49,24}),
            m_palette(palette),
            m_visible(true),
            m_parent(parent) {
        
    }
    void do_move_control(control& rhs) {
        m_bounds = rhs.m_bounds;
        m_palette = rhs.m_palette;
        m_visible = rhs.m_visible;
        m_parent = rhs.m_parent;
    }
public:
    control(control&& rhs) : m_parent(rhs.m_parent) {
        do_move_control(rhs);
    }
    control& operator=(control&& rhs) {
        do_move_control(rhs);
        return *this;
    }
    const palette_type* palette() const {return m_palette;}
    ssize16 dimensions() const {
        return m_bounds.dimensions();
    }
    srect16 bounds() const {
        return m_bounds;
    }
    virtual void bounds(const srect16& value) {
        if(m_visible) {
            m_parent.invalidate(m_bounds);
            m_parent.invalidate(value);
        }
        m_bounds = value;
    }
    virtual void on_paint(control_surface_type& destination,const srect16& clip) {
    }
    virtual void on_touch(size_t locations_size,const spoint16* locations) {
    };
    virtual void on_release() {
    };
    bool visible() const {
        return m_visible;
    }
    void visible(bool value) {
        if(value!=m_visible) {
            m_visible = value;
            this->invalidate();
        }
    }
    uix_result invalidate() {
        return m_parent.invalidate(m_bounds);
    }
    uix_result invalidate(const srect16& bounds) {
        srect16 b = bounds.offset(this->bounds().location());
        if(b.intersects(this->bounds())) {
            b=b.crop(this->bounds());
            return m_parent.invalidate(b);
        }
        return uix_result::success;
    }
};

You can see there are some basic functions for reporting touch, invalidating the control, and getting and setting properties, plus on_paint().

The template parameters could use some explaining. htcw_gfx is display agnostic and renders in the display's native pixel format. This may be 4-bit grayscale for black, white, and gray e-paper, or even a small palette for color e-paper. It could even be a 24-bit color display in some instances, but it will usually be 16-bit RGB565. The arguments passed to the template indicate the display's native format, and palette type if any. Whatever these arguments are dictates the format of the bitmap data generated by the screen's rendering process.
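
As a concrete example, for a 16-bit RGB565 display with no palette, instantiating a control might look something like the following; rgb_pixel<16> is htcw_gfx's RGB565 pixel type, and the palette parameter falls back to its default:

C++
// Illustrative aliases for an RGB565 display with no palette.
using pixel_t   = gfx::rgb_pixel<16>;   // the display's native pixel format
using control_t = control<pixel_t>;     // controls render RGB565 bitmap data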

The invalidation routines add dirty rectangles to the screen. One call simply adds the entire control window while the other invalidates a specific rectangle within the control.
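
For example, given some control instance (my_control here is just a placeholder), you can invalidate all of it or only a piece of it:

C++
// invalidate only the top-left 20x20 pixels of the control; the rectangle
// is in control-local coordinates and gets offset by the control's location
// and cropped to its bounds before being added to the dirty list
my_control.invalidate(srect16(0,0,19,19));

// invalidate the entire control
my_control.invalidate();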

Calling on_paint() from the render routine requires some special consideration. The callee/control expects its upper-left coordinates to begin at (0,0) and extend to the dimensions of the control. We accomplish this by remapping the draw operations, offsetting them by a specific amount.

Doing that requires creating a custom draw target in htcw_gfx. It's not actually complicated: you just need to implement a handful of methods, most of which are fairly straightforward. We'll do just that by creating a control_surface<> template class which remaps all draw operations onto a bitmap<>:

C++
template<typename PixelType,typename PaletteType = gfx::palette<PixelType,PixelType>>
class control_surface final {
public:
    using type = control_surface;
    using pixel_type = PixelType;
    using palette_type = PaletteType;
    using bitmap_type= gfx::bitmap<pixel_type,palette_type>;
    using caps = gfx::gfx_caps<false,false,false,false,false,true,false>;
private:
    bitmap_type& m_bitmap;
    srect16 m_rect;
    void do_move(control_surface& rhs) {
        m_bitmap = rhs.m_bitmap;
        m_rect = rhs.m_rect;
    }
    control_surface(const control_surface& rhs)=delete;
    control_surface& operator=(const control_surface& rhs)=delete;
public:
    control_surface(control_surface&& rhs) : m_bitmap(rhs.m_bitmap) {
        do_move(rhs);
    }
    control_surface& operator=(control_surface&& rhs) {
        do_move(rhs);
        return *this;
    }
    control_surface(bitmap_type& bmp,const srect16& rect) : m_bitmap(bmp) {
        m_rect = rect;
    }
    const palette_type* palette() const {
        return m_bitmap.palette();
    }
    size16 dimensions() const {
        return (size16)m_rect.dimensions();
    }
    rect16 bounds() const {
        return rect16(point16::zero(),dimensions());
    }
    gfx::gfx_result point(point16 location, pixel_type* out_pixel) const {
        location.offset_inplace(m_rect.x1,m_rect.y1);
        return m_bitmap.point(location,out_pixel);
    }
    gfx::gfx_result point(point16 location, pixel_type pixel) {
        spoint16 loc = ((spoint16)location).offset(m_rect.x1,m_rect.y1);
        return m_bitmap.point((point16)loc,pixel);
    }
    gfx::gfx_result fill(const rect16& bounds, pixel_type pixel) {
        if(bounds.intersects(this->dimensions().bounds())) {
            srect16 b = ((srect16)bounds);
            b=b.offset(m_rect.x1,m_rect.y1);
            if(b.intersects((srect16)m_bitmap.bounds())) {
                b=b.crop((srect16)m_bitmap.bounds());
                return m_bitmap.fill((rect16)b,pixel);
            }
        }
        return gfx::gfx_result::success;
    }
    gfx::gfx_result clear(const rect16& bounds) {
        return fill(bounds,pixel_type());
    }
};

It's actually not that complicated, except in fill(), where it has to crop. It just forwards offset coordinates to the underlying bitmap based on the rectangle you gave it.
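
To see the surface from the other side, here's a bare-bones custom control that overrides on_paint(). It only uses the control and control_surface members shown above; the solid_box name and its fill-with-the-default-pixel behavior are mine, purely for illustration:

C++
// An illustrative custom control: it fills whatever portion of itself the
// screen asks it to repaint with the pixel type's default (zero) value.
template<typename PixelType,typename PaletteType = gfx::palette<PixelType,PixelType>>
class solid_box : public control<PixelType,PaletteType> {
    using base_type = control<PixelType,PaletteType>;
public:
    using pixel_type = PixelType;
    using control_surface_type = typename base_type::control_surface_type;
    solid_box(invalidation_tracker& parent) : base_type(parent) {}
    virtual void on_paint(control_surface_type& destination,const srect16& clip) override {
        // clip is already in this control's local coordinates; constrain it
        // to the surface and fill it
        srect16 b = clip.crop((srect16)destination.bounds());
        destination.fill((rect16)b, pixel_type());
    }
};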

Now let's move on to the update() method of screen<>. We'll only be covering parts of it because it's a large class and a lot of it will be explained as part of what we cover. The file is /lib/htcw_uix/include/uix_screen.hpp:

C++
uix_result update(bool full = true) {
    uix_result res = update_impl();
    if(res!=uix_result::success) {
        return res;
    }
    while(full && m_it_dirties!=nullptr) {
        res = update_impl();
        if(res!=uix_result::success) {
            return res;
        }   
    }
    return uix_result::success;
}

You can see the real meat of this is update_impl(), which we call once unconditionally, and then repeatedly until m_it_dirties is null, which tells us the rendering process is complete.
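
When you pass false, update() makes only that single unconditional call, so each invocation renders at most one sub-rectangle. That can be handy if your loop has other time-sensitive work to interleave, for example (service_other_tasks() is just a placeholder):

C++
void loop() {
    // render at most one sub-rectangle per pass so other work stays responsive
    main_screen.update(false);
    service_other_tasks();   // hypothetical; whatever else your sketch does
}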

Basically, update_impl() behaves like a coroutine that handles touch processing and rendering. When it renders, it breaks the rendering process into sub-rectangles and renders one of them on each call. When it isn't in the middle of a rendering pass, it processes touch input. We'll get to the actual rendering now as we cover update_impl():

C++
uix_result update_impl() {
    // if not rendering, process touch
    if(m_it_dirties==nullptr&& m_on_touch_callback!=nullptr) {
        point16 locs[2];
        spoint16 slocs[2];
        size_t locs_size = sizeof(locs);
        m_on_touch_callback(locs,&locs_size,m_on_touch_callback_state);
        if(locs_size>0) {
            // if we currently have a touched control
            // forward all successive messages to that control
            // even if they're outside the control bounds.
            // that way we can do dragging if necessary.
            // this works like MS Windows.
            if(m_last_touched!=nullptr) {
                // offset the touch points to the control and then 
                // call on_touch for the control
                for(int i = 0;i<locs_size;++i) {
                    slocs[i].x = locs[i].x-(int16_t)m_last_touched->bounds().x1;
                    slocs[i].y = locs[i].y-(int16_t)m_last_touched->bounds().y1;
                }
                m_last_touched->on_touch(locs_size,slocs);
    
            } else {
                // loop through the controls in z-order back to front
                // find the last/front-most control whose bounds()
                // intersect the first touch point
                control_type* target = nullptr;
                for(control_type** ctl_it = m_controls.begin();ctl_it!=m_controls.end();++ctl_it) {
                    control_type* pctl = *ctl_it;
                    if(pctl->visible() && pctl->bounds().intersects((spoint16)locs[0])) {
                        target = pctl;
                    }
                }
                // if we found one make it the current control, offset the touch
                // points to the control and then call on_touch for the control
                if(target!=nullptr) {
                    m_last_touched = target;
                    for(int i = 0;i<locs_size;++i) {
                        slocs[i].x = locs[i].x-(int16_t)target->bounds().x1;
                        slocs[i].y = locs[i].y-(int16_t)target->bounds().y1;
                    }
                    target->on_touch(locs_size,slocs);
            
                }
            }
        } else {
            // released. if we have an active control let it know.
            if(m_last_touched!=nullptr) {
                m_last_touched->on_release();
                m_last_touched = nullptr;

            }
        }
    }
    // rendering process
    // note we skip this until we have a free buffer
    if(m_on_flush_callback!=nullptr && 
            m_flushing<(1+(m_buffer2!=nullptr)) && 
            m_dirty_rects.size()!=0) {
        if(m_it_dirties==nullptr) {
            // m_it_dirties is null when not rendering
            // so basically when it's null this is the first call
            // and we initialize some stuff
            m_it_dirties = m_dirty_rects.cbegin();
            size_t bmp_stride = bitmap_type::sizeof_buffer(size16(m_it_dirties->width(),1));
            m_bmp_lines = m_buffer_size/bmp_stride;
            if(bmp_stride>m_buffer_size) {
                return uix_result::out_of_memory;
            }
            m_bmp_y = 0;
        } else {
            // if we're past the current 
            // dirty rectangle bounds:
            if(m_bmp_y+m_it_dirties->y1+m_bmp_lines>m_it_dirties->y2) {
                // go to the next dirty rectangle
                ++m_it_dirties;
                if(m_it_dirties==m_dirty_rects.cend()) {
                    // if we're at the end, shut it down
                    // and clear all dirty rects
                    m_it_dirties = nullptr;
                    return validate_all();
                }
                // now we compute the bitmap stride (one line, in bytes)
                size_t bmp_stride = bitmap_type::sizeof_buffer(size16(m_it_dirties->width(),1));
                // now we figure out how many lines we can have in these
                // subrects based on the total memory we're working with
                m_bmp_lines = m_buffer_size/bmp_stride;
                // if we don't have enough space for at least one line,
                // error out
                if(bmp_stride>m_buffer_size) {
                    return uix_result::out_of_memory;
                }
                // start at the top of the dirty rectangle:
                m_bmp_y = 0;
            } else {
                // move down to the next subrect
                m_bmp_y+=m_bmp_lines;
            }
        }
        // create a subrect the same width as the dirty, and m_bmp_lines high
        // starting at m_bmp_y within the dirty rectangle
        srect16 subrect(m_it_dirties->x1,m_it_dirties->y1+m_bmp_y,m_it_dirties->x2, m_it_dirties->y1+m_bmp_lines+m_bmp_y-1);
        // make sure the subrect is cropped within the bounds
        // of the dirties. sometimes the last one overhangs.
        subrect=subrect.crop((srect16)*m_it_dirties);
        // create a bitmap for the subrect over the write buffer
        bitmap_type bmp((size16)subrect.dimensions(),m_write_buffer,m_palette);
        // fill it with the screen color
        bmp.fill(bmp.bounds(),m_background_color);
        // for each control
        for(control_type** ctl_it = m_controls.begin();ctl_it!=m_controls.end();++ctl_it) {
            control_type* pctl = *ctl_it;
            // if it's visible and intersects this subrect
            if(pctl->visible() && pctl->bounds().intersects(subrect)) {
                // create the offset surface rectangle for drawing
                srect16 surface_rect = pctl->bounds();
                surface_rect.offset_inplace(-subrect.x1,-subrect.y1);
                // create the clip rectangle for the control
                srect16 surface_clip = pctl->bounds().crop(subrect);
                surface_clip.offset_inplace(-pctl->bounds().x1,-pctl->bounds().y1);
                // create the control surface
                control_surface_type surface(bmp,surface_rect);
                // and paint
                pctl->on_paint(surface,surface_clip);
            }
        }
        // tell it we're flushing and run the callback
        ++m_flushing;
        m_on_flush_callback((point16)subrect.top_left(),bmp,m_on_flush_callback_state);
        // the above may return immediately before the 
        // transfer is complete. To take advantage of
        // this, rather than wait, we swap out to a
        // second buffer and continue drawing while
        // the transfer is in progress.
        switch_buffers();
    }
    return uix_result::success;
}

As you can see, I've commented this to outline the approach. Since it's much easier to look at the description of what the code is doing right next to the code itself, and since I described the process previously, there's not much else to cover here.

Happy coding!

History

  • 26th February, 2023 - Initial submission
  • 27th February, 2023 - Bug fix

License

This article, along with any associated source code and files, is licensed under The MIT License