Full Screen White VGA Signal with Bitluni ESP32 Library

Over two and a half years ago, I had the idea to repurpose a failed monitor into a color-controllable lighting panel. I established it was technically feasible to generate a solid color full screen VGA signal with a PIC, then I promptly got distracted by other projects and never dug into it. I still have the weirdly broken monitor and I still want a large diffuse light source, but now I have to concede I’m unlikely to dedicate the time to build my own custom solution. In the interest of expediency, I will fall back to leveraging someone else’s work. Specifically, Bitluni’s work to generate VGA signals with an ESP32.

Bitluni’s initial example has since been packaged into Bitluni’s ESP32Lib Arduino library, making the software side very easy. On the hardware side, I dug up one of my breadboards already populated with an ESP32 and plugged in the VGA cable I had cut apart and modified for my earlier VGAX experiment. Bitluni’s library is capable of 14-bit color with the help of a voltage-dividing resistor array, but I only cared about solid white and maybe a few other solid colors. The 3-bit color mode, which did not require an external resistor array, would suffice.

I loaded up Bitluni’s VGAHelloWorld example sketch and… nothing. After double-checking my wiring to verify it was as expected, I loaded up a few other sketches to see if anything else made a difference. I got a picture from the VGASprites example, though it had limited colors as it was a 14-bit color demo and I had only wired up 3-bit color. Simplifying code in that example step by step, I narrowed down the key difference to be the resolution used: VGAHelloWorld used MODE320x240 and VGASprites used MODE200x150. I changed VGAHelloWorld to MODE200x150 resolution, and I had a picture.

This was not entirely a surprise. The big old malfunctioning monitor had a native resolution of 2560×1600. People might want to display a lower resolution, but that’s still likely to be in the neighborhood of high-definition resolutions like 1920×1080. There was no real usage scenario for driving such a large panel with such low resolutions. The monitor’s status bar said it was displaying 800×600, but 200×150 is one-sixteenth of that. I’m not sure why this resolution, out of many available, is the one that worked.

I don’t think the problem is in Bitluni’s library; I think it’s just idiosyncrasies of this particular monitor. Since I resumed this abandoned project in the interest of expediency, I didn’t particularly care to chase down why. All I cared about was that I could display solid white, so resolution didn’t matter. But timing mattered, because VGAX output signal timing was slightly off and could not fill the entire screen. Thankfully Bitluni’s code worked well with this monitor’s “scale to fit screen” mode, expanding the measly 200×150 pixels to its full 2560×1600. An ESP32 is overkill for just generating a full screen white VGA signal, but it was the most expedient way for me to turn this monitor into a light source.

#include <ESP32Lib.h>

//pin configuration
const int redPin = 14;
const int greenPin = 19;
const int bluePin = 27;
const int hsyncPin = 32;
const int vsyncPin = 33;

//VGA Device
VGA3Bit vga;

void setup()
{
  //initializing vga at the specified pins
  vga.init(vga.MODE200x150, redPin, greenPin, bluePin, hsyncPin, vsyncPin); 

  vga.clear(vga.RGB(255,255,255));
}

void loop()
{
}

UPDATE: After I had finished this project, I found ESPVGAX: a VGA signal generator for the cheaper ESP8266. It only has 1-bit color depth, but that would have been sufficient for this. However, there seems to be a problem with timing, so it might not have worked for me anyway. If I have another simple VGA signal project, I’ll look into ESPVGAX in more detail.

Analog TV Tuning Effect with ESP_8_BIT_Composite

After addressing my backlog of issues with the ESP_8_BIT_Composite video output library, I felt that I have “eaten my vegetables” and earned some “have a dessert” fun time with my own library. On Twitter I saw that Emily Velasco had taken a programming bug and turned it into a feature, and I wanted to try taking that concept further.

When calling Adafruit GFX library’s drawBitmap() command, we have to pass in a pointer to bytes that make up the bitmap. Since that is only a buffer of raw bytes, we also have to tell drawBitmap() how to interpret those bytes by sending in dimensions for width and height. If we accidentally pass in a wrong width value, the resulting output would be garbled. If I had seen that behavior, I would have thought “Oops, my bad, let me fix the bug” and moved on. But not Emily, instead she saw a fun effect to play with.

This is pretty amazing: using the wrong width value messes up the stride length used to copy image data, and the result vaguely resembles tuning an analog TV that is just barely out of horizontal sync. Pushing the concept further, she added a vertical scrolling offset to emulate going out of vertical sync.
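Here is a minimal sketch of the idea, assuming a hypothetical 64×64 monochrome bitmap in bitmapData and an Adafruit GFX-derived display object named gfx; widthError and yOffset are the knobs for the effect:

// Hypothetical illustration: lie to drawBitmap() about the bitmap width to
// skew the copy stride, and shift Y to emulate losing vertical sync.
// 'gfx' and 'bitmapData' are placeholders, not part of either library.
const int16_t realWidth  = 64;   // actual bitmap width in pixels
const int16_t realHeight = 64;   // actual bitmap height in pixels

void drawTunedFrame(int16_t widthError, int16_t yOffset)
{
  gfx.fillScreen(0);
  // widthError != 0 garbles the stride; yOffset != 0 "rolls" the picture.
  gfx.drawBitmap(0, yOffset, bitmapData, realWidth + widthError, realHeight, 0xFF);
}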

However, applying the tuning effect to animations required an arduous image conversion workload and complex playback code. I was quite surprised when I learned this, as I had wrongly assumed she used the animated GIF support I had added to my library. In hindsight I should have remembered drawBitmap() was only monochrome and thus incompatible.

Hence this project: combine my animated GIF support with Emily’s analog TV tuning effect in order to avoid tedious image conversion and complex playback.

I started with my animated GIF example, which uses Larry Bank’s AnimatedGIF library to decode directly into ESP_8_BIT_Composite back buffer. For this tuning effect, I needed to decode animation frames into an intermediate buffer, which I could then selectively copy into ESP_8_BIT_Composite back buffer with the appropriate horizontal & vertical offsets to simulate tuning effect. Since I am bypassing drawBitmap() to copy memory myself, I switched from the Adafruit GFX API to the lower-level raw byte buffer API exposed by my library.
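A rough sketch of that copy loop, assuming a 256×240 intermediate buffer and an array of per-scanline pointers obtained from the library’s raw buffer API (the variable names here are illustrative, not the library’s actual identifiers):

// Copy a full 256x240 frame from one contiguous intermediate buffer into
// the back buffer's per-scanline pointers, with wrap-around offsets to
// simulate a TV drifting out of horizontal and vertical sync.
// Offsets are assumed to already be within 0..255 and 0..239.
const int FRAME_WIDTH = 256;
const int FRAME_HEIGHT = 240;

void copyWithOffsets(const uint8_t* intermediate, uint8_t** backBufferLines,
                     int xOffset, int yOffset)
{
  for (int y = 0; y < FRAME_HEIGHT; y++)
  {
    const uint8_t* src = intermediate + y * FRAME_WIDTH;
    uint8_t* dest = backBufferLines[(y + yOffset) % FRAME_HEIGHT];
    for (int x = 0; x < FRAME_WIDTH; x++)
    {
      dest[(x + xOffset) % FRAME_WIDTH] = src[x];
    }
  }
}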

For my library I allocated the frame buffer in 15 separate 4KB allocations, which was a tradeoff between “ability to fit in fragmented memory spaces” and “memory allocation overhead”. Dividing up buffer memory was possible because rendering is done line by line and it didn’t matter if one line was contiguous with the next in memory or not. However, this tuning effect will be copying data across line boundaries, so I had to allocate the intermediate buffer as one single block of memory.

My original example also asked the AnimatedGIF library to handle the wait time in between animation frames. However, that delay could vary within an animation, and I now have a user-interactive component. In order to remain responsive to knob movement faster than the animation frame rate, I took over frame timing in a non-blocking fashion. Now every run of loop() reads the potentiometer knob position and updates the horizontal/vertical offsets without having to wait for the next frame of the animation, resulting in more immediate feedback to the user.
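The loop ended up structured roughly like this; the pin number, the knob-to-offset mapping, and decodeNextGifFrame() are placeholders for this sketch:

// Non-blocking frame pacing: read the knob on every pass through loop(),
// but only advance the GIF when the current frame's delay has elapsed.
const int KNOB_PIN = 34;              // placeholder analog input pin
unsigned long nextFrameTime = 0;      // millis() timestamp for the next frame
int frameDelayMs = 100;               // updated from the GIF frame header
int xOffset = 0;
int yOffset = 0;

void loop()
{
  // Update tuning offsets immediately for responsive feedback.
  int knob = analogRead(KNOB_PIN);
  xOffset = map(knob, 0, 4095, 0, 255);

  // Advance the animation only when the current frame's time has elapsed.
  if (millis() >= nextFrameTime)
  {
    frameDelayMs = decodeNextGifFrame();  // placeholder: decode into intermediate buffer
    nextFrameTime = millis() + frameDelayMs;
  }

  copyWithOffsets(intermediate, backBufferLines, xOffset, yOffset); // from the sketch above
  videoOut.waitForFrame();
}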


Animated GIF Tuner Effect with Cat and Galactic Squid is publicly available on GitHub.

ESP_8_BIT_Composite Version 1.3.2

Over a year ago I released my first Arduino library, not knowing if anyone would care. The good news is that they do: people have been using ESP_8_BIT_Composite to drive composite video devices. The bad news is that they have been filing issues for me to fix. This backlog had piled up over several months and was long overdue for me to go in and get things fixed up.


Two of the issues were merely compiler warnings, but I should still address them to minimize noise. What was weird to me was that I didn’t see either of those warnings myself in the Arduino IDE. I had to switch over to using PlatformIO under Visual Studio Code, where I learned I could edit my platformio.ini file to add build_flags = […] to enable warnings of my choosing. Issue #24 was a printf() formatting issue that I couldn’t see until I added -Wformat, and issue #35 was invisible to me until I added -Wreturn-type.
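For reference, the relevant lines in platformio.ini looked something like this (the environment, platform, and board entries are just one example configuration):

; Example platformio.ini fragment: opt into specific compiler warnings.
[env:esp32dev]
platform = espressif32
board = esp32dev
framework = arduino
build_flags = -Wformat -Wreturn-type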

Since I was on the subject anyway, I executed a build with all warnings turned on (-Wall). This gave me far too many warnings to review. Not only did it slow compilation to a snail’s pace, most of the hits were outside my code. Of the items in my code, some appeared to be overzealous rules giving me false positives. But I did see a few valid complaints of unused variables (-Wunused-variable) and I removed them.


Issue #27 took a bit more work, mostly because I started out “knowing” things that were later proven to be wrong. I had added support for setRotation() and I tested it with some text drawn via the AdafruitGFX library. (This test code became my example project GFX_RotatedText) I didn’t explicitly test drawing rectangles because when I reviewed code for Adafruit_GFX::drawChar() I saw that they use writePixel() for text size 1 and fillRect() for text sizes greater than one. So when my rotated text sample code worked correctly, I inferred that meant fillRect() was correct as well.

That was wrong, and because I didn’t know it was wrong, I kept looking in the wrong places, not realizing that my coordinate transform math for fillRect() (and drawRect()) was fundamentally broken. These APIs pass in X/Y coordinates for the rectangle’s upper-left corner, and my mistake was forgetting that drawing commands are always executed in the original non-rotated orientation. When the rectangles are rotated, their upper-left corner is no longer the upper-left for the actual low-level drawing operations.

My incorrect foundation blinded me to the real problem, even though I saw failures across multiple test programs. Test programs evolved until one drew four rectangles every frame, one in each supported orientation, cycling through modifying one of four parameters in a one-second-long animation. Only then could I see a pattern in the error and realize my mistake. This test code became my new example project GFX_RotatedRect.
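For the record, here is one way to express the corrected corner transform; this is a sketch of the idea rather than a verbatim excerpt from the library, with NATIVE_WIDTH and NATIVE_HEIGHT standing in for the unrotated dimensions:

// Map a rectangle given in rotated coordinates (x, y, w, h) back to the
// native frame buffer orientation before drawing. Rotation follows the
// Adafruit GFX convention: 1 = 90, 2 = 180, 3 = 270 degrees.
static const int16_t NATIVE_WIDTH = 256;
static const int16_t NATIVE_HEIGHT = 240;

void transformRect(uint8_t rotation, int16_t &x, int16_t &y, int16_t &w, int16_t &h)
{
  int16_t t;
  switch (rotation) {
  case 1:
    t = x;
    x = NATIVE_WIDTH - y - h;  // rotated upper-left lands on the native right side
    y = t;
    t = w; w = h; h = t;       // width and height trade places
    break;
  case 2:
    x = NATIVE_WIDTH - x - w;
    y = NATIVE_HEIGHT - y - h;
    break;
  case 3:
    t = x;
    x = y;
    y = NATIVE_HEIGHT - t - w;
    t = w; w = h; h = t;
    break;
  }
}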


Finally, I had no luck with issue #23. I was not able to reproduce the compilation error myself and therefore I could not diagnose it. I reluctantly closed it out as “unable to reproduce” before tagging version 1.3.2 for release.

Window Shopping LovyanGFX

One part of having an open-source project is that anyone can offer a contribution for others to use in the future. Most contributions were help I was grateful to accept, such as people filling gaps in my Sawppy documentation. But occasionally, a proposed contribution comes unexpectedly out of left field and I need to do some homework before I can even understand what’s going on. This was the case for pull request #30 on my ESP_8_BIT_composite Arduino library for generating color composite video signals from an ESP32. The author “riraosan” said it merged LovyanGFX and my library, to which I thought “Uh… what’s that?”

A web search found https://github.com/lovyan03/LovyanGFX, a graphics library for embedded controllers, including the ESP32 but also many others that ESP_8_BIT_composite does not support. While the API mimics Adafruit GFX, this library adds features like sprite support and palette manipulation. It looks like a pretty nifty library! Based on the README of that repository, the author’s primary language is Japanese and they are a big fan of M5Stack modules. So in addition to its software technical merits, LovyanGFX has extra appeal to native Japanese speakers who are playing with M5Stack modules. Roughly two dozen display modules were listed, but I don’t think I have any of them on hand to play with LovyanGFX myself.

Given this information and riraosan’s Instagram post, I guess the goal was to add ESP_8_BIT composite video signal generation as another supported output display for LovyanGFX. So I started digging into how the library was architected to enable support for different displays. I found that each supported display unit has corresponding files in the src/lgfx/v1/panel subdirectory, each of which has a class that derives from the Panel_Device base class, which in turn implements the IPanel interface. If we want to add composite video output capability to this library, that’s the kind of code I expected to see. With this newfound knowledge, I returned to my pull request to see how it was handled. I saw nothing of what I expected: no IPanel implementation, no Panel_Device-derived class. That work is in the contributor’s fork of LovyanGFX. The pull request for me contains merely the minimal changes needed for ESP_8_BIT_composite to be used in that fork.

Since those changes are for a specialized usage independent of the main intention of my library, I’m not inclined to incorporate such changes. I suggested to riraosan that they fork the code and create a new LovyanGFX-focused library (removing AdafruitGFX support components) and it appears that will be the direction going forward. Whatever else happens, I now know about LovyanGFX and that knowledge would not have happened without a helpful contributor. I am thankful for that!

Problems Making ESP32 Hold GPIO While Asleep

I had several motivations for using an ESP32 for my next exercise. In addition to those outlined earlier, I also wanted to explore using these microcontrollers to control things, not just report measurements. In other words, I wanted to see if they can be output nodes as well as data input nodes. This should be a straightforward use of GPIO pins, except for another twist: I also want the ESP32 to be asleep most of the time to save power.

The ESP32 has several sleep modes available, and I decided to go straight for the most power-saving deep sleep as my first experiment. It was straightforward to call esp_deep_sleep() at the end of my program, and this was the easiest sleep mode because I don’t have to do much configuration or handle different cases of things that might happen during sleep. When an ESP32 wakes up from deep sleep, my program starts from the beginning as if it had just been powered up. This gives me a clean slate. I don’t have to worry about testing to see if a connection is still good and maybe reconnecting if not: I always have to start from scratch.

So what are the states of GPIO pins while an ESP32 is asleep? Reading the documentation, I thought I could command digital output pins to be held either high or low while the ESP32 was in deep sleep. However, my program calling gpio_deep_sleep_hold_en() didn’t actually hold output state like I thought it would. I think my program is missing a critical step somewhere along the line.
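For reference, this is roughly the sequence I understood the documentation to describe. It did not hold the pin for me, so treat it as a sketch of my attempt rather than a working recipe; the pin number and sleep duration are arbitrary examples:

#include "driver/gpio.h"
#include "esp_sleep.h"

#define OUTPUT_PIN GPIO_NUM_4   // example pin, pick one suitable for your board

void app_main(void)
{
    // Configure the pin and set the level we want held during sleep.
    gpio_reset_pin(OUTPUT_PIN);
    gpio_set_direction(OUTPUT_PIN, GPIO_MODE_OUTPUT);
    gpio_set_level(OUTPUT_PIN, 1);

    // Ask the ESP32 to latch the pin's current state...
    gpio_hold_en(OUTPUT_PIN);
    // ...and keep that latch engaged through deep sleep.
    gpio_deep_sleep_hold_en();

    // Deep sleep for 30 seconds, then restart from the top of app_main().
    esp_deep_sleep(30 * 1000000ULL);
}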

Some research later, I haven’t figured out what I am missing, but I have learned I’m not alone in getting confused. I found ESP-IDF issue #3370, which was resolved as a duplicate of ESP32 Arduino Core issue #2712. Even though it was marked as resolved, it is still getting traffic from people confused about why GPIO states aren’t held during sleep.

As a workaround, I can use an IO expander chip like the PCF8574, letting that chip hold output pin state high or low while the ESP32 is asleep. As a relatively simple chip, I expect the PCF8574 wouldn’t use a lot of power to do what it does, but it would still be an extra chip adding extra power draw. I intend to figure out ESP32 sleep mode GPIO at some point, but for now the project is moving on. Well, at least on the software side; the hardware side is taking a step back to the ESP8266.


[Source code for this project (flaws and all) is publicly available on GitHub]

Switching to ESP32 For Next Exercise

After deciding to move data processing off of the microcontroller, it would make sense to repeat my exercise with an even cheaper microcontroller. But there aren’t a lot of WiFi-capable microcontrollers cheaper than an ESP8266. So I looked at the associated decision to communicate via MQTT instead, because removing the requirement for an InfluxDB client library opened up potential for other development platforms.

I thought it’d be interesting to step up to the ESP8266’s big brother, the ESP32. I could still develop with the Arduino platform on an ESP32, but for the sake of practice I’m switching to Espressif’s ESP-IDF platform. There isn’t an InfluxDB client library for ESP-IDF, but it does have an MQTT library.

The ESP32 has more than one ADC channel, and they are more flexible than the single channel on board the ESP8266. However, that is not a motivation at the moment, as I don’t have an immediate use for that advantage. I thought it might be interesting to measure current as well as voltage, but given how noisy my amateur circuits have proven to be, I doubt I could build a circuit that can pick up the tiny voltage drop across a shunt resistor. Best to delegate that to a dedicated module designed by people who know what they are doing.

One reason I wanted to use an ESP32 is actually the development board. My Wemos D1 Mini clone board used a voltage regulator I could not identify, so I powered it with a separate MP1584EN buck converter module. In contrast, the ESP32 board I have on hand has a regulator clearly marked as an AMS1117. The datasheet for AMS1117 indicated a maximum input voltage of 15V. Since I’m powering my voltage monitor with a lead-acid battery array that has a maximum voltage of 14.4V, in theory I could connect it directly to the voltage input pin on this module.

In practice, connecting ~13V to this dev board gave me an audible pop, a visible spark, and a little cloud of smoke. Uh-oh. I disconnected power and took a closer look. The regulator now has a small crack in its case, surrounded by shiny plastic that had briefly turned liquid and re-solidified. I guess this particular regulator is not genuine AMS1117. It probably works fine converting 5V to 3.3V, but it definitely did not handle a maximum of 15V like real AMS1117 chips are expected to do.

Fortunately, ESP32 development boards are cheap, counterfeit regulators and all. So I chalked this up to lesson learned and pulled another board out of my stockpile. This time voltage regulation is handled by an external MP1584EN buck converter. I still want to play with an ESP32 for its digital output pins.

Arduino Library Versioning For ESP_8_BIT_Composite

I think adding setRotation() support to my ESP_8_BIT_Composite library was a good technical exercise, but I made a few mistakes on the administrative side. These are the kind of lessons I expected to learn when I decided to publish my project as an Arduino library, but they are nevertheless a bit embarrassing as these lessons are happening in public view.

The first error was not following semantic versioning rules. Adding support for setRotation() is an implementation of missing functionality; it did not involve any change in API surface area. The way I read versioning rules, the setRotation() update should have been an increase in patch version number from v1.2.0 to v1.2.1, not an increase in minor version from v1.2.0 to v1.3.0. I guess I thought it deserved the minor version change because I changed behavior… but by that rule every bug fix is a change in behavior. If every bug fix is a minor version change, then when would we ever increase the patch number? (Never, as far as I can tell.)

Unfortunately, since I’ve already made that mistake, I can’t go back, because that would violate another versioning rule: the numbers always increase and never decrease.

The next mistake was with a file library.properties in the repository, which describes my library for the Arduino Library Manager. I tagged and released v1.3.0 on GitHub but I didn’t update the version number in library.properties to match. With this oversight, the automated tools for Arduino library update didn’t pick up v1.3.0. To fix this, I updated library.properties to v1.3.1 and re-tagged and re-released everything as v1.3.1 on GitHub. Now v1.3.1 shows up as an updated version in a way v1.3.0 did not.

Screen Rotation Support for ESP_8_BIT_Composite Arduino Library

I’ve had my head buried in modern LED-illuminated digital panels, so it was a good change of pace to switch gears to old school CRTs for a bit. Several months have passed since I added animated GIF support to my ESP_8_BIT_Composite video out Arduino library for ESP32 microcontrollers. I opened up the discussion forum option for my GitHub repository and a few items have been raised. Sadly, I haven’t been able to fulfill the requests, which range from NTSC-J support (I don’t have a corresponding TV) to higher resolutions (I don’t know how). But one has just dropped in my lap, and it is something I can do.

Issue #21 was a request for the library to implement the Adafruit GFX capability to rotate display orientation. When I first looked at rotation, I had naively thought Adafruit GFX would handle it above the drawPixel() level and I wouldn’t need to write any logic for it. This turned out to be wrong: my code was expected to check rotation and alter coordinate space accordingly. I looked at the big CRT TV I had sitting on my workbench and decided I wasn’t going to sit that beast on its side, and then promptly forgot about it until now. Whoops.

Looking into Adafruit’s generic implementation of drawPixel(), I saw a code fragment that I could copy:

  int16_t t;
  switch (rotation) {
  case 1: // 90 degree rotation
    t = x;
    x = WIDTH - 1 - y;
    y = t;
    break;
  case 2: // 180 degree rotation
    x = WIDTH - 1 - x;
    y = HEIGHT - 1 - y;
    break;
  case 3: // 270 degree rotation
    t = x;
    x = y;
    y = HEIGHT - 1 - t;
    break;
  }

Putting this into my own drawPixel() was a pretty straightforward way to handle rotated orientations. But I had overridden several other methods for the sake of performance, and they needed to be adapted as well. I had drawFastVLine, drawFastHLine, and fillRect, each optimized for their specific scenario with minimal overhead. But now the meaning of a vertical or horizontal line has become ambiguous.

Looking over at what it would take to generalize the vertical or horizontal line drawing code, I realized they have become much like fillRect(). So instead of three different functions, I only need to make fillRect() rotation aware. Then my “fast vertical line” routine can call into fillRect() with a width of one, and similarly my “fast horizontal line” routine calls into fillRect() with a height of one. This invokes some extra computing overhead relative to before, but now the library is rotation aware and I have less code to maintain. A tradeoff I’m willing to make.
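The delegation itself is tiny. A sketch of the idea, using Adafruit GFX method signatures with the class qualifiers omitted:

// Vertical and horizontal lines become degenerate rectangles, so the
// rotation-aware fillRect() does all the real work.
void drawFastVLine(int16_t x, int16_t y, int16_t h, uint16_t color)
{
  fillRect(x, y, 1, h, color);  // a rectangle with a width of one
}

void drawFastHLine(int16_t x, int16_t y, int16_t w, uint16_t color)
{
  fillRect(x, y, w, 1, color);  // a rectangle with a height of one
}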

While testing behavior of this new code, I found that Adafruit GFX library uses different calls when rendering text. Text size of one uses drawPixel() for single-pixel manipulation. For text sizes larger than one, they switch to using fillRect() to draw more of the screen at a time. I wrote a program to print text at all four orientations, each at three different sizes, to exercise both code paths. It has been added to the collection of code examples as GFX_RotatedText.

Satisfied that my library now supports screen rotation, I published it as version 1.3.0. But that turned out to be incomplete, as I neglected to update the file library.properties.

Cat and Galactic Squid

Emily Velasco whipped up some cool test patterns to help me diagnose problems with my port of an AnimatedGIF Arduino library example, rendering to my ESP_8_BIT_composite color video out library. But that wasn’t where she first noticed a problem. That honor went to the new animated GIF she created upon my request for something nifty to demonstrate my library.

This started when I copied an example from the AnimatedGIF library for the port. After I added the code to copy between my double buffers to keep them consistent, I saw it was a short clip of Homer Simpson from The Simpsons TV show. While the legal department of Fox is unlikely to devote resources to prosecute authors of an Arduino library, I was not willing to take the risk. Another popular animated GIF is Nyan Cat, which I had used for an earlier project. But despite its online spread, there is actual legal ownership associated with the rainbow-pooping pop tart cat. Complete with lawsuits enforcing that right and, yes, an NFT. Bah.

I wanted to stay far away from any legal uncertainties. So I asked Emily if she would be willing to create something just for this demo as an alternate to Homer Simpson and Nyan Cat. For the inspirational subject, I suggested a picture she posted of her cat sleeping on her giant squid pillow.

A few messages back and forth later, Emily created Cat and Galactic Squid, complete with a backstory of an intergalactic adventuring duo.

Here they are on a seamlessly looping background, flying off to their next adventure. Emily has released this art under the CC BY-SA (Creative Commons Attribution-ShareAlike) 4.0 license. And I have happily incorporated it into the ESP_8_BIT_composite library as an example of how to show animated GIFs on an analog TV. When I showed the first draft, she noticed a visual artifact that I eventually diagnosed to missing X-axis offsets. After I fixed that, the animation played beautifully on my TV. Caveat: the title image of this post is hampered by the fact it’s hard to capture a CRT on camera.

Finding X-Offset Bug in AnimatedGIF Example

Thanks to a little debugging, I figured out my ESP_8_BIT_composite color video out Arduino library required a new optional feature to make my double-buffering implementation compatible with libraries that rely on a consistent buffer, such as AnimatedGIF. I was happy that my project, modified from one of the AnimatedGIF examples, was up and running. Then I swapped out its test image for other images, and it was immediately clear the job was not yet done. These test images were created by Emily Velasco and released under the Creative Commons Attribution-ShareAlike 4.0 license (CC BY-SA 4.0).

This image resulted in the flawed rendering visible as the title image of this post. Instead of numbers continuously counting upwards in the center of the screen, various numbers are rendered at the wrong places and not erased properly in following frames. Here is another test image to get more data.

Between the two test images and observing where they ended up on screen, I narrowed down the problem. Animated GIF files might only update part of the frame, and when that happens, the frame subset is to be rendered at an X/Y offset relative to the origin. The Y offset was accounted for correctly, but the X offset went unused, meaning delta frames were rendered against the left edge rather than at the correct offset. This problem was not in my library but inherited from the AnimatedGIF example, where it went unnoticed because the trademark-violating animated GIF used by that example didn’t have an X-axis offset. Once I understood the problem, I went digging into AnimatedGIF code, where I found the unused X-offset and added it into the example where it belonged. These test images now display correctly, but they’re not terribly interesting to look at. What we need is a cat with a galactic squid friend.
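For the record, the fix in the AnimatedGIF draw callback amounted to including pDraw->iX in the destination position, roughly like this sketch (buffer access, palette lookup, and transparency handling are simplified for illustration):

// AnimatedGIF line-draw callback: pDraw->y is the line within this frame
// fragment, pDraw->iX / pDraw->iY are the fragment's offset on the canvas.
extern uint8_t* getBackBufferLine(int y);  // illustrative accessor, not the library's actual API

void GIFDraw(GIFDRAW *pDraw)
{
  uint8_t *dest = getBackBufferLine(pDraw->iY + pDraw->y);
  dest += pDraw->iX;   // the offset the original example left out

  for (int x = 0; x < pDraw->iWidth; x++)
  {
    dest[x] = pDraw->pPixels[x];  // palette and transparency handling omitted for brevity
  }
}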

Animated GIF Decoder Library Exposed Problem With Double Buffering

Once I resolved all the problems I knew existed in version 1.0.0 of my ESP_8_BIT_composite color video out Arduino library, I started looking around for usage scenarios that would unveil other problems. In that respect, I can declare my next effort a success.

My train of thought started with ease of use. Sure, I provided an adaptation of Adafruit’s GFX library designed to make drawing graphics easy, but how could I make things even easier? What is the easiest way for someone to throw up a bit of colorful motion picture on screen to exercise my library? The answer came pretty quickly: I should demonstrate how to display an animated GIF on an old analog TV using my library.

This is a question I’ve contemplated before in the context of the Hackaday Supercon 2018 badge. Back then I decided against porting a GIF decoder and wrote my own run-length encoding instead. The primary reason was that I was short on time for that project and didn’t want to risk losing time debugging an unfamiliar library. Now I have more time and can afford the time to debug problems porting an unfamiliar library to a new platform. In fact, since the intent was to expose problems in my library, I fully expected to do some debugging!

I looked around online for an animated GIF decoder library written in C or C++ code with the intent of being easily portable to microcontrollers. Bonus if it has already been ported to some sort of Arduino support. That search led me to the AnimatedGIF library by Larry Bank / bitbank2. The way it was structured made input easy: I don’t have to fuss with file I/O or SPIFFS, I can feed it a byte array. The output was also well matched to my library, as the output callback renders the image one horizontal line at a time, a great match for the line array of ESP_8_BIT.

Looking through the list of examples, I picked ESP32_LEDMatrix_I2S as the most promising starting point for my test. I modified the output call from the LED matrix I2S interface to my Adafruit GFX based interface, which required only minor changes. On my TV I could almost see a picture, but it was mostly gibberish. As the animation progressed, I could see deltas getting rendered, but they were not matching up with their background.

After chasing a few dead ends, the key insight was noticing my noisy background of uninitialized memory was flipping between two distinct values. That was my reminder I’m performing double-buffering, where I swap between front and back buffers for every frame. AnimatedGIF is efficient about writing only the pixels changed from one frame to the next, but double buffering meant each set of deltas was written over not the previous frame, but two frames prior. No wonder I ended up with gibberish.

Aside: The gibberish amusingly worked in my favor for this title image. The AnimatedGIF example used a clip from The Simpsons, copyrighted material I wouldn’t want to use here. But since the image is nearly unrecognizable when drawn with my bug, I can probably get away with it.

The solution is to add code to keep the two buffers in sync. This way libraries minimizing drawing operations would be drawing against the background they expected instead of an outdated background. However, this would incur a memory copy operation which is a small performance penalty that would be wasted work for libraries that don’t need it. After all of my previous efforts to keep API surface area small, I finally surrendered and added a configuration flag copyAfterSwap. It defaults to false for fast performance, but setting it to true will enable the copy and allow using libraries like AnimatedGIF. It allowed me to run the AnimatedGIF example, but I ran into problems playing back other animated GIF files due to missing X-coordinate offsets in that example code.
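In a sketch, using the new flag looks like this; the constructor arguments (NTSC, 8 bits per pixel) and the begin()/waitForFrame() flow follow the library examples:

#include <ESP_8_BIT_GFX.h>

ESP_8_BIT_GFX videoOut(true /* NTSC */, 8 /* bits per pixel */);

void setup()
{
  // Keep front and back buffers in sync after every swap so libraries that
  // draw only deltas (like AnimatedGIF) see the frame they expect.
  videoOut.copyAfterSwap = true;
  videoOut.begin();
}

void loop()
{
  // ... decode and draw the next animation frame here ...
  videoOut.waitForFrame();
}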

TIL Some Video Equipment Support Both PAL and NTSC

Once I sorted out memory usage of my ESP_8_BIT_composite Arduino library, I had just one known issue left on the list. In fact, the very first one I filed: I don’t know if PAL video format is properly supported. When I pulled this color video signal generation code from the original ESP_8_BIT project, I worked to keep all the PAL support code intact. But I live in NTSC territory, how am I going to test PAL support?

This is where writing everything on GitHub paid off. Reading my predicament, [bootrino] passed along a tip that some video equipment sold in NTSC geographic regions also support PAL video, possibly as a menu option. I poked around the menu of the tube TV I had been using to develop my library, but didn’t see anything promising. For the sake of experimentation I switched my sketch into PAL mode just to see what happens. What I saw was a lot of noise with a bare ghost of the expected output, as my TV struggled to interpret the signal in a format it could almost but not quite understand.

I knew the old Sony KP-53S35 RPTV I helped disassemble was not one of these bilingual devices. When its signal processing board was taken apart, there was an interface card to host an NTSC decoder chip, strongly implying that support for PAL required a different interface card. It also implies newer video equipment has a better chance of multi-format support, as it would have been built in an age when manufacturing a single worldwide device is cheaper than manufacturing separate region-specific hardware. I dug into my hardware hoard looking for a relatively young piece of video hardware. Success came in the shape of a DLP video projector, the BenQ MS616ST.

I originally bought this projector as part of a PC-based retro arcade console with a few work colleagues, but that didn’t happen for reasons not important right now. What’s important is that I bought it for its VGA and HDMI computer interface ports so I didn’t know if it had composite video input until I pulled it out to examine its rear input panel. Not only does this video projector support composite video in both NTSC and PAL formats, it also had an information screen where it indicates whether NTSC or PAL format is active. This is important, because seeing the expected picture isn’t proof by itself. I needed the information screen to verify my library’s PAL mode was not accidentally sending a valid NTSC signal.

Further proof that I am verifying a different code path was that I saw a visual artifact at the bottom of the screen absent from NTSC mode. It looks like I inherited a PAL bug from ESP_8_BIT, where rossumur was working on some optimizations for this area but left it in a broken state. This artifact would have easily gone unnoticed on a tube TV as they tend to crop off the edges with overscan. However this projector does not perform overscan so everything is visible. Thankfully the bug is easy to fix by removing an errant if() statement that caused PAL blanking lines to be, well, not blank.

Thanks to this video projector fluent in both NTSC and PAL, I can now confidently state that my ESP_8_BIT_composite library supports both video formats. This closes the final known issue, which frees me to go out and find more problems!

[Code for this project is publicly available on GitHub]

Allocating Frame Buffer Memory 4KB At A Time

Getting insight into computational processing workload was not absolutely critical for version 1.0.0 of my ESP_8_BIT_composite Arduino library, but now that the first release is done, it was very important to get those tools up and running for the development toolbox. With a handle on speed, I turned my attention to the other constraint: memory. An ESP32 application only has about 380KB to work with, and it takes about 61KB to store a frame buffer for ESP_8_BIT. Adding double-buffering doubled that memory consumption, and I had actually half expected my second buffer allocation to fail. It didn’t, so I got double-buffering done, but how close are we skating to the edge here?

Fortunately, I did not have to develop my own tool here to gain insight into memory allocation; the ESP32 SDK already has one in the form of heap_caps_print_heap_info(). For my purposes, I called it with the MALLOC_CAP_8BIT flag because pixels are accessed at the single byte (8-bit) level. Here is the memory output running my test sketch, before I allocated the double buffers, with the blocks that are about to change highlighted in red:

Heap summary for capabilities 0x00000004:
  At 0x3ffbdb28 len 52 free 4 allocated 0 min_free 4
    largest_free_block 4 alloc_blocks 0 free_blocks 1 total_blocks 1
  At 0x3ffb8000 len 6688 free 5872 allocated 688 min_free 5872
    largest_free_block 5872 alloc_blocks 5 free_blocks 1 total_blocks 6
  At 0x3ffb0000 len 25480 free 17172 allocated 8228 min_free 17172
    largest_free_block 17172 alloc_blocks 2 free_blocks 1 total_blocks 3
  At 0x3ffae6e0 len 6192 free 6092 allocated 36 min_free 6092
    largest_free_block 6092 alloc_blocks 1 free_blocks 1 total_blocks 2
  At 0x3ffaff10 len 240 free 0 allocated 128 min_free 0
    largest_free_block 0 alloc_blocks 5 free_blocks 0 total_blocks 5
  At 0x3ffb6388 len 7288 free 0 allocated 6784 min_free 0
    largest_free_block 0 alloc_blocks 29 free_blocks 1 total_blocks 30
  At 0x3ffb9a20 len 16648 free 5784 allocated 10208 min_free 284
    largest_free_block 4980 alloc_blocks 37 free_blocks 5 total_blocks 42
  At 0x3ffc1f78 len 123016 free 122968 allocated 0 min_free 122968
    largest_free_block 122968 alloc_blocks 0 free_blocks 1 total_blocks 1
  At 0x3ffe0440 len 15072 free 15024 allocated 0 min_free 15024
    largest_free_block 15024 alloc_blocks 0 free_blocks 1 total_blocks 1
  At 0x3ffe4350 len 113840 free 113792 allocated 0 min_free 113792
    largest_free_block 113792 alloc_blocks 0 free_blocks 1 total_blocks 1
  Totals:
    free 286708 allocated 26072 min_free 281208 largest_free_block 122968

I was surprised at how fragmented the memory space already was even before I started allocating memory in my own code. There are ten blocks of available memory, only two of which are large enough to accommodate an allocation for 60KB. Here is the memory picture after I allocated the two 60KB frame buffers (and two line arrays, one for each frame buffer), with the changed sections highlighted in red:

Heap summary for capabilities 0x00000004:
  At 0x3ffbdb28 len 52 free 4 allocated 0 min_free 4
    largest_free_block 4 alloc_blocks 0 free_blocks 1 total_blocks 1
  At 0x3ffb8000 len 6688 free 3920 allocated 2608 min_free 3824
    largest_free_block 3920 alloc_blocks 7 free_blocks 1 total_blocks 8
  At 0x3ffb0000 len 25480 free 17172 allocated 8228 min_free 17172
    largest_free_block 17172 alloc_blocks 2 free_blocks 1 total_blocks 3
  At 0x3ffae6e0 len 6192 free 6092 allocated 36 min_free 6092
    largest_free_block 6092 alloc_blocks 1 free_blocks 1 total_blocks 2
  At 0x3ffaff10 len 240 free 0 allocated 128 min_free 0
    largest_free_block 0 alloc_blocks 5 free_blocks 0 total_blocks 5
  At 0x3ffb6388 len 7288 free 0 allocated 6784 min_free 0
    largest_free_block 0 alloc_blocks 29 free_blocks 1 total_blocks 30
  At 0x3ffb9a20 len 16648 free 5784 allocated 10208 min_free 284
    largest_free_block 4980 alloc_blocks 37 free_blocks 5 total_blocks 42
  At 0x3ffc1f78 len 123016 free 56 allocated 122880 min_free 56
    largest_free_block 56 alloc_blocks 2 free_blocks 1 total_blocks 3
  At 0x3ffe0440 len 15072 free 15024 allocated 0 min_free 15024
    largest_free_block 15024 alloc_blocks 0 free_blocks 1 total_blocks 1
  At 0x3ffe4350 len 113840 free 113792 allocated 0 min_free 113792
    largest_free_block 113792 alloc_blocks 0 free_blocks 1 total_blocks 1
  Totals:
    free 161844 allocated 150872 min_free 156248 largest_free_block 113792

The first big block, which previously had 122,968 bytes available, became the home of both 60KB buffers, leaving only 56 bytes. That is a very tight fit! A smaller block, which previously had 5,872 bytes free, now has 3,920 bytes free, indicating that’s where the line arrays ended up. A little time with a calculator and these numbers arrived at 16 bytes of overhead per memory allocation.

This is good information to inform some decisions. I had originally planned to give the developer a way to manage their own memory, but I changed my mind on that one just as I did for double buffering and performance metrics. In the interest of keeping API simple, I’ll continue handling the allocation for typical usage and trust that advanced users know how to take my code and tailor it for their specific requirements.

The ESP_8_BIT line array architecture allows us to split the raw frame buffer into smaller pieces: not just a single 60KB allocation as I have done so far, but any scheme all the way down to allocating 240 horizontal lines individually at 256 bytes each. That would allow us to make optimal use of small blocks of available memory. But doing 240 allocations instead of 1 for each of two buffers means 239 additional allocations * 16 bytes of overhead * 2 buffers = 7,648 extra bytes of overhead. That’s too steep a price for flexibility.

As a compromise, I will allocate the frame buffer in 4-kilobyte chunks. These will fit in seven out of ten available blocks of memory, an improvement from just two. Each frame would consist of 15 chunks. This works out to an extra 14 allocations * 16 bytes of overhead * 2 buffers = 448 bytes of overhead. This is a far more palatable price for flexibility. Here are the results with the frame buffers allocated in 4KB chunks, again with changed blocks in red:

Heap summary for capabilities 0x00000004:
  At 0x3ffbdb28 len 52 free 4 allocated 0 min_free 4
    largest_free_block 4 alloc_blocks 0 free_blocks 1 total_blocks 1
  At 0x3ffb8000 len 6688 free 784 allocated 5744 min_free 784
    largest_free_block 784 alloc_blocks 7 free_blocks 1 total_blocks 8
  At 0x3ffb0000 len 25480 free 724 allocated 24612 min_free 724
    largest_free_block 724 alloc_blocks 6 free_blocks 1 total_blocks 7
  At 0x3ffae6e0 len 6192 free 1004 allocated 5092 min_free 1004
    largest_free_block 1004 alloc_blocks 3 free_blocks 1 total_blocks 4
  At 0x3ffaff10 len 240 free 0 allocated 128 min_free 0
    largest_free_block 0 alloc_blocks 5 free_blocks 0 total_blocks 5
  At 0x3ffb6388 len 7288 free 0 allocated 6776 min_free 0
    largest_free_block 0 alloc_blocks 29 free_blocks 1 total_blocks 30
  At 0x3ffb9a20 len 16648 free 1672 allocated 14304 min_free 264
    largest_free_block 868 alloc_blocks 38 free_blocks 5 total_blocks 43
  At 0x3ffc1f78 len 123016 free 28392 allocated 94208 min_free 28392
    largest_free_block 28392 alloc_blocks 23 free_blocks 1 total_blocks 24
  At 0x3ffe0440 len 15072 free 15024 allocated 0 min_free 15024
    largest_free_block 15024 alloc_blocks 0 free_blocks 1 total_blocks 1
  At 0x3ffe4350 len 113840 free 113792 allocated 0 min_free 113792
    largest_free_block 113792 alloc_blocks 0 free_blocks 1 total_blocks 1
  Totals:
    free 161396 allocated 150864 min_free 159988 largest_free_block 113792

Instead of almost entirely consuming the block with 122,968 bytes leaving just 56 bytes, the two frame buffers are now distributed among smaller blocks, leaving 28,392 contiguous bytes free in that big block. And we still have another big block free with 113,792 bytes to accommodate large allocations.
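Expressed as code, the allocation scheme looks roughly like this; a sketch of the idea rather than the library’s exact implementation:

#include <stdlib.h>

// Allocate one 256x240 frame buffer as 15 chunks of 4KB each, then build a
// 240-entry array of line pointers so the rest of the code stays oblivious
// to where each line physically lives.
const int LINE_BYTES = 256;
const int LINES_PER_FRAME = 240;
const int CHUNK_BYTES = 4096;                    // holds 16 lines of 256 bytes
const int LINES_PER_CHUNK = CHUNK_BYTES / LINE_BYTES;
const int CHUNKS_PER_FRAME = LINES_PER_FRAME / LINES_PER_CHUNK;  // 15

uint8_t** allocateFrameLines()
{
  uint8_t** lines = (uint8_t**)malloc(LINES_PER_FRAME * sizeof(uint8_t*));
  if (NULL == lines) return NULL;

  for (int chunk = 0; chunk < CHUNKS_PER_FRAME; chunk++)
  {
    uint8_t* block = (uint8_t*)malloc(CHUNK_BYTES);
    if (NULL == block) return NULL;   // cleanup on failure omitted for brevity

    for (int line = 0; line < LINES_PER_CHUNK; line++)
    {
      lines[chunk * LINES_PER_CHUNK + line] = block + line * LINE_BYTES;
    }
  }
  return lines;
}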

Looking at this data, I could also see allocating in smaller chunks would have led to diminishing returns. Allocating in 2KB chunks would have doubled the overhead but not improved utilization. Dropping to 1KB would double the overhead again, and only open up one additional block of memory for use. Therefore allocating in 4KB chunks is indeed the best compromise, assuming my ESP32 memory map is representative of user scenarios. Satisfied with this arrangement, I proceeded to work on my first and last bug of version 1.0.0: PAL support.

[Code for this project is publicly available on GitHub]

Lightweight Performance Metrics Have Caveats

Before I implemented double-buffering for my ESP_8_BIT_composite Arduino library, the only way to know we’re overloaded was to see visual artifacts on screen. After I implemented double-buffering, when we’re overloaded we’ll see the same data shown for two or more frames because the back buffer wasn’t ready to be swapped. Binary good/no-good feedback is better than nothing, but it would be frustrating to work with and I knew I could do better. I wanted to collect some performance metrics a developer can use to know how close they’re running to the edge before going over.

This is another feature I had originally planned as some type of configurable data. My predecessor ESP_8_BIT handled it as a compile-time flag. But just as I decided to make double-buffering run all the time in the interest of keeping the Arduino API easy to use, I’ve decided to collect performance metrics all the time. The compromise is that I only do so for users of the Adafruit GFX API, who have already chosen ease of use over maximum raw performance. The people who use the raw frame buffer API will not take the performance hit, and if they want performance metrics they can copy what I’ve done and tailor it to their application.

The key counter underlying my performance measurement code goes directly down to a feature of the Tensilica CPU. CCount, which I assume to mean cycle count, is incremented at every clock cycle. When the CPU is running at full speed of 240MHz, it increments by 240 million within each second. This is great, but the fact it is a 32-bit unsigned integer limits its scope, because that means the count will overflow every 2^32 / 240,000,000 = 17.895 seconds.

I started thinking of ways to keep a 64-bit performance counter in sync with the raw CCount, but in the interest of keeping things simple I abandoned that idea. I will track data through each of these ~18 second periods and, as soon as CCount overflows, I’ll throw it all out and start a new session. This will result in some loss of performance data but it eliminates a ton of bookkeeping overhead. Every time I notice an overflow, statistics from the session are output to the logging INFO level. The user can also query the running percentage of the session at any time, or explicitly terminate a session and start a new one for the purpose of isolating different code.

The percentage reported is the ratio of clock cycles spent in waitForFrame() relative to the amount of time between calls. If the drawing loop does no work, like this:

void loop() {
  videoOut.waitForFrame();
}

Then 100% of the time is spent waiting. This is unrealistic, because a loop that does nothing is not useful. For realistic drawing loops that do more work, the percentage will be lower. This number tells us roughly how much margin we have to spare to take on more work. However, “35% wait time” does not mean 35% CPU free, because other work happens while we wait. For example, the composite video signal generation ISR is constantly running, whether we are drawing or waiting. Actual free CPU time will be somewhere lower than this reported wait percentage.

The way this percentage is reported may be unexpected, as it is an integer in the range from 0 to 10000, where each unit is one hundredth of a percent. The reason I did this is that the floating-point unit on an ESP32 imposes its own overhead, which I wanted to avoid in my library code. If the user wants to divide by 100 for a human-friendly percentage value, that is their choice to accept the floating-point performance overhead. I just didn’t want to force it on every user of my library.
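The bookkeeping boils down to watching for the 32-bit counter wrapping around. A simplified sketch, assuming the Xtensa HAL’s xthal_get_ccount() as the way to read CCount:

#include <xtensa/hal.h>   // for xthal_get_ccount()

static uint32_t sessionStart = 0;   // CCount value at the start of this session
static uint32_t waitedCycles = 0;   // cycles spent inside waitForFrame() this session

void startSession()
{
  sessionStart = xthal_get_ccount();
  waitedCycles = 0;
}

// Called with CCount sampled just before and just after the wait.
void recordWait(uint32_t waitStart, uint32_t waitEnd)
{
  if (waitEnd < waitStart)
  {
    // CCount wrapped around (~every 17.9 seconds at 240MHz):
    // throw this session away and start a fresh one.
    startSession();
    return;
  }
  waitedCycles += waitEnd - waitStart;
}

// Wait percentage in hundredths of a percent (0..10000), no floating point.
uint32_t waitPercentSoFar()
{
  uint32_t elapsed = xthal_get_ccount() - sessionStart;
  if (0 == elapsed) return 0;
  return (uint32_t)(((uint64_t)waitedCycles * 10000) / elapsed);
}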

Lastly, the session statistics include frames rendered & missed, and there is an overflow concern for those values as well. The statistics will be nonsensical in the ~18 second session window where either of them overflow, though they’ll recover by the following session. Since these are unsigned 32-bit values (uint32_t) they will overflow at 2^32 frames. At 60 frames per second, that’s a loss of ~18 seconds of data once every 2.3 years. I decided not to worry about it and turn my attention to memory consumption instead.

[Code for this project is publicly available on GitHub]

Double Buffering Coordinated via TaskNotify

Eliminating work done for pixels that will never be seen is always a good change for efficiency. The next item on the to-do list is to work on pixels that will be seen… but we don’t want to see them until they’re ready. Version 1.0.0 of the ESP_8_BIT_composite color video out library used only a single buffer, where code is drawing to the buffer at the same time the video signal generation code is reading from it. When those two separate pieces of code overlap, we get visual artifacts on screen ranging from incomplete shapes to annoying flickers.

The classic solution to this is double-buffering, which the predecessor ESP_8_BIT did not do. I hypothesize there were two reasons for this: #1 emulator memory requirements did not leave enough for a second buffer, and #2 emulators sent their display data in horizontal line order, managing to “race ahead” of the video scan line and avoid artifacts. But now both of those reasons are gone. #1 no longer applies because the emulators have been cut out, freeing memory. And we lost #2 because Adafruit GFX default drawing tends to work in vertical lines, orthogonal to the scan line, so it can no longer “race ahead” of it, resulting in visual artifacts. Thus we need two buffers: a back buffer for the Adafruit GFX code to draw on, and a front buffer for the video signal generation code to read from. At the end of each NTSC frame, I have an opportunity to swap the buffers. Doing it at that point ensures we’ll never try to show a partially drawn frame.

I had originally planned to make double-buffering an optional configurable feature. But once I saw how much of an improvement this was, I decided everyone will get it all of the time. In the spirit of Arduino library style guide recommendations, I’m keeping the recommended code path easy to use. For simple Arduino apps the memory pressure would not be a problem on an ESP32. If someone wants to return to single buffer for memory needs, or maybe even as a deliberate artistic decision to have flickers, they can take my code and create their own variant.

Knowing when to swap the buffer was easy: video_isr() had a conveniently commented section // frame is done. At that point I can swap the front and back buffers if the back buffer is flagged as ready to go. My problem was that I didn’t know how to signal the drawing code that it has a new back buffer and can start drawing the next frame. The existing video_sync() (which I use for my waitForFrame() API) forecasts the amount of time to render a frame and uses vTaskDelay(), which I am somewhat suspicious of. FreeRTOS documentation has the disclaimer that vTaskDelay() offers no guarantee that it will resume at the specified time. The synchronization was thus inferred rather than explicit, and I wanted something that ties the two pieces of code more concretely together. My research eventually led to vTaskNotifyGiveFromISR(), which I can use in video_isr() to signal its counterpart ulTaskNotifyTake(), used in a replacement implementation of video_sync(). I anticipate this will prove to be a more reliable way for the application code to know it can start working on the next frame. But how much time does it have to spare between frames? That’s the next project: some performance metrics.
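For reference, the notification handshake between the ISR and the application task boils down to something like this sketch; it is a concept illustration rather than a verbatim excerpt from the library:

#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "esp_attr.h"

// Handle of the application task blocked inside waitForFrame().
static TaskHandle_t drawingTask = NULL;

// Called from the video ISR right after "// frame is done": swap buffers if
// the back buffer is ready, then wake whoever is waiting for a frame.
void IRAM_ATTR notifyFrameComplete()
{
  if (NULL == drawingTask) return;   // nobody is waiting yet

  BaseType_t higherPriorityWoken = pdFALSE;
  vTaskNotifyGiveFromISR(drawingTask, &higherPriorityWoken);
  if (higherPriorityWoken) portYIELD_FROM_ISR();
}

// Replacement for the vTaskDelay()-based video_sync(): block until the ISR
// explicitly says a new frame has begun.
void waitForFrameNotification()
{
  drawingTask = xTaskGetCurrentTaskHandle();
  ulTaskNotifyTake(pdTRUE /* clear count on exit */, portMAX_DELAY);
}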

[Code for this project is publicly available on GitHub]

The Fastest Pixels Are Those We Never Draw

It’s always good to have someone else look over your work; they find things you miss. When Emily Velasco started writing code to run on my ESP_8_BIT_composite library, her experiment quickly ran into flickering problems with large circles. But that’s not as embarrassing as another problem, which triggered an ESP32 Core Panic system reset.

When I started implementing a drawing API, I specified X and Y coordinates as unsigned integers. With a frame buffer 256 pixels wide and 240 pixels tall, it was a great fit for 8-bit unsigned integers. For input verification, I added a check to make sure Y stayed below 240 and left X alone, as any 8-bit value would be valid by definition.

When I put Adafruit’s GFX library on top of this code, I had to implement a function with the same signature as Adafruit used. The X and Y coordinates are now 16-bit numbers, so I added a check to make sure X isn’t too large either. But these aren’t just 16-bit numbers, they are int16_t signed integers. Meaning coordinate values can be negative, and I forgot to check that. Negative coordinate values would step outside the frame buffer memory, triggering an access violation, hence the ESP32 Core Panic and system reset.

I was surprised to learn the Adafruit GFX default implementation did not have any code to enforce screen coordinate limits, or if it did, it certainly didn’t kick in before my drawPixel() override saw the values. My first instinct was to clamp X and Y coordinate values to the valid range: if X was too large, I treated it as 255; if it was negative, I treated it as zero. Y was likewise clamped between 0 and 239 inclusive. In my overrides of drawFastHLine and drawFastVLine, I also wrote code to gracefully handle situations where the width or height was negative, swapping coordinates around so they remained valid commands. I also used the X and Y clamping functions here to handle lines that were partially on screen.

This code trying to gracefully handle a wide combination of inputs added complexity, which added bugs. One of them Emily found: a circle on the left or right edge of the screen would see its off-screen portion wrap around to the opposite edge of the screen. This bug in X coordinate clamping wasn’t too hard to chase down, but I decided the fact it even existed was silly. This is version 1.0; I can dictate the behavior I support or don’t support. So in the interest of keeping my code fast and lightweight, I ripped out all of that “plays nice” code.

A height or a width is negative? Forget graceful swapping, I’m just not going to draw. Something is completely off screen? Forget clamping to screen limits, stuff off-screen is just not going to get drawn. Lines that are partially on screen still need to be gracefully handled via clamping, but I discarded all the rest. Simpler code leaves fewer places for bugs to hide. It is also far faster, because the fastest pixels are those we never draw. These optimizations complete the easiest updates to make on individual buffers; the next improvement comes from using two buffers.
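The end result for drawPixel() is little more than a rejection test, roughly like this sketch:

// Native frame buffer dimensions.
static const int16_t SCREEN_WIDTH = 256;
static const int16_t SCREEN_HEIGHT = 240;

// 'lines' is the per-scanline pointer array of the frame buffer.
void drawPixelSketch(uint8_t** lines, int16_t x, int16_t y, uint8_t color)
{
  // Off screen? Don't clamp, don't wrap, just don't draw.
  if (x < 0 || x >= SCREEN_WIDTH || y < 0 || y >= SCREEN_HEIGHT)
  {
    return;
  }
  lines[y][x] = color;
}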

[Code for this project is publicly available on GitHub]

Overriding Adafruit GFX HLine/VLine Defaults for Performance

I had a lot of fun building a color picker for 256 colors available in the RGB332 color space, gratuitous swoopy 3D animation and all. But at the end of the day it is a tool in service of the ESP_8_BIT_composite video out library. Which has its own to-do list, and I should get to work.

The most obvious work item is to override some Adafruit GFX default implementations, starting with the ones explicitly recommended in comments. I’ve already overridden fillScreen() for blanking the screen on every frame, but there are more. The biggest potential gain is the degenerate horizontal-line drawing method drawFastHLine() because it is a great fit for ESP_8_BIT, whose frame buffer is organized as a list of horizontal lines. This means drawing a horizontal line is a single memset() which I expect to be extremely fast. In contrast, vertical lines via drawFastVLine() would still involve a loop iterating over the list of horizontal lines and won’t be as fast. However, overriding it should still gain benefit by avoiding repetitious work like validating shared parameters.
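Here is a sketch of why horizontal lines are such a good fit: with the frame buffer organized as an array of line pointers, a horizontal line is a single memset() into one line, while a vertical line has to visit every line it crosses (clamping omitted to keep the comparison clear, and the variable names are illustrative):

#include <string.h>

// 'lines' is the per-scanline pointer array of the 256x240 frame buffer.
void fastHLineSketch(uint8_t** lines, int16_t x, int16_t y, int16_t w, uint8_t color)
{
  // A horizontal line is one contiguous run of bytes within a single line.
  memset(&lines[y][x], color, w);
}

void fastVLineSketch(uint8_t** lines, int16_t x, int16_t y, int16_t h, uint8_t color)
{
  // A vertical line touches a different scan line for every pixel.
  for (int16_t i = 0; i < h; i++)
  {
    lines[y + i][x] = color;
  }
}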

Given those facts, it is unfortunate that Adafruit GFX default implementations tend to use VLine instead of the HLine that would be faster in my case. Some default implementations like fillRect() were easy to switch to HLine, but others like fillCircle() are more challenging. I stared at that code for a while, grumpy at the lack of comments explaining what it is doing. I don’t think I understand it well enough to switch it to HLine, so I aborted that effort.

Since VLine isn’t ESP_8_BIT_composite’s strong suit, these default implementations using VLine did not improve as much as I had hoped. Small circles drawn with fillCircle() are fine, but as the number of circles and/or their radii increase, we start seeing flickering artifacts on screen. It is actually a direct reflection of the algorithm, which draws the center vertical line and fills out to either side. When there is too much work to fill a circle before the video scan lines start, we can see the failure in the form of flickering triangles on screen, caused by the circle-fill code and the video signal generation tripping over each other on the same frame buffer. Adding double buffering is on the to-do list, but before I tackle that project, I wanted to take care of another optimization: clipping off-screen renders.

[Code for this project is publicly available on GitHub]

Brainstorming Ways to Showcase RGB332 Palette

It’s always good to have another set of eyes to review your work, a lesson reinforced when Emily pointed out I had forgotten not everyone would know what 8-bit RGB332 color is. This is a rather critical part of using my color composite video out Arduino library, assembled with code from rossumur’s ESP_8_BIT project and Adafruit’s GFX library. I started with the easy part of amending my library README to talk about RGB332 color and list color codes for a few common colors to get people started, but I want to give them access to the rest of the color palette as well.

Which led to the next challenge: what’s the best way to present this color palette? There is a fairly fundamental obstacle here: there are three dimensions to an RGB color value (red, green, and blue), but a chart on a web page only has two dimensions, dictating that diagrams illustrating color spaces can only show a two-dimensional slice out of a three-dimensional volume.

For this challenge, being limited to just 256 colors became an advantage. It’s a small enough number that I could show all 256 colors by putting a few such slices side by side. The easiest approach is to slice along the blue axis: blue only gets two bits in RGB332, so there are only four slices of blue. Each slice of blue shows all combinations of the red and green channels. Those have three bits each, for 2³ = 8 values, and combining them means 8 × 8 = 64 colors in each slice of blue. The header image for this post arranges the four blue slices side by side. It is a variant of my Arduino library example RGB332_pulseB, which changes the blue channel over time instead of laying the slices out side by side.
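The layout itself is easy to generate in code. Here is a sketch of the nested loops behind those four slices; drawCell() is a hypothetical stand-in for whatever actually paints a color swatch on screen or into an image.

#include <stdint.h>

void drawCell(int column, int row, uint8_t rgb332Color);  // hypothetical rendering hook

void drawBlueSlices()
{
  for (uint8_t blue = 0; blue < 4; blue++)          // 2 bits of blue: 4 slices
  {
    for (uint8_t red = 0; red < 8; red++)           // 3 bits of red: 8 rows
    {
      for (uint8_t green = 0; green < 8; green++)   // 3 bits of green: 8 columns
      {
        uint8_t color = (red << 5) | (green << 2) | blue;  // pack bits into RGB332
        drawCell(blue * 8 + green, red, color);     // slices laid out side by side
      }
    }
  }
}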

But even though this was a complete representation of the palette, Emily and I were unsatisfied with the layout: too many similar colors ended up far apart from each other. Intuitively it feels like there should be some arrangement of RGB332 colors that keeps similar colors near each other. It wouldn’t work in the general case, but we only have 256 colors to worry about, and that small number might work to our advantage again. Emily dived into Photoshop because she’s familiar with that tool, and designed some very creative approaches to the idea. I’m not as good with Photoshop, so I dived into what I’m familiar with: writing code.

My RGB332 Color Code Oversight

I felt a certain level of responsibility after my library was accepted into the Arduino Library Manager, and wrote down all the things I knew I could work on as GitHub issues for the project. I also filled out the README file with usage basics, and felt pretty confident I had enabled people to generate color composite video out signals from their ESP32 projects. This confidence was short-lived. My first guinea pig volunteer for a test drive was Emily Velasco, whom I credit for instigating this quest. After she downloaded the library and ran my examples, she looked at the source code and asked a perfectly reasonable question: “Roger, what are these color codes?”

Oh. CRAP.

Having many years of experience playing in computer graphics, I was very comfortable with various color models for specifying digital image data. When I read niche jargon like “RGB332”, I immediately knew it meant an 8-bit color value with the most significant three bits for the red channel, three bits for green, and the least significant two bits for blue. I was so comfortable, in fact, that it never occurred to me that not everybody would know this. And so I forgot to say anything about it in my library documentation.

I thanked Emily for calling out my blind spot and frantically got to work. The first and most immediate task was to update the README file, starting with a link to the Wikipedia section about RGB332 color. I then followed up with a few example values, covering all the primary and secondary colors. This resulted in a list of eight colors which can also be minimally specified with just 3 bits, one for each color channel. (RGB111, if you will.)
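Written out as RGB332 bytes, those eight full-intensity colors look like this. The values follow directly from the bit layout; the constant names are just for illustration and are not part of the library.

#include <stdint.h>

const uint8_t COLOR_BLACK   = 0x00;  // 0b00000000
const uint8_t COLOR_BLUE    = 0x03;  // 0b00000011
const uint8_t COLOR_GREEN   = 0x1C;  // 0b00011100
const uint8_t COLOR_CYAN    = 0x1F;  // green + blue
const uint8_t COLOR_RED     = 0xE0;  // 0b11100000
const uint8_t COLOR_MAGENTA = 0xE3;  // red + blue
const uint8_t COLOR_YELLOW  = 0xFC;  // red + green
const uint8_t COLOR_WHITE   = 0xFF;  // all bits set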

I thought about adding some of these RGB332 values to my library as #define constants that people can use, but I didn’t know how to name and/or organize them. I don’t want to name them something completely common like #define RED, because that has a high risk of colliding with a completely unrelated RED in another library. Technically speaking, the correct software architecture solution to this problem is a C++ namespace. But I see no mention of namespaces in the Arduino API Style Guide, and I don’t believe they are considered a simple beginner-friendly construct. Unable to decide, I chickened out and did nothing in my Arduino library source code. But that doesn’t necessarily mean we need to leave people up a creek, so Emily and I set out to build some paddles for this library.
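For the record, the namespace approach I decided against would look roughly like this. It is only a sketch of the idea; the library does not ship anything like it.

#include <stdint.h>

namespace RGB332
{
  const uint8_t RED   = 0xE0;
  const uint8_t GREEN = 0x1C;
  const uint8_t BLUE  = 0x03;
  // ...and so on for the rest of the palette.
}

// A caller would qualify the name at the point of use, so this RED can't
// collide with a RED defined by some other library:
// uint8_t background = RGB332::RED;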

Initial Issues With ESP_8_BIT Color Composite Video Out Library

I honestly didn’t expect my little project to be accepted into the Arduino Library Manager index on my first try, but it was. Now that it is part of the ecosystem, I feel obligated to record my mental to-do list in a format that others can reference. This lets people know that I’m aware of these shortcomings and see the direction I’ve planned to take. And if I’m lucky, maybe someone will tackle them before I do and give me a pull request. But I can’t realistically expect that, so putting them down on record would at least give me something to point to. “Yes, it’s on the to-do list.” So I wrote down the known problems in the issues section of the project.

The first and foremost problem is that I don’t know if the PAL code still works. I intended to preserve all the PAL functionality when I extracted the ESP_8_BIT code, but I don’t know if I successfully preserved it all. I only have an NTSC TV, so I couldn’t check. And even if someone tells me PAL is broken, I wouldn’t be able to do anything about it; I’m not dedicated enough to go out and buy a PAL TV just for testing. [bootrino] helpfully tells me there are TVs that understand both standards, which I didn’t know. I’m not dedicated enough to go out and get one of those TVs for the task either, but at least I know to keep an eye open for such things. This one really is waiting for someone to test and, if there are problems, submit a pull request.

The other problems I know I can handle. In fact, I had a draft of the next item: giving the caller the option to supply their own frame buffer instead of the library always allocating its own. I had this in the code at one point, but it was poorly tested and I didn’t use it in any of the example sketches. The Arduino API Style Guide suggests trimming such extraneous options in the interest of keeping the API surface area simple, so I did that for version 1.0.0. I can revisit it if demand arises in the future.

One thing I left behind in ESP_8_BIT and want to revive is a performance metric of some sort. For a smooth display, the developer must perform all drawing work between frames. The waitForFrame() API exists so drawing can start as soon as one frame ends, but right now there’s no way to know how much headroom was left before the next frame began. Such a metric will be useful as people start to probe the limits.
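A rough measurement along these lines can already be done from a sketch, even before the library offers anything built in. The following is only an illustration: it assumes a video object named videoOut has been declared and initialized the way the library’s example sketches do it, and uses micros() to time the drawing work after each waitForFrame().

void loop()
{
  videoOut.waitForFrame();              // returns when it is safe to start drawing
  unsigned long drawStart = micros();

  // ... all drawing work for this frame goes here ...

  unsigned long drawTime = micros() - drawStart;
  // An NTSC frame arrives roughly every 16,700 microseconds, so comparing
  // drawTime against that budget shows how much headroom remains.
  Serial.println(drawTime);
}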

Once performance metrics are online, that data can inform the next phase: performance optimizations. The only override of a default Adafruit GFX implementation I’ve done so far is fillScreen(), since all the examples call it immediately after waitForFrame() to clear the buffer. There are many more candidates to override, but we won’t know how much benefit each one brings until the performance metrics are in place.

The final item on this initial list of issues is support for double- or triple-buffering. I don’t know if I’ll ever get to it, but I wrote it down because it’s such a common thing to want in a graphics stack. It is a rather advanced usage, and it consumes a lot of memory: at 61KB per buffer, the ESP32 can’t really afford many of them. At the very least this needs to come after the implementation of user-allocated buffers, because finding enough memory for all these buffers alongside the developer’s own data will be a game of Tetris, and the developer knows best how to structure their application.

I thought I had covered all the bases and was feeling pretty good about things… but I had a blind spot that Emily Velasco spotted immediately.