Turning to Chemistry for LCD Panel Polarizer

I thought it might be fun to salvage the polarizer from a broken laptop LCD screen, but it has put up quite a fight. I first tried direct mechanical brute force and managed to shatter the glass, thankfully without injuring myself in the process. When physical power doesn’t cut it, we turn to chemistry.

The risk of this approach comes from the fact that the polarizer is made of plastic of unknown composition. Ideally I could find a solvent that will dissolve the adhesive and leave the plastic intact. If I were better at chemistry I might have some methodical way to find that solvent, but all I’ve got is trial-and-error. To aid in the trial-ing (and the error-ing) I have a portion of the polarizer I had already freed via brute force, carrying with it a layer of tacky glue. It’s enough for me to get started.

I had a rough progression of least- to most-aggressive solvents. First up to bat was 70% isopropyl alcohol, and the glue just laughed at its feeble efforts. After I let the alcohol dry, I tried WD-40, which also did nothing. I wiped up as much of it as I could before moving on to the next contestant: Goo-Gone.

Goo-Gone had some effect. It did not magically dissolve the glue as it tends to do with most other glues I come across, but it did soften this stuff somewhat, and it didn’t seem to damage the plastic. Using Goo-Gone to soften the glue, I was able to peel the sheet of polarizer free of the remaining glass, finally eliminating the risk of puncturing some body part on thin pieces of broken glass.

However, that’s only half a victory, as the glue remained stubbornly attached to the plastic, making it unusable for light polarization fun. More Goo-Gone only seemed to spread it around and didn’t dissolve it. So I moved on to the next item: mineral spirits. It further softened the glue enough for me to start rubbing it off the plastic. It was a very labor-intensive process, but I could start to see the shiny surface of my polarizer sheet. But I soon reached the limits of this approach as well. I started sensing uneven bumps in the surface and couldn’t figure out what was going on until I dried off all the mineral spirits for a look.

It appears there are multiple parts to this glue, and there is a much tougher component that clung on to the film. It was applied in lines, which explained the ridges I could feel with my fingertips while the film was damp with mineral spirits.

Having found the limits of mineral spirits for this task, I moved on to acetone, a.k.a. nail polish remover. This is something I knew could melt certain plastics, as it’s used to smooth and weld plastic parts 3D-printed in ABS. However, I also knew it is not equally destructive to all plastics, as it seems to do very little (or absolutely nothing) to 3D-printed PLA parts, and acetone itself sometimes comes in plastic bottles. Lacking experience in identifying plastics, I continued my trial-and-error process.

The good news: using a small amount of acetone in a test corner, I found that it quickly dissolved the adhesive, turning it into soft goop that was trivial to remove. Wiping it off, I saw the clear surface of polarization film with no evidence of chemical etching or erosion. I think this is the ticket!

But then I went too far by soaking the entire sheet in acetone, expecting to pull out a completely clean polarizer. When immersed in acetone, the polarizer film became brittle and cracked into little pieces. It marked the end of this experiment, but next time (I’m confident there’ll be a next time) I’ll try a few intermediate steps to see if I can find a good point on the spectrum between “few drops in a corner” and “soaking the entire sheet.”

Trying to salvage something from this screen’s LCD module was a bust, but I still have a very fascinating backlight module to play with.

Layers of Glass in LG Laptop LCD

I have a broken laptop LCD display module that I’m taking apart. It is an LG LP133WF2(SP)(A1) and it came from a Toshiba Chromebook 2 which was retired due to its cracked screen. I was able to split it into its two main components, the backlight and the display, both connected to the integrated driver circuit board. The backlight connector was something I could disconnect and reconnect, which is not something I could say for the high density connectors to the front display panel. Fortunately the screen is already cracked and nonfunctional, so the majority of the risk of disassembly is from broken glass.

The edge of this display module made it clear there is a complex multi-layer sandwich within.

There are at least three layers. The topmost layer is very thin and feels like plastic. The middle and bottom layers feel like glass. They don’t come apart easily, so I thought I’d try peeling the top plastic layer like a sticker. It is indeed backed by some adhesive, a pretty tenacious one at that.

I tried to keep the glass layers as flat as I could while I peeled, a difficult task given the strength of that glue, which resulted in some alarming flex in the glass. I double and triple checked to make sure my eye protection was in place while peeling. After several centimeters of progress, scary bending and all, I felt a “pop” as the flexing freed whatever had held the middle and bottom glass layers together around their edges. Once this corner popped free, it was trivial to travel around all edges to peel the two glass layers apart.

It was damp between these two layers, presumably a thin layer of the “liquid” in Liquid Crystal Display (LCD). It was easily absorbed by a single sheet of paper towel, and its oily residue cleaned up nicely with 70% isopropyl alcohol. As far as I know this is not a toxic material and I had not just cut years off my life, but I went and washed my hands before proceeding anyway.

The bottom layer is where the original crack had lived, and these cracks had gotten worse due to the recent flexing. I didn’t see anything of interest in this layer so I set it aside for safe disposal.

The two glass layers each had a grating that could barely be felt with my fingertips. They are also visible when I shine light through each layer. They are orthogonal to each other, which would make sense if one set controlled horizontal pixels and the other controlled vertical pixels. Also, once the two glass layers separated, I was able to confirm the passive polarization filter (one of the objectives for salvaging) is the flexible sheet of plastic I had been tugging on. I resumed peeling that layer but didn’t get much further. With only one glass layer instead of two, it shattered under the stress.

Even though I expected this as a potential (likely, even) outcome, it was still a surprise when things finally let go. Three cheers for eye protection! I picked out a few tiny shards of glass from my fingertips, but none of them found a blood vessel so there was no bleeding. And I think I managed to collect all the pieces scattered around the table. I had thought this would be a minor setback and I could continue peeling, just with smaller pieces of glass, but I was wrong. I don’t know my glass properties very well, but something happened here to change the mechanical properties of the glass. Once the first break happened, it had almost no strength at all. Continuing to peel, even at a lower force, caused new breaks. Brute strength will take me no further. And when brute strength fails, I turn to chemistry.

LCD Panel Driver Circuit Board

I’m taking apart a broken laptop LCD panel, an LG LP133WF2(SP)(A1) from a Toshiba Chromebook 2. I started with the very fancy tape surrounding the edges. Once the tape was gone, its top edge started unfolding into two parts. But they’re still held together on the bottom edge by the integrated driver board for this display. So I should figure out what that’s about before trying to completely separate the two parts.

The front side of this board had three sets of extremely high density connectors to carry signal for all 1920×1080 pixels on this module.

The back side of this board had all of the integrated circuits and a lower density connector for the backlight.

A single cable carried both power and data from the laptop mainboard. The chip closest to that connector was the largest IC on this board and probably the mastermind in charge of this operation.

A search for “LG ANX2804” came up empty, which is not a huge surprise for a chip designed and built by LG for internal consumption by their display division. There’s no reason for them to distribute specifications or datasheets. On the other side of the board we see a connector for the backlight. The connector has nine pins, but in the ribbon we see six thin wires plus a wider seventh wire. This wider wire consumes two of the nine pins, making it a good candidate for either a common anode or cathode for LEDs. This left one pin in the connector seemingly unused.

I had expected just two wires for a simple string of LEDs, but the backlight is evidently more complicated than that. I’m optimistic I can get this figured out because the IC closest to this connector is clearly marked as a TPS61187 by Texas Instruments, and I hope the information available online will help me sort it out later.

Returning to the front of this board, these high density data connectors are fascinating but I don’t understand everything that’s going on here.

I count somewhere between four and five contacts within a millimeter. This is definitely beyond my soldering skill, but they aren’t soldered anyway. Whatever this type of connection is, it is clearly single use. Once I detach it (it peeled off like tape) there’s no way for me to reattach it. I see nothing to help me align the connector. I’m also curious about the fact that the copper contact area is wider than what we see actually used. I’m sure it’s a provision for something, but I don’t know what. For today it doesn’t matter, as the screen is already cracked and nonfunctional, so I lose nothing by peeling them off before I explore its intricate layers of glass.

LG LCD Panel LP133WF2(SP)(A1) Teardown

After I checked the USB OTG reader off my teardown to-do list, I decided to keep ignoring what I had originally planned to do and tear down another item that’s been sitting on that list: a broken LG LCD panel LP133WF2(SP)(A1). It was the original screen in a Toshiba Chromebook 2 (CB35-B3340), which I received in a broken state with the screen cracked. I revived the Chromebook with a secondhand replacement screen, and I set the original cracked screen aside with the intent of eventually taking it apart to see what I could see. “Eventually” is now.

Out of all the retired screens in my hardware pile, this was the most inviting for a teardown due to its construction. The never-ending quest for lighter and thinner electronics meant this screen wasn’t as stout as screens I’ve removed from older laptops. It was flexible enough to make me nervous while handling it. Most of the old panels I’ve handled felt roughly as rigid as a thick plastic credit card; this display felt more like a cardboard business card. I’m sure the lack of structure contributed to why the screen was cracked.

The primary objective of this exercise is curiosity. I just wanted to see how far I could disassemble it. The secondary objective is to see if I can salvage anything interesting. While the display itself is cracked and can no longer display data, the backlight still lights up, and it would be great if I could salvage an illumination panel. And due to how LCDs work, I know there are polarization filters somewhere in its sandwich of layers. I just didn’t know if it would be practical to separate them from the rest of the display.

The primary concern in this exercise is safety. The aforementioned quest for light weight meant every layer in this sandwich would be as thin as it could possibly be, including the sheets of glass. And since the screen is visibly cracked, we already know this activity will involve shards of broken glass. I will be wearing eye protection at all times. I had also thought I would wear gloves to protect my fingertips, but I don’t have the right type for this work. All the gloves I have are either too bulky (I can’t work with fine electronics in gardening gloves) or too thin to offer protection (glass shards easily slice through nitrile). I resigned myself to keeping a box of band-aids nearby.

All that said, it was time to get to work. Around its metal frame, this panel is surrounded by a thin black material that contributes nothing to structure. It’s basically tape. Cut to precise dimensions and applied with the accuracy of automated assembly robots, but it’s adhesive-backed plastic sheet, so: tape.

The adhesive is quite tenacious and it did not release cleanly. Once peeled, the top edge of the LCD array could separate from the backlight. The diagonal crack is vaguely visible through the silvered mirror back of the LCD.

This is a good start, but I can’t pull them apart yet. Right now they’re both connected to this panel’s integrated driver circuit board.

Rosewill USB OTG Memory Card Reader (RHBM-100-U2) Teardown

I got this thing from a “Does Not Work” box intending to do a teardown. Since it’s so small, I thought it would be fun and quick, but I kept putting it off. It’s been sitting adjacent to my workbench through several reorganizations and cleanups, and I kept moving it from one place to another. Today I was about to move it again when I decided: No more. I have other things I need to do, but I’m putting them on pause for this thing. Today is the day.

Based on all the slots on one side, this is clearly a multi-format flash media reader/writer. The other end was a little more interesting, as it is a USB micro-B plug instead of the usual socket. The presence of the plug implies this was designed for use with USB OTG devices such as an Android phone, allowing them to read and write flash cards. Aside from a few labels for the various types of flash media, there was only the “Rosewill” brand logo. I found no model number or serial number printed on the enclosure. Searching for “Rosewill USB OTG” retrieved information on many products. The closest match based on pictures is the RHBM-100-U2.

There was a visible seam around the faceplate full of memory slots. The remainder of the enclosure appeared seamless. The lack of fasteners indicates this faceplate is glued in place. Using pliers, I was able to take a bite out of the enclosure to use as a starting point. Not elegant, but I’m going for speed in this teardown and elegance be damned.

The bite allowed my pliers to get a firm grip on the faceplate and peel all around the perimeter. After that, I could pull the faceplate free.

Once the faceplate was removed, a firm push on the USB micro-B plug popped the final few glue points free and I could slide out the PCB. As expected, it was relatively simple, dominated by surface mount flash media connectors.

Aside from those media connectors, one side was dominated by small passives.

The other side had one IC clearly more sophisticated than anything else on the device. The only other unexpected item is the black goo on the USB micro-B plug. I have no idea why that is there.

Searching on “GLB23” didn’t get me anywhere, but “GL823” got a likely hit with Genesys Logic. It is advertised as a single-chip solution for implementing a multi-format USB media card reader, which is a perfect match for the device at hand. I didn’t bother downloading its datasheet, but I wouldn’t be surprised if this device basically followed the reference design.

Years after I picked this up, intending for a quick teardown, I finally did it. It no longer needs to occupy space on my workbench and I can move on with my life.

Four Screws Fasten NVIDIA GTX 1070 Dust Cover

I recently took apart three external hard drives to retrieve the bare drives within for use in an internal application. In all three cases, there were no externally accessible fasteners to help with disassembly. I had to pop plastic clips loose, breaking some of them. For laughs, I thought it’d be fun to talk about a time when I had the opposite problem: I was confronted with far too many screws, most of which weren’t relevant to the goal.

I have an NVIDIA GTX 1070 GPU that had been in service for several years. NVIDIA designed the cooling shroud with a faceted polygonal design. Part of it was a clear window through which I could see dust had accumulated through years of use. I thought I should clean that up, but it wasn’t obvious to me which of the many visible screws held the dust cover in place. The visual answer is in this picture:

In case these words are separated from the illustrative image, here is the text description:

Starting from the clear window, we see the four fasteners closest to it. These four fasteners hold the clear plastic to the dust cover and are not useful right now. We need to look a little bit further. There are two screws further away, between the clear window and the fan. Those are two of the four dust cover screws. The remaining two are on the top and bottom of the card, closer to the metal video connector backplate. Once these four screws are removed, the dust cover can slide sideways (towards the backplate) approximately 1 cm and then lift free. After that, the four screws holding the clear window can be removed to separate the window from the rest of the dust cover.

In the course of discovering this information, I had mistakenly removed a few other screws that turned out to be irrelevant. Since they’re not useful to the task at hand, I put them back without digging deeper into their actual roles.

Hot Air Station Amateur Hour

A hot air station is one of the standard tools for working with surface-mount electronics, mostly in the context of rework to fix problems rather than initial assembly. In addition to manuals for individual pieces of equipment, there are guides like this one from Sparkfun. My projects haven’t really needed me to buy one, though it’s debatable whether that’s a cause or an effect: perhaps I design my projects so I don’t need one, because I don’t have one!

Either way I knew some level of dexterity and skill is required to use the tool well, and the best way to get started is to play with one in a non-critical environment. Shortly before the pandemic lockdown, I had the opportunity when Emily Velasco offered to bring her unit to one of our local meetups for me to play with. I had a large collection of circuit boards removed from tearing down various pieces of equipment. I decided to bring the mainboard from an Acer Aspire Switch 10, a small Windows 8 laptop/tablet convertible that I had received in an as-is nonfunctional state. I was able to get it up and running briefly, but I think my power supply hack had provided the wrong voltage, because a few months later it no longer powered up.

Using the hot air rework station, I started with small SMD components: a few capacitors, transistors, things of that nature. I could take them off and put them back on. I have no idea if they remained functional; that will be a future test at some point.

The USB ports and mini HDMI port on this device were surface mounted, and I tried them next. I could remove them with the hot air rework station, but I couldn’t reinstall them. I got close, so I believe this is a matter of practice and improving my technique.

Those connectors had relatively few large connection points, so next I tried my luck with the larger chip packages on board. These were memory modules and flash storage modules, fairly large chips with electrical connections underneath where no soldering iron could reach them. My success rate here was similar: I could pull them off but not put them back on. I was less optimistic I could get this to work with practice, since these are ball grid array (BGA) modules and I would have to re-ball them to reinstall properly.

The largest chip on the board was the Intel CPU. I suspect there are heat dissipation measures in circuit board copper layers, similar to how a DRV8833 handles cooling with PowerPAD. Whatever is going on, I could not remove the CPU at all with this hot air rework station.

This was a fun introductory hot air play session, and I look forward to more opportunities to learn how to use hot air once we can safely hold hacker meetups again. Here’s the final dissected cadaver:

Hot air rework session end

Western Digital My Book 1TB (WDBACW0010HBK-01) Teardown

I took apart an external USB 2.0 hard drive I had formerly used for MacOS Time Machine, but haven’t touched in years. It was the second of three external drives under two terabytes that had been gathering dust. The third and final drive to be disassembled in this work session was used for a similar purpose: the Windows Backup tool that (as far as I can recall) was introduced in Windows 8. Now it will serve that role again, sort of, by becoming part of my fault-tolerant ZFS RAIDZ2 storage array running under TrueNAS, which does not support USB external drives, so I am removing the bare drive within for its SATA connection.

Like the other two drives, this one lacked external fasteners and had to be taken apart by prying at its seams to release plastic clips. (Not all of the clips survived the process.)

The geometry was confusing to me at first, but following the seams (and releasing clips) made it clear this enclosure was made of two C-shaped pieces that are orthogonal to each other. I thought it was a creative way to approach the problem.

I was also happy to see that the cooling vents on this drive were more likely to be useful than those on the other two, since the drive is actually exposed to the airflow and it is designed to stand on its edge so warm air can naturally escape by convection. There is no cooling fan, and none was expected.

Like the other two drives, there’s a surface mounted indicator LED on the circuit board. To carry its light to the front façade, there’s an intricately curved light pipe. It might look like a flexible piece of clear plastic in the picture but it is actually rigid. I was a little sad to see that, because its precision fixed curvature means there’s almost no chance I can find a way to reuse it.

Two circuit boards are visible here. The duller green board is the actual hard drive controller circuit; the brighter green board is the USB3 adapter board converting it to an external drive. My goal is to remove the bright green board to expose the bare drive’s SATA interface so I can install it in my TrueNAS server. It was quite stoutly attached! On the other two drives, once the internals were exposed I could easily pull the drive loose from the adapter board. This board was rigidly fastened to the drive with two screws, including this one that took me an embarrassingly long time to find. On the upside, this rigidly fastened metal reinforcement means the USB3 port is the strongest I’ve seen by far. Another neat feature visible here is a power button, a feature I don’t often see on external drives.

This assembly was mounted inside the external case with some very custom shaped pieces of rubber for vibration isolation. Like the light pipe, I doubt I would be able to find a use for these pieces elsewhere. But that’s fine, the main objective was to retrieve the SATA HDD within this enclosure and that was successful.

This is enough hard drive “shucking” for one work session. I have more retired drives (two terabytes and larger) awaiting disassembly, but I think I have enough to satisfy my TrueNAS array replacement needs for the near future.

Seagate Expansion External Drive 1.5 TB (9SF2A6-500) Teardown

The terabyte drive shucking series continues! Second in this work session is an older Seagate external drive with a slower USB 2.0 interface. These dropped out of favor after USB 3.0 came on the scene, but that’s only a limitation imposed by the external enclosure. I’m confident the hard drive within will be just as fast as the others once I’ve pulled it out and can connect it via SATA to my TrueNAS ZFS storage array. This particular drive served as my MacOS Time Machine backup drive and exhibited some strange problems that resulted in my MacBook showing the spinning beach ball of death while the drive made audible mechanical clicking noises trying to recover. I no longer trust the drive as a reliable single-point backup, but I’m fine with trying it in a fault tolerant RAIDZ2 array.

Again I had no luck finding fasteners on the external enclosure, so I proceeded to pry at the visible seam. I was rewarded with the sound of snapping plastic clips, and the lid released.

Despite the visible ventilation holes, it seems the hard drive is actually fully enclosed in a metal shell. I guess those vents didn’t do very much. The activity light in this particular drive was not as clever as the previous drive’s; it is a straightforward LED at the end of a wire harness.

Unlike the previous drive, which had an external shock-absorbing shell, this drive’s vibration-isolating mechanism is inside in the form of these black squares of soft rubber.

The screws have standard #6-32 threads but have an extra shoulder to fit into these rubber squares. I feel these would be easily reusable, so I’m going to save them for when I need a bit of shock absorption.

Once those four screws were removed, the bare drive slid out of the case easily. I didn’t need to bend the top of the sheet metal box to remove the drive; I did it so we can see the circuit board in this picture.

When I added this bare drive to my ZFS array, I had half expected the process to fail. If the clicking-noise problem persisted, I expected TrueNAS to fail the drive and tell me to install another. I was pleasantly surprised to see the entire process complete smoothly. There was no audible clicking, and TrueNAS accepted it as a productive member of the drive array. I wonder if the problem I encountered with this drive was MacOS specific? It doesn’t matter anymore; now it helps back up data for all of my computers and not just the MacBook Air. It’ll share this new job with one of its counterparts, which formerly kept my Windows backups.

Seagate Backup Plus Slim Portable Drive 1TB (SRD00F1) Teardown

I remember when consumer level hard drives reached one terabyte of capacity. At the time it seemed like an enormous amount of space and I had no idea how I could possibly use it all, and where the storage industry could go when additional capacity didn’t seem as useful as it once did. The answer to the latter turned out to be solid state drives that sacrificed capacity but had far superior performance. SSD capacities have since grown, as our digital lives have also grown such that a terabyte of data no longer feels gargantuan.

As someone who has played with computers for a while, I naturally had a pile of retired hard drives. An earlier purge dismissed everything under one terabyte, but with the wonder of the terabyte milestone still in my mind I held on to those of one terabyte and higher. This became sillier and sillier every year, especially now that the two worlds have met back up: sometime within the past year I noticed I can buy a one terabyte solid state drive for under $100 USD.

In this environment, the only conceivable use I have for these old drives is to put them together into a large storage array, which motivated me to retire my two-drive FreeNAS box. My replacement running that operating system (which has since been rebranded TrueNAS) put six of my old terabyte drives to use as a RAIDZ2 array, resulting in four terabytes of capacity and tolerance of up to two drive failures. In the year since I fired these old drives back up, I was a bit disappointed but not terribly surprised that some of them have already started failing. It’s not a huge worry, as I have plenty more drives waiting in reserve. However, some of them are sitting inside external enclosures and need to be shucked in order to retrieve the disk drive within. First up: the Seagate Backup Plus Slim Portable Drive (SRD00F1). This is a smaller 2.5″ laptop-sized drive with slower performance, but that should be fine as a member of a large secondary storage array.
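As a sanity check on that capacity math: RAIDZ2 dedicates two drives’ worth of space to parity, so usable capacity is roughly (drive count − 2) × drive size, which is where six 1 TB drives yield about 4 TB. A quick sketch (the function name is mine, and real-world numbers come in a bit lower due to ZFS metadata and padding overhead):

```cpp
#include <cassert>

// RAIDZ2 keeps two drives' worth of parity, so any two drives can fail
// without data loss. Usable capacity is roughly (n - 2) * drive size;
// actual usable space is somewhat lower due to filesystem overhead.
int raidz2UsableTB(int driveCount, int driveTB) {
    return (driveCount - 2) * driveTB;
}
```

So a six-drive array of 1 TB disks gives about 4 TB usable, and growing the array only adds capacity at full drive size once past the two-drive parity cost.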

I used this drive for a while as portable bulk storage to hold stuff that didn’t fit on my laptop’s small SSD, so it had to be durable enough to be tossed in my backpack without too much worry. I was enamored with the design, which had an impact-absorbing exterior of blue rubber that also incorporated a flexible band to hold the corresponding cable while in my backpack. It had a USB 3 micro-B connector, which I rarely see beyond external hard drives like these.

As a small portable drive, it had the expected lack of visible fasteners. Perhaps something is hidden under the sticker?

Nope, no fastener there. Without stickers, this device must be held together by either glue or clips. Most of the body is black plastic and the top feels like a sheet of metal, so the gap between them is the obvious place to start prying. It didn’t take a lot of force to break the top free from some indents cast into the plastic, but it was enough force to bend the metal. I had passed the point of no return: this drive will never come back together nicely.

The top was held by both double-sided tape and a plastic ring that helped it clip onto the body. I thought it was very clever how they designed the activity indicator light. Under the metal slit is a block of white tape (still attached to the lid in the picture below) serving as diffuser for the LED. The LED is on a circuit board that is almost completely enclosed by foil tape, but there’s a small hole cut in the tape for the LED to shine through.

There were no fasteners inside the case, either. Once the lid was removed, the drive came out easily.

Here’s a closer look at the drive, with its electronics still inside the foil tape. The rectangular hole for activity LED is visible on the right, with the LED itself peeking through.

After the adhesive-backed foil was removed, I could pull off the adapter circuit board. It is an admirably minimal design to bridge USB3 to SATA. The orientation of the board was a surprise; I hadn’t known there were vertically-standing surface-mount connectors for USB3 micro-B and SATA. Most of the connectors I’ve seen sit flat on the same plane as the circuit board, not orthogonal to the board like these.

At the moment I don’t foresee anything useful I could do with this board, but at least it is tiny so I can toss it into the hoard as I await ideas. In the meantime, it’s onwards to the next retired hard drive.

Cat and Galactic Squid

Emily Velasco whipped up some cool test patterns to help me diagnose problems with my port of AnimatedGIF Arduino library example, rendering to my ESP_8_BIT_composite color video out library. But that wasn’t where she first noticed a problem. That honor went to the new animated GIF she created upon my request for something nifty to demonstrate my library.

This started when I copied an example from the AnimatedGIF library for the port. After I added the code to copy between my double buffers to keep them consistent, I saw it was a short clip of Homer Simpson from The Simpsons TV show. While the legal department of Fox is unlikely to devote resources to prosecute authors of an Arduino library, I was not willing to take the risk. Another popular animated GIF is Nyan Cat, which I had used for an earlier project. But despite its online spread, there is actual legal ownership associated with the rainbow-pooping pop tart cat. Complete with lawsuits enforcing that right and, yes, an NFT. Bah.

I wanted to stay far away from any legal uncertainties. So I asked Emily if she would be willing to create something just for this demo as an alternative to Homer Simpson and Nyan Cat. For the inspirational subject, I suggested a picture she posted of her cat sleeping on her giant squid pillow.

A few messages back and forth later, Emily created Cat and Giant Squid complete with a backstory of an intergalactic adventuring duo.

Here they are on a seamlessly looping background, flying off to their next adventure. Emily has released this art under the CC BY-SA (Creative Commons Attribution-ShareAlike) 4.0 license, and I have happily incorporated it into the ESP_8_BIT_composite library as an example of how to show animated GIFs on an analog TV. When I showed the first draft, she noticed a visual artifact that I eventually diagnosed to missing X-axis offsets. After I fixed that, the animation played beautifully on my TV. Caveat: the title image of this post is hampered by the fact it's hard to capture a CRT on camera.

Finding X-Offset Bug in AnimatedGIF Example

Thanks to a little debugging, I figured out my ESP_8_BIT_composite color video out Arduino library required a new optional feature to make my double-buffering implementation compatible with libraries that rely on a consistent buffer, such as AnimatedGIF. I was happy that my project, modified from one of the AnimatedGIF examples, was up and running. Then I swapped out its test image for other images, and it was immediately clear the job was not yet done. These test images were created by Emily Velasco and released under the Creative Commons Attribution-ShareAlike 4.0 license (CC BY-SA 4.0).

This image resulted in the flawed rendering visible as the title image of this post. Instead of numbers continuously counting upwards in the center of the screen, various numbers are rendered at the wrong places and not erased properly in the following frames. Here is another test image to gather more data.

Between the two test images and observing where they appeared on screen, I narrowed down the problem. Animated GIF files might update only part of the frame, and when that happens, the frame subset is to be rendered at an X/Y offset relative to the origin. The Y offset was accounted for correctly, but the X offset went unused, meaning delta frames were rendered against the left edge rather than at the correct offset. This problem was not in my library, but inherited from the AnimatedGIF example, where it went unnoticed because the trademark-violating animated GIF used by that example had no X-axis offset. Once I understood the problem, I went digging into AnimatedGIF code, found the unused X offset, and added it to the example where it belonged. These test images now display correctly, but they're not terribly interesting to look at. What we need is a cat with a galactic squid friend.

Animated GIF Decoder Library Exposed Problem With Double Buffering

Once I resolved all the problems I knew existed in version 1.0.0 of my ESP_8_BIT_composite color video out Arduino library, I started looking around for usage scenarios that would unveil other problems. In that respect, I can declare my next effort a success.

My train of thought started with ease of use. Sure, I provided an adaptation of Adafruit’s GFX library designed to make drawing graphics easy, but how could I make things even easier? What is the easiest way for someone to throw up a bit of colorful motion picture on screen to exercise my library? The answer came pretty quickly: I should demonstrate how to display an animated GIF on an old analog TV using my library.

This is a question I’ve contemplated before in the context of the Hackaday Supercon 2018 badge. Back then I decided against porting a GIF decoder and wrote my own run-length encoding instead. The primary reason was that I was short on time for that project and didn’t want to risk losing time debugging an unfamiliar library. Now I have more time and can afford the time to debug problems porting an unfamiliar library to a new platform. In fact, since the intent was to expose problems in my library, I fully expected to do some debugging!

I looked around online for an animated GIF decoder library written in C or C++ code with the intent of being easily portable to microcontrollers. Bonus if it has already been ported to some sort of Arduino support. That search led me to the AnimatedGIF library by Larry Bank / bitbank2. The way it was structured made input easy: I don’t have to fuss with file I/O or SPIFFS, I can feed it a byte array. The output was also well matched to my library, as the output callback renders the image one horizontal line at a time, a great match for the line array of ESP_8_BIT.

Looking through the list of examples, I picked ESP32_LEDMatrix_I2S as the most promising starting point for my test. I modified the output call from the LED matrix I2S interface to my Adafruit GFX based interface, which required only minor changes. On my TV I could almost see a picture, but it was mostly gibberish. As the animation progressed, I could see deltas getting rendered, but they did not match up with their background.

After chasing a few dead ends, the key insight came from noticing my noisy background of uninitialized memory was flipping between two distinct values. That was my reminder that I was performing double-buffering, swapping between front and back buffers every frame. AnimatedGIF is efficient about writing only the pixels changed from one frame to the next, but double buffering meant each set of deltas was written over not the previous frame, but the frame two prior. No wonder I ended up with gibberish.

Aside: The gibberish amusingly worked in my favor for this title image. The AnimatedGIF example used a clip from The Simpsons, copyrighted material I wouldn’t want to use here. But since the image is nearly unrecognizable when drawn with my bug, I can probably get away with it.

The solution is to add code to keep the two buffers in sync. This way libraries minimizing drawing operations would be drawing against the background they expected instead of an outdated background. However, this would incur a memory copy operation which is a small performance penalty that would be wasted work for libraries that don’t need it. After all of my previous efforts to keep API surface area small, I finally surrendered and added a configuration flag copyAfterSwap. It defaults to false for fast performance, but setting it to true will enable the copy and allow using libraries like AnimatedGIF. It allowed me to run the AnimatedGIF example, but I ran into problems playing back other animated GIF files due to missing X-coordinate offsets in that example code.

TIL Some Video Equipment Supports Both PAL and NTSC

Once I sorted out memory usage of my ESP_8_BIT_composite Arduino library, I had just one known issue left on the list. In fact, the very first one I filed: I don't know if PAL video format is properly supported. When I pulled this color video signal generation code from the original ESP_8_BIT project, I worked to keep all the PAL support code intact. But I live in NTSC territory; how am I going to test PAL support?

This is where writing everything on GitHub paid off. Reading my predicament, [bootrino] passed along a tip that some video equipment sold in NTSC geographic regions also support PAL video, possibly as a menu option. I poked around the menu of the tube TV I had been using to develop my library, but didn’t see anything promising. For the sake of experimentation I switched my sketch into PAL mode just to see what happens. What I saw was a lot of noise with a bare ghost of the expected output, as my TV struggled to interpret the signal in a format it could almost but not quite understand.

I knew the old Sony KP-53S35 RPTV I helped disassemble was not one of these bilingual devices. When its signal processing board was taken apart, there was an interface card hosting an NTSC decoder chip, strongly implying that PAL support required a different interface card. It also implies newer video equipment has a better chance of multi-format support, as it would have been built in an age when manufacturing a single worldwide device is cheaper than manufacturing separate region-specific hardware. I dug into my hardware hoard looking for a relatively young piece of video hardware. Success came in the shape of a DLP video projector, the BenQ MS616ST.

I originally bought this projector as part of a PC-based retro arcade console with a few work colleagues, but that didn't happen for reasons not important right now. What's important is that I bought it for its VGA and HDMI computer interface ports, so I didn't know it had composite video input until I pulled it out to examine its rear input panel. Not only does this video projector support composite video in both NTSC and PAL formats, it also has an information screen that indicates whether NTSC or PAL is active. This is important, because seeing the expected picture isn't proof by itself. I needed the information screen to verify my library's PAL mode was not accidentally sending a valid NTSC signal.

Further proof that I am verifying a different code path was that I saw a visual artifact at the bottom of the screen absent from NTSC mode. It looks like I inherited a PAL bug from ESP_8_BIT, where rossumur was working on some optimizations for this area but left it in a broken state. This artifact would have easily gone unnoticed on a tube TV as they tend to crop off the edges with overscan. However this projector does not perform overscan so everything is visible. Thankfully the bug is easy to fix by removing an errant if() statement that caused PAL blanking lines to be, well, not blank.

Thanks to this video projector fluent in both NTSC and PAL, I can now confidently state that my ESP_8_BIT_composite library supports both video formats. This closes the final known issue, which frees me to go out and find more problems!

[Code for this project is publicly available on GitHub]

Allocating Frame Buffer Memory 4KB At A Time

Getting insight into computational workload was not absolutely critical for version 1.0.0 of my ESP_8_BIT_composite Arduino library, but once the first release was done, it became important to get those tools into the development toolbox. Now that users have a handle on speed, I turned my attention to the other constraint: memory. An ESP32 application only has about 380KB to work with, and it takes about 61KB to store a frame buffer for ESP_8_BIT. Adding double-buffering doubled that memory consumption, and I had half expected my second buffer allocation to fail. It didn't, so I got double-buffering done, but how close are we skating to the edge here?

Fortunately I did not have to develop my own tools to gain insight into memory allocation; the ESP32 SDK already has one in the form of heap_caps_print_heap_info(). For my purposes, I called it with the MALLOC_CAP_8BIT flag because pixels are accessed at the single-byte (8-bit) level. Here is the memory output running my test sketch, before I allocated the double buffers. I highlighted the blocks that are about to change in red:

Heap summary for capabilities 0x00000004:
  At 0x3ffbdb28 len 52 free 4 allocated 0 min_free 4
    largest_free_block 4 alloc_blocks 0 free_blocks 1 total_blocks 1
  At 0x3ffb8000 len 6688 free 5872 allocated 688 min_free 5872
    largest_free_block 5872 alloc_blocks 5 free_blocks 1 total_blocks 6
  At 0x3ffb0000 len 25480 free 17172 allocated 8228 min_free 17172
    largest_free_block 17172 alloc_blocks 2 free_blocks 1 total_blocks 3
  At 0x3ffae6e0 len 6192 free 6092 allocated 36 min_free 6092
    largest_free_block 6092 alloc_blocks 1 free_blocks 1 total_blocks 2
  At 0x3ffaff10 len 240 free 0 allocated 128 min_free 0
    largest_free_block 0 alloc_blocks 5 free_blocks 0 total_blocks 5
  At 0x3ffb6388 len 7288 free 0 allocated 6784 min_free 0
    largest_free_block 0 alloc_blocks 29 free_blocks 1 total_blocks 30
  At 0x3ffb9a20 len 16648 free 5784 allocated 10208 min_free 284
    largest_free_block 4980 alloc_blocks 37 free_blocks 5 total_blocks 42
  At 0x3ffc1f78 len 123016 free 122968 allocated 0 min_free 122968
    largest_free_block 122968 alloc_blocks 0 free_blocks 1 total_blocks 1
  At 0x3ffe0440 len 15072 free 15024 allocated 0 min_free 15024
    largest_free_block 15024 alloc_blocks 0 free_blocks 1 total_blocks 1
  At 0x3ffe4350 len 113840 free 113792 allocated 0 min_free 113792
    largest_free_block 113792 alloc_blocks 0 free_blocks 1 total_blocks 1
    free 286708 allocated 26072 min_free 281208 largest_free_block 122968

I was surprised at how fragmented the memory space already was, even before I allocated any memory in my own code. There are ten blocks of available memory, only two of which are large enough to accommodate a 60KB allocation. Here is the memory picture after I allocated the two 60KB frame buffers (and two line arrays, one for each frame buffer), with the changed sections highlighted in red:

Heap summary for capabilities 0x00000004:
  At 0x3ffbdb28 len 52 free 4 allocated 0 min_free 4
    largest_free_block 4 alloc_blocks 0 free_blocks 1 total_blocks 1
  At 0x3ffb8000 len 6688 free 3920 allocated 2608 min_free 3824
    largest_free_block 3920 alloc_blocks 7 free_blocks 1 total_blocks 8
  At 0x3ffb0000 len 25480 free 17172 allocated 8228 min_free 17172
    largest_free_block 17172 alloc_blocks 2 free_blocks 1 total_blocks 3
  At 0x3ffae6e0 len 6192 free 6092 allocated 36 min_free 6092
    largest_free_block 6092 alloc_blocks 1 free_blocks 1 total_blocks 2
  At 0x3ffaff10 len 240 free 0 allocated 128 min_free 0
    largest_free_block 0 alloc_blocks 5 free_blocks 0 total_blocks 5
  At 0x3ffb6388 len 7288 free 0 allocated 6784 min_free 0
    largest_free_block 0 alloc_blocks 29 free_blocks 1 total_blocks 30
  At 0x3ffb9a20 len 16648 free 5784 allocated 10208 min_free 284
    largest_free_block 4980 alloc_blocks 37 free_blocks 5 total_blocks 42
  At 0x3ffc1f78 len 123016 free 56 allocated 122880 min_free 56
    largest_free_block 56 alloc_blocks 2 free_blocks 1 total_blocks 3
  At 0x3ffe0440 len 15072 free 15024 allocated 0 min_free 15024
    largest_free_block 15024 alloc_blocks 0 free_blocks 1 total_blocks 1
  At 0x3ffe4350 len 113840 free 113792 allocated 0 min_free 113792
    largest_free_block 113792 alloc_blocks 0 free_blocks 1 total_blocks 1
    free 161844 allocated 150872 min_free 156248 largest_free_block 113792

The first big block, which previously had 122,968 bytes available, became the home of both 60KB buffers, leaving only 56 bytes. That is a very tight fit! A smaller block, which previously had 5,872 bytes free, now has 3,920 bytes free, indicating that's where the line arrays ended up. A little time with the calculator on these numbers arrived at an estimate of 16 bytes of overhead per memory allocation.

This is good information to inform some decisions. I had originally planned to give the developer a way to manage their own memory, but I changed my mind on that one just as I did for double buffering and performance metrics. In the interest of keeping API simple, I’ll continue handling the allocation for typical usage and trust that advanced users know how to take my code and tailor it for their specific requirements.

The ESP_8_BIT line array architecture allows us to split the raw frame buffer into smaller pieces: not just a single 60KB allocation as I have done so far, but any scheme all the way down to allocating each of the 240 horizontal lines individually at 256 bytes apiece. That would make optimal use of small blocks of available memory. But making 240 allocations instead of 1 for each of two buffers means 239 additional allocations * 16 bytes of overhead * 2 buffers = 7,648 extra bytes of overhead. That's too steep a price for flexibility.

As a compromise, I will allocate the frame buffer in 4-kilobyte chunks. These fit in seven of the ten available blocks of memory, an improvement from just two. Each frame consists of 15 chunks, which works out to an extra 14 allocations * 16 bytes of overhead * 2 buffers = 448 bytes of overhead. That is a far more palatable price for flexibility. Here are the results with the frame buffers allocated in 4KB chunks, again with changed blocks in red:

Heap summary for capabilities 0x00000004:
  At 0x3ffbdb28 len 52 free 4 allocated 0 min_free 4
    largest_free_block 4 alloc_blocks 0 free_blocks 1 total_blocks 1
  At 0x3ffb8000 len 6688 free 784 allocated 5744 min_free 784
    largest_free_block 784 alloc_blocks 7 free_blocks 1 total_blocks 8
  At 0x3ffb0000 len 25480 free 724 allocated 24612 min_free 724
    largest_free_block 724 alloc_blocks 6 free_blocks 1 total_blocks 7
  At 0x3ffae6e0 len 6192 free 1004 allocated 5092 min_free 1004
    largest_free_block 1004 alloc_blocks 3 free_blocks 1 total_blocks 4
  At 0x3ffaff10 len 240 free 0 allocated 128 min_free 0
    largest_free_block 0 alloc_blocks 5 free_blocks 0 total_blocks 5
  At 0x3ffb6388 len 7288 free 0 allocated 6776 min_free 0
    largest_free_block 0 alloc_blocks 29 free_blocks 1 total_blocks 30
  At 0x3ffb9a20 len 16648 free 1672 allocated 14304 min_free 264
    largest_free_block 868 alloc_blocks 38 free_blocks 5 total_blocks 43
  At 0x3ffc1f78 len 123016 free 28392 allocated 94208 min_free 28392
    largest_free_block 28392 alloc_blocks 23 free_blocks 1 total_blocks 24
  At 0x3ffe0440 len 15072 free 15024 allocated 0 min_free 15024
    largest_free_block 15024 alloc_blocks 0 free_blocks 1 total_blocks 1
  At 0x3ffe4350 len 113840 free 113792 allocated 0 min_free 113792
    largest_free_block 113792 alloc_blocks 0 free_blocks 1 total_blocks 1
    free 161396 allocated 150864 min_free 159988 largest_free_block 113792

Instead of almost entirely consuming the 122,968 byte block and leaving just 56 bytes, the two frame buffers are now distributed among smaller blocks, leaving 28,392 contiguous bytes free in that big block. And we still have another big block free with 113,792 bytes to accommodate large allocations.

Looking at this data, I could also see that allocating in smaller chunks would have led to diminishing returns. Allocating in 2KB chunks would have doubled the overhead without improving utilization. Dropping to 1KB would double the overhead again, and only open up one additional block of memory. Therefore allocating in 4KB chunks is indeed the best compromise, assuming my ESP32 memory map is representative of user scenarios. Satisfied with this arrangement, I proceeded to work on the first and last bug of version 1.0.0: PAL support.

[Code for this project is publicly available on GitHub]

Lightweight Performance Metrics Have Caveats

Before I implemented double-buffering for my ESP_8_BIT_composite Arduino library, the only way to know we were overloaded was when visual artifacts started appearing on screen. After I implemented double-buffering, an overload shows up as the same data displayed for two or more frames because the back buffer wasn't ready to be swapped. A binary good/no-good signal is better than nothing, but it would be frustrating to work with, and I knew I could do better. I wanted to collect performance metrics a developer can use to know how close they're running to the edge before going over.

This is another feature I had originally planned to be configurable. The predecessor ESP_8_BIT handled it as a compile-time flag. But just as I decided to make double-buffering run all the time in the interest of keeping the Arduino API easy to use, I've decided to collect performance metrics all the time. The compromise is that I only do so for users of the Adafruit GFX API, who have already chosen ease of use over maximum raw performance. The people who use the raw frame buffer API will not take the performance hit, and if they want performance metrics they can copy what I've done and tailor it to their application.

The key counter underlying my performance measurement code goes directly down to a feature of the Tensilica CPU. CCount, which I assume to mean cycle count, is incremented at every clock cycle. When the CPU is running at full speed of 240MHz, it increments by 240 million within each second. This is great, but the fact it is a 32-bit unsigned integer limits its scope, because that means the count will overflow every 2^32 / 240,000,000 = 17.895 seconds.

I started thinking of ways to keep a 64-bit performance counter in sync with the raw CCount, but in the interest of keeping things simple I abandoned that idea. I will track data through each of these ~18 second periods and, as soon as CCount overflows, throw it all out and start a new session. This results in some loss of performance data but eliminates a ton of bookkeeping overhead. Every time I notice an overflow, statistics from the session are output at the INFO logging level. The user can also query the running percentage of the session at any time, or explicitly terminate a session and start a new one for the purpose of isolating different code.

The percentage reported is the ratio of clock cycles spent in waitForFrame() relative to the amount of time between calls. If the drawing loop does no work, like this:

void loop() {
}

Then 100% of the time is spent waiting, an unrealistic baseline because it does nothing useful. For realistic drawing loops that do more work, the percentage will be lower. This number tells us roughly how much margin we have to take on more work. However, “35% wait time” does not mean 35% of the CPU is free, because other work happens while we wait. For example, the composite video signal generation ISR is constantly running, whether we are drawing or waiting. Actual free CPU time will be somewhat lower than the reported wait percentage.

The way this percentage is reported may be unexpected: it is an integer in the range 0 to 10000, where each unit represents one hundredth of a percent. I did this because the floating-point unit on an ESP32 imposes its own overhead that I wanted to avoid in my library code. If the user wants to divide by 100 for a human-friendly percentage value, that is their choice to accept the floating-point performance overhead. I just didn't want to force it on every user of my library.

Lastly, the session statistics include frames rendered & missed, and there is an overflow concern for those values as well. The statistics will be nonsensical in the ~18 second session window where either of them overflows, though they'll recover by the following session. Since these are unsigned 32-bit values (uint32_t) they will overflow at 2^32 frames. At 60 frames per second, that's a loss of ~18 seconds of data once every 2.3 years. I decided not to worry about it and turned my attention to memory consumption instead.

[Code for this project is publicly available on GitHub]

Double Buffering Coordinated via TaskNotify

Eliminating work done for pixels that will never be seen is always a good change for efficiency. Next item on the to-do list is to work on pixels that will be seen... but that we don't want seen until they're ready. Version 1.0.0 of my ESP_8_BIT_composite color video out library used only a single buffer, where code draws into the buffer at the same time the video signal generation code reads from it. When those two separate pieces of code overlap, we get visual artifacts on screen ranging from incomplete shapes to annoying flickers.

The classic solution to this is double-buffering, which the precedent ESP_8_BIT did not do. I hypothesize there were two reasons: #1, emulator memory requirements did not leave enough room for a second buffer, and #2, emulators sent their display data in horizontal line order, managing to “race ahead” of the video scan line and avoid artifacts. But now both of those are gone. #1 no longer applies because the emulators have been cut out, freeing memory. And we lost #2 because Adafruit GFX default implementations lean heavily on vertical lines, orthogonal to the scan line order, so drawing can no longer “race ahead” of it, resulting in visual artifacts. Thus we need two buffers: a back buffer for the Adafruit GFX code to draw on, and a front buffer for the video signal generation code to read from. At the end of each NTSC frame, I have an opportunity to swap the buffers. Doing it at that point ensures we'll never show a partially drawn frame.

I had originally planned to make double-buffering an optional configurable feature. But once I saw how much of an improvement this was, I decided everyone will get it all of the time. In the spirit of Arduino library style guide recommendations, I’m keeping the recommended code path easy to use. For simple Arduino apps the memory pressure would not be a problem on an ESP32. If someone wants to return to single buffer for memory needs, or maybe even as a deliberate artistic decision to have flickers, they can take my code and create their own variant.

Knowing when to swap the buffer was easy: video_isr() had a conveniently commented section // frame is done. At that point I can swap the front and back buffers, if the back buffer is flagged as ready to go. My problem was that I didn't know how to signal the drawing code that it has a new back buffer and can start drawing the next frame. The existing video_sync() (which I use for my waitForFrame() API) forecasts the amount of time to render a frame and uses vTaskDelay(), which I am somewhat suspicious of: FreeRTOS documentation carries the disclaimer that vTaskDelay() makes no guarantee it will resume at the specified time. The synchronization was thus inferred rather than explicit, and I wanted something that ties the two pieces of code together more concretely. My research eventually led to vTaskNotifyGiveFromISR(), which I can call in video_isr() to signal its counterpart ulTaskNotifyTake(), which I will use in a replacement implementation of video_sync(). I anticipate this will prove a more reliable way for application code to know it can start working on the next frame. But how much time does it have to spare between frames? That's the next project: some performance metrics.

[Code for this project is publicly available on GitHub]

The Fastest Pixels Are Those We Never Draw

It’s always good to have someone else look over your work; they find things you missed. When Emily Velasco started writing code to run on my ESP_8_BIT_composite library, her experiment quickly ran into flickering problems with large circles. But that’s not as embarrassing as another problem, which triggered an ESP32 Core Panic and system reset.

When I started implementing a drawing API, I specified X and Y coordinates as unsigned integers. With a frame buffer 256 pixels wide and 240 pixels tall, that was a great fit for 8-bit unsigned integers. For input verification, I added a check to make sure Y stayed below 240, and left X alone since every 8-bit value is valid by definition.

When I put Adafruit’s GFX library on top of this code, I had to implement a function with the same signature as Adafruit used. The X and Y coordinates are now 16-bit numbers, so I added a check to make sure X isn’t too large either. But these aren’t just 16-bit numbers, they are int16_t signed integers. Meaning coordinate values can be negative, and I forgot to check that. Negative coordinate values would step outside the frame buffer memory, triggering an access violation, hence the ESP32 Core Panic and system reset.

I was surprised to learn the Adafruit GFX default implementation did not have any code to enforce screen coordinate limits. Or if it did, it certainly didn't kick in before my drawPixel() override saw them. My first instinct was to clamp X and Y coordinate values to the valid range. If X was too large, I treated it as 255; if it was negative, I treated it as zero. Y was likewise clamped between 0 and 239 inclusive. In my overrides of drawFastHLine() and drawFastVLine(), I also wrote code to gracefully handle negative widths or heights, swapping coordinates around so they remained valid commands. I used the same X and Y clamping to handle lines that were partially on screen.

This code, trying to gracefully handle a wide combination of inputs, added complexity, which added bugs, one of which Emily found: a circle on the left or right edge of the screen would see its off-screen portion wrap around to the opposite edge. This bug in X coordinate clamping wasn't too hard to chase down, but I decided the fact it even existed was silly. This is version 1.0; I can dictate what behavior I do or do not support. So in the interest of keeping my code fast and lightweight, I ripped out all of that "plays nice" code.

A height or a width is negative? Forget graceful swapping; I'm just not going to draw. Something completely off screen? Forget clamping to screen limits; off-screen shapes just don't get drawn. Lines partially on screen still need to be handled gracefully via clamping, but I discarded all the rest. Simpler code leaves fewer places for bugs to hide. It is also far faster, because the fastest pixels are those we never draw. These optimizations complete the easiest updates to make on individual buffers; the next improvement comes from using two buffers.

[Code for this project is publicly available on GitHub]

Overriding Adafruit GFX HLine/VLine Defaults for Performance

I had a lot of fun building a color picker for 256 colors available in the RGB332 color space, gratuitous swoopy 3D animation and all. But at the end of the day it is a tool in service of the ESP_8_BIT_composite video out library. Which has its own to-do list, and I should get to work.

The most obvious work item is to override some Adafruit GFX default implementations, starting with the ones explicitly recommended in comments. I’ve already overridden fillScreen() for blanking the screen on every frame, but there are more. The biggest potential gain is the degenerate horizontal-line drawing method drawFastHLine() because it is a great fit for ESP_8_BIT, whose frame buffer is organized as a list of horizontal lines. This means drawing a horizontal line is a single memset() which I expect to be extremely fast. In contrast, vertical lines via drawFastVLine() would still involve a loop iterating over the list of horizontal lines and won’t be as fast. However, overriding it should still gain benefit by avoiding repetitious work like validating shared parameters.

Given those facts, it is unfortunate the Adafruit GFX default implementations tend to use VLine instead of the HLine that would be faster in my case. Some default implementations like fillRect() were easy to switch to HLine, but others like fillCircle() are more challenging. I stared at that code for a while, grumpy at the lack of comments explaining what it was doing. I didn't understand it well enough to switch it to HLine, so I aborted that effort.

Since VLine isn’t ESP_8_BIT_composite’s strong suit, these VLine-based default implementations did not improve as much as I had hoped. Small circles drawn with fillCircle() are fine, but as the number of circles and/or their radii increase, we start seeing flickering artifacts on screen. The artifact is a direct reflection of the algorithm, which draws the center vertical line and fills out to either side. When there is too much work to finish filling a circle before the video scan lines arrive, we see the failure as flickering triangles, caused by the drawing code and the signal generation code tripping over each other on the same frame buffer. Adding double buffering is on the to-do list, but before I tackle that project, I wanted to take care of another optimization: clipping off-screen renders.

[Code for this project is publicly available on GitHub]

Notes Of A Three.js Beginner: QuaternionKeyframeTrack Struggles

When I started researching how to programmatically animate object rotations in three.js, I was warned that quaternions are hard to work with and can easily bamboozle beginners. I gave it a shot anyway, and I can confirm caution is indeed warranted. Aside from my confusion between Euler angles and quaternions, I was also ignorant of how three.js keyframe track objects process their data arrays. Constructors of keyframe track classes like QuaternionKeyframeTrack accept (as their third parameter) an array of key values. I thought it would obviously be an array of quaternions like [quaternion1, quaternion2], but when I did that, my CPU utilization shot to 100% and the browser stopped responding. Using the browser debugger, I saw it was stuck in this for() loop:

class QuaternionLinearInterpolant extends Interpolant {
  constructor(parameterPositions, sampleValues, sampleSize, resultBuffer) {
    super(parameterPositions, sampleValues, sampleSize, resultBuffer);
  }

  interpolate_(i1, t0, t, t1) {
    const result = this.resultBuffer, values = this.sampleValues, stride = this.valueSize, alpha = (t - t0) / (t1 - t0);
    let offset = i1 * stride;
    for (let end = offset + stride; offset !== end; offset += 4) {
      Quaternion.slerpFlat(result, 0, values, offset - stride, values, offset, alpha);
    }
    return result;
  }
}

I only have two quaternions in my keyframe values, but the loop steps through in increments of 4. With only one value per key time, the stride was 1, so end was a single element away; stepping by 4 immediately shot past end, and the offset !== end test never terminated. The fact it was stepping by four instead of by one was the key clue: this class doesn’t want an array of quaternions, it wants an array of quaternion numerical fields flattened out.

  • Wrong: [quaternion1, quaternion2]
  • Right: [quaternion1.x, quaternion1.y, quaternion1.z, quaternion1.w, quaternion2.x, quaternion2.y, quaternion2.z, quaternion2.w]

The latter can also be created via quaternion1.toArray().concat(quaternion2.toArray()).
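A minimal sketch of that flattened layout, using a stand-in Quat class (plain JavaScript, no three.js dependency) that mimics the x/y/z/w fields and toArray() of THREE.Quaternion:

```javascript
// Stand-in for THREE.Quaternion's data layout and toArray(); only here to
// illustrate the flattened values array QuaternionKeyframeTrack expects.
class Quat {
  constructor(x, y, z, w) { this.x = x; this.y = y; this.z = z; this.w = w; }
  toArray() { return [this.x, this.y, this.z, this.w]; }
}

const q1 = new Quat(0, 0, 0, 1);                       // identity rotation
const q2 = new Quat(0, 0, Math.SQRT1_2, Math.SQRT1_2); // 90 degrees about Z

// Two keyframes become eight numbers: a stride of 4 per quaternion,
// matching the `offset += 4` step in the interpolant's loop.
const flatValues = q1.toArray().concat(q2.toArray());
console.log(flatValues.length); // 8
```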

Once I got past that hurdle, I had an animation on screen, but only half of the colors animated the way I wanted. The other half went in the opposite direction while swerving wildly across the screen. In an HSV cylinder, colors are spread across the full 360 degrees of rotation. When I told them all to go to zero in the transition to a cube, the angles greater than 180 degrees went one direction and the angles less than 180 degrees went the opposite direction.
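My best guess at the cause (an assumption on my part, based on how three.js's Quaternion.slerpFlat behaves) is that slerp takes the shortest arc: when the dot product between the two endpoint quaternions is negative, one endpoint gets negated, so rotations past 180 degrees animate toward zero the "short way" around, which is the opposite direction. A quaternion for a rotation of theta about Z has w = cos(theta/2), and the sign of its dot product with the identity flips exactly at 180 degrees:

```javascript
// Quaternion for a rotation of `deg` degrees about the Z axis:
// (0, 0, sin(theta/2), cos(theta/2)).
function zRotation(deg) {
  const half = (deg * Math.PI / 180) / 2;
  return [0, 0, Math.sin(half), Math.cos(half)];
}

const identity = [0, 0, 0, 1];
const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2] + a[3] * b[3];

// Below 180 degrees the dot with identity is positive; above, negative.
// Shortest-arc slerp therefore sends the two halves in opposite directions.
console.log(dot(zRotation(170), identity) > 0); // true
console.log(dot(zRotation(190), identity) > 0); // false
```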

Understanding the behavior, however, wasn’t much help in getting things to work the way I had them in my head. I’m sure some amateur-hour mistakes are causing me grief, but after several hours of ever-crazier animations, I surrendered and settled for hideous hacks. Half of the colors still behave differently from the other half, but at least they no longer fly wildly across the screen. It is unsatisfactory but will have to suffice for now. I obviously don’t understand quaternions and need to study up before I can make this thing work the way I intended. But that’s for later, because this was originally supposed to be a side quest to the main objective: fixing the known problems in the Arduino color composite video out library I’ve released.

[Code for this project is publicly available on GitHub]