Home Assistant OS in KVM Hypervisor

I encountered some problems running Home Assistant Operating System (HAOS) as a virtual machine on a TrueNAS CORE server, which is based on FreeBSD and its bhyve hypervisor. I wanted to solve these problems and, given my good experience with Home Assistant, I was willing to give it dedicated hardware. A lot of people use a Raspberry Pi, but in these times of hardware scarcity a Raspberry Pi is rarer and more valuable than an old laptop. I pulled out a refurbished Dell Latitude E6230 I had originally intended to use as a robot brain. Now it shall be my Home Assistant server, which is a robot brain of sorts. This laptop’s Core i5-3320M CPU launched ten years ago, but as an x86_64-capable CPU designed for power-efficient laptop use, it should suit Home Assistant well.

Using Ubuntu KVM Because Direct Installation Failed to Boot

My original plan was to run HAOS directly on the machine, but the UEFI boot process failed for reasons I couldn’t decipher. I couldn’t even copy down an error message due to scrambled text on screen. HAOS 8.0 moved to a new boot procedure as per its release announcement, and the comments thread on that page had lots of users reporting boot problems. [UPDATE: A few days later, HAOS 8.1 was released with several boot fixes.] Undeterred, I tried a different tack: install Ubuntu Desktop 22.04 LTS and run HAOS as a virtual machine under the KVM hypervisor. This is the hypervisor used by the Linux-based TrueNAS SCALE, to which I might migrate in the future. Whether it works with HAOS would be an important data point in that decision.

Even though I expect this computer to run as an unattended server most of the time, I installed Ubuntu Desktop instead of Ubuntu Server for two reasons:

  1. Ubuntu Server has no knowledge of laptop components, so I’d be stuck with default hardware behaviors that are problematic. First, the screen always stays on, which wastes power. Second, closing the lid puts the machine to sleep, which defeats the point of a server. With Ubuntu Desktop I found out how to solve both problems: edit /etc/systemd/logind.conf and change the lid switch behavior to lock, which turns off the screen but leaves the computer running (see the excerpt after this list). I don’t know how to do this with Ubuntu Server or a direct Home Assistant OS installation.
  2. KVM Hypervisor is a huge piece of software with many settings. Given enough time I’m sure I could learn all of the command line tools I need to get things up and running, but I have a faster option with Ubuntu Desktop: Use Virtual Machine Manager to help me make sense of KVM.
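
A minimal sketch of that edit, with the external-power variant included for good measure (restart systemd-logind, or reboot, for the change to take effect):

# /etc/systemd/logind.conf (excerpt)
# "lock" turns the screen off on lid close but keeps the machine running
HandleLidSwitch=lock
HandleLidSwitchExternalPower=lock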

KVM Network Bridge

Home Assistant’s instructions for installing HAOS as a KVM virtual machine were fairly straightforward, except for a lack of detail on how to set up a network bridge. This is required so HAOS is a peer on my home network, capable of communicating with ESPHome devices. (Equivalent to the network_mode: host option when running the Home Assistant Docker container.) The HAOS instruction page merely says “Select your bridge”, so I had to search elsewhere for details.

A promising search hit was How to use bridged networking with libvirt and KVM on linuxconfig.org. It gave a lot of good background information, but I didn’t care for the actual procedure due to this excerpt: “Notice that you can’t use your main ethernet interface […] we will use an additional interface […] provided by an ethernet to usb adapter attached to my machine.” I don’t want to add another Ethernet adapter to my machine. I know network bridging is possible on the existing adapter, because Docker does it with network_mode: host.

My next stop was the Configuring Guest Networking page of the KVM documentation. It offered several options corresponding to different scenarios, helping me confirm I wanted “Public Bridge”. This page had a few Linux distribution-specific scripts, including one for Debian. Unfortunately, it wanted me to edit /etc/network/interfaces, a file which doesn’t exist on Ubuntu 22.04. Fortunately, that page gave me enough relevant keywords to find the Network Configuration page of Ubuntu documentation, which has a “Bridging” section pointing me to /etc/netplan. I had to change their example to match the Ethernet hardware names on my computer, but once that was done I had a public network bridge upon my existing network adapter.
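
For future reference, the bridge definition ends up looking something like this sketch. The interface name enp0s25 is a placeholder that must be replaced with the actual device name (visible via ip link), and the exact file name under /etc/netplan/ varies; apply the change with sudo netplan apply.

network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s25:
      dhcp4: no
  bridges:
    br0:
      interfaces: [enp0s25]
      dhcp4: yes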

USB Device Redirection

Even though I’m still running HAOS under a virtual machine hypervisor, ESPHome can access USB hardware thanks to KVM device redirection.

First I plugged in my ESP32 development board. Then I opened the Home Assistant virtual machine instance and selected “Redirect USB device” under the “Virtual Machine” menu.

That brings up a list of plugged-in USB devices, where I could select the USB-to-UART bridge device on my ESP32 development board. Once selected, the ESPHome add-on running within this instance of HAOS could see the ESP32 board and flash its firmware via USB. This process is not as direct as it would have been with HAOS running directly on the computer, but it’s far better than what I had to do before.

At the moment, TrueNAS SCALE does not surface KVM’s USB device redirection capability, but it is a requested feature. Now that I’ve seen the feature working, it has become a must-have for me. Until it is done, I probably won’t bother migrating my TrueNAS server from CORE (FreeBSD/bhyve) to SCALE (Linux/KVM), because I want this feature when I consolidate HAOS back onto my TrueNAS hardware. (And probably send this Dell Latitude E6230 back into the storage closet.)

Start on Boot

And finally, I had to tell KVM to launch Home Assistant automatically upon boot, by checking “Start virtual machine on host boot up” under the “Boot Options” setting.

In time I expect that I’ll learn the KVM command lines to accomplish what I’m doing today with Virtual Machine Manager, but today I’m glad VMM helps me get everything up and running quickly.

[UPDATE: virsh autostart is the command line tool to launch a virtual machine upon system startup. Haven’t yet figured out command line procedure for USB redirection.]
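
Assuming a virtual machine named haos (virsh list --all shows the actual names), that looks like:

virsh autostart haos            # start this VM automatically when the host boots
virsh autostart --disable haos  # revert to manual startup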

Home Assistant OS in TrueNAS CORE Virtual Machine

I started playing with Home Assistant in the form of Home Assistant Core, a Docker container, for its low commitment. Once I decided Home Assistant was worthwhile, I moved up to running Home Assistant Operating System as a virtual machine on my TrueNAS CORE server. This move gained the features of Home Assistant Supervisor, and I found the following subset quite useful:

  • Easy upgrade and rollback for failed upgrades.
  • Add-on integration features, especially for ESPHome.
  • Ability to back up critical data in a compact *.tar file.

The backup situation is a tradeoff. When I ran the Home Assistant Core Docker container, I could map its data directory to my TrueNAS storage pool. That was a more robust data retention system, as I had configured it for nightly snapshots and regular backups to external media. However, those snapshots would take up hundreds of megabytes, especially when I was flooding the Home Assistant database. In contrast, the data backup archive generated by Home Assistant Supervisor is tiny, at only a few megabytes. I have ambitions to eventually get the best of both worlds: a Home Assistant automation that triggers Supervisor backups, then stores those backup files on my TrueNAS storage pool. This should be possible, as people have created add-ons to perform automatic backups and upload them to Google Drive. But right now I have to do it manually.
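
As a sketch of what the automation half of that ambition might look like, assuming the Supervisor’s hassio services (available on HAOS installations), a time trigger can call the full backup service; copying the resulting archive over to TrueNAS is the part those add-ons handle:

automation:
  - alias: "Nightly Supervisor backup"
    trigger:
      - platform: time
        at: "03:00:00"
    action:
      - service: hassio.backup_full
        data:
          name: "Automatic nightly backup"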

Unrelated to the backup situation, there were two significant downsides to running Home Assistant OS on a TrueNAS CORE virtual machine.

  1. The virtual machine does not have access to hardware. If the ESPHome add-on could access USB, it could perform first-time firmware upload on ESP32/ESP8266 devices. Without hardware access, I have to perform that initial upload some other way, which is cumbersome. (Subsequent uploads can be done over WiFi, a huge benefit of ESPHome.)
  2. There are various problems with the FreeBSD bhyve hypervisor running Linux-based operating systems. One category of problems (apparently there are several) interferes with a Linux guest’s ability to reboot itself. In practice, this means every time Home Assistant OS updated itself and rebooted, it would shut down but not restart. At one point, I could not even perform a manual shutdown from the TrueNAS interface, so the horrible workaround was to reboot my TrueNAS server. After a few TrueNAS updates, I can now manually shut down and restart the VM, but it is still a big hassle to do this on every Home Assistant OS update.

Due to these problems, I would NOT recommend running Home Assistant OS as a TrueNAS CORE virtual machine. Issue #2 became quite annoying and, when my Home Assistant got stuck trying to reboot for an upgrade to Home Assistant OS 8.0, I decided it was time to try a different hypervisor.

Larson Scanner Demo for Tape Deck LCD

I am enjoying a sense of accomplishment after deciphering all the information necessary to utilize this circuit board, formerly the faceplate of a salvaged car tape deck. I started this investigation when I found I could power it up under control of the original mainboard. Now I can work with the LCD and read all knobs and buttons with an Arduino, independent of the original mainboard. My original intent was just to see if I could get to this point. I thought I would learn a lot whether I succeeded or failed at controlling this faceplate. I have gained knowledge and experience I didn’t have before, and a faceplate I can control.

Now what?

It feels like I should be able to build something nifty with this faceplate; there’s room to be creative in repurposing it. At the moment I don’t have any ideas that would creatively utilize the display and button/knob input, but I could at least build a simple demo. This LCD is wide and not very tall, so I thought I would make it into a simple Larson scanner (a.k.a. Cylon lights, a.k.a. K.I.T.T. lights).

First, I divided the LCD segments into 16 groups of roughly similar size. I started using my segment map to generate the corresponding bit patterns by hand, but then decided I should make a software tool to do it instead. I had already written code to light up one segment at a time for generating the segment map; it took only a bit of hacking to turn it into a drawing tool. I used the knob to move between segments as before, but now I could press the knob to toggle the selected segment. Every time I pressed the knob, the corresponding bit pattern was printed to the serial terminal in a format I could copy into C source code.

I then added a different operating mode to my Arduino test program: pressing the volume knob toggles between drawing mode and Larson scanner mode. While in Larson scanner mode, I select two of those 16 groups based on scanner position and bitwise-OR them together into my display. This gives me a nice little demo that is completely unrelated to this LCD’s original purpose, and confidence that I no longer need this tape deck’s original mainboard.
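
For illustration, here is the scanner logic sketched as a standalone Arduino-style program. This is not the actual demo code: groups[] stands in for the 16 bit patterns generated by my drawing tool (pattern width chosen arbitrarily here), and writeLcd() stands in for the routine that sends a combined pattern to the LCD driver chip.

const int GROUP_COUNT = 16;
uint32_t groups[GROUP_COUNT]; // placeholder: patterns from the drawing tool

int position = 1;  // current location of the scanner "eye"
int direction = 1; // +1 scanning right, -1 scanning left

void writeLcd(uint32_t frame)
{
  Serial.println(frame, HEX); // placeholder for the real LCD update routine
}

void setup()
{
  Serial.begin(115200);
}

void loop()
{
  // Two adjacent groups bitwise-OR'd together form the scanner "eye".
  writeLcd(groups[position] | groups[position - direction]);

  // Bounce off both ends of the display, then advance.
  if (position == 0 || position == GROUP_COUNT - 1) direction = -direction;
  position += direction;

  delay(50); // scan speed
}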


Source code for this demo is publicly available on GitHub.

Segmented LCD From Sunbeam PAC-215

I now have imperfect but working control of a 2-digit, 7-segment LCD module salvaged from a Sunbeam PAC-215 electric blanket controller. I used an ESP32 for this initial run because I thought I needed voltage control over all ten pins. After a little experimentation, I found that I could probably get away with using 0V and 3.3V directly on each of the segment pins. For the common pins I need the additional option of the midway point, 1.65V, but if it is true that an LCD draws very little current, perhaps I could supply that voltage with merely a voltage divider built from resistors? If so, I might be able to control this with something far simpler and less powerful than an ESP32. Perhaps an ATmega328 (Arduino)? Or a PIC16F18345, which has a DAC peripheral and is already on hand? (If I’m going to buy a chip, I’ll take Joey Castillo’s suggestion and buy one with an LCD control peripheral built in.) These are potential experiments for the future. For now, I want to wrap up work with this particular pair of salvaged LCDs, which means leaving good notes for my future self.
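
If that voltage divider hunch pans out, the math is the standard divider equation; with two equal resistors (say 10kΩ each, a value I have not verified against actual LCD current draw):

Vmid = Vcc × R2 / (R1 + R2) = 3.3V × 10kΩ / (10kΩ + 10kΩ) = 1.65V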

I scribbled down this diagram when probing the pins. The pins were numbered before I knew what they did. Now that I know which are segment and which are common, I could label them differently.

I also moved to a zero-based counting system thinking it might be easier to match up to code. Color-coding the different common pins is an idea I picked up from Joey Castillo’s slides, but I wonder if that’s the best way to go. I can’t claim to be an expert on how to use color in a way that’s still accessible to the color blind, and coding common pin information in color risks losing data for scenarios like printing on a black-and-white laser printer.

Moving to letters for segments and numbers for common pins allows me to put them together as labels on each segment while reducing ambiguity. This is also friendlier to black-and-white printing, but it loses the visual delineation of color coding. Is this better? I don’t know, as I see strengths in each. I couldn’t make up my mind on which to keep, so I’ll keep both of these diagrams here as references for later.

With this successful milestone of working with LCD modules, I revisited a module that frustrated me earlier.

ESP32 as Driver for Simple Segmented LCD

Once I had some confidence that I could write ESP-IDF code to control voltage levels at a speed relevant for driving an LCD, it was time to move beyond a simple breadboard test circuit. Onward to a perforated prototype board for all ten pins of a simple 2-digit, 7-segment LCD salvaged from an electric blanket controller. In the interest of keeping the wiring simple, I chose ten GPIO pins on one side of the ESP32 dev module I’m using (*) so I could wire everything (almost) in parallel.

A 1kΩ resistor connects each ESP32 GPIO pin to an LCD pin, and a 330pF capacitor connects each ESP32 GPIO pin to ground. It isn’t quite a straight shot of ten parallel pins: there’s a two-pin gap on the left (serial communication TX/RX pins) and a single-pin gap on the right (GPIO2, which this dev board connects to a blue LED that I’m using as a system heartbeat indicator). On the far right is a loop of wire connected to the ground plane, giving my oscilloscope a convenient place to clip onto.

That completes the hardware side for my initial test. Moving on to my ESP-IDF project, I started by hard-coding a sequence of LEDC PWM duty cycle adjustments that would drive this LCD in a fashion that I believe is called 1/2 duty, 1/2 bias. I think 1/2 duty means alternating between the two common pins so each set of segments gets 1/2 of the time, and 1/2 bias means the inactive common pin is held at 1/2 of the voltage difference of the active pins. I’m still a beginner, so that might be wrong!
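
Here is a sketch of what one step of that cycle can look like with ESP-IDF’s LEDC driver. This is illustrative rather than my actual test code, and it assumes 5-bit duty resolution with the two common pins already configured on LEDC_CHANNEL_0 and LEDC_CHANNEL_1:

#include "driver/ledc.h"

#define DUTY_MAX ((1 << 5) - 1) // assuming 5-bit duty resolution

static void set_duty(ledc_channel_t channel, uint32_t duty)
{
    ledc_set_duty(LEDC_LOW_SPEED_MODE, channel, duty);
    ledc_update_duty(LEDC_LOW_SPEED_MODE, channel);
}

// Phase where COM0 is the active common pin: drive it to a rail while the
// inactive COM1 idles at 50% duty, smoothed by the RC filter to ~1.65V.
static void drive_com0_phase(void)
{
    set_duty(LEDC_CHANNEL_0, 0);            // COM0 active at 0V
    set_duty(LEDC_CHANNEL_1, DUTY_MAX / 2); // COM1 inactive at mid-bias
    // Segment channels would then be set to DUTY_MAX or 0 relative to COM0.
}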

I chose to hard-code the test because it avoids all the lookup table code I’d have to write if this were to become an interactive, changeable display. I have two digits to work with, so my test pattern is a Hitchhiker’s Guide reference.

The result is clearly legible at the optimal viewing angle, but it fades off quite a bit as the perspective changes. I remember a much wider range of legible viewing angles in its original use, so I assume I’m doing something different from real LCD driver circuits in this little hack. Possibly related is the observation that, if I illuminate this screen from behind with an LED, its light washes out the LCD.

The original device used two LEDs behind the LCD for backlight, and that didn’t make the digits hard to read, so there’s definitely something missing in my amateur-hour LCD controller. But the fact remains that it is under ESP32 control, and I learned a lot on my way here. This was the first tangible result of a lot of fumbling around after listening to Joey Castillo’s Remoticon talk on hacking LCDs. Seeing “42” show up on screen is a good milestone to stop and review what I’ve learned.


Source code for this experiment is available on GitHub.

(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Switching to ESP-IDF For PWM Waveform Control

I used ESPHome and Home Assistant to quickly experiment with parameters for the ESP32 chip’s LEDC peripheral for generating PWM (pulse-width modulated) signals, seeing how they looked under a cheap oscilloscope. But for actually driving a segmented LCD, I will need even better control over signal behavior. It is an issue of timing: I need to toggle between high and low states on each common segment pin to generate an alternating signal, and I have two common segments to cycle through. To avoid flickering the LCD, this cycle needs to occur at least several tens of times a second.

The tightest control over timing I could get with ESPHome appears to be the on_loop automation, which is generally triggered roughly every 16 milliseconds. This translates to roughly 62Hz which, if I could complete the entire cycle each pass, would be sufficient. But performing all of those toggles within a single on_loop would be too fast for our eyes to see, so we can only take one step in the cycle per on_loop. Toggling high then low on consecutive on_loop passes cuts me down to 31Hz, and cycling through two common segments cuts it further to 15Hz. I need something faster.

Until I have other tools in my toolbox, the “something faster” I have on hand requires going to Espressif’s ESP-IDF SDK. PlatformIO makes ESP-IDF easier to work with, and I’ve had experience in this arena. My starting point (chosen because I’ve done similar things before) is to write a FreeRTOS task dedicated to toggling voltage levels by changing PWM parameters. In between steps of the cycle, I use a FreeRTOS wait (vTaskDelay) to send this task into the background until the next step. This mechanism allows finer control over timing than the ~16ms of on_loop, though only slightly finer at the default tick of 10ms. Repeating the math above, that works out to 25Hz, at least as good as 24fps film. But that is not the limit: once I’m working within ESP-IDF, I have options for even finer timing control. I can get a little faster by reconfiguring the FreeRTOS tick rate via ESP-IDF’s menuconfig tool, and for ultimate timing control I can start working with hardware timers.
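
The skeleton of that dedicated task is short. Again, this is a sketch rather than the actual project code, with advance_lcd_phase() standing in for whatever PWM parameter changes make up one step of the cycle:

#include "freertos/FreeRTOS.h"
#include "freertos/task.h"

static void advance_lcd_phase(void)
{
    // Hypothetical stand-in: adjust LEDC duty cycles for the next step.
}

static void lcd_drive_task(void *pvParameters)
{
    (void)pvParameters; // unused
    for (;;)
    {
        advance_lcd_phase();
        vTaskDelay(pdMS_TO_TICKS(10)); // one step per default 10ms tick
    }
}

// Launched from app_main() with something like:
//   xTaskCreate(lcd_drive_task, "lcd_drive", 2048, NULL, 5, NULL);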

I whipped up a test program to generate a staircase pattern: from 0% duty cycle, to 50%, to 100%, back to 50%, and repeating from 0%. Running at 20ms per step in the cycle, the timing looks solid. I can easily move this to 10ms and still have a solid square wave.

The 50% PWM value looked almost good enough without a capacitor (left). I have a huge pile of 330pF capacitors on hand, so I tried one and the waveform looked much better (right). This is good enough for me to move forward with wiring this signal into a segmented LCD.


Source code for this experiment is available on GitHub.

Quick ESP32 PWM Experiment via ESPHome

I’ve mapped out the segments of a small LCD salvaged from an electric blanket controller. I activated those segments with an ESP8266 that alternated between 0V and 3.3V on two GPIO pins. That was good for individual segments, but not good enough to drive the whole display. There are eight segment pins and two common pins, for 8×2=16 total possible segment combinations (14 of which are used for the two 7-segment digits), controlled from ten total pins.

Technically speaking, an ESP8266 has enough GPIO pins, but we’d start intruding into the realm of sharing pins between multiple tasks, which is more complexity than I wanted to tackle for a quick test. When driving an LCD, we also want to control voltage levels on these pins, and the ESP8266 lacks a hardware PWM peripheral. For these reasons I used an ESP32: it has more than enough pins, and hardware PWM peripherals to support generating a voltage on all of them. The ESP32 LEDC PWM peripheral is very flexible, and I needed to determine how I wanted to configure it.

I used ESPHome because it is a great platform for quick experiments like this. I don’t strictly need WiFi here, but easy integration with Home Assistant and the ability to update code over the network are great conveniences. Before I found ESPHome, I would have wired up a few potentiometers to the circuit board and written code to read their positions via ADC. A bit more code would then have let me interactively play with parameters and see their results. But now, with ESPHome in my toolbox, I don’t need to solder potentiometers or write ADC code. I can get that interactivity from the Home Assistant dashboard.

By default, ESPHome configures the LEDC PWM peripheral to run at a frequency of 1kHz. According to Espressif documentation, it can be configured to run as high as 40MHz, though at that point it really isn’t a “PWM” signal anymore, just a fixed 50% duty cycle. Slowing down the frequency increases the number of bits available for duty cycle specification, and I wanted to find a tradeoff that would work well for this project. Here is an excerpt of the ESPHome configuration YAML I used for this test.

output:
  - platform: ledc
    pin: GPIO23
    id: pwm_23
  - platform: template
    id: pwm_freq
    type: float
    write_action:
      lambda: |-
        int newFreq = (int)(10000000.0*state)+1000;
        id(pwm_23).update_frequency(newFreq);
    
light:
  - platform: monochromatic
    gamma_correct: 1.0
    output: pwm_23
    name: "PWM23"
  - platform: monochromatic
    gamma_correct: 1.0
    output: pwm_freq
    name: "PWM Freq"

This allows me to adjust two variables, each exposed to Home Assistant as a monochromatic dimmable light, which I can change via a slider on a web page instead of a potentiometer soldered to the board. (The gamma correction value was set to 1.0 because we’re not actually controlling a visible light that requires gamma correction.)

  1. pwm_23 controls the PWM duty cycle.
  2. pwm_freq controls the PWM frequency via an ESPHome template output lambda: theoretically from 1kHz to 10MHz, though in practice we reach neither end because the state never gets all the way down to 0.0 nor all the way up to 1.0. (A worked example follows this list.)
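
As a worked example, a slider position of state = 0.25 maps to 10,000,000 × 0.25 + 1,000 = 2,501,000Hz, right around the 2.5MHz values visible in the log excerpt below.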

As I adjusted the frequency, ESPHome automatically calculated the duty cycle bit depth:

[15:32:00][D][ledc.output:041]: Calculating resolution bit-depth for frequency 2491148.000000
[15:32:00][D][ledc.output:046]: Resolution calculated as 5
[15:32:00][D][ledc.output:041]: Calculating resolution bit-depth for frequency 2494322.000000
[15:32:00][D][ledc.output:046]: Resolution calculated as 5
[15:32:00][D][ledc.output:041]: Calculating resolution bit-depth for frequency 2497869.000000
[15:32:00][D][ledc.output:046]: Resolution calculated as 5
[15:32:00][D][ledc.output:041]: Calculating resolution bit-depth for frequency 2501411.000000
[15:32:00][D][ledc.output:046]: Resolution calculated as 4
[15:32:00][D][ledc.output:041]: Calculating resolution bit-depth for frequency 2503623.000000
[15:32:00][D][ledc.output:046]: Resolution calculated as 4
[15:32:00][D][ledc.output:041]: Calculating resolution bit-depth for frequency 2505428.000000
[15:32:00][D][ledc.output:046]: Resolution calculated as 4

From this snippet, we can see that 2.5MHz is the limit for 5 bits of resolution. 2⁵ = 32 levels, which gives me control of the resulting voltage in (3.3V / 32) ≈ 0.1V increments. I think that’ll be good enough.


Here are some oscilloscope plots: first the starting point of 50% duty cycle at the default 1kHz frequency, before and after adding a 100nF capacitor into the mix.

That is not nearly good enough to output 1.65V. To make this better, I can increase the capacitance or increase the frequency. Increasing capacitance would dull system response (and I need pretty quick response to rapidly cycle between LCD common pins), so I started cranking up the frequency.

At hundreds of kilohertz without a capacitor, the resulting wave is more than this little oscilloscope can capture at this timescale. When the 100nF capacitor is added, we see a pretty respectable 1.65V signal that might even be good enough. But there’s plenty of room left to go faster.

Getting into the megahertz range, there’s enough natural capacitance in the system (wires, breadboard, etc.) that we see a pretty good waveform even without a real capacitor in line with the output. But with just a 330pF capacitor (much smaller than the 100nF I started with), the resulting voltage looks pretty good, at least at this time scale. Now I need to move beyond ESPHome for better control of 2.5MHz PWM signals.

Not a Fan of Bonded Touch Screens

After replacing the touchscreen module of a Pixel 3a phone, I’ve decided I’m not a fan of a modern evolution in their design. A touchscreen module has two major components: an output module to show information to the user (the display) and an input module to detect the location of the user’s fingers (the digitizer). While both are advanced glass technologies, they are made with very different manufacturing processes. Thus they are built on two separate pieces of glass and assembled together into a touchscreen module. The straightforward way to do this is to place one on top of the other, like the Amazon Fire tablet I took apart. In that case, I found the display was intact and only the digitizer glass had cracked.

There are some downsides to this approach. Aesthetically, it means dust and dirt can get between the layers, where it is very difficult to clean. The extra surfaces also mean more chances for reflections, and two separate pieces of glass mean the user’s finger is a few millimeters away from the visual interface they are interacting with. Mechanically, as separate pieces, each layer must resist mechanical stresses on its own.

To address these problems, many newer touchscreen devices use an optically clear adhesive (usually a resin formulated for the task) to bond the touch digitizer to the display. Once bonded, there’s no way for dust and dirt to get between those layers. The resin eliminates reflections between the two layers and visually connects them, helping build the illusion that the user is directly manipulating onscreen objects with their finger by putting the display closer to the fingertip. And finally, bonding them together makes the module mechanically stronger, as both layers work together to resist mechanical stresses. This is especially useful for phones using OLED displays, which are thinner (and more fragile) than backlit LCDs.

But stronger touchscreens still have a limit, and once it is exceeded, we’re back to the problem of a cracked screen. In the case of the Pixel 3a I just worked on, only the digitizer layer was cracked; the display layer was still intact and functioning. Unfortunately, because they are bonded, there’s no practical way to replace just the digitizer portion, so the entire module has to be replaced. Trying to pull the digitizer glass away from the display glass stretches the resin, and once released, bubbles form between the two layers, ruining optical clarity.

As expensive and wasteful as it is, the perfectly functional OLED display can only be thrown away. In comparison, a cracked digitizer on a device without a bonded screen can be replaced independently of the display unit. One such high-volume repair is replacing Apple iPad touch digitizers: Amazon vendors sell digitizers for $20-$50 USD, depending on iPad generation. In comparison, as of this writing, replacement Pixel 3a modules start at double that cost. The difference is even more stark considering that an iPad has a much larger screen.

The bonded touchscreen module of a Pixel 3a is so expensive that, as of this writing, it isn’t even economically sensible to replace the screen at the lowest-bidder unknown-vendor price of $85, because browsing eBay turns up secondhand Pixel 3a phones selling for as low as $50. If this keeps up, there’s a business opportunity for a phone chop shop: buy these secondhand Pixel 3a phones, disassemble them, and sell the components piecemeal. I don’t see any used/salvaged Pixel 3a touchscreen modules for sale at the moment, but maybe it’s a matter of time before they start popping up. But still, I hated throwing away this OLED screen.

I understand the reasons why bonded touchscreen modules exist, but since they make repairs difficult, wasteful, and less economical, I am not a fan of this feature.

Pixel 3a Screen Replacement

I was very impressed by iFixit’s Pixel 3a Screen Replacement kit, but naturally I bought it for a reason. I have a Pixel 3a with a shattered screen I wanted to fix by following their instructions.

Well, “shattered” might be going a bit too far, but it was definitely broken well beyond just “cracked”. Amazingly enough, the phone still ran in this state. As we can see here, the display works just fine. However, touch responsiveness isn’t great on account of all the cracks running throughout the screen, leaving several scattered dead spots. Inconveniently, a few dead spots were where the keyboard would live, which made it a challenge to enter WiFi passwords and such. (* one workaround below)

I turned the screen off and chose a camera angle so my workbench light could highlight the damage.

Near the bottom of the screen, small bits are missing, allowing us to see the phone internals within. Given this damage, I was amazed touch input still worked at all. My first order of business was to remove this impressively damaged screen. A suction cup was included in the iFixit kit to grab onto the glass, but it could not get a good seal on this screen: too many cracks let in air and ruined the suction.

Backup plan: use tweezers to pick at little pieces of glass.

That was enough to open up an entry point for the Wankel rotor-shaped pick.

At certain locations, putting too much stress on the glass would damage the optically clear resin bonding the touch digitizer to the OLED screen: bubbles would form, and the bond would no longer be optically clear. This would be a concern if I wanted to reuse this screen, but due to that same resin, I could not.

It took me roughly half an hour of painstaking work to free the old screen from the adhesive holding it down all around the phone. Occasionally the “pain” was literal, as small shards of glass stabbed into my hand. Fortunately, no blood was drawn.

Once it was removed and laid down per the iFixit guide, I could compare the original module with the new replacement. This is necessary because there may be small components that need transferring, and the list is not definitive because the little accessories vary depending on supplier whim. In similar repairs, we frequently need to transfer the speaker mesh. Fortunately, my replacement module already had a mesh in place, so I was spared that task. However, there’s a light gray piece of plastic surrounding the front-facing camera that I would have to transfer over.

After doing that comparison, I unplugged the old screen and plugged in the new one. I wanted to be sure the new screen would work before I rolled up my sleeves for the tedious cleanup work.

If the new screen didn’t work, I didn’t want to waste my time on the annoying tasks: cleaning up remaining shards of glass, wiping up dirt, and (my least favorite part of working with modern electronics) scraping off gummy bits of adhesive.

Once most of the adhesive was cleaned up (as much as I had patience for), I transferred the light gray plastic camera surround. Then I followed the iFixit guide for applying the custom-cut piece of new adhesive and installed the new screen in its home.

Left: a stack of plastic backings to protect adhesives and glass surfaces. Center: the old screen in one large and many small pieces. Right: the repaired phone with its shiny new intact screen!

I shall celebrate with a sticker from the iFixit repair kit. As much as I loved this success, I wish it didn’t have to be as expensive as it was. I blame the design decision to bond the touch digitizer and OLED display.


(*) One way to get an Android phone on WiFi when the touchscreen is too erratic to let us enter a WiFi password:

  1. Take another Android phone on the same WiFi network we want to get on.
  2. Tap its WiFi connection to reach “Network Details” screen.
  3. Tap “Share” to generate a QR code.
  4. Scan QR code from erratic phone.

iFixit Pixel 3a Screen Replacement Kit

Today’s post is an iFixit fan post. It is not paid for or sponsored by iFixit; I’m just really, really impressed by my latest purchase: the iFixit Pixel 3a screen replacement kit. I’ve been given a Google Pixel 3a cell phone with a shattered screen, and I thought I would try my hand at replacing it. iFixit’s replacement kit should mesh well with their screen replacement guide, which I will be using.

This is not my first phone repair. I’ve bought replacement parts from several Amazon and eBay vendors before. I decided to upgrade for this project, and I’m glad I did: none of those prior repair kits came anywhere close to the iFixit kit in quality and completeness.

My positive impressions started with the shipping box.

As soon as we open the box, we get an encouraging “You got this” on the first flap we see. I love it.

Inside the box are several smaller boxes. Two labeled “Repair Tools” and one “Repair Part”.

Inside these boxes are the replacement screen plus all the tools we’d need to replace it. Starting from the sticker pack on the left, we have:

And finally, the replacement screen module itself along with a custom cut sheet of adhesive. The latter is a premium luxury not always found in other products. Many Amazon/eBay vendors sell replacement screen modules with generic rectangular strips of adhesive we would have to cut to shape ourselves. Some didn’t include the adhesive at all! (If that should happen to you, iFixit can still help as they sell the custom-cut adhesive separately.)

At the time of this writing, iFixit charges a $7 premium for the full kit over just the screen module by itself, which is in turn only a few dollars more expensive than the lowest bidder of the day. I decided to go for the full kit and I’m glad I did; these tools are well worth the price difference and gave me confidence that I had everything I needed to tackle my screen replacement project.

Unity Without OpenGL ES 2.0 Is All Pink on Fire TV Stick

I dug up my disassembled Fire TV Stick (second generation) and it is now back up and running again. This was an entertaining diversion in its own right, but during the update and Android Studio onboarding process, I kept thinking: why might I want to do this beyond “to see if I could”? This was the same question I asked myself when I investigated the Roku Independent Developer’s Kit just before taking apart some Roku devices. For a home tinkerer, what advantage do they have over a roughly comparable Raspberry Pi Zero? I didn’t have a good answer for Roku, but I have a good answer for the Fire TV: because it is an Android device, and Android is a target platform for Unity. Raspberry Pi and the Roku IDK, in comparison, are not.

I don’t know if this will be useful for me personally, but at the very least I could try installing my existing Unity project Bouncy Bouncy Lights on the device. Loading up Unity Hub, I saw that Unity had recently released 2021 LTS, so I thought I might as well upgrade my project before installing the Unity Android target platform tools. Since Bouncy Bouncy Lights was a very simple Unity project, there were no problems upgrading. Then I could build my *.apk file, which I could install on my Fire TV just like the introductory Android Studio projects. There were no error messages upon installation, but upon launch I got a warning: “Your device does not match the hardware requirements of this application.” What was the requirement? I didn’t know yet, but I got a hint when I chose to continue anyway: everything on screen rendered in a uniform shade of pink.

Going online for answers, I found many different problems and solutions for Unity rendering all pink. I understand pink-ness is a symptom of something going wrong in the Unity graphics rendering pipeline, a symptom that can have many different causes. Without a single universal solution, further experimentation and diagnosis was required.

Most of the problems (and solutions) live in Unity’s “Edit”/”Project Settings…”/”Player”/”Other Settings” menu. A Unity forum thread with the same “hardware requirements” error message suggested checking to ensure “Auto Graphics API” is checked (it was) and setting “Rendering Path” to Linear (no effect). Another person’s blog post was also dealing with a Fire TV, and their solution was checking “Auto Graphics API”, which I was already doing. But what if I uncheck that box? What does this menu option do (or not do)?

Unchecking that box unveils a list of two graphics APIs: Vulkan and OpenGLES3. Hmm, I think I see the problem here. The Fire TV Stick second generation hardware specification page says it only supports OpenGL ES 2.0. Digging further into Unity documentation, I found that OpenGL ES 2.0 support is deprecated and not included by default, but we can add it to a project if we need it. Clicking the plus sign allowed me to add it as a graphics API for use in my Unity app:

Once OpenGL ES 2.0 is included in the project as a fallback graphics API, I could rebuild the *.apk file and install the updated version.

I got colors back! It is no longer all pink, and the cubes that show up pink are supposed to be pink. So the cubes look fine, but all color has disappeared from the platform. It was supposed to have splotches of color cast by randomly colored lights attached to each block.

Instead of showing different colors, it has apparently averaged out to a uniform gray. I guess this is where an older graphics API gets tripped up, and why we want newer APIs for best results. But at least it is better than a screen full of pink, even if the solution in my case was to uncheck “Auto Graphics API”, the opposite of what other people have said online! Ah well.

Reviving Previously Disassembled Fire TV Stick

Before I took apart a pair of Roku streaming devices, I did a quick investigation to see if there was anything interesting I might be able to do with them in the software realm. I found that while Roku did release an Independent Developer’s Kit, those old Rokus were not compatible. Even if they were compatible with the IDK, though, I’m not sure they would have been compelling when I could play with a Raspberry Pi Zero or Android-based hardware like Amazon’s Fire TV.

That train of thought reminded me that I have a Fire TV device, or at least I used to. My first HD digital flat-screen TV predated internet connectivity features, so I tried plugging in a Chromecast. Dissatisfied, I tried a Fire TV streaming stick. I eventually ended up buying a new TV for its UHD resolution, with the bonus benefit of built-in Roku software. (Which is also incompatible with the Roku IDK, but that was not a shopping requirement.) After my UHD upgrade, both the Chromecast and the Fire TV sat unused in a drawer until I brought them to the inaugural Disassembly Academy during Pasadena Connect Week. One of the participants took up my invitation to take things apart.

The whole thing was a single circuit board inside the plastic enclosure. There were no user controls at all, not even the single reset button or status LED of the Roku streaming stick. In a plastic bag I found the circuit board and the plastic enclosure, but the misshapen metal RF shields and attached heat dissipation pads visible in this picture are now gone. If I plug it in, I’m optimistic it will run unless overheating and/or RF interference causes problems.

I plugged it in and watched a successful boot sequence on screen, which led to the next problem: its corresponding remote control unit was superficially intact, but the violence of enclosure disassembly meant nothing fit anymore. Specifically, the keypad no longer fit in the front faceplate, which had also lost its fit, so it could no longer press the circuit board against the battery contact points.

I tried a few different ways to make it work again, eventually ignoring the faceplate and using my thumb to press the circuit board directly against the battery tray contacts. If I end up using this for more than a trivial test, I’ll solder a battery tray to the circuit board instead. For now, I just got it back up and running on the latest Fire TV software release. The “About” screen called it a second-generation Fire TV Stick, which is enough information for me to look up its specifications. It’s pretty close to the equivalent Roku streaming stick, with two major differences: I have a functional (if annoying) remote control, and I have the Android software development platform. I installed Android Studio and deployed a few “Hello World” type Android apps to the Fire TV Stick just to prove the tooling pipeline worked. Then I tried installing something I built in Unity… and saw everything was pink.

Partial Home Assistant Control of Mr. Robot Badge Mk. 2

Once I had Mr Robot Badge Mk. 2 up and running on Arduino, I thought: what about Home Assistant? After all, ESPHome is built on compiling ESP8266 Arduino Core code using PlatformIO command-line tools. The challenging part of ESPHome is always that initial firmware flash, which I’m already set up to do, and after that I could do updates wirelessly.

I started easy: getting badge buttons to be recognized as binary sensors in Home Assistant.

binary_sensor:
  - platform: gpio
    pin:
      number: 0
      mode:
        input: true
        pullup: true
      inverted: true
    name: "Left"
    filters:
      - delayed_on_off: 20ms
  - platform: gpio
    pin:
      number: 16
      mode:
        input: true
        # no pullup: ESP8266 GPIO16 has no internal pull-up resistor
      inverted: true
    name: "Down"
    filters:
      - delayed_on_off: 20ms
  - platform: gpio
    pin:
      number: 15
      mode:
        input: true
        pullup: true
    name: "Right"
    filters:
      - delayed_on_off: 20ms
  - platform: gpio
    pin:
      number: 13
      mode:
        input: true
        pullup: true
      inverted: true
    name: "Up"
    filters:
      - delayed_on_off: 20ms
  - platform: gpio
    pin:
      number: 12
      mode:
        input: true
        pullup: true
      inverted: true
    name: "A"
    filters:
      - delayed_on_off: 20ms
  - platform: gpio
    pin:
      number: 14
      mode:
        input: true
        pullup: true
      inverted: true
    name: "B"
    filters:
      - delayed_on_off: 20ms

Piece of cake! Now on to the more interesting (and more challenging) part: using the LEDs. I followed ESPHome examples to create a custom component. This exposed a single custom output that treated the entire array as a single color (monochromatic) dimmable light.

C header file with custom class declaration:

#include "esphome.h"
#include "Adafruit_IS31FL3741.h"
using namespace esphome;

class MrRobotBadgeFloatOutput : public Component, public FloatOutput
{
    Adafruit_IS31FL3741 ledArray;

    public:
        void setup() override
        {
            ledArray.begin();
            ledArray.setLEDscaling(0x3F); // 25% of max
            ledArray.setGlobalCurrent(0x3F); // 25% of max
            ledArray.enable(true);  // bring out of shutdown
        }

        void write_state(float state) override
        {
            uint8_t value = state * 128; // 50% of max (int8_t would overflow at 128)
            ledArray.fill(value);
        }
};

YAML:

output:
  - platform: custom
    type: float
    lambda: |-
      auto badge_float = new MrRobotBadgeFloatOutput();
      App.register_component(badge_float);
      return {badge_float};
    outputs:
      id: badge_float

light:
  - platform: monochromatic
    name: "Badge Lights"
    output: badge_float

Tada! I’m back at the same test I had using Arduino, lighting up all the pixels to highlight problematic LEDs.

The next step in this progression is to expose the entire 18×18 array as individually addressable pixels. Doing so would allow using ESPHome’s display rendering engine to draw text and graphics, much as Adafruit GFX does. I found the Addressable Light base class, but I got stuck trying to make it happen. I’ve had no luck finding examples of how to implement a custom addressable light, and didn’t get much of anywhere bumping my head in the dark. Digging through existing Addressable Light implementations on GitHub, I found many object classes that may or may not be required to implement my own addressable light.

I imagine there’s an architecture diagram somewhere that describes how these components interact, but my search has come up empty so far, and there’s not much in the way of code comments. Most of the online discussion about addressable LED pixels in Home Assistant centers around WLED, which as of this writing does not support the IS31FL3741 control chip. I’m going to put further exploration on hold until I have more experience with ESPHome custom components and understand their object hierarchy.

Programming Mr Robot Badge Mk. 2 with Arduino

This particular Mr. Robot Badge Mk. 2 was deemed a defective unit with several dark LEDs. I used it as a practice subject for working with surface-mounted electronic devices, bringing two LEDs back into running order, though one of them is the wrong color. Is the badge fully repaired? I couldn’t quite tell. The default firmware is big on blinking and flashing patterns, making it difficult to determine whether a specific LED is functioning or not. What I needed was a test pattern, something as simple as illuminating all of the LEDs to see if they come up. Fortunately, there was a URL right on the badge that took me to a GitHub repository with sample code and instructions. It uses the Arduino framework to generate code for this ESP8266, and that’s something I’ve worked with. I think we’re in business.

On the hardware side, I soldered sockets to the unpopulated programmer header and then created a programming cable to connect to my FTDI serial adapter (*). For the software, I cloned the “Starter Pack” repository, followed installation directions, and encountered a build failure:

Arduino\libraries\Brzo_I2C\src\brzo_i2c.c: In function 'brzo_i2c_write':
Arduino\libraries\Brzo_I2C\src\brzo_i2c.c:72:2: error: cannot find a register in class 'RL_REGS' while reloading 'asm'
   72 |  asm volatile (
      |  ^~~
Arduino\libraries\Brzo_I2C\src\brzo_i2c.c:72:2: error: 'asm' operand has impossible constraints
exit status 1
Error compiling for board Generic ESP8266 Module.

This looks like issue #44 in the Brzo library, unfixed at the time of this writing. Hmm, this is a problem. Reading the code some more, I learned Brzo is used to create I2C communication routines with the IS31FL3741 driver chip controlling the LED array. Aha, there’s a relatively easy solution: since the time the badge was created, Adafruit has released a product using the same LED driver chip, along with a corresponding software library. I could remove the custom I2C communication code using Brzo and replace it with Adafruit’s library.

Most of the conversion was straightforward, except for the LED pixel coordinate lookup. The IS31FL3741 chip treats the LEDs as a linear array, and something had to translate each linear index into X,Y coordinates. The badge example code has a lookup table mapping linear index to X,Y coordinates. To use the Adafruit library’s frame buffer, I needed the reverse: a table that converts X,Y coordinates to linear index. I started typing it up by hand before deciding that was stupid: this is the kind of task we use computers for. So I wrote a piece of quick-and-dirty code to cycle through the existing lookup table and print the information back out, organized by X,Y coordinates.


      // Build the reverse lookup table. page0LUT[] and page1LUT[] are the
      // badge code's existing linear-index-to-(x,y) tables, PAGE_0_SZ and
      // PAGE_1_SZ are their lengths, and reverseLookup is an 18x18 integer
      // array. Start by marking every (x,y) location as unused.
      for(uint8_t x = 0; x < 18; x++)
      {
        for(uint8_t y = 0; y < 18; y++)
        {
          reverseLookup[x][y]=-1;
        }
      }
      // Page 0 entries map directly: linear index i lives at page0LUT[i].
      for(uint8_t i = 0; i < PAGE_0_SZ; i++)
      {
        reverseLookup[page0LUT[i].x][page0LUT[i].y] = i;
      }
      for(uint16_t i = 0; i < PAGE_1_SZ; i++)
      {
        // Unused locations were marked with (-1, -1) but x and y are
        // declared as unsigned which means -1 is actually 0xFF so
        // instead of checking for >0 we check for <18
        if(page1LUT[i].x < 18 && page1LUT[i].y < 18)
        {
          reverseLookup[page1LUT[i].x][page1LUT[i].y] = i+PAGE_0_SZ;
        }
      }
      // Print the table as rows of a C array initializer.
      for(uint8_t y = 0; y < 18; y++)
      {
        Serial.print("{");
        for(uint8_t x = 0; x < 18; x++)
        {
          Serial.print(reverseLookup[x][y]);
          if (x<17)
          {
            Serial.print(",");
          }
        }
        if (y<17)
        {
          Serial.println("},");
        }
        else
        {
          Serial.println("}");
        }
      }

This gave me the numbers I needed in Arduino serial monitor, and I could copy it from there. Some spacing adjustment to make things a little more readable, and we have this:

const uint16_t Lookup[ARRAY_HEIGHT][ARRAY_WIDTH] = {
  {  17,  47,  77, 107, 137, 167, 197, 227, 257,  18,  48,  78, 108, 138, 168, 198, 228, 258},
  {  16,  46,  76, 106, 136, 166, 196, 226, 256,  19,  49,  79, 109, 139, 169, 199, 229, 259},
  {  15,  45,  75, 105, 135, 165, 195, 225, 255,  20,  50,  80, 110, 140, 170, 200, 230, 260},
  {  14,  44,  74, 104, 134, 164, 194, 224, 254,  21,  51,  81, 111, 141, 171, 201, 231, 261},
  {  13,  43,  73, 103, 133, 163, 193, 223, 253,  22,  52,  82, 112, 142, 172, 202, 232, 262},
  {  12,  42,  72, 102, 132, 162, 192, 222, 252,  23,  53,  83, 113, 143, 173, 203, 233, 263},
  {  11,  41,  71, 101, 131, 161, 191, 221, 251,  24,  54,  84, 114, 144, 174, 204, 234, 264},
  {  10,  40,  70, 100, 130, 160, 190, 220, 250,  25,  55,  85, 115, 145, 175, 205, 235, 265},
  {   9,  39,  69,  99, 129, 159, 189, 219, 249,  26,  56,  86, 116, 146, 176, 206, 236, 266},
  {   8,  38,  68,  98, 128, 158, 188, 218, 248,  27,  57,  87, 117, 147, 177, 207, 237, 267},
  {   7,  37,  67,  97, 127, 157, 187, 217, 247,  28,  58,  88, 118, 148, 178, 208, 238, 268},
  {   6,  36,  66,  96, 126, 156, 186, 216, 246,  29,  59,  89, 119, 149, 179, 209, 239, 269},
  {   5,  35,  65,  95, 125, 155, 185, 215, 245, 270, 279, 288, 297, 306, 315, 324, 333, 342},
  {   4,  34,  64,  94, 124, 154, 184, 214, 244, 271, 280, 289, 298, 307, 316, 325, 334, 343},
  {   3,  33,  63,  93, 123, 153, 183, 213, 243, 272, 281, 290, 299, 308, 317, 326, 335, 344},
  {   2,  32,  62,  92, 122, 152, 182, 212, 242, 273, 282, 291, 300, 309, 318, 327, 336, 345},
  {   1,  31,  61,  91, 121, 151, 181, 211, 241, 274, 283, 292, 301, 310, 319, 328, 337, 346},
  {   0,  30,  60,  90, 120, 150, 180, 210, 240, 275, 284, 293, 302, 311, 320, 329, 338, 347}
};

This lookup table got the Mr. Robot Badge Mk. 2 up and running without the Brzo library! I could then write that simple “turn on all LEDs” test I wanted.

The test exposed two more problematic LEDs. One of them was intermittent: I tapped it, and it illuminated for this picture. If it starts flickering again, I’ll give it a dab of solder to see if that helps. The other one was dark and stayed dark through (unscientific) tapping and (scientific) LED test equipment. It looks like I need to find two more surface-mount red LEDs to fully repair this array.

In case anybody else wants to play with a Mr. Robot Badge and runs into the same problem, I have collected my changes and updated the README installation instructions in pull request #3 of the badge code sample repository. If that PR is rejected, clone my fork directly.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Surface Mount Repair Practice with Mr. Robot Badge

Years ago at a past Hackaday Superconference, I had the chance to play with a small batch of “Mr. Robot Badges” that were deemed defective for one reason or another. After a long story that isn’t relevant right now, I eventually ended up with a single unit from that batch with (at least) two dark LEDs in its 18×18=324 LED array. I thought trying to fix it would be good practice for working with surface-mount electronics, and set it aside for later. I unearthed it again during a recent workshop cleanup and decided it was a good time to check my current surface-mount skill level.

I’m missing a few tools that I thought would be useful, like a heat plate or hot air rework station. And while my skills are better than they were when I was originally gifted the badge, I’m still far from “skilled” at surface-mount work. (SMD for surface-mount devices, or SMT for surface-mount technology.) One relatively new tool is a dedicated LED tester, and I used it to probe the two dark LEDs. One of them would illuminate with the test probe in place, and a dab of solder was enough to make a proper electrical connection and bring it to life. The other one stayed dark even with the test probe and would need to be replaced.

Looking in my pile of electronics for a suitable donor, I picked out my ESP32 dev module with a flawed voltage regulator that went up in smoke with 13V DC input (when it should have been able to handle up to 15V DC). This module has a red LED for power and a blue LED for status. I probed the red LED and found it dead, but the blue LED lit up fine. Between “keep looking for a red LED” and “use the blue one in my hand”, I chose the latter out of laziness. This is a practice exercise with low odds of success anyway.

Lacking a hot air rework station or a hot plate, I didn’t have a good way to heat up both sides of an LED, so there was a lot of clumsy application of solder and flux as I worked to remove the dead badge LED. It came apart in two pieces, so I practiced removal again with the dead red LED on my ESP32 dev module. Once I removed that one intact, I tossed it aside and used my newfound (ha!) skill to successfully remove the blue LED in one piece. I might be getting the hang of this? Every LED removal was easier than the last, but I still think a hot air station would make this easier. After I cleaned up the badge LED pads, I was able to solder one side of the salvaged LED, then the other. I could see a little bit of green on one side of the LED indicating polarity, so I lined it up to be consistent with the rest of the LEDs on the badge.

I turned on the badge and… the LED stayed dark. It then occurred to me I should have verified the polarity of the LED. Pulling out the LED tester again, I confirmed I had soldered it on backwards. Valuable lesson: LED manufacturers are not consistent about how they use that little bit of green to indicate polarity. So I got to practice LED removal once again, followed by soldering it back on the correct way. I would not have been surprised if the blue LED had failed after all this gross abuse, though on the upside, a failure would have motivated me to go find another red LED somewhere.

Here is a closeup of the problematic area on the badge. The circle on the right marks the LED that just needed a dab of solder for proper contact. The circle on the left marks the scorched disaster zone that resulted from SMT rework amateur hour, as I unsoldered and resoldered multiple times. The little blue LED clung to life through all the hardship I inflicted and shone brightly when the badge was turned on. I’ll call that a successful practice session.

[Update: I didn’t originally intend to do anything with the badge beyond soldering practice, but I looked into trying to write my own code to run on the badge and successfully did so.]

ESPHome Remote Receiver Test: Simplistic Shooting Game

I salvaged an infrared remote control receiver from a Roku Premiere 4620X (“Cooper”) and dumped out some codes using an ESP32 microcontroller running ESPHome’s Remote Receiver component. This is great, but before I moved on, I ran a simple exercise in actually using it: the “Hello World” of ESPHome remote receivers, so to speak.

The idea is a very simple shooting game. I added an LED to the breadboard, connected to GPIO 18. I count up to five seconds in an on_loop lambda and illuminate the LED using the ESPHome GPIO output component. Once illuminated, it waits for the signal sent by a Roku RC108 remote when the “OK” button is pressed. Once that signal is received, I darken the LED for five seconds before turning it back on. With this very simple game I can pretend to “shoot out” the light with my remote.

It was silly and trivially easy, as far as shooting games go. Infrared remote controls are designed so the couch potato doesn’t have to aim very accurately for them to work: the emitter sends out its signal in a very wide cone, and the receiver is happy to accept that signal from within a wide cone as well. If I were to evolve this into an actually challenging target practice contraption, I would have to figure out how to narrow the cone of effectiveness on the emitter, or the receiver, or both!

But that was not the objective today. Today it was all about dipping my toes in that world before I continued with my Roku teardown. I wanted to keep the hardware trivial and the code simple, so here is the ESPHome code excerpt for this super easy shooting game:

esphome:
  on_loop:
    then:
      - lambda: |-
          // Re-illuminate the target LED once the five-second "shot" timeout has passed.
          if (id(ulNextOn) < millis())
          {
            id(led_01).turn_on();
          }

status_led:
  pin:
    number: 2

output:
  - platform: gpio
    pin:
      number: 18
      inverted: true
    id: led_01

globals:
  - id: ulNextOn
    type: unsigned long
    restore_value: no
    initial_value: '0'

remote_receiver:
  pin:
    number: GPIO36
    inverted: true
  dump: nec
  tolerance: 10
  on_nec:
    then:
      - lambda: |-
          // 0x55AA is the NEC command for this remote's "OK" button.
          if (0x55AA == x.command)
          {
            id(led_01).turn_off();
            id(ulNextOn) = millis() + 5000;
          }

Roku Premiere (4620X “Cooper”) Infrared Receiver

Looking over the circuit board of a Roku Premiere 4620X (“Cooper”) I saw a lot of things that might be fun to play with but require more skill than I have at the moment. That’s fine; every electronics hobbyist has to start somewhere, so I’m going to start small with the infrared remote control subsystem.

Consumer infrared (IR) is not standardized, and signals may be sent on several different wavelengths. Since I want to play with the Roku RC108 remote control unit, I needed to remove the infrared receiver from its corresponding Premiere 4620X in order to guarantee I had a matching set. This is a small surface-mount device protected by a small metal shield that I could fold out of the way.

Once the shield was out of the way, I could turn on the Roku and probe its pins with my voltmeter to determine the power supply pin (constant 3.3V), the ground pin, and the signal pin (mostly 3.3V, but it would drop as I sent signals with the remote). I then removed this receiver with my soldering iron and connected the tiny part to a set of 0.1″ spacing headers so I could play with it on a breadboard.

In this picture, from top to bottom the pins are:

  • Ground
  • (Not connected)
  • Power 3.3V
  • (Not connected)
  • Active-low signal

I installed it on a breadboard with an ESP32 flashed with ESPHome, which is my current favorite way to explore things. In this case, ESPHome has a Remote Receiver component with decoders for many popular infrared protocols. What if we don’t know which decoder to use? That’s fine: we can set the dump parameter to all, which will try every decoder at once. For this experiment I chose an ESP32 because of its dedicated remote control (RMT) peripheral for accurate timing while decoding signals. After I get something up and running, I might see if it works on an ESP8266 without the RMT peripheral.
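Here is a minimal sketch of that starting point, assuming the receiver’s active-low signal pin is wired to GPIO36 the way I wired my breadboard:

remote_receiver:
  pin:
    number: GPIO36
    inverted: true  # this receiver idles high and pulls the line low on signal
  dump: all         # try every supported protocol decoder and log whatever matches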

With the dump parameter set to all and listening to a Roku RC108, I got hits from the JVC, LG, and NEC decoders, plus the occasional RAW message when none of them could understand the signal. If I held the [up] button, I got one instance of:

[18:55:50][D][remote.jvc:049]: Received JVC: data=0x5743
[18:55:50][D][remote.lg:054]: Received LG: data=0x57439867, nbits=32
[18:55:50][D][remote.nec:070]: Received NEC: address=0xC2EA, command=0xE619

But only that first instance. All subsequent hits looked like this, repeating for as long as I held down the [up] button:

[18:55:50][D][remote.jvc:049]: Received JVC: data=0x5743
[18:55:50][D][remote.lg:054]: Received LG: data=0x57439966, nbits=32
[18:55:50][D][remote.nec:070]: Received NEC: address=0xC2EA, command=0x6699

Then when I pressed the [down] button on the remote, the first interpreted signal became:

[18:55:52][D][remote.jvc:049]: Received JVC: data=0x5743
[18:55:52][D][remote.lg:054]: Received LG: data=0x5743CC33, nbits=32
[18:55:52][D][remote.nec:070]: Received NEC: address=0xC2EA, command=0xCC33

Then it would repeat the following for as long as [down] was held:

[18:55:53][D][remote.jvc:049]: Received JVC: data=0x5743
[18:55:53][D][remote.lg:054]: Received LG: data=0x5743CD32, nbits=32
[18:55:53][D][remote.nec:070]: Received NEC: address=0xC2EA, command=0x4CB3

Looking at this, it seems the JVC decoder is overeager and doesn’t actually understand the protocol, as it failed to differentiate up from down. That leaves the LG and NEC decoders. To find out which is closer to the Roku protocol, I started tightening the tolerance parameter from its default of 25%. Once I tightened it to 10%, only the NEC decoder returned results. Even better, each button returns a repeatable, consistent number that is different from all the others. Even if Roku isn’t actually using the NEC protocol, the NEC decoder is good enough to understand all its buttons. I used it to generate this lookup table.

Roku Remote Button    Decoded Command
------------------    ---------------
Back                  0x19E6
Home                  0x7C83
Up                    0x6699
Left                  0x619E
OK                    0x55AA
Right                 0x52AD
Down                  0x4CB3
Instant Replay        0x07F8
Options               0x1EE1
Rewind                0x4BB4
Pause                 0x33CC
Fast Forward          0x2AD5
Netflix               0x34CB
Sling                 0x58A7
Hulu                  0x32CD
Vudu                  0x7788

I think I got lucky with the NEC decoder this time. I had another infrared remote control in my pile of electronics (a JVC RM-RK52) and none of the ESPHome decoders could decipher it: just RAW data all the time. Alternatively, it’s possible that remote operates on a different infrared wavelength, and this particular receiver is only picking up fringe gibberish. I’ll put the JVC remote back in the pile until I’m in the mood to dig deeper, because right now I want to quickly play with this salvaged infrared system.

Recording ESPHome Sensor Values: Min, Max, and Average

I’m learning more and more about ESPHome and Home Assistant; most recently I was happy to confirm that ESPHome code is very considerate about flash memory wear. Another lesson I’ve learned is the use of “templates” (or “lambdas”), a mechanism to insert small pieces of C++ code, letting me add functionality unavailable from ESPHome configuration files. Here I’m using it to do something I’ve wanted to do ever since I learned about sensor filters. It expands on an existing ESPHome feature that calculates an aggregate sensor value from multiple samples, with a choice of aggregation functions like “minimum” or “maximum” or “sliding window average”. Now, with the template mechanism, I can track minimum, maximum, and average all at once.

First, I needed to declare two template sensors. These let template code feed data into the ESPHome (and therefore Home Assistant) sensor reporting mechanism. I will use them to report the highest (maximum) and lowest (minimum) power values, hence the unit of “W” (Watts).

sensor:
  - platform: template
    name: "Output Power (Low)"
    id: output_power_low
    unit_of_measurement: "W"
    update_interval: never # updates only from code, no auto-updates
  - platform: template
    name: "Output Power (High)"
    id: output_power_high
    unit_of_measurement: "W"
    update_interval: never # updates only from code, no auto-updates

The power sensor is configured to report sliding window average, which will take multiple samples and report the average to Home Assistant. The reporting event is on_value, but there’s also on_raw_value which is triggered on each sample. This is where I can attach a small fragment of C code to track the minimum and maximum values seen while the rest of ESPHome tracks the average.

    power:
      name: "Output Power"
      filters:
        sliding_window_moving_average:
          window_size: 180
          send_every: 120
          send_first_at: 120
      on_raw_value:
        then:
          lambda: |-
            // Runs on every raw sample, in parallel with the sliding window average.
            static int power_window = 0;
            static float power_max = 0.0;
            static float power_min = 0.0;
            
            // After 120 tracked samples, publish the extremes and restart the window.
            if (power_window++ > 120)
            {
              power_window = 0;
              id(output_power_low).publish_state(power_min);
              id(output_power_high).publish_state(power_max);
            }
            
            // Seed min/max from the first tracked sample of a window. (See note
            // below on why this happens at a count of one instead of zero.)
            if (power_window == 1)
            {
              power_max = x;
              power_min = x;
            }
            else
            {
              if (x > power_max)
              {
                power_max = x;
              }
              if (x < power_min)
              {
                power_min = x;
              }
            }

The hard-coded value of 120 represents the number of samples to take before I report. When I have the sensor configured to take a sample every half second, 120 samples translates to one minute. (If the sensor is sampling once a second, 120 samples would be two minutes, etc.)
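For reference, here is a sketch of where that sampling rate lives, on the INA219 sensor that feeds these values; the I2C address and interval below are illustrative assumptions, not necessarily my exact configuration:

sensor:
  - platform: ina219
    address: 0x40            # common INA219 default address (assumed)
    update_interval: 500ms   # one sample every half second: 120 samples ≈ 1 minute
    power:
      name: "Output Power"
      # ... sliding window filter and on_raw_value lambda as shown above ...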

I discard the very first (zeroth) data sample to work around a quirk with ESPHome INA219 sensor support: the very first reported power value is always zero. I don’t know what’s going on there, but since zero is a valid reading (a solar panel generates no power at night) I couldn’t just discard a zero power reading whenever I saw it. Hence I reset power_max and power_min when power_window is one, not zero as I tried first.

Here is a plot of all three values: the average in purple, the maximum in cyan, and the minimum in orange. Three devices are represented in this power consumption graph. The HP Stream 7 was on throughout this period, and we can see its power consumption fluctuate throughout the day. Around midnight, the Raspberry Pi powered up to take a replication snapshot of my TrueNAS storage array, and I shut it off shortly after it was done. And in the morning, after the solar monitor battery finished charging (not shown on this graph) at about 10AM, the Pixel 3a charged until just after noon.

For the Raspberry Pi, average power consumption hovered around 6W, but the maximum spiked to a little over 10W. Similarly, the Pixel 3a charging averaged less than 6W but would spike up to 8W. The average value is useful for calculations regarding things like battery capacity, and the maximum value is necessary to ensure all components stay within their operating limits. For now, the minimum value is merely informative and not used, but that might change later.

Flash Memory Wear Effects of ESPHome Recovery: ESP8266 vs. ESP32

One major difference between controlling charging of a battery and controlling power to a Raspberry Pi is the tolerance for interruptions. Briefly interrupting battery charging is nothing to worry about; we can easily pick up where we left off. But a brief interruption of Raspberry Pi power means it will reset. At minimum we will lose in-progress work, but consequences can get worse, including corruption of the microSD card. If I put an ESPHome node in control of Raspberry Pi power, what happens when that node reboots? I don’t want it to trigger a Raspberry Pi reboot as well.

This was on my mind when I read ESPHome documentation for GPIO Switch: there is a parameter restore_mode that allows us to specify how that switch will behave upon bootup. ALWAYS_ON and ALWAYS_OFF are straightforward: the device is hard-coded to flip the switch on or off at bootup. Neither of these would be acceptable for this case, so I had to use one of the restore options. I added it to my ESP32 configuration and performed an OTA firmware update to trigger a reboot. I was happy to see there was no interruption to the Pi. Or at least if there was, it was short enough that the capacitors I added to my Raspberry Pi power supply were able to bridge the gap.
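As a sketch, the relevant switch configuration looks something like this; the pin number and name are placeholders rather than my exact board wiring:

switch:
  - platform: gpio
    pin: 23                           # hypothetical GPIO driving the buck converter enable line
    name: "Raspberry Pi Power"
    restore_mode: RESTORE_DEFAULT_ON  # restore previous state; default to on if none saved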

This is great! But how does the device know the previous state to restore? The most obvious answer is to store information in the onboard flash memory for these devices, but flash memory has a wear life that embedded developers must keep in mind. Especially when dealing with inexpensive components like ESP8266 and ESP32 modules. Their low price point invites use of inexpensive flash with a short wear life. I don’t know how to probe flash memory to judge their life, but I do know ESPHome is an open-source project and I could dig into source code.

The ESPHome GPIO Switch page has a link to Core Configuration, where there’s a deprecated flag esp8266_restore_from_flash that dictates whether to store persistent data in flash memory. That gave me the keyword needed to find the Global Variables section on the ESPHome Automations page, which says there are only 96 bytes available in a mechanism called “RTC memory” and that it does not survive a power cycle. That didn’t sound very useful, but researching further I learned it does survive deep sleep, so there’s utility there. Searching the ESPHome GitHub repository, I found the ESP8266 version of preferences.cpp, where I believe the implementation lives. The flag defaults to false, which means the default wouldn’t wear out ESP8266 flash memory, at the expense of relying on RTC memory that won’t survive a power cycle. If we really need that level of recovery and switch esp8266_restore_from_flash to true, we have an additional knob for trade-offs between accuracy and flash memory lifespan: the flash_write_interval parameter.
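If I ever flip that switch, my understanding from those documentation pages is that the configuration would look something like this; the exact placement of these options has shifted between ESPHome releases, so treat this as a sketch rather than something I am running today:

esphome:
  esp8266_restore_from_flash: true  # persist states to flash instead of RTC memory (deprecated flag location)

preferences:
  flash_write_interval: 1min        # batch flash writes: longer interval = less wear, less accurate restore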

So that covers ESPHome running on an ESP8266. What about an ESP32? While the ESP32 has its own concept of RTC memory, looking in the ESPHome source code for the ESP32 variant of preferences.cpp, I see that it uses a different mechanism called NVS. The Non-Volatile Storage library is tailored for storing small key-value pairs in flash memory, and was written to minimize wear. This is great. Even better, the API leaves the door open for different storage mechanisms in future hardware revisions, possibly something with better write durability.

From this, I conclude that ESPHome projects that require restoring state through reboot events are better off running on an ESP32 with its dedicated NVS mechanism. I didn’t have this particular feature in mind when I made the decision to use an ESP32 to build my power-control board, but in hindsight that was the right choice! Armed with confidence in the hardware, I can patch up a few to-do items in my ESPHome-based software.

Power Control Board for TrueNAS Replication Raspberry Pi

Encouraged by the (mostly) successful control of my Pixel 3a phone’s charging, my next project is to control power for a Raspberry Pi dedicated to data backup of my TrueNAS CORE storage array. (It is a remote target for replication, in TrueNAS parlance.) There were a few reasons for dedicating a Raspberry Pi to the task. The first (and somewhat embarrassing) reason was that I couldn’t figure out how to set up a remote replication target using a non-root account. With full root-level access wide open, I wasn’t terribly comfortable using that Pi for anything else. The second reason was that I couldn’t figure out how to have a replication target wake up for the replication process and go to sleep after it was done. So in order to keep this process autonomous, I had to leave the replication target running around the clock, and a dedicated Raspberry Pi consumes far less power than a dedicated PC.

Now I want to take a step towards power autonomy, starting with the easy part. My TrueNAS replications kick off in response to snapshots taken, which by default happens daily at midnight. The first and easiest step was to turn on my Raspberry Pi a few minutes before midnight so it is booted up and ready to receive the replication snapshot shortly after midnight. For the moment, I still have to shut it down manually sometime after replication completes, but I’ll tackle that challenge later.
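On the ESPHome side, the scheduled turn-on can be a time trigger along these lines; the ha_time and pi_power identifiers are assumed names for illustration, not my exact configuration:

time:
  - platform: homeassistant  # sync the ESP32's clock from Home Assistant
    id: ha_time
    on_time:
      # a few minutes before the midnight snapshot
      - hours: 23
        minutes: 55
        seconds: 0
        then:
          - switch.turn_on: pi_power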

From an electrical design perspective, this was no different from the Pixel 3a project. I plan to dedicate another buck converter to this task and connect its enable pin (via a cable and a 1k resistor) to another GPIO pin on my existing ESP32. This would have been easy enough to implement with a generic perforated prototype circuit board, but I took it as an opportunity to play with a prototype board tailored for Raspberry Pi projects. Aside from the form factor and pre-wired connections to Raspberry Pi GPIO, these prototype kits usually come with the appropriate pin header and standoff hardware for mounting on a Pi. Looking over the various offers, I chose this particular four-pack of blank boards. (*)

Somewhat surprisingly for cheap electronics supply vendors on Amazon, this board is not a direct copy of an existing Adafruit item. Relative to the Adafruit offering, this design is missing the EEPROM provision which I did not need for my project. Roughly two-thirds of the prototype area has pins connected as they are on a breadboard, and the remaining one-third are individual pins with no connection. In comparison the Adafruit board is breadboard-like throughout.

My concern with this design is its connection to ground. It connects only a single pin, designated #39 in most Pi GPIO diagrams and lower-left in my picture. The many remaining GND pins (6, 9, 14, 20, 25, 30, and 34) appear to be unconnected. I’m not sure whether I should be worried about this for digital signal integrity or other reasons, but it seems to work well enough for today’s simple power supply project. If I encounter problems down the line, I can always solder more grounding wires to see if that’s the cause.

I added a buck converter and a pair of 220uF capacitors: one across the input and one across the output. Then a JST-XH board-to-wire connector to link back to my ESP32 control board. I needed three wires (+Vin, GND, and enable) but used a four-pin connector just in case I want to surface +5Vout in the future. (Plus, I had more four-pin connectors remaining in my JST-XH assortment pack than three-pin connectors. *)

I thought about mounting the buck converter and capacitors on the underside of this board; there’s enough physical space between the board and the Raspberry Pi to fit them. I decided against it out of concern for heat dissipation, and I was glad I did. After this board was installed on top of the Pi, the CPU temperature during replication rose from 65C to 75C, presumably due to reduced airflow. If I had mounted components underneath, that probably would have been even worse, perhaps even high enough to trigger throttling.

I plan to have my ESP32 control board run around the clock, so this particular node doesn’t have the GPIO deep sleep problem of my earlier ESP8266 project. However, I am still concerned about making sure power stays on, and the potential pitfalls in ensuring that.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.