A Closer Look at LED Backlight Panel

I’ve successfully interfaced with the existing TPS61187 driver chip on the circuit board of an LG laptop display panel, model LP133WF2(SP)(A1), and brought the backlight module back to life. Given all the new territory I had to explore to get this far, I was very excited by a successful initial test. Once the excitement wore off, I settled in to take a closer look at the panel's physical and optical behavior.

Since I tested it face-down, the easiest thing to look at first was the backside of the LED strip. Most of it is hidden by the sheet metal frame from this side, but from earlier examination I knew there was even less to see from the front. Once illuminated, we can see the structure inside the light strip. The yellow flexible segment that connects to the green circuit board isn’t a separate piece like I thought earlier; it is all a single sheet of flexible circuit. All the LEDs are mounted on it, located at the very bottom edge of the screen. I knew the lights themselves had to be very thin and well hidden right up against the bottom edge, but I couldn’t figure out where the wiring would go. Now I can see that all the electrical wiring runs above the LEDs, and from the front it appears as a thin strip of light gray along the bottom.

I had been worried that the illumination would be compromised because it is working without some of the friends it had earlier. The backside used to have a laptop lid that would have helped reflect and diffuse light. And the front used to be up against the LCD pixel array, which was backed by a mirror finish that would have also helped reflect light around.

I need not have worried. It was quite evenly illuminated and, as seen in the wire shadow picture above, there are no distinct spotlights marking the locations of individual LEDs.

I also wondered if the surprisingly complex four-layer diffuser required precise alignment with the LEDs in order to work its light distribution magic. The layers are no longer pressed into a tight space by the LCD pixel array, but happily they still worked quite well. While they visibly worked best at certain positions, the falloff is graceful, nothing like aiming a laser at precision optics. Now I’m even more impressed by this stuff performing magic with light in ways I don’t understand.

But one thing I do understand is that they look thin and quite fragile. They were designed to sit behind an LCD panel made of multiple glass layers; without that support, these layers of magical optical sheets flap around. I looked around and found a piece of 3mm clear acrylic that is nearly the perfect size and taped it to the metal backing chassis. The acrylic is far thicker than the LCD glass sandwich used to be, but it is also more rigid, so that’s a good tradeoff.

The final comparison I wanted to make before moving on: how bright is the backlight alone compared to the full backlight plus LCD screen? I placed this backlight side-by-side with the intact replacement screen still serving display duty in the Chromebook, and turned its brightness all the way up. I then turned on the Chromebook and raised its screen brightness to the maximum setting.

I don’t have light level measurement instruments to obtain an objective number, but this picture makes it quite clear there is a dramatic difference in brightness. I knew some light would be lost within the layers of an LCD panel, but it’s fun to see firsthand that it’s far more than I had expected. This really drove home why alternate display technologies with self-illuminating pixels (OLED, micro-LED, etc.) can offer much brighter pictures than a backlit LCD could. My salvaged backlight is plenty bright running on just 5V, but running it on USB took more effort than expected.

Arduino Nano PWM Signal for TPS61187 LED Driver

Trying to revive the LED backlight from an LG LP133WF2(SP)(A1) laptop display panel, I am focused on the TPS61187 LED driver chip on its integrated circuit board. After studying its datasheet, I soldered a few wires to key parts of the circuit and applied power, checking the circuit as I went. Nothing has gone obviously wrong yet, so the final step is to give that driver chip a PWM signal to dictate brightness.

This is where I am happy I’ve added Arduino to my toolbox, because I was able to whip up a controllable PWM signal generator in minutes. I put an Arduino Nano onto a breadboard and wired up a potentiometer to act as interactive input. 5V power and ground were shared with the panel, and one of the PWM-capable output pins was connected to the TPS61187 PWM input line via a 10 kΩ resistor per the datasheet. The enable line already had a 1 kΩ resistor on board, so I wired enable directly to the 5V line.

Since I wanted some confidence in my circuit before connecting the panel, I also wired a test LED in parallel with the PWM signal line. I had originally thought I could use the LED already on board the Arduino, but that is hard-wired to pin 13, which is not one of the PWM-capable pins, so an external LED was necessary. With it in place, I could run my PWM-generating test code, which thanks to the Arduino framework was as easy as:

int sensorPin = A0;    // select the input pin for the potentiometer
int ledPin = 3;        // select the pin for the LED
int sensorValue = 0;   // variable to store the value coming from the sensor

void setup() {
  // declare the ledPin as an OUTPUT:
  pinMode(ledPin, OUTPUT);
}

void loop() {
  // read potentiometer position
  sensorValue = analogRead(sensorPin);

  // map analogRead() range to analogWrite() range
  sensorValue = map(sensorValue, 0, 1023, 0, 255);

  analogWrite(ledPin, sensorValue);
}

My external test LED brightened and dimmed in response to potentiometer knob turns, so that looked good. My heart started racing as I connected the panel to my Arduino breadboard, which was in turn connected to my benchtop power supply. Even though I’m powering this system with 5V, I used a bench power supply instead of a USB port because I didn’t know how much current the panel drew and didn’t want to risk damaging my computer. The benchtop power supply also let me monitor the current draw and set a limit of 120mA (20mA spread across 6 strings) for the first test.

I powered up the system with the potentiometer set to minimum, then slowly started turning the knob clockwise…

It lit up! It lit up! Woohoo!

I was very excited at this success, jumping and running down the hallway. It was a wild few minutes before I could settle down and calmly take a closer look.

Soldering Wires to TPS61187 LED Driver

After passively studying its documentation and how it is installed on an existing circuit board, it is now time for me to go active and start working on it. Whether I can get this backlight up and running is almost secondary at this point; it has already been a great electronics learning opportunity, and I want to see how far I can get.

This next step tests my skill working with components far smaller than what I’m used to. The picture of these added wires speaks volumes: I used the finest spools of wire I had on hand, but 26 gauge wire looks absolutely gargantuan when soldered to this board. Due to their small size, I assumed these surface mount components would not have the strength to handle external stresses. As a temporary measure, I used a piece of tape to hold the wires in place, hopefully diverting all the little twists and tugs yet to come as I connect and disconnect these wires from power and signal sources.

Once the wires were in place, I had to make a very important decision: what voltage do I feed into the Vin wire? Earlier probing failed to find the values of resistors used in the boost converter feedback regulation circuit. If I had those resistance values, I could have calculated the expected input voltage range using the formula in the datasheet. The only other guideline I had was the requirement that input voltage must be lower than the voltage drop across individual LED strings so the boost converter (a critical part of this circuit) can function.

Looking for hints elsewhere, I reviewed my earlier notes from looking inside this machine. Its battery is labeled as a lithium-ion pack with a nominal voltage of 10.8V, implying three cells in series. This is within the valid input range for a TPS61187 chip, and I thought it was possible they wired the battery voltage straight through. This would avoid any conversion losses from a boost or buck converter along the way.

Following my startup plan, I used my bench power supply to put 10.8V on Vin and was encouraged that I didn’t see any sparks or smell any smoke. My probe saw 5V on VDDIO, which I took to be a good sign satisfying step 1. Moving on to step 2, I checked the enable (EN) pin and found it low, so nothing else on the board is raising it. I used a one kilo-ohm (1 kΩ) resistor to pull EN high, and it went high, so either nothing else on the board is pulling it low or 1 kΩ is enough to overrule that signal. If something is fruitlessly trying to pull it low, the few milliamps involved here should not damage it. Or if I do damage it, hopefully it wouldn’t be anything I care about.
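As a sanity check on that "few milliamps" estimate, plain Ohm's law gives the worst-case current through the pull-up if something on the board were holding EN at ground. This little sketch assumes the pull-up goes to the 5V rail:

```cpp
// Worst-case current through the EN pull-up resistor if something on the
// board holds the pin at ground: Ohm's law, I = V / R.
double pullupCurrentMilliamps(double volts, double ohms) {
    return (volts / ohms) * 1000.0; // convert amps to milliamps
}
```

With 5V across the 1 kΩ resistor, that works out to 5 mA, which most logic outputs can tolerate briefly.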

If I understand the datasheet correctly, once enabled the TPS61187 will put its minimum voltage across the LED strings. I probed the voltage between the LED common anode and ground and found it was 6.7V, lower than the input voltage of 10.8V: exactly the condition the datasheet said to avoid. Oh no! I quickly turned the input voltage down to 6V so it would be below the LED voltage, wondering if it was already too late. There were no obvious signs of damage so… whether I’ve just killed it or not, my next step is to put a PWM signal on the brightness control line.

Finding TPS61187 LED Driver Interfaces

I want to reuse the TPS61187 LED driver IC already on the display circuit board of an LG LP133WF2(SP)(A1) laptop LCD panel to drive the backlight that used to shine from behind the now-cracked LCD screen. After studying the datasheet and probing around the circuit board to understand how the chip is configured, I am now probing to see where I can solder wires to interface with the chip.

There are two major concerns here. The first is that I’m still learning to work with modern consumer electronics, with circuit boards populated by very small surface mount components. Most of the resistors and capacitors I probed earlier are barely larger than my soldering iron’s finest tip, and the “normal” tip is comically large next to these things. If I continue with experiments like this, I will need to buy my own hot air station and learn to use it well.

The second concern is the other components on this control board. If I supply voltage and ground to the TPS61187, the rest of the circuit will probably come alive in some way I don’t understand. I’m not worried about them draining a bit of battery; that’s the best case scenario. I’m more worried about them interfering with the backlight control signals for enable (EN) and brightness (PWM).

The first target is a place to inject power. The datasheet told me where the power pins are on this chip, but they are far too small for me to hand solder. I’m not saying it’s impossible to hand solder to them, just very difficult and unlikely to succeed at my current skill level. So I went poking around nearby components looking for a decoupling capacitor. Because this driver IC uses a boost converter circuit to raise the voltage that drives the backlight LEDs, I expected to find a sizable decoupling capacitor nearby to isolate the rest of this board from the electrical noise of boost conversion. I found one adjacent to the inductor. For a surface-mounted component it is thankfully large enough that I felt confident I could solder to its two ends for VIN and GND.

I found a resistor and capacitor nearby connected to the enable pin. But even though they are larger than the corresponding TPS61187 pin, they proved too delicate to solder to. I was able to connect to one side of the small capacitor, but an intended-to-be-gentle test tug ripped off the wire, taking a corner chunk of the capacitor with it. Oops, not so gentle after all.

I had even less luck finding a PWM signal connection near the TPS61187. I could see its pin connection on the circuit board, but it instantly sank out of sight to one of the other layers of the board and I found no place nearby where it surfaced.

After some thought, I decided to look for these signals near the largest chip sitting on the other end of the board, labelled “LG Display ANX2804”. I reasoned that EN and PWM are probably controlled from this chip and perhaps I could find something. There was nothing obvious near the chip itself, but I struck gold on the backside of the board. Sitting effectively under the ANX2804 are several labelled and accessible test points, and I was happy to see “LED_EN” next to one pad and “PWM” next to another. (There’s also a VLED pad further left that I didn’t notice until later, which would have been a better power connection than the VIN end of a surface mount decoupling capacitor.)

A continuity test confirmed these connect all the way across this thin strip of circuit board to the TPS61187. I think we are in business! Time to do some soldering.

Probing TPS61187 LED Driver Configuration

I’ve read through the datasheet for a TI TPS61187 LED driver chip and I think I have a fair (if not perfect) grasp of how to use it. Specifically, I want to use the one I found on the integrated driver board of an LG laptop LCD panel I’ve taken apart, where it drives the backlight module I wanted to salvage as an LED light. Armed with the datasheet and a multimeter, I started poking at the driver board to understand how it uses the chip. Since most of the chip’s configuration is done via resistors connected to certain pins, I could use the ohmmeter to decipher that configuration. I enlarged and printed out my picture of that area of the circuit board so I could scribble down notes as I went.

Here’s what I found on this board, listed in order of their corresponding datasheet sections:

7.3.1 Supply Voltage

For applications that are always-on, it is valid for the enable (EN) pin to be connected to the chip’s internal regulator output pin VDDIO. Since a laptop would want to put a screen to sleep, I did not expect EN to be tied to VDDIO and my meter confirmed it was not. Which means I’ll have to go hunting for my own connection to EN later.

7.3.2 Boost Regulator and Programmable Switch Frequency (FSCLT)

The internal boost converter can operate at a range of frequencies, giving the designer an option to trade off efficiency, inductor component size, and so on. I measured the selection resistor on this board as 822 kΩ. Plugging this into the datasheet formula, I arrive at a switching frequency of 608 kHz. Table 1 lists a few recommended values, and 833 kΩ is the recommendation for 600 kHz. I suspect this was indeed the intent, and this 822 kΩ resistor is pretty good at less than 2% off the nominal value.
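The two data points in this section (822 kΩ giving ~608 kHz, and 833 kΩ recommended for 600 kHz) both fit an inverse relationship with a constant of about 5×10^11 Hz·Ω. Treat that constant as my inference from these numbers rather than a quote from the datasheet; under that assumption the calculation looks like:

```cpp
// Boost switching frequency as a function of the FSCLT resistor.
// The constant K is inferred from the two data points in the text
// (822 kOhm -> ~608 kHz, 833 kOhm -> ~600 kHz), not copied from TI.
double switchingFreqKHz(double rFscltOhms) {
    const double K = 5.0e11; // Hz * Ohm (inferred)
    return (K / rFscltOhms) / 1000.0; // Hz to kHz
}
```

Plugging in 822 kΩ reproduces the ~608 kHz figure above, and 833 kΩ lands right at the recommended 600 kHz.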

7.3.3 LED Current Sinks

This is arguably the core parameter of driving LEDs. I traced the circuit and found two resistors in series, measuring roughly 20 kΩ and 31 kΩ, but together they added up to about 58 kΩ, so there’s obviously something else I missed. Nevertheless, plugging 58 kΩ into the datasheet formula sets the target at 20mA, a typical value for driving LEDs.

7.4.2 Adjustable PWM Dimming Frequency and Mode Selection (R_FPWM/MODE)

There are two ways to control the brightness of the backlight. The LEDs can be blinked directly by an external PWM signal, or they can be blinked by an internal signal generator. One advantage of using the internal signal is that the phases of the six strings are offset, so they blink in turn instead of simultaneously, which I expect to give smoother dimming. Another advantage of separating the two signals is that the external PWM can run at a far slower frequency, even one that would otherwise cause visible flicker, and it wouldn’t matter: once its duty cycle is read, it is copied for use by the internal generator running at a far faster, flicker-free rate.

Probing the configuration resistors showed this board uses the internal high speed PWM signal. The resistor is 3.9 kΩ, which works out to about 46.6 kHz. This is not one of the Table 2 recommendations; in fact, it is over twice the speed of the highest recommended value of 9.09 kΩ / 20 kHz. Higher switching frequency usually means smoother behavior but lower power efficiency, and I wonder what design meeting decisions at LG led to this value. Though of course it’s possible I’ve misread the value somehow.
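These numbers also fit an inverse relationship. Using the 9.09 kΩ / 20 kHz recommendation to pin down the constant (about 1.82×10^8 Hz·Ω, again my inference from the numbers in this section rather than the datasheet's exact formula):

```cpp
// Internal PWM dimming frequency as a function of the R_FPWM resistor.
// Constant inferred from the 9.09 kOhm -> 20 kHz Table 2 recommendation.
double dimmingFreqKHz(double rFpwmOhms) {
    const double K = 1.818e8; // Hz * Ohm (inferred)
    return (K / rFpwmOhms) / 1000.0; // Hz to kHz
}
```

With the 3.9 kΩ resistor measured on this board, that comes out to roughly 46.6 kHz, matching the figure above.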

7.4.4 Overvoltage Clamp and Voltage Feedback (OVC/FB)

These resistors configure how the boost converter works, and there’s an ideal formula in the datasheet mapping input voltage to LED output voltage. I was able to measure Rdown as 20 kΩ, but Rupper did not converge. My ohmmeter’s initial reading was in the 370-400 kΩ range, but the value continued to increase as I kept the probes in place, eventually reading off-scale high. I think this means there’s a capacitor in parallel?
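For context, a boost converter's feedback divider typically sets the regulated output as Vout = Vref × (1 + Rupper/Rdown), where Vref is the chip's internal feedback reference. With Rdown measured at 20 kΩ but Rupper unreadable and the TPS61187's actual reference voltage not in front of me, both stay as parameters in this sketch, and the example values in the test below are hypothetical:

```cpp
// Generic boost feedback divider math: the converter servos its output
// until the divider midpoint matches the internal reference, so
//   Vout = Vref * (1 + Rupper / Rdown)
// Rdown was measured at 20 kOhm; Rupper and Vref are unknowns here.
double boostOutputVolts(double vRef, double rUpperOhms, double rDownOhms) {
    return vRef * (1.0 + rUpperOhms / rDownOhms);
}
```

This is why pinning down Rupper matters: it directly scales the output voltage the chip will try to reach.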

Out of all the configurations I had hoped to read, this was the one I really wanted to get because it would inform me as to the best voltage level to feed into this system. With this ambiguous reading, I’m sadly no wiser.

But at least I have some idea of how this chip has been configured to run, so I could continue probing this circuit board looking for places where I can interface with this LED driving circuit.

My TPS61187 LED Driver Startup Plan

I wanted to see if I could power up just the LED backlight portion of a broken LG laptop LCD screen, model LP133WF2(SP)(A1). It was cracked and couldn’t display images, but the backlight still worked before I took it apart. Does it still work? I wanted to find out. I still had the screen’s integrated driver circuit board, so I will try that first. The biggest question mark is how the rest of the circuit board will react if I try to power up the TI TPS61187 LED driver chip in place. My fallback position is to bypass the chip and power the LED strings directly, but that wouldn’t be as energy-efficient and I would lose out on cool features. The one most novel to me is the phase-shifted PWM dimming control, where the six LED strings are dimmed round-robin instead of all at once for a smoother display. It’s not something I would likely do if I had to power the LEDs directly with my own circuit.

To see if I could get the original circuit running, I plan to do it in steps based on this excerpt from the datasheet:

7.3.4 Enable and Start-up

The internal regulator which provides VDDIO wakes up as soon as VIN is applied even when EN is low. This allows the device to start when EN is tied to the VDDIO pin. VDDIO does not come to full regulation until EN is high. The TPS61187 checks the status of all current feedback channels and shuts down any unused feedback channels. It is recommended to short the unused channels to ground for faster start-up.

After the device is enabled, if the PWM pin is left floating, the output voltage of the TPS61187 regulates to the minimum output voltage. Once the device detects a voltage on the PWM pin, the TPS61187 begins to regulate the IFB pin current, as pre-set per the ISETH pin resistor, according to the duty cycle of the signal on the PWM pin.

This translated to the following plan:

  1. Put minimal voltage across VIN and GND. If it doesn’t go up in smoke, probe VDDIO to see if it has some voltage.
  2. If that works, check the Enable pin. If I am to drive the chip, I will need to control the state of the Enable pin. This is where an interaction with existing components might cause headaches: something else on the board might be trying to keep it high or keep it low, and if I put the opposite state on that pin, I might damage that component unless I cut a trace somewhere to disconnect it.
  3. I might also have to find and cut a trace for the PWM pin for the same reason.
  4. Send the Enable signal, and check the voltage level across a LED string for the “minimum output voltage” mentioned by the datasheet.
  5. If all of the above works, then I’ll work on how to generate the PWM dimming signal.

Plans rarely survive intact upon their first contact with reality, but I wanted to have one before I got started. It will guide me as I probe the circuit board to understand how it uses a TPS61187.

TI TPS61187 Circuit’s Boost Converter

I had a broken LG laptop screen, model LP133WF2(SP)(A1), which I’ve disassembled, and now I’m digging into its backlight module. I want to see if I could make it work as a standalone diffuse light panel. I could probably wire up the LEDs directly with a voltage source and current-limiting resistor, but I also have its original integrated driver circuit board, which still worked as far as I knew. I’m sure most of it was concerned with moving pixels, which are no longer relevant, but there is also a TI TPS61187 chip on the board to drive the backlight section.

The PWM control signal is 3.3V friendly with a logic high threshold of 2.1V, so I could use either a 5V ATmega328 Arduino or a 3.3V ESP32. The part I didn’t understand was the power input. The datasheet says input voltage can range anywhere from 4.5V to 24V, and that it has a built-in boost converter to send up to 38V to the LED strings. I had expected to see a separate output pin for this higher voltage, but in the Typical Application schematic, the LED’s common anode is connected to the input voltage plane via a diode and an inductor. This combined with the following quote in the datasheet confused me:

there must be enough white LEDs in series to ensure the output voltage stays above the input voltage range

With the common anode seemingly tied to voltage input, I didn’t understand how the anode voltage could be higher than the input voltage. The next hypothesis is that instead of different voltage supply planes, perhaps there are separate ground planes at different levels. I saw there was a PGND pin for logic that is separate from AGND pin for the LED strings so the hypothesis had potential. But when I probed the circuit board, my meter said PGND and AGND pins are tied together on my board, eliminating the “separate ground levels” idea.

With a distinct sense that I had misunderstood something, I went to Wikipedia to learn more about boost converters and how they work. As soon as the diagrams came onscreen, I realized the inductor and diode I saw earlier WERE the boost converter. I just didn’t recognize them: I was only aware of the block diagram representation, and didn’t know what I was looking at when the core components of a boost converter were staring at me in the schematic. Now it all makes sense how the LED string common anode voltage can be higher than the input voltage, and I feel confident enough to devise a plan.
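The textbook relationship that resolved my confusion: an ideal boost converter's output is the input divided by (1 minus the switching duty cycle), so the chip raises the anode voltage above Vin simply by chopping current through that inductor:

```cpp
// Ideal (lossless, continuous-mode) boost converter transfer function:
//   Vout = Vin / (1 - D), where D is the switch duty cycle (0 <= D < 1).
// This is why the LED common anode sits ABOVE the input voltage even
// though it connects back to Vin through just an inductor and a diode.
double idealBoostVout(double vin, double dutyCycle) {
    return vin / (1.0 - dutyCycle);
}
```

For example, at a duty cycle of 0.7 a 10.8V input would ideally produce 36V, comfortably within the chip's stated 38V capability.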

Sawppy Rover Battery Voltage Monitor

Trying to debug a mobile web page without any debug support (no syntax error, no line number, not even console.log()) was extremely frustrating and quite draining. After that experience I was ready for a break from rover work. But before I take that break, I wanted to at least attempt to use the voltage-monitoring provision I wired into this draft of my Sawppy Rover ESP32 control board.

I encountered a few more mysteries of the ESP32 ADC while doing so. I initially set the ADC attenuation at 2.5db, which should have given me a range spanning 1.1V, fitting nicely with my 10:1 voltage divider. But the raw ADC values were far below where I thought they should be. My voltmeter read 0.6V on the pin after the divider, which should have been a little more than 2048 out of the ADC’s 12-bit range (0-4095). Instead I got values closer to 1350, and I don’t yet understand why.

Another mystery was working to convert that value to a voltage reading. I had thought the conversion would be straightforward: look at the raw ADC value, measure the actual voltage with my trusty Fluke meter, and divide them to obtain a conversion coefficient. But I found that the ratio between raw ADC value and voltage measured by the meter was not consistent across the voltage range. The coefficient I calculated from a 5V USB input voltage was different from the coefficient calculated from 7.4V of my 2-cell LiPo battery pack. This was a surprise.
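One quick improvement over a single ratio is a two-point linear fit, voltage = a × raw + b, anchored on two (raw reading, meter voltage) pairs like the two supplies mentioned above. The raw ADC readings in the usage note below are hypothetical stand-ins, since I didn't record the exact values:

```cpp
// Two-point linear ADC calibration: fit voltage = a * raw + b through two
// (raw reading, meter voltage) pairs. Better than a single ratio when the
// ADC response has an offset, as the ESP32's appears to.
struct Calibration { double a; double b; };

Calibration fitTwoPoint(double raw1, double v1, double raw2, double v2) {
    Calibration c;
    c.a = (v2 - v1) / (raw2 - raw1);   // slope between the two points
    c.b = v1 - c.a * raw1;             // offset from the first point
    return c;
}

double rawToVolts(const Calibration& c, double raw) {
    return c.a * raw + c.b;
}
```

For example, hypothetical raw readings of 1300 at 5.0V (USB) and 2100 at 7.4V (2S LiPo) yield a line that reproduces both calibration points exactly, and should track intermediate battery voltages better than either single-point coefficient did.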

To solve this problem correctly, I should consult Espressif ESP32 ADC documentation on their factory ADC calibration values and how to leverage that work into a more precise value. But what I have today is good enough to roughly monitor battery level. I wanted to keep going and get the rest of the basic infrastructure set up before I ran out of motivation to work on this code.

The small bit of ADC conversion code posted a message containing the calculated voltage to a FreeRTOS queue, where multiple tasks can make use of that information. I updated the HTTP server code to peek at values on that queue and send them to the user interface via websocket in the form of a JSON string. The ESP32 HTTP server sent real data, while the Node.js stub server only sent a placeholder value. Sawppy’s browser-side JavaScript was then modified to listen for that message, parse the JSON string, and print the data on the status bar.

With these changes, I could monitor the battery voltage level from my touchscreen control. This is enough for the moment. I have a few tasks for the future, using this voltage reading in a few other places. Starting with these two:

  1. If the voltage drops below a certain level, the rover should stop. Or at least use a “limp mode” that runs slower, in order to avoid an ESP32 brownout.
  2. I want the motor control PWM duty cycle to dynamically adjust according to the battery voltage, with the goal of maintaining consistent output voltage to the motors. For example, a command for full speed should always output 6V to the motors. If the battery is drained down to 6V, the PWM duty cycle would be 100%. If the battery is full, the duty cycle would stay below 100% even at “full speed” in order to avoid burning out the motor.
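That second item boils down to scaling the PWM duty cycle by the ratio of target voltage to battery voltage, clamped at 100% once the battery sags to the target. A minimal sketch, assuming the 6V target from the example above:

```cpp
// Scale PWM duty cycle so average motor voltage stays constant as the
// battery drains: duty = Vtarget / Vbattery, clamped to 100%.
double motorDutyCycle(double vTarget, double vBattery) {
    double duty = vTarget / vBattery;
    return (duty > 1.0) ? 1.0 : duty;
}
```

A full 2S pack at 8.4V would get about 71% duty for "full speed"; a pack drained down to 6V gets the full 100%.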

I’ll add those features in a future Sawppy brain coding sprint. Software development will go on pause while I live with the feature set I’ve got on hand for a while. Besides, sometimes you don’t even need the features I already have.

[Code for this project is publicly available on GitHub]

Windows Phone Debug Tools Rode Into Sunset

Organizing and commenting code for my Sawppy Rover ESP32 control project had two sides: the server-side code running on the ESP32 and the client-side code running in a web browser. The MIT license text and general comments went well, but when I tried to improve error handling, I ran into a serious problem with the old Internet Explorer browser built into Windows Phone 8.1: I had no debugging tools.

My only indication was that the page failed to load and all I saw was a blank screen. There were no error messages displayed onscreen, and unlike desktop browsers, I couldn’t even look at what had been printed to console.log(). Just a blank nothing. I backtracked through my code changes and eventually traced it down to a bit of error handling code I had added to the JavaScript file.

try {
  // Do potentially problematic thing
} catch {
  // Use alternate approach instead
}

This worked in desktop browsers, and it was also accepted by the Chrome browser on my modern Android phone, but it was treated as an error in Internet Explorer. Even though I didn’t care about the specific error thrown, IE didn’t permit omitting the error specifier. The following syntax was required:

try {
  // Do potentially problematic thing
} catch(err) {
  // Use alternate approach instead
}

This code change was not, in itself, a big deal. But it took an hour of trial-and-error to find the location of the error (no feedback = no line number!) and figure out what the browser would tolerate. During this time I was operating blind with only a binary “blank screen or not” as my feedback mechanism. I need better tools if I am to tackle more challenging JavaScript programming on Windows Phone IE.

Unfortunately, almost all of the debugging resources for the Windows Phone platform have disappeared. Microsoft’s own Visual Studio IDE, formerly the home of Windows Phone app development, doesn’t even mention the platform at all on its “Mobile Development” feature page. A promising resource titled Diagnosing Mobile Website Issues on Windows Phone 8.1 with Visual Studio (published in 2014) said the best tool to use is the Windows Phone emulator because it was easier, avoiding all the hoops one must jump through to put a physical Windows Phone device in developer mode for debugging. Today it’s not just a matter of “easier” since the latter is outright impossible: the Windows Phone developer portal has been shut down and I can no longer put any of my devices into developer mode.

But perhaps they’re both equally impossible, as the Windows Phone emulator is no longer an option for installation in Visual Studio 2019. A search for related pages led me to Mobile Apps & Sites with ASP.NET (published in 2011), which had a promising link “Simulate Popular Mobile Devices for Testing”. But that link is no longer valid; clicking it merely redirects back to the Mobile Apps & Sites with ASP.NET page. Many search boxes later, I eventually found what claims to be the Windows Phone emulator as a standalone download. I did not try to download or install it because by that point I was no longer interested.

I aborted my intention to organize my browser JavaScript code. Right now everything sits as peers at top level, globally accessible. I had intended to reorganize the code into three classes: one to handle drawing, one to handle user input (pointer events), and a third to handle server communications (websocket). However, it looks like Internet Explorer never supported the official JavaScript class mechanism. I can probably hack up something similar; JavaScript is flexible like that, and people were hacking up class-like constructs in JS long before the official keyword was adopted. But to do that I need debugging tools for when I run into the inevitable problems. Doing it without even the ability to see syntax errors or console.log() is self-punishment, and I am not interested in inflicting that upon myself. No debug tool, no reorganization. I will add comments, but the code structure will stay as-is.

This frustrating debugging session sapped my enthusiasm for working on Sawppy rover ESP32 control code. But before I go work on something else for a change of pace, I wanted to get a basic voltage monitoring system up and running.

[Code for this project is publicly available on GitHub]

Cleaning Up And Commenting Sawppy Rover ESP32 Code

The rover is now up and running on its own, independent of my home network. I felt this was a good time to pause feature development of Sawppy ESP32 control software. From a user experience viewpoint, this software is already in better shape than my SGVHAK rover software. Several of the biggest problems I’ve discovered since our local rovers started running have been solved. I feel pretty good about taking this rover out to play, more confident, in fact, than with rovers running my earlier software.

Before I turn my attention elsewhere, though, I need to pay off some technical debt in the areas of code organization and code commenting. I know if I don’t do it now, it’ll never happen. Especially the comments! I’m very likely to forget them if I don’t write them down now while they’re fresh in my mind. The code organization needed to be tamed because there were a lot of haphazard edits as I experimented and learned how to do things in two new environments simultaneously: ESP32 and JavaScript. The haphazard nature was not necessarily out of negligence but out of ignorance, as I had no idea what I was doing. Another factor was that I copied and pasted a lot of Espressif sample code (especially in the WiFi area), and I had instances of directly copied “EXAMPLE” text that needed to be removed.

I also spent time improving code robustness with error handling code paths. And where errors are not handled, they should at least be reported. I have many unhappy memories of trying to debug some problem that made no sense, eventually tracing it to a failed API call upstream whose error code was never reported. Here I found Espressif’s own ESP_LOG and ESP_ERROR_CHECK macros to be quite handy. Some were carried over directly from sample code. I had already started using ESP_ERROR_CHECK during experimentation because, again, I didn’t have robust error handling in place at the time but I wanted to know immediately if something failed.

Since this was my first nontrivial ESP32 project, I don’t think it’s a good idea for anyone to use and copy my amateurish code. But if they want to, they should be allowed to. Thus another bulk edit added the MIT license prefix to all of my source files. Some of them had pieces copied from Espressif examples, but those were released to the public domain so I understand there would be no licensing conflict.

While adding error handling code, though, I wasted an hour on a minor code edit because I didn’t have debug tools.

[Code for this project is publicly available on GitHub]

Sawppy Rover Independence with ESP32 Access Point

I wanted to make sure I had a good visual indication of control client disconnect before tackling the next task for my micro Sawppy rover’s ESP32 brain: creating its own little WiFi network. This is something I had configured the Raspberry Pi to do on SGVHAK rover and Sawppy V1. (Before upgrading Sawppy to use a commercial router with 5GHz capability and much longer range.)

All of my development to date has been done with the ESP32 logged on to my home network. This helps with my debugging, but if I take this rover away from home, I obviously won’t have my home network. I needed to activate the ESP32’s ability to act as its own WiFi access point, a task I had expected to be more complex than setting up the ESP32 as a client (station mode), but I was wrong. The Espressif example code for software-based access point (softAP) mode was actually shorter than its station mode counterpart!

It’s shorter because it was missing a lot of the functionality I had expected to see. My previous experience with ESP32 acting as its own access point had been with products like Pixelblaze, but what I saw was actually a separate WiFi manager module built on top of ESP32’s softAP foundation. That’s where the sophisticated capabilities like captive portal configuration are implemented.

This isn’t a big hindrance at the moment. I might not have automatic forwarding with a captive portal, but it’s easy enough to remember to type in the default address for the ESP32 on its own network. (http://192.168.4.1) On the server side, I had to subscribe to a different event to start the HTTP server. In station mode, IP_EVENT_STA_GOT_IP signals that we’re connected to the access point and it has assigned an IP address. This doesn’t apply when the ESP32 is itself the access point. So instead I listen for a control client to connect with WIFI_EVENT_AP_STACONNECTED, and launch the HTTP server at that point. The two events are not analogous, but they are close enough for the purpose of micro rover control. Now it can roam without requiring my home WiFi network.

Sometime in the future I’ll investigate integrating one of those WiFi manager modules so Sawppy rover users can have the captive portal and all of those nice features. A quick round of web searching found this one by Tony Pottier, with evidence of several more out in circulation. These might be nice features to add later, but right now I should clean up the mess I have made.

[Code for this project is publicly available on GitHub.]

Make Disconnected Client Visually Obvious

Testing my server code to disconnect inactive websocket clients, I learned a behavior I hadn’t known before: when a browser tab is no longer the foreground tab, the page’s scripts continue running. This made sense in hindsight, because we need those scripts to run for certain features. Most web mail clients display the number of unread messages in their tab title, and when a new piece of mail arrives, that number can’t be updated unless scripts are allowed to run.

In my scenario, I wanted the opposite. If someone has put my Sawppy rover control page in the background, they are no longer driving and should vacate the position for another. I started looking for HTML DOM events that I could subscribe to and learn whether my tab is no longer in the foreground, but a few experiments with the “onblur” event didn’t function as I hoped they would. As a fallback I decided to take advantage of a side effect: modern browsers do keep running scripts for background tabs, but they do so at a much slower rate. I “just” have to shorten the timeout interval, and that would disconnect clients running slowly just as it disconnects those who have lost network connection or whose scripts have stopped. This is fragile because it depends on unspecified browser behavior, but it is good enough for today.

When testing this, I noticed another problem: for the user, it’s not very evident when the server has disconnected from their control client. I have a little bit of text at the bottom saying “Disconnected”, but while testing and juggling multiple browsers I found myself wishing for something more visually obvious. I thought about blanking out the page completely, but such a drastic change might be too disorienting for the user. For the best user experience I want an obvious sign, but I don’t want to club them over the head with it! I decided to change the color of the touchpad to grayscale in the websocket.onclose handler. The user can see they’re still on the page they expected, but they can also clearly see something is different. This accomplishes my goal.

From an accessibility perspective, this isn’t great for color-blind users. It’ll work for certain types of color blindness like red-green color blindness, but not for fully color-blind users, to whom the page was all gray to begin with. Thus I do not see this behavior as a replacement for the “Disconnected” text. That text still needs to be there for those who cannot see the change in color. And knowing connected/disconnected status will become more important as the little rover wanders away from my home, away from the powerful standalone wireless access point, and has to become its own wireless access point.

[Code for this project is publicly available on GitHub.]

Detect and Disconnect Inactive Web Sockets

Constraining my Sawppy rover logic to only a single rover operator was good, but that code immediately exposed the next problem: a web socket on one side doesn’t always know when it has lost contact with the other end. When this happens to my ESP32 server on the rover, it means the rover doesn’t know its driver is gone. And thanks to the code I just added, it means nobody else can get into the driver’s seat, either.

This is a known failure mode for web sockets, and there’s a prescribed mechanism to deal with it: a web socket heartbeat using the “ping” and “pong” control frames. Either end of a web socket can choose to send a ping, and upon receipt of a ping a web socket implementation is obligated to reply with a pong. Doing this on a regular basis lets us verify the connection is still alive.

I started writing code to send pings, but then I realized it’s not really necessary. The browser client is obligated to send steer and speed values on a regular basis, and that can serve as my heartbeat. I can set a timer each time the rover receives the steer and speed commands, and if it’s been too long since the last transmission, the rover can proactively terminate the web socket so another rover operator can assume command.

As usual I started with my Node.js server running on my desktop to explore the concept and get an idea of how it’s supposed to work. In JavaScript I start a timer with setTimeout(), and every time I receive a client command I call refresh() on that timer to reset the clock. If the timer goes off, it’s been too long, and I call terminate() on the web socket instance, which I now need to keep track of, something I had managed to avoid earlier.
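The watchdog logic can be sketched deterministically with an explicit clock in place of real setTimeout()/refresh() calls. This is a simplified illustration with names and numbers of my own choosing, not the project's actual code:

```javascript
// Sketch of the inactivity watchdog idea: remember when the last
// command arrived, and consider the client expired once too much
// time has passed. An injectable clock keeps the sketch testable;
// the real version uses setTimeout() plus timer.refresh() instead.
class InactivityWatchdog {
  constructor(timeoutMs, nowFn = Date.now) {
    this.timeoutMs = timeoutMs;
    this.now = nowFn;
    this.lastCommand = this.now();
  }

  // Call whenever a steer/speed command arrives (like timer.refresh()).
  commandReceived() {
    this.lastCommand = this.now();
  }

  // True when the client has been silent too long and should be dropped.
  isExpired() {
    return this.now() - this.lastCommand > this.timeoutMs;
  }
}
```

When isExpired() reports true, the server would call terminate() on the tracked web socket so another operator can take over.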

Once I understood how it was supposed to work, I moved on to the implementation on ESP32. For this task I chose to use a FreeRTOS software timer, which has mostly the same semantics as the JavaScript version. When a new web socket is accepted, I call xTimerStart(). Every time the rover receives a command, xTimerReset() is called. If a reset does not occur in time, I queue up a web socket control frame of type HTTPD_WS_TYPE_CLOSE to close up shop.

That code took care of the server side logic, but that left a problem on the client side: How can I make it obvious when the server has decided to quit listening to commands from a particular controller?

[Code for this project is publicly available on GitHub]

Sawppy Rover Driver Max Occupancy: One

Steering control precision was something I found lacking in my SGVHAK rover software project. This is my second effort at browser-based rover control, and I added code to vary steering rate as a function of speed. Over the next few weeks (or more) I will see if it’s an overall improvement and whether it’s worth keeping. The next problem I wanted to solve with browser-based rover driving is that HTTP was designed to be completely stateless and to serve many clients. This doesn’t work so well for driving a vehicle, where we want only one driver at the wheel.

I didn’t know how to solve this problem on SGVHAK rover. Once I had an HTTP web server up and running, it would happily serve rover control UI to any number of clients. And it would happily accept and process HTTP POST submissions from any and all of those clients. In practice this means we can have multiple touchscreen phones all trying to drive the rover, and the rover ends up very confused by conflicting messages arriving interleaved with each other. Steering servos would rapidly flick between multiple positions, and driving motors would rapidly change speeds. None of this is good for the hardware.

Switching from stateless HTTP POST to web sockets gave me a tool to solve this problem. Now the server side code can keep a reference to a specific web socket, and any additional attempts to set up a rover driving web socket can be rejected. This allows me to keep the number of rover drivers to at most one.

For my Node.js server, I didn’t even need to keep a global reference. The web socket server class maintains a list of clients, so I can simply check the number of clients. The trickier part for me was figuring out how to reject additional sockets. I looked fruitlessly in the web socket server for a while, because the answer is actually a little bit upstream: the HTTP server has an “upgrade” event I can subscribe to, which is raised whenever a web socket client requests upgrading from HTTP GET to web socket. That is where I can reject the connection if there is already an existing client. With the Node.js server configured to test this scenario on my development desktop, I found a few bugs in my client-side browser code. Once it worked I could continue to my ESP32 code.

For my ESP32 server, this means tracking two things: an identifier for the HTTP server (httpd_handle_t) and a socket descriptor. Together those two values uniquely identify a websocket. The URI handler I registered to handle websocket upgrade requests is given an instance of httpd_req_t. From that, I can obtain both parts of a unique identifier and compare them against future calls into the URI handler. I process requests if the server handle and socket descriptor match, and reject them if they don’t. With this code in place, only a single driver commands the rover at any given time. But this code also immediately exposed another problem: how to detect if that single driver is gone?

[Code for this project is publicly available on GitHub.]

Variable Steering on Sawppy ESP32 HTML Control

It’s great to see my ESP32-based HTML control scheme for Sawppy rover up and running end-to-end. However, the code to get this far is an extremely rough first draft. I still have a lot of refinement work ahead. The first thing I wanted to tackle was control precision while the little rover is running at high speed. My Spektrum radio gave me extremely precise control over steering angle, but my touchscreen control was comparatively crude. Instead of going in the direction I want, the rover would dart too far one way, I would over-correct, and it would dart the other way, over and over. I noticed this problem with my HTML touch joystick control pad on SGVHAK rover and Sawppy V1 rover, but they were larger rovers that travelled slower, so the problem wasn’t as bad. A little hyperactive rover with a much shorter wheelbase suffers far more from twitchy steering. So while it was something I just tolerated on the larger rovers, it became a priority to address on the little one.

I implemented the first idea that came to mind: make the steering range a function of speed. I hoped this would feel intuitive relative to everyday cars, since we perform tight turns at low speed and learn to keep steering gentle at higher speeds. The caveat is that it isn’t implemented the same way. In our cars the steering ratio remains constant no matter the speed; we just learn to be gentle and not yank the wheel about on the highway. Now I am going to vary the control ratio itself, and this might be confusing. When car manufacturers started exploring variable-ratio power steering racks on their cars, some early implementations made customers unhappy.

Back to my little rover. The initial implementation mapped the left-right position of my joystick pad directly to a particular turning radius, no matter what speed the rover was travelling. My first experiment modifies that so the steering ratio drops, and the turning radius widens, as speed increases. This means it would be very hard to maintain a specific turning radius while varying the speed, but that’s not something I see as a rover driving pattern anyway, so maybe it’s OK for that to be difficult.

With the variable ratio implemented, at top speed (joystick pad at the top edge) the left-right steering range is only a fraction of maximum. This allows me to fine-tune rover heading as it runs, and I don’t have to worry about accidentally throwing the rover into a sharp U-turn. This part worked well, but it is tricky to drive the rover while slowly accelerating, since the steering angle changes as I accelerate. It’s possible this will prove to be a worse problem than the one I set out to solve; I don’t know yet. I’ll drive with this variable ratio mechanism in place for a while and see how it goes. I’ll also make sure only one person is driving.
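The idea can be sketched as a small function. The function name, the maximum angle, and the minimum scale factor below are illustrative assumptions, not my actual tuning values:

```javascript
// Sketch of variable steering ratio: the joystick's left-right
// position maps to a steering angle, but the usable range shrinks
// as speed rises. joystickX and speed are normalized to [-1, 1];
// maxAngleDeg and minScale are made-up illustrative constants.
function steeringAngle(joystickX, speed, maxAngleDeg = 45, minScale = 0.25) {
  // At zero speed use the full steering range; at full speed only
  // a fraction (minScale) of it remains, for fine heading control.
  const scale = 1 - (1 - minScale) * Math.abs(speed);
  return joystickX * maxAngleDeg * scale;
}
```

With these example constants, full joystick deflection commands 45 degrees when stopped but only 11.25 degrees at top speed, which is exactly the "hard to hold a turning radius while accelerating" tradeoff described above.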

[Code for this project is publicly available on GitHub]

Micro Sawppy Beta 3 Running With HTML Control

After I established websocket communication between an ESP32 server and a phone browser JavaScript client, I needed to translate that data into my rover’s joystick command message modeled after ROS joy_msg. For HTML control, I decided to send data as JSON. This is not the most bandwidth-efficient format. In theory I could encode everything into binary with two signed 8-bit integers: one byte for speed and one byte for steering is all I really need. However, I have ambitions for more complex communication in the future, so JSON’s tolerance for extra fields has merit from the perspective of forward compatibility.

Of course, that meant I had to parse my simple JSON on the ESP32 server. The first rule of writing my own parser is: DON’T. It’s a recurring theme in software development: every time someone thinks “Oh, I can just whip up a quick parser and it’ll be fine,” they have been wrong. Fortunately Espressif packaged the open source cJSON library in ESP-IDF, and it is as easy as adding #include "cJSON.h" to pull it into my project. Using it to extract steering and speed input data from my JSON, I could translate that into a joy_msg data structure for posting to the joystick queue. And the little rover is up and running on HTML control!
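On the browser side, the round trip can be sketched in a few lines. The field names below are simplified illustrations of my own; the real message follows the ROS joy_msg model:

```javascript
// Illustrative sketch of the JSON command format. Field names here
// are simplified assumptions, not the project's actual wire format.
function encodeCommand(steer, speed) {
  return JSON.stringify({ steer: steer, speed: speed });
}

function decodeCommand(text) {
  const msg = JSON.parse(text);
  // Unknown extra fields are simply ignored, which is the
  // forward-compatibility benefit of JSON over a packed binary format.
  return { steer: msg.steer, speed: msg.speed };
}
```

Because the decoder only picks out the fields it knows, a future client that adds, say, battery telemetry to the same message would not break an older parser.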

The biggest advantage of using ESP32’s WiFi capability to turn my old Nokia Lumia 920 phone into a handheld controller is cost. Most people I know already have a touchscreen phone with a browser, many of whom actually own several with older retired phones gathering dust in a drawer somewhere. Certainly I do! And yeah I also have a Spektrum ground radio transmitter/receiver combo gathering dust, but that is far less typical.

Of course, there are tradeoffs. A real radio control transmitter unit has highly sensitive inputs and tactile feedback. It’s much easier to control my rover precisely when I have a large physical wheel to turn and a centering spring providing resistance depending on how far off center I am. I have some ideas on how to update the browser interface to make control more precise, but a touchscreen will never have the spring-loaded feedback. Having used an RC transmitter a few days before bringing up my HTML touch pad, I can really feel the difference in control precision. I understand why a lot of Sawppy rover builders went through the effort of adapting their RC gear to their rovers. It’s a tradeoff between cost and performance, and I want to leave both options open so each builder can decide what’s best for themselves. But that doesn’t mean I shouldn’t try to improve my HTML control precision.

[Code for this project is publicly available on GitHub.]

Shiny New ESP32 WebSocket Support

My ESP32 development board is now configured to act as an HTTP server sending static files stored in SPIFFS. But my browser scripts trying to talk to the server via websocket would fail, so it’s time to implement that feature. When I did my initial research into technologies I could use for this project, there was a brief panic: I found a section in Espressif documentation for a websocket client, but there wasn’t a counterpart for a websocket server! How could this be? A little more digging found that websocket server support is part of the HTTP Server component, and I felt confident enough to proceed.

That confidence might have been a little misplaced. I wouldn’t have felt as confident if I had known websocket support was added very recently. According to GitHub it was introduced to the ESP-IDF main branch on November 12, 2020. As a new feature, it is not enabled by default. I had to turn it on in my project with the CONFIG_HTTPD_WS_SUPPORT parameter, which was something I could update using the menuconfig tool.
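For reference, the resulting change is a single line in the project's sdkconfig (the parameter name comes from the step above; menuconfig writes it out for you):

```
# Enable websocket server support in the ESP-IDF HTTP Server component
CONFIG_HTTPD_WS_SUPPORT=y
```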

Side note: since menuconfig was designed to run in a command prompt, using it from inside the Visual Studio Code PlatformIO extension is a little different due to input handling of arrow keys. The workaround is that J and K can be used in place of the up/down arrow keys.

Attempting to duplicate what I had running in my Node.js placeholder, I found one limitation of ESP32 websocket support: with Node.js I could bind the HTTP and WS protocols both to the root URI, but the ESP32 stack requires a different URI for each protocol. This shouldn’t be a big deal to accommodate in my project, but it does mean the ESP32 can’t be expected to emulate arbitrary websocket server configurations.

I also stumbled across a problem in the example code, which calls httpd_ws_recv_frame with a zero length to query the frame’s length, then allocates an appropriately sized buffer before calling it again to actually fetch the data. Unfortunately that didn’t work for me: the call failed with ESP_ERR_INVALID_SIZE instead of returning a filled structure with the length. My workaround was to reuse the same large buffer I had allocated for serving HTML. I think I’ve tracked this down to a version mismatch: the “call with zero to get length” feature was added only recently, and apparently my version of ESP-IDF isn’t recent enough to run the example code. Ah, the joys of running close to the leading edge. Still, teething problems of young APIs aside, it was enough to get my rover up and running.

[Code for this project is publicly available on GitHub.]

ESP32 HTTP Was Easy But Sending Files Needs SPIFFS

I got my ESP32 onto my local home WiFi network, but it doesn’t respond to anything yet. The next step was to write code so it could act as a static web server, sending files stored in ESP32 flash memory to clients requesting them via HTTP GET. The good news is that setting up an HTTP server was really easy, and I quickly got it up and running serving a “Hello World” string hard-coded in my ESP32 source code.

I briefly considered calling this situation good enough and moving on. I could embed my HTML/CSS/JavaScript files as hard-coded strings inside my C files. But doing so would mean reviewing those files to make sure they don’t conflict with C string syntax, and I would have to do that every time I wanted to update any of them. That is quite clumsy. What I really want is to keep the files served over HTTP separate from my C code, so that they can be updated independently and I don’t have to review them for C string incompatibility.

Doing this requires carving out a portion of the ESP32’s flash memory for a simple file storage system called SPIFFS. Allocation of flash storage is declared in a partition file, a list of comma-separated values (CSV) describing partition information like size, offset, and assigned purpose. The PlatformIO ESP-IDF project template includes a default partition file, but it has no provision for a SPIFFS partition, so I needed to add one to my project to override that default. I found an example partition file as my starting point, copying it into my project and making only minor changes. (I didn’t need quite as large a SPIFFS partition.) If I were using ESP-IDF directly, I believe I would use the menuconfig tool to point at my partition file. But as I’m using ESP-IDF indirectly via PlatformIO, I specified my partition file location using a parameter in the platformio.ini configuration file.
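To give a feel for the format, here is a hypothetical partition CSV with a SPIFFS entry added. The offsets and sizes are illustrative, not the values from my project:

```
# Hypothetical partitions.csv sketch; offsets/sizes are illustrative.
# Name,    Type, SubType, Offset,  Size
nvs,       data, nvs,     0x9000,  0x6000
phy_init,  data, phy,     0xf000,  0x1000
factory,   app,  factory, 0x10000, 1M
spiffs,    data, spiffs,  ,        0x100000
```

Leaving the offset blank for the last entry lets the tooling place it after the preceding partition.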

Once I have a partition, I need to put my files in it. Apparently it’s not as simple as copying files to a USB flash drive: I have to build the entire data partition (like building a disk image for a PC) and upload the whole thing. There are ESP-IDF command line tools to build a SPIFFS partition, but I decided to go the PlatformIO route by specifying the platformio.ini parameter data_dir. I could point it at my Node.js working directory, which is great because it means I only have one copy of these files in my code repository, eliminating the task of keeping multiple copies in sync. From the PlatformIO UI I could then use “Platform/Build Filesystem Image” followed by “Platform/Upload Filesystem Image”. I haven’t figured out whether this is necessarily separate from a code update or if there’s a way to do both at the same time.

Putting files in SPIFFS isn’t useful until I have code to read those files. I followed the links to the Espressif code example for reading and writing to SPIFFS. And ugh… I haven’t had to deal with old school C file input/output APIs in quite some time. I briefly considered keeping things memory efficient by breaking these file I/O actions into small pieces, but I don’t need that kind of sophistication just yet. To simplify the code, and because I have RAM to spare at the moment, I allocated a large buffer so all the operations to read from SPIFFS and send via HTTP can be done in a single pass. That worked well for sending data to the control client, but now I need to listen for control commands coming back.

[Code for this project is publicly available on GitHub.]

Notes On Getting ESP32 On WiFi

I’m working towards a minimal first draft of my Sawppy rover’s HTML-based control scheme. I tackled client side first because that side had more similarity to my earlier SGVHAK rover control project. I got a first draft by using a Node.js server on my desktop as a placeholder for the ESP32, now it’s time to start working on the real server code. The first step is to learn how to activate an ESP32’s WiFi capabilities in software.

There are many options for setting up networking between an ESP32 and a browser running on a phone. I decided to start by connecting it as a node to my home network, which Espressif documentation calls “station mode”. I expect this to be easier to develop and debug against, but obviously it won’t work for a rover roaming away from home. For that, the ESP32 will have to be switched to “AP mode”, acting as a WiFi access point. To accommodate this, I aim to structure my code so that setting up a WiFi connection is its own separate FreeRTOS task, letting me switch between station and AP (or maybe even more?) variants as needed at compile time. If I ever need the option to switch between modes at runtime, I can integrate one of the available ESP32 WiFi managers.

To learn about setting up an ESP32 in station mode, I found Espressif’s written documentation pretty thin. I had to do most of my learning from their station example, and thankfully it was indeed enough to get my ESP32 on my network. Converting the sample code to a FreeRTOS task I could incorporate into my rover project required more than 2 kilobytes of stack, which is what I had been using as a default. A bit of quick experimentation established that 3 kilobytes are enough for the new WiFi task. Right now I’m not too worried about RAM footprint, so I won’t spend time pinning down exactly how much between 2 and 3 kilobytes it needs.

One item that I saw and copied without understanding was a code block initializing the ESP32’s non-volatile storage (NVS), used to store information in key-value pairs. I can infer that the ESP32 WiFi stack requires some of those values, but I couldn’t find documentation on exactly what is needed. The more obvious WiFi-related pieces of information (like the SSID for my home network and its associated password) are in my code, so it must be something less obvious; I don’t yet know what.

Events were an aspect of the sample code that confused me for a while. Eventually I figured out my confusion stemmed from mixing up two different things. One of them is an “Event Group”, a FreeRTOS mechanism used here so a piece of code can wait until WiFi initialization completes before doing network things. The other is an “Event Loop”, an Espressif creation for ESP-IDF. They both have “event” in their name but are completely independent, which is why I got lost trying to understand how the two mechanisms interacted behind the scenes: the answer is that they don’t! So when reading the code it’s important to figure out which “event” we are talking about. The Espressif reference for WiFi initialization events, for example, refers to the event loop.

I’m happy I was successful in connecting an ESP32 to my home WiFi by following example code. Now that it is on the network, I need to make it do something useful.

[Code for this project is publicly available on GitHub.]

Sawppy HTML Canvas and Websocket

After a little bit of research to figure out which parts of HTML I should be able to use in my Sawppy rover control project, I got to work. Some minor parts of the 2D joystick touchpad <canvas> code were copied from the SGVHAK rover project, but the majority were not. One change is that I no longer send updates upon every user input event, as I’ve learned that generates far too much data for this purpose. The internal calculations are still made in the input event handlers, but the updated coordinates are only sent to the server by a polling loop set up to run at regular intervals.
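The decoupling can be sketched in a few lines. The names and the interval mentioned in the comments are illustrative assumptions, not the project's actual values:

```javascript
// Sketch of decoupling input events from network sends. Pointer event
// handlers only update local state, which is cheap; a periodic polling
// loop transmits whatever the latest values are.
let latest = { steer: 0, speed: 0 };

// Called from pointer event handlers; no network traffic happens here.
function onPointerInput(steer, speed) {
  latest = { steer: steer, speed: speed };
}

// Called by the polling loop, e.g. setInterval(send, 100) in the page,
// which would pass this string to the websocket.
function buildUpdate() {
  return JSON.stringify(latest);
}
```

If several pointer events land between two polls, only the most recent values go out, which is exactly the data-rate reduction described above.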

Speaking of the input event handlers, I switched to the standard generalized pointer events API instead of APIs specific to mouse input or touch input, and a quick test showed it functioned as expected on all the platforms I tried. (Chrome on Windows 10, Chrome on Android, Safari on iOS, and IE on Windows Phone 8.) If there’s a good reason why I didn’t use it earlier for SGVHAK rover, I have yet to encounter it.

Those changes were relatively minor compared to the next part: switching to websocket for communicating steering and speed values to the server. For this I had to modify my Node.js placeholder server, as it could no longer be just a static file server. I expected to find a Node.js websocket library and was not surprised to find there were actually several to choose from. Based on a quick glance at their getting-started examples, ws looked like the one I could get up and running fastest for my purposes, and it did not disappoint. It took far less work than I had expected to get websocket communication up and running, albeit only one-way from client browser to server. But this is fine; the Node.js placeholder has done its job, and now I have a very rough but minimally functional set of client-side code for Sawppy HTML control. Enough for me to start looking at ESP32 server-side code.

[Code for this project is publicly available on GitHub]