High Power 600W Power Supply (HP1-J600GD-F12S)

Along with the “keyboard is broken” laptop, I was also asked to look into a mid-tower PC that would no longer turn on. I grabbed a power supply I had on hand and plugged it into the motherboard, which happily powered up. Diagnosis: dead power supply. I bought a new power supply to bring the PC back to life; now it’s time to take apart the dead one to see if I can find anything interesting. Could it be as easy as a popped circuit breaker or a blown fuse?

According to the label, the manufacturer has the impossibly unsearchable name of “High Power”. Fortunately, the model number HP1-J600GD-F12S is specific enough to find a product page on the manufacturer’s site. The exact model string also returned a hit for a power supply under Newegg’s house brand Rosewill, implying the same device was sold under Newegg’s own name. Amusingly, Newegg’s Rosewill product listing included pictures with “High Power” embossed on the side.

If there is a user-replaceable fuse or a user-accessible circuit breaker, they should be adjacent to the power socket and switch. I saw nothing promising at the expected location or anywhere else along the exterior.

Which meant it was time to void the warranty.

The exterior enclosure consisted of two pieces of sheet metal, each bent into a U shape and held together with four fasteners. Once they were pried apart, I had to cut a few more zip ties holding the cooling fan power wire in place before I could unplug it for a clear view of the interior. Everything looked clean. In fact, it looked too clean — either this computer hadn’t been used very much before it blew, or it lived in a location with good air filtration to keep out dust.

Still on the hunt for a circuit breaker or a fuse, I found the standard boilerplate fuse replacement warning. Usually, this kind of language would be printed immediately adjacent to a user-serviceable fuse. But getting here required breaking the warranty seal and none of the adjacent components looked like a fuse to me.

Disassembly continued until I could see the circuit traces at the bottom of the board. Getting here required some destructive cutting of wires, so there’s no bringing this thing back online. Perhaps someone with better skills could get here nondestructively, but I lacked the skill or the motivation to figure things out nicely. I saw no obviously damaged components or traces on this side, either. But more importantly, now I could see that the 120V AC line voltage input connects to a single wire. That must lead to the fuse.

Turning the board back over, I saw the line voltage input wire (brown) connected to a black wire that led to a cylinder covered in heat-shrink tubing and held in place by black epoxy. The shape of that cylinder is consistent with a fuse. The heat-shrink and epoxy meant this part was really not intended to be easily replaced.

Once it was unsoldered, I could see the electronic schematic symbol for a fuse printed on the circuit board. The “F” in its designation “F1” is consistent with “Fuse”, as are the amperage/voltage ratings listed below it. This fuse is a few centimeters away from the caution message I noticed earlier, which was farther away than I had expected. My multi-meter showed no continuity across this device, so the fuse had indeed blown. I cut off the heat-shrink hoping to see a burnt filament inside a glass tube, but this fuse didn’t use a glass tube.

I started this teardown wondering if it was “as easy as a popped circuit breaker or a blown fuse”. While it was indeed a blown fuse, a nondestructive replacement would not have been easy. I don’t know why the fuse on this device was designed to be so difficult to access and replace, but I appreciate it is far better to blow a fuse than for a failing power supply to start a fire.

Windows PC Keyboard Beeps Instead of Types? Turn Off “Filter Keys”

A common side effect of technical aptitude is the inevitable request “Can you help me with my computer?” Whether this side effect is an upside or downside depends on the people involved. Recently I was asked to help resurrect a computer that had been shelved due to “the keyboard stopped working.”

Before I received the hardware, I was told the computer was an Asus T300L allowing me to do a bit of research beforehand. This is a Windows 8 era touchscreen tablet/laptop convertible along the lines of a Microsoft Surface Pro or the HP Split X2. This added a twist: the T300L keyboard base not only worked while docked, but it could also continue working as a wireless keyboard + touchpad when separated from the screen. This could add a few hardware-related variables for me to investigate.

When I was finally presented with the machine, I watched the owner type their Windows login password using the keyboard. “Wait, I thought you said the keyboard didn’t work?” “Oh, it works fine for the password. It stops working after I log in.”

Ah, the hazards of imprecise English. When I was first told “keyboard doesn’t work” my mind went to loose electrical connections. And when I learned of the wireless keyboard + touchpad base, I added the possibility of wireless settings (device pairing, etc.) to the list. I had a hardware-oriented checklist ready, and now I could throw it all away. If the keyboard worked for typing in the Windows password, the problem was not hardware.

Once the Windows 8 desktop was presented, I could see what “keyboard stopped working” meant: every keypress resulted in an audible beep but no character typed on screen. A web search with these symptoms found a Microsoft forum thread titled “Keyboard Beeps and won’t type” with the (apparently common) answer to check Windows’ Ease of Access center. I made my way to that menu (the touchscreen worked fine) and found that Filter Keys was turned on.

Filter Keys is a feature that helps users living with motor control challenges such as shaky hands, which can result in pressing a key multiple times when they meant to press it once, or jostling adjacent keys during that keypress. Filter Keys slows the computer’s keyboard response so that only long, deliberate presses register as a single action. A rapid tap and release of a key — which is what usually happens in mainstream typing — is ignored and only a beep is played. Which is great, if the user knows how to use Filter Keys and turned it on intentionally.

In this case, nobody knew how this feature got turned on for this computer, but apparently it was not intentional. They didn’t recognize the symptoms of Filter Keys being active. Lacking that knowledge, they could only communicate their observation as “the keyboard stopped working.” I guess that description isn’t completely wrong, even if it led me down the wrong path in my initial research. Ah well. Once Filter Keys was turned off, everything was fine again.

Mystery Slot in Xbox Series X Packaging

In the video game console market, I am definitely on Team Green of the pie chart going all the way back to the original Xbox. Right now, the Xbox hardware product line is split into two: the expensive Series X with maximum power and the Series S which made design tradeoffs for affordability. Supplies of both were hampered by global electronics supply chain disruption at launch. I wanted a Series X but I didn’t want it badly enough to pay a scalper premium. The Series S got sorted out and has been widely available for several months, and I was happy to find that the supply of Series X is just starting to catch up to demand. During this year’s Black Friday sales season when everyone was out looking for discounts, I was just happy to find Series X available at all for list price. (There were discounts on Series S, but I was not interested.)

When I flipped open the box, I was happy to see that Microsoft put some design effort into its packaging. The unboxing experience isn’t up to the premium bar set by Apple & others, but it is a big step above the “sufficient and practical” packaging of past Xbox consoles. The console itself is front and center, wrapped like a gift under a “Power Your Dreams” banner. A cardboard box behind the console held a power cable, a 4K120FPS capable HDMI cable, and a single Xbox controller complete with a pair of AA batteries.

Underneath the console, between two blocks of packaging foam, is a piece of cardboard. This turned out to be a “Getting Started” card for those too impatient to read a manual.

The bottom of that card has a fold, and a rounded slot was cut out of it. Why is it shaped like that?

Making this fold and cutting out that slot consumed manufacturing time and money. This was definitely an intentional design choice, but I can’t tell what its purpose could be. The information printed on the card is specific to the Series X, so this isn’t a shared design where the cutout served a purpose in some other product’s box and simply went unused here. I thought maybe the slot was supposed to hold the card somewhere in the box so we could see it when we flipped the box open, but this card was just lying loose in the bottom of mine. The “Power Your Dreams” banner is front and center, so that’s not where this card goes, and I don’t see anything elsewhere for that slot to fit onto.

The rest of the package is too well thought-out for this slot cutout to have been an accident, yet it went unused. I can smell a story here, and I am fascinated, but I have to accept that I will never know the answer.

MageGee Wireless Keyboard (TS92)

In the interest of improving ergonomics, I’ve been experimenting with different keyboard placements. I have some ideas about attaching a keyboard to my chair instead of my desk, and a wireless keyboard would eliminate concerns about routing wires. Especially wires that could get pinched or rolled over when I move my chair. Since this is just a starting point for experimentation, I wanted something I could feel free to modify as ideas may strike. I looked for the cheapest and smallest wireless keyboard and found the MageGee TS92 Wireless Keyboard (Pink). (*)

This is a “60% keyboard”, a phrase I’ve seen used two different ways. The first refers to the physical size of individual keys, meaning keys smaller than those on a standard keyboard. The second refers to an overall keyboard with fewer keys than standard, while each individual key remains full size. This is the latter: eliminating the numeric keypad, arrow keys, etc. means this keyboard has only 61 keys, roughly 60% of a standard keyboard’s typical 101 keys. But each key is still the normal size.

The lettering on these keys is… sufficient. Edges are blurry rather than crisp, and consistency varies, but the labels are readable so it’s fine. The key travel is pretty good, much longer than a typical laptop keyboard’s, but the tactile feedback is poor. That is consistent with cheap membrane keyboards, which of course this is.

The back side of the keyboard shows a nice touch: a slot to store the wireless USB dongle so it doesn’t get lost. There is also an on/off switch and, next to it, a USB Type-C port (not visible in picture, facing away from camera) for charging the onboard battery.

Looks pretty simple and straightforward, let’s open it up to see what’s inside.

I peeled off everything held with adhesives expecting some fasteners to be hidden underneath. I was surprised to find nothing. Is this thing glued together? Or held with clips?

I found my answer when I discovered that this thing had RGB LEDs. I did not intend to buy a light-up keyboard, but I have one now. The illumination showed screws hiding under keys.

There are six Phillips-head self-tapping plastic screws hidden under keys distributed around the keyboard.

Once they were removed, the key assembly lifted away easily to expose the membrane underneath.

Underneath the membrane is the light-up subassembly. It looks like a row of LEDs across the top shining onto a clear plastic sheet that diffuses and directs their light.

I count five LEDs, and the bumps molded into the clear plastic sheet work well to direct light to where the keys are.

I had expected to see a single data wire consistent with NeoPixel (a.k.a. WS2812) style individually addressable RGB LEDs. But the SCL and SDA labels imply this LED strip is controlled via I2C. If it were a larger array I would be interested in digging deeper with a logic analyzer, but a strip of just five LEDs isn’t interesting enough to me, so I moved on.
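
If I ever do come back to this strip with a spare microcontroller instead of a logic analyzer, the first step would be a generic I2C bus scan to find the LED controller’s address. Here is a minimal sketch of that idea (not something I actually ran on this keyboard), assuming a 3.3V Arduino-compatible board with its default I2C pins wired to the strip’s SDA, SCL, and ground pads:

// Generic I2C address scanner; nothing here is specific to this keyboard.
// Assumes a 3.3V Arduino-compatible board with its default I2C pins wired
// to the LED strip's SDA/SCL pads, plus a shared ground.
#include <Arduino.h>
#include <Wire.h>

void setup() {
  Serial.begin(115200);
  Wire.begin();
}

void loop() {
  for (uint8_t address = 1; address < 127; address++) {
    Wire.beginTransmission(address);
    if (Wire.endTransmission() == 0) {  // 0 = a device acknowledged this address
      Serial.print("Found I2C device at 0x");
      Serial.println(address, HEX);
    }
  }
  delay(5000);  // rescan every five seconds
}

Whatever address responds would then be the starting point for watching or replaying register writes.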

Underneath the LED assembly we see the battery, connected to a power control board (which hosts both the on/off switch and the Type-C charging port) that feeds power to the mainboard.

Single cell lithium-polymer battery with claimed 2000mAh capacity.

The power control board is fascinating, because somebody managed to lay everything out on a single layer. Of course, they’re helped by the fact that this particular Type-C connector doesn’t break out all of the pins. Probably just pull-down resistors on the CC pins to request default 5V power, or maybe not even that! I hope that little chip at U1 labeled B5TE (or 85TE) is a real lithium-ion battery management system (BMS), because I don’t see any other candidates and I don’t want a fiery battery.

The main board has fewer components but more traces, most of which led to the keyboard membrane. There appear to be two chips under blobs of epoxy, and a PCB antenna similar to others I’ve seen designed to work on 2.4GHz.

With easy disassembly and modular construction, I think it’ll be easy to modify this keyboard if ideas should strike. Or if I decide I don’t need a keyboard after all, that power subsystem would be easy (and useful!) to repurpose for other projects.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Google AIY Vision Bonnet & Accessories

The key component of a Google AIY Vision kit is the “Vision Bonnet”, a small circuit board that sits atop the Raspberry Pi Zero WH bundled in the kit. In addition to all the data interfaces available via the standard Raspberry Pi GPIO pins, this peripheral also gets “first dibs” on raw camera data. The camera itself is a standard Raspberry Pi Camera V2.1, but instead of connecting directly to the Raspberry Pi Zero, the camera cable connects to the vision bonnet. A second cable then connects from the vision bonnet to the Raspberry Pi camera connector, so the bonnet can forward camera data to the Pi after it is done processing. This architecture ensures the Vision Bonnet will never be constrained by data interface limitations onboard the Pi. It can get the raw camera feed and do its magic before camera data even gets into the Pi.

The vision coprocessor on this Vision Bonnet circuit board is a Movidius Myriad MA2450, launched in 2016 and discontinued in 2020. Based on its application here, I infer the chip can accelerate inference operations for vision-based convolutional neural networks that fit within constraints outlined in the AIY Vision documentation. I don’t know enough about the field of machine vision to judge whether these constraints are typical or if they pose an unreasonable burden. What I do know is that, now that everything has been discontinued, I probably shouldn’t spend much more time studying this hardware.

My interest in commercially available vision coprocessors has since shifted to the Luxonis OAK-D and related products. In addition to a camera array (two monochrome cameras for stereoscopic vision and one color camera for object detection), it is built around the Luxonis OAK SoM (System on Module), which in turn is built around the newer Movidius Myriad MA2485 chip. Luxonis has also provided far more software support and product documentation for their OAK modules than Google ever did for their AIY Vision Bonnet.

I didn’t notice much of interest on the back side of AIY Vision Bonnet. The most prominent chip is marked U2, an Atmel (now Microchip) SAM-D.

The remainder of hardware consists of a large clear button with three LEDs embedded within. (Red, green, and blue.) That button hosts a small circuit board that connects to the vision bonnet via a small ribbon cable. It also hosts connectors for the piezo buzzer and the camera activity (“privacy”) LED. The button module appears identical to the counterpart in AIY Voice kit (right side of picture for comparison) but since voice kit lacked piezo buzzer or LED, it lacked the additional circuit board.

Google AIY Vision Kit

A few years ago, I tried out the Google AIY “Voice” and “Vision” kits. They featured very novel hardware, but that alone was not enough. Speaking as someone not already well-versed in AI software of the time, there was not enough documentation support to get people like me onboard to do interesting things with that novel hardware. People like me could load the default demo programs and make minor modifications to them, but using that hardware for something new required climbing a steep learning curve.

At one point I mounted the box to my Sawppy rover’s instrument mast, indicating my aspirations to use it for rover vision, but I never got much of anywhere.

The software stack also left something to be desired, as it built on top of Raspberry Pi OS but was fragile and easily broken by Raspberry Pi updates. Reviewing my notes, I realized I published my notes on AIY Voice but the information on AIY Vision was still sitting in my “Drafts” section. Oops! Here it is for posterity before I move on.


The product packaging is wonderful. This was from the era of Google building retail products from easily recycled cardboard. All parts were laid out and neatly labeled in a cardboard box.


There was no instruction booklet in the box, just a pointer to assembly instructions online. While fairly easy to follow, the instructions were written for people who already know how to handle bare electronic circuit boards: hold them by the edges, avoid touching components (especially electrical contacts), and so on. Complete beginners unaware of such basics might ruin their AIY kit.

Google AIY Vision Kit major components

From a hardware architecture perspective, the key is the AIY Vision bonnet that sat on top of a Raspberry Pi Zero WH. (W = WiFi, H = with presoldered header pins.) In addition to connection with all Pi Zero GPIO pins, it also connects to the Pi camera connector for direct access to camera feed. (Normal data path: Camera –> Pi Zero. AIY Vision data path: Camera –> Vision Bonnet –> Pi.) In addition to the camera, there is a piezo buzzer for auditory feedback, a standalone green LED to indicate camera is live (“Privacy LED”), and a big arcade-style button with embedded LEDs.

Once assembled, we could install and run several visual processing models posted online. If we want to go beyond that, there are instructions on how to compile trained TensorFlow models for hardware-accelerated inference by the AIY Vision Bonnet. And if those words don’t mean anything (they didn’t to me when I played with the AIY Vision) then we’re up a creek. That was bad back then, and now that a few years have gone by, things have gotten worse.

  1. The official Google AIY system images for Raspberry Pi haven’t been updated since April 2021. And we can’t just take one and pick up more recent updates, because that breaks bonnet functionality.
  2. The vision bonnet model compiler is only tested to work on Ubuntu 14.04, whose maintenance updates ended in 2019.
  3. Example Python code is in Python 2, whose support ended January 1st, 2020.
  4. Example TensorFlow information is for the now-obsolete TensorFlow 1. TensorFlow 2 was a huge breaking change, and it takes a lot of work — not to mention expertise — to migrate from TF1.x to TF2.

All of these factors together tell me the Google AIY Vision bonnet has been left to the dusty paths of history. My unit has only ever run the default “Joy Detection” demo, and I expect this AIY Vision Bonnet will never run anything else. Thankfully, the rest of the hardware (Raspberry Pi Zero WH, camera, etc.) should have better prospects of finding another use in the future.

Old OCZ SSD Reawakened and Benchmarked

In the interest of adding 3.5″ HDD bays to a tower case, along with cleaning up wiring to power them, I installed a Rosewill quad hard drive cage where a trio of 5.25″ drive bays currently sit open and unused. It mostly fit. To verify that all drive cage cable connections worked with my SATA expansion PCIe card (*) I grabbed four drives from my shelf of standby hardware. When installing them in the drive cage, I realized I made a mistake: one of the drives was an old OCZ Core Series V2 120GB SSD that had stopped responding to its SATA input. I continued installation anyway because I thought it would be interesting to see how the SATA expansion card handled a nonresponsive drive.

Obviously, because today’s intent was to see an unresponsive drive, Murphy’s Law stepped in and foiled the plan: when I turned on the computer, that old SSD responded just fine. Figures! I don’t know if there was something helpful in the drive cage, or the SATA card, or if something was wrong with the computer that refused to work with this SSD years ago. Whatever the reason, it’s alive now. What can I do with it? Well, I can fire up the Ubuntu disk utility and get some non-exhaustive benchmark numbers.

Average read rate 143.2 MB/s, write 80.3 MB/s, and seek time of 0.22 ms. This is far faster than what I observed over the USB2 interface, so I was wrong earlier about the performance bottleneck. Actual sustained performance is probably lower than this, though. Looking at the red line representing write performance, we can see it started out strong but degraded at around 60% of the way through the test and kept getting worse, probably because the onboard cache was filling up. If this test ran longer, we might see more and more of the bottom-end write performance of 17 MB/s.

How do these numbers compare to some contemporaries? Digging through my pile of hardware, I found a Samsung ST750LM022. This is a spinning-platter laptop hard drive with 750GB capacity.

Average read 85.7 MB/s, write 71.2 MB/s, and seek time of 16.77 ms. Looking at that graph, we can clearly see degradation in read and write performance as the test ran. We’d need to run this test for longer before seeing a bottom taper, which may or may not be worse than the OCZ SSD. But even with this short test, we can see that an SSD’s read performance does not degrade over the course of the test, and that the SSD has a much more consistent and far faster seek time.

That was interesting, how about another SSD? I have a 120GB SSD from the famed Intel X25-M series of roughly similar vintage.

Average read 261.2 MB/s, write 106.5 MB/s, seek 0.15 ms. Like the OCZ SSD, performance took a dip right around the 60% mark. But after it did whatever housekeeping it needed to do, performance resumed at roughly the same level as earlier. Unlike the OCZ, it didn’t keep degrading after the 60% mark.

I didn’t expect this simple benchmark test to uncover the full picture, and this graph confirmed it. By these numbers, the Intel was around 30% better than the OCZ. But my memory says otherwise. In actual use as a laptop system drive, the Intel was a pleasure and the OCZ was a torture. I’m sure these graphs are missing some important aspects of their relative performance.

Since I had everything set up anyway, I plugged in a SanDisk SSD that had the advantage of a few years of evolution. In practical use, I didn’t notice much of a difference between this newer SanDisk and the old Intel. How do things look on this benchmark tool?

Average read 478.6 MB/s, write 203.4 MB/s, seek 0.05 ms. By these benchmarks, the younger SanDisk absolutely kicked the butt of the older Intel, with at least double the performance. But that was not borne out by user experience as a laptop drive; it didn’t feel much faster.

Given that the SanDisk benchmarked so much faster than the Intel (but didn’t feel that way in use) and the OCZ benchmarked only slightly worse than the Intel (but absolutely felt far worse in use), I think the only conclusion I can draw here is: the Ubuntu Disk Utility built-in benchmarking tool does not reflect actual usage. If I really wanted to measure performance details of these drives, I would need to find a better disk drive benchmarking tool. Fortunately, today’s objective was not to measure drive performance, it was only to verify all four bays of my Rosewill drive cage were functional. It was a success on that front, and I’ll call it good for today.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Rosewill Hard Disk Drive Cage (RSV-SATA-Cage-34)

Immediately after my TrueNAS CORE server power supply caught fire, I replaced it with a spare power supply I had on hand. This replacement had one annoyance: it had fewer SATA power connectors. As a short-term fix, I dug up some adapters from the older CD-ROM style power connectors to feed my SATA drives, but I wanted a more elegant solution.

The ATX tower case I used for my homebuilt server had another issue: it had only five 3.5″ hard drive bays for my six-drive array. At the moment that wasn’t a problem, because the case also has two 2.5″ laptop-sized hard drive mount points, and one drive in my six-drive array is a smaller drive salvaged from an external USB drive which fits in one of those bays. The other 2.5″ bay held the SSD boot drive for my TrueNAS CORE server. I did not want to be constrained to using a laptop drive forever, so I wanted a more elegant solution to this problem as well.

I found my elegant solution for both problems in a Rosewill RSV-SATA-Cage-34 hard drive cage. It fits four 3.5″ drives into the volume of a trio of 5.25″ drive bays, which my ATX tower case has sitting unused. This would solve my 3.5″ bay problem quite nicely. It would also solve my power connector problem, as the cage uses a pair of CD-ROM style connectors for power. A circuit board inside the cage redistributes that power to four SATA power connectors.

First order of business was to knock out the blank faceplates covering the trio of 5.25″ bays.

A quick test fit exposed a problem: the drive cage is much longer than a CD-ROM drive. With the cage at the recommended mounting location for 5.25″ peripherals, the drive cage’s cooling fan would bump up against the ATX motherboard power connector. This leaves very little room for the four SATA data cables and two CD-ROM power connectors to connect. One option was to disconnect and remove the cooling fan to give me more space, but I wanted to maintain cooling airflow, so I proceeded with the fan in place.

Given the cramped quarters, there would be no room to connect wiring once the cage was in place. I pulled the cage out and connected wires while it was outside the case, then slid it back in.

It is a really tight fit in there! Despite my best efforts routing cables, I could not slide the drive cage all the way back to its intended position. This was as hard as I was willing to shove, leaving the drive cage several millimeters forward of its intended position.

As a result, the drive cage juts out beyond the case facade by a few millimeters. Eh, good enough.

Notes on “Make: FPGAs” by David Romano

After skimming through a Maker Media book on CNC routing wood furniture, I wanted to see what I could learn from their FPGAs: Turning Software into Hardware with Eight Fun & Easy DIY Projects (*) by David Romano. I was motivated by the FPGA-based badge of Superconference 2019, which had (I was told) a relatively powerful FPGA at its core. But all my badge work was at the software level; I never picked up enough to make gateware changes. Perhaps this book could help me?

My expectations dropped when I saw it was published in February 2016. The book is very honest that the realm of FPGAs is evolving quickly and its contents would soon be outdated, but I was surprised at how little of the information in the book could be transferred to other FPGA projects.

In the preface, the author explained they had worked with FPGAs in a professional context since the early days (1980s) of the field. Seeing the technology evolve over the years and drop in price into hobbyist-accessible range, this book was written to share that excitement with everyone. This is an admirable goal! But there is a downside to a book written by someone who has been with the technology for so long. They are so familiar with the concepts and jargon that it’s difficult to get in the right frame of mind to explain things in a way that novices in the field can understand.

As an example of this problem, we only got up to page 15 before we are hit with this quote: “Behavioral models and bus functional models are used as generators and monitors in the test bench. A behavioral model is HDL code that mimics the operation of a device, like a CPU, but is not gate-level accurate. In other words, it is not synthesizable.” That sound is the <WOOSH> of indecipherable words flying over my head.

The hardware examples used in this book are development boards built around various FPGAs from Xilinx. To use those boards, we need a long list of proprietary software. It starts with Xilinx software for the FPGA itself, followed by tools from each development board vendor to integrate with their hardware. This introduces a long list of headaches, starting with the fact that Xilinx’s “ISE WebPack” was already a discontinued product at the time of writing, with known problems working under 64-bit Windows. And things went downhill from there.

For reference, the hardware corresponding to projects in the book are:

The project instructions do not get into very much depth on how to create FPGA gateware. After an overly superficial overview (I think the most valuable thing I learned is that $display is the printf() of Verilog) the book marches into using blocks of code published by other people on OpenCores, and then loading code written by others onto the FPGA. I guess it’s fine if everything works. But if anything goes wrong in this process, a reader lacks the knowledge to debug the problem.

I think the projects in this book have the most value for someone who already has one of the above pieces of hardware, giving them instructions to run some prebuilt gateware on it. The value decreases for other boards with a Xilinx FPGA, and drops to nearly nothing for non-Xilinx FPGAs. This book has little relevance to the Lattice ECP5 on board the Superconference 2019 badge, so I will have to look elsewhere for my FPGA introduction.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Adafruit SSD1305 Arduino Library on ESP8266

Thanks to Adafruit publishing an Arduino library for interfacing with the SSD1305 display driver chip, I proved that it’s possible to control an OLED dot matrix display from a broken FormLabs Form 1+ laser resin 3D printer. But the process wasn’t seamless; I ran into several problems using this library:

  1. Failed to run on ESP32 Arduino Core due to watchdog timer reset.
  2. 4 pixel horizontal offset when set to 128×32 resolution.
  3. Sketch runs only once on Arduino Nano 33 BLE Sense, immediately after uploading.

Since Adafruit published the source code for this library, I thought I’d take a look to see if anything might explain any of these problems. For the first problem of watchdog reset on ESP32, I found a comment block where the author notes potential problems with watchdog timers. It sounds like an ESP8266 is a platform known to work, so I should try that.

  // ESP8266 needs a periodic yield() call to avoid watchdog reset.
  // With the limited size of SSD1305 displays, and the fast bitrate
  // being used (1 MHz or more), I think one yield() immediately before
  // a screen write and one immediately after should cover it.  But if
  // not, if this becomes a problem, yields() might be added in the
  // 32-byte transfer condition below.

While I’m setting up an ESP8266, I could also try to address the horizontal offset. It seems a column offset of four pixels was deliberately added for 32-pixel tall displays, something not done for 64-pixel tall displays.

  if (HEIGHT == 32) {
    page_offset = 4;
    column_offset = 4;
    if (!oled_commandList(init_128x32, sizeof(init_128x32))) {
      return false;
    }
  } else {
    // 128x64 high
    page_offset = 0;
    if (!oled_commandList(init_128x64, sizeof(init_128x64))) {
      return false;
    }
  }

There was no comment to explain why this line of code was here. My best guess is that the relevant Adafruit product has its columns internally wired with four pixels of offset, so this code shifts the image to compensate. If I remove the column_offset line and rebuild, my OLED displays correctly.

As for the final problem of running just once (immediately after upload) on an Arduino Nano 33 BLE Sense, I don’t have any hypothesis. My ESP8266 happily restarted this test sketch whenever I pressed the reset button or power cycled the system. I’m going to chalk it up to a hardware-specific issue with the Arduino Nano 33 BLE Sense board. At the moment I have no knowledge (and probably no equipment and definitely no motivation) for more in-depth debugging of its nRF52840 chip or Arm Mbed OS.

Now that I have this OLED working well with an ESP8266, a hardware platform I have on hand, I can confidently describe this display module’s pinout.

First Test with Adafruit SSD1305 Library

I feel I now have a good grasp on how I would repurpose the OLED dot matrix display from a broken FormLabs Form 1+ laser resin 3D printer. I felt I could have figured out enough to play back commands captured by my logic analyzer, interspersed with my own data, similar to how I controlled a salvaged I2C LCD. But this exploration was much easier because a user on FormLabs forums recognized the SSD1305-based display module. Thanks to that information, I had a datasheet to decipher the commands, and I could go searching to see if anyone has written code to interface with a SSD1305. Adafruit, because they are awesome, published an Arduino library to do exactly that.
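
For reference, the “play back raw commands” fallback I had in mind would have looked something like the rough sketch below. Pin assignments are hypothetical, and the handful of commands are from the SSD1305 datasheet rather than from my logic analyzer capture; a real module would also need the rest of the datasheet’s init sequence:

// Rough sketch of playing back raw SSD1305 commands over SPI.
// Pin numbers are hypothetical placeholders, not the FormLabs wiring.
#include <Arduino.h>
#include <SPI.h>

const int pinCS = 5;    // chip select, active low
const int pinDC = 16;   // data/command select: low = command
const int pinRST = 17;  // reset, active low

void sendCommand(uint8_t command) {
  digitalWrite(pinDC, LOW);   // command mode
  digitalWrite(pinCS, LOW);
  SPI.transfer(command);
  digitalWrite(pinCS, HIGH);
}

void setup() {
  pinMode(pinCS, OUTPUT);
  pinMode(pinDC, OUTPUT);
  pinMode(pinRST, OUTPUT);
  digitalWrite(pinCS, HIGH);

  // Pulse reset before talking to the controller.
  digitalWrite(pinRST, LOW);
  delay(10);
  digitalWrite(pinRST, HIGH);
  delay(10);

  SPI.begin();
  SPI.beginTransaction(SPISettings(1000000, MSBFIRST, SPI_MODE0));
  sendCommand(0xAE);  // display off (SSD1305 datasheet command)
  sendCommand(0xA5);  // entire display on, ignoring RAM contents
  sendCommand(0xAF);  // display on
  SPI.endTransaction();
}

void loop() {
}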

Adafruit’s library was written to support several of their products that used an SSD1305, including product #2675 Monochrome 2.3″ 128×32 OLED Graphic Display Module Kit which looks very similar to the display in a Form 1+ except not on a FormLabs custom circuit board. Adafruit’s board has 20 pins in a single row, much like the Newhaven Display board but visibly more compact. Adafruit added level shifters for 5V microcontroller compatibility as well as an extra 220uF capacitor to help buffer power consumption.

Since the FormLabs custom board lacked such luxuries, I need to use a 3.3V Arduino-compatible microcontroller. The most convenient module at hand (because it was used in my most recent project) happened to be an ESP32. The ssd1305test example sketch of Adafruit’s library compiled and uploaded successfully but threw the ESP32 into a reset loop. I changed the Arduino IDE Serial Monitor baud rate to 115200 and saw this error message repeating endlessly every few seconds.

ets Jun  8 2016 00:22:57

rst:0x8 (TG1WDT_SYS_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
configsip: 0, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:1
load:0x3fff0030,len:1344
load:0x40078000,len:13516
load:0x40080400,len:3604
entry 0x400805f0
SSD1305 OLED test

Three letters jumped out at me: WDT, the watchdog timer. Something in this example sketch is taking too long to do its thing, causing the system to believe it has locked up and needs a reset to recover. One unusual aspect of the ssd1305test code is that all the work lives in setup(), leaving an empty loop(). As an experiment, I moved the majority of the code into loop(), but that didn’t fix the problem. Something else is wrong, and it’ll take more debugging.

To see if it’s the code or if it is the hardware, I pulled out a different 3.3V microcontroller: an Arduino Nano 33 BLE Sense. I chose this hardware because its default SPI communication pins are those already used in the sample sketch, making me optimistic it is a more suitable piece of hardware. The sketch ran without triggering its watchdog timer, so there’s an ESP32 incompatibility somewhere in the Adafruit library. Once I saw the sketch was running, I connected the OLED and immediately saw the next problem: screen resolution. I see graphics, but only the lower half. To adjust, I changed the height dimension passed into the constructor from 64 to 32. (Second parameter.)

Adafruit_SSD1305 display(128, 32, &SPI, OLED_DC, OLED_RESET, OLED_CS, 7000000UL);

Most of the code gracefully adjusted to render at 32 pixel height, but there’s a visual glitch where pixels are horizontally offset: the entire image has shifted to the right by 4 pixels, and what’s supposed to be the rightmost 4 pixels are shown on the left edge instead.

The third problem I encountered is this sketch only runs once, immediately after successful uploading to the Nano 33 BLE Sense. If I press the reset button or perform a power cycle, the screen never shows anything again.

Graphics onscreen prove this OLED responds to an SSD1305 library, but this behavior warrants a closer look into library code.

Wemos D1 Mini ESP32 Derivative

It was fun to build an LED strobe light pulsing in sync with a cooling fan’s tachometer wire. After the initial bare-bones prototype I used ESPHome to add some bells and whistles. My prototype board is built around a Wemos D1 Mini module, but I think I’ve hit a limit of hardware timers available within its onboard ESP8266 processor. The good news is that I could upgrade to its more powerful sibling ESP32 with this dev board, and its hardware compatibility means I don’t have to change anything on my prototype board to use it.
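
For context, this is roughly the kind of hardware timer setup the ESP32 side would allow. It is a minimal sketch assuming the Arduino-ESP32 core 2.x timer API and an arbitrary strobe pin, not my actual strobe code:

// Minimal ESP32 hardware timer example (Arduino-ESP32 core 2.x API).
// The pin number and the 10 ms period are arbitrary placeholders.
#include <Arduino.h>

const int strobePin = 16;
hw_timer_t *strobeTimer = nullptr;

void IRAM_ATTR onStrobeTimer() {
  // Toggle the strobe output; a real strobe would also manage pulse width.
  digitalWrite(strobePin, !digitalRead(strobePin));
}

void setup() {
  pinMode(strobePin, OUTPUT);
  strobeTimer = timerBegin(0, 80, true);       // timer 0, 80x prescaler = 1 microsecond ticks
  timerAttachInterrupt(strobeTimer, &onStrobeTimer, true);
  timerAlarmWrite(strobeTimer, 10000, true);   // fire every 10,000 microseconds, auto-reload
  timerAlarmEnable(strobeTimer);
}

void loop() {
  // All the work happens in the timer interrupt.
}

The ESP32 provides four of these general-purpose timers, compared to the single user-accessible hardware timer on an ESP8266.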

The most puzzling thing about this particular ESP32 dev board format is that I’m not exactly sure where it came from. I found it as “Wemos D1 Mini ESP32” on Amazon where I bought it. (*) I also see it listed by that name as well as “Wemos D1 Mini32” on its Platform.IO hardware board support page, which helpfully links to Wemos as “Vendor”. Except, if we follow that link, we don’t see this exact board listed. The Wemos S2 Mini is very close, but it has fewer pins (32 vs. 40) and their labels indicate a different layout. Did Wemos originate this design but since remove it for some reason? Or did someone else design this board and not get credit for it?

Whatever the history of this design, putting a unit (right) next to the Wemos D1 Mini design (left) shows it is a larger board. ESP32 has a much greater number of I/O pins, so this module has 40 through-holes versus 16.

Another contributing factor to the larger size is that all components are on a single side of the circuit board, as opposed to having components on both sides. That leaves the backside open for silkscreened pin information. Some of the pins were labeled with abbreviations I didn’t understand, but probing those lines found the following:

  • Pins connected to onboard flash and not recommended for GPIO use: CMD (IO11), CLK (IO6), SD0/SDD (IO7), SD1 (IO8), SD2 (IO9), and SD3 (IO10).
  • TDI is IO12, TDO is IO15, TCK is IO13, and TMS is IO34. When not used as GPIO, these can be used as JTAG interface pins for testing and debugging.
  • SVP is IO36, and SVN is IO39. I haven’t figured out what “SVP” and “SVN” might mean. These are two of four pins on an ESP32 that are input-only. “GPI” and not “GPIO”, so to speak. (IO34 and IO35 are the other input-only pins.)

Out of these 40 pins, two are labeled NC meaning not connected, leaving 38 connections. An Espressif ESP32 DevKitC has 38 pins, and it appears the same 38 are present on this module, including the aforementioned onboard flash pins and three grounds. But the physical arrangement has been scrambled relative to the DevKitC to arrive at this four-column format. What was the logic behind this rearrangement? The key insight for me was that a subset of 16 pins is highlighted in white. They were arranged to be physically and electrically compatible with the Wemos D1 Mini ESP8266:

  • Reset pin RST is in the same place.
  • All power pins are at the same places: 5V/VCC, 3.3V, and one of the ground pins.
  • Serial communication lines up with UART TX/RX at the same places.
  • ESP32 can perform I2C on most of its GPIO pins. I see many examples use the convention of pin 21 for SDA and pin 22 for SCL, and they line up here with the ESP8266 D1 Mini’s I2C pins D2 and D1.
  • Same deal with ESP32 SPI support: many pins are supported, but convention uses four pins (5, 18, 19, and 23), so they’ve been lined up with their counterparts on the ESP8266 D1 Mini.
  • ESP8266 has only a single pin for analog-to-digital conversion. ESP32 has more flexibility and one of several ADC-supported pins was routed to the same place.
  • ESP8266 supported hardware sleep/wake with a single pin. Again ESP32 is more flexible and one of the supported pins was routed to its place.
  • There’s an LED module hard-wired to GPIO2 onboard the ESP8266MOD. The ESP-WROOM-32 has no such onboard LED, so there’s an external LED wired to GPIO2, adjacent to an always-on power LED. Another difference from the ESP8266: this LED illuminates when GPIO2 is high, whereas the ESP8266’s onboard LED shines when GPIO2 is low. (A quick blink sketch illustrating the difference follows this list.)
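
Here is the quick blink sketch mentioned above, a minimal example assuming this board’s GPIO2 LED really is active-high as observed:

// Blink the GPIO2 LED on this "Wemos D1 Mini ESP32" board.
// Assumption from the observation above: the external LED here is
// active-high, the opposite of the ESP8266 D1 Mini's active-low onboard LED.
#include <Arduino.h>

const int ledPin = 2;

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  digitalWrite(ledPin, HIGH);  // LED on for this board (off on an ESP8266 D1 Mini)
  delay(500);
  digitalWrite(ledPin, LOW);   // LED off for this board (on for an ESP8266 D1 Mini)
  delay(500);
}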

This is a nice piece of backwards-compatibility work. It means I can physically plug this board’s white-highlighted subset of 16 pins into any hardware expecting the Wemos D1 Mini ESP8266 board, like its ecosystem of compatible shields. Physically it will hang out the sides, but electrically things should work. Software will still have to be adjusted and recompiled for ESP32, changing GPIO numbers to match their new places. But at least those pins are all capable of the same digital and analog I/O roles in those places.

The only downside I see with this design? It is no longer breadboard-friendly. When all pins are soldered and the board is pushed into a breadboard, horizontally adjacent pins would be shorted together, and that’s not going to work. I’m specifically eyeing one corner where reset (RST) would be connected to ground (GND). I suppose we could solder pins to just the compatibility subset of 16 pins and plug that into a breadboard, but this module is too wide for a standard breadboard. A problem shared with another ESP32 dev board format I’ve used.

And finally, like my ESP8266 Wemos D1 Mini board, these came without any pins soldered to the board. Three different types were included in the bag: pins, sockets, or passthrough. However, the bag only included enough of each type for 20 pins, which isn’t enough for all 40. Strange, but no matter. I have my own collection of pins and sockets and passthrough connectors if I want to use them. And my most recent ESP32 project didn’t need these pins at all. In fact, I had to unsolder the pins that module came with. An extra step I won’t need to do again, now that I have these “Wemos D1 Mini ESP32” modules on hand for my experiments.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Full Screen White VGA Signal with Bitluni ESP32 Library

Over two and a half years ago, I had the idea to repurpose a failed monitor into a color-controllable lighting panel. I established it was technically feasible to generate a solid color full screen VGA signal with a PIC, but then I promptly got distracted by other projects and never dug into it. I still have the weirdly broken monitor and I still want a large diffuse light source, but now I have to concede I’m unlikely to dedicate the time to build my own custom solution. In the interest of expediency, I will fall back to leveraging someone else’s work. Specifically, Bitluni’s work to generate VGA signals with an ESP32.

Bitluni’s initial example has since been packaged into Bitluni’s ESP32Lib Arduino library, making the software side very easy. On the hardware side, I dug up one of my breadboards already populated with an ESP32 and plugged in the VGA cable I had cut apart and modified for my earlier VGAX experiment. Bitluni’s library is capable of 14-bit color with the help of a voltage-dividing resistor array, but I only cared about solid white and maybe a few other solid colors. The 3-bit color mode, which did not require an external resistor array, would suffice.

I loaded up Bitluni’s VGAHelloWorld example sketch and… nothing. After double-checking my wiring to verify it is as expected, I loaded up a few other sketches to see if anything else made a difference. I got a picture from the VGASprites example, though it had limited colors as it is a 14-bit color demo and I had only wired up 3-bit color. Simplifying code in that example step by step, I narrowed down the key difference to be the resolution used: VGAHelloWorld used MODE320x240 and VGASprites used MODE200x150. I changed VGAHelloWorld to MODE200x150 resolution, and I had a picture.

This was not entirely a surprise. The big old malfunctioning monitor had a native resolution of 2560×1600. People might want to display a lower resolution, but that’s still likely to be in the neighborhood of high-definition resolutions like 1920×1080. There was no real usage scenario for driving such a large panel with such low resolutions. The monitor’s status bar said it was displaying 800×600, but 200×150 is one-sixteenth of that. I’m not sure why this resolution, out of many available, is the one that worked.

I don’t think the problem is in Bitluni’s library, I think it’s just idiosyncrasies of this particular monitor. Since I resumed this abandoned project in the interest of expediency, I didn’t particularly care to chase down why. All I cared about was that I could display solid white, so resolution didn’t matter. But timing mattered, because VGAX output signal timing was slightly off and could not fill the entire screen. Thankfully Bitluni’s code worked well with this monitor’s “scale to fit screen” mode, expanding the measly 200×150 pixels to its full 2560×1600. An ESP32 is overkill for just generating a full screen white VGA signal, but it was the most expedient way for me to turn this monitor into a light source.

#include <ESP32Lib.h>

//pin configuration
const int redPin = 14;
const int greenPin = 19;
const int bluePin = 27;
const int hsyncPin = 32;
const int vsyncPin = 33;

//VGA Device
VGA3Bit vga;

void setup()
{
  //initializing vga at the specified pins
  vga.init(vga.MODE200x150, redPin, greenPin, bluePin, hsyncPin, vsyncPin); 

  vga.clear(vga.RGB(255,255,255));
}

void loop()
{
}

UPDATE: After I had finished this project, I found ESPVGAX: a VGA signal generator for the cheaper ESP8266. It only has 1-bit color depth, but that would have been sufficient for this. However, there seems to be a problem with timing, so it might not have worked for me anyway. If I have another simple VGA signal project, I’ll look into ESPVGAX in more detail.

My BeagleBone Boards Returning to Their Box

I have two BeagleBone boards — a PocketBeagle and a BeagleBone Blue — that had been purchased with ambitions too big for me to realize in the past. In the interest of learning more about the hardware so I can figure out what to do with them, I followed the lead of a local study group to read Exploring BeagleBone, Second Edition by Derek Molloy. I enjoyed reading the book, learned a lot, and thought it was well worth the money. Now that I am better informed, I returned to the topic of what I should do with my boards.

I appreciate the aims of the BeagleBoard foundation, but these boards are in a tough spot finding a niche in the marketplace. Beagle boards have a great out-of-the-box experience with a tutorial page and Cloud9 IDE running by default. But as soon as we try to go beyond that introduction, all too quickly we find that we’re on our own. The Raspberry Pi foundation has been much more successful at building a beginner-friendly software ecosystem to support those trips beyond the introduction. On the hardware side, Broadcom processors on a Pi are far more computationally powerful than the CPUs on equivalent Beagles. This includes a move to 64-bit capable processors on the Raspberry Pi 3 in 2016, well ahead of the BeagleBone AI-64 that launched this year (2022). That last bit is important for robotics, as ROS2 is focused on 64-bit architectures and there’s no guarantee of support for 32-bit builds.

Beyond the CPU, there were a few advantages to a Beagle board. My favorite is the more extensive (and usable) collection of onboard LEDs and buttons, including a power button for graceful powerup / shutdown that is still missing from a Raspberry Pi. There is also onboard flash memory storage of known quality, which makes their performance far more predictable than the random microSD cards people would try to use with their Raspberry Pi. None of those would be considered make-or-break features, though.

What I had considered a definitive BeagleBone hardware advantage is the programmable real-time units (PRUs) within Octavo modules, capable of tasks with timing precision beyond the guarantee of a Linux operating system. In theory that sounded like a great teaming for many hardware projects, but in Exploring BeagleBone chapter 15 I got a look at the reality of using a PRU and I became far less enamored. Those PRUs have their own instructions, their own code-building toolchain, their own debugging tools, and their own ways of communicating with the rest of the system. It all looked quite convoluted and intimidating for a beginner. Learning to use the PRU is not like learning a little peripheral. It is literally learning an entirely new microcontroller, and that knowledge is not portable to any other hardware. I can see the payoff for highly integrated commercial industrial solutions, but that kind of time investment is hard to justify for hobbyist one-off projects. I now understand why BeagleBoard PRUs aren’t used as widely as I had expected them to be.

None of the above sounded great for my general use of a Beagle board, but what about the robotics-specific focus of the BeagleBone Blue? It has lots of robot-focused hardware crammed onto a small board. The corresponding software is the “Robot Control Library”, and I can get a good feel for its capabilities via the library documentation site. Generally speaking, it looked fine until I clicked on the link to its GitHub repository. I saw the most recent update was more than two years ago, and there is a long backlog of filed issues few people are looking at. Those who put in the effort to contribute code in a pull request could only watch them sit and gather dust. The oldest PR is over two years old and has yet to be merged. All signs of an abandoned codebase.

I like the idea of BeagleBone, but after I took a closer look, I’m not terribly enthused at the reality. At the moment I don’t see a project idea niche where a BeagleBone board would be the best tool for the job. With my updated knowledge, I hope to recognize a good fit for a Beagle board if an opportunity should arise. But until then, my boards are going back into their boxes to continue gathering dust.

Notes on “Exploring BeagleBone” by Derek Molloy

As an electronics hobbyist I’ve managed to collect two different BeagleBone boards, but I’ve never done anything useful with them. In the interest of learning enough to put them to work, I bought the Kindle eBook of Exploring BeagleBone, Second Edition by Derek Molloy. (*) I dusted off my PocketBeagle from the E-ALE hardware kit and started following along. My current level of knowledge is slightly above this book’s minimum target audience, so some of the material I already knew. But there was plenty I did not know!

The first example came quickly. In chapter 2 I learned how to give my PocketBeagle access to the internet. This is not like a Raspberry Pi, which has onboard WiFi or Ethernet. In contrast, a PocketBeagle has to access the network over its USB connection. At E-ALE I got things up and running once, but SCaLE was a Linux conference so I only received instructions for Ubuntu. This book gave me instructions on how to set up internet sharing over USB in Windows, so my PocketBeagle could download updates for its software.

Chapter 5, Practical Beagle Board Programming, is a whirlwind tour of many different programming languages with their advantages and disadvantages. Some important programming concepts such as object-oriented programming were also covered. My background is in software development, so little of the material was new to me. However, this chapter was an important meta-learning opportunity. Because I already knew the subject matter, as I read this chapter I frequently thought: “Wait, but the book didn’t cover [some related thing]” or “the book didn’t explain why it’s done this way”. This taught me a mindset for the whole book: it is a quick superficial overview of concepts that gives us just enough keywords for further learning. The title is “Exploring BeagleBone”, not “BeagleBone in Depth”!

On that front, I believe the most impactful thing I learned from this book is sysfs, a mechanism that allows communication with system hardware by treating their various input/output parameters as files. This presents an interface that avoids the risks and pitfalls of going into kernel mode. Sysfs was introduced in chapter 2 and is used throughout the text, culminating in the final chapter 16 where we get a taste of implementing a sysfs interface in our own loadable kernel module (LKM). But there are many critical bits of knowledge not covered in the book. For example, sysfs was introduced in chapter 2 where we were told the sysfs path /sys/class/leds/beaglebone:green:usr3/brightness will allow us to control the brightness of one of the BeagleBone’s onboard LEDs. That led me to ask two questions immediately:

  1. If I hadn’t known that path, how would I find it? (“What is the sysfs path for an onboard LED?”)
  2. If I look at a /sys/ path and didn’t know what hardware parameter it corresponded to, how would I find out? (“What does /sys/[blah] control?”)

The book does not answer these questions. However, it taught me that sysfs interfaces were exposed by loadable kernel modules (LKM, chapter 16) and that LKMs are loaded for specific hardware based on device tree (chapter 6). Given this, I think I have enough background to go and find answers elsewhere.
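
For reference, the sysfs LED control described above boils down to an ordinary file write. Here is a minimal sketch of that idea, not taken from the book, using the path quoted in chapter 2 (the LED name may differ between boards, and writing typically requires root):

// Turn on a BeagleBone onboard LED by writing to its sysfs brightness file.
#include <fstream>
#include <iostream>
#include <string>

int main() {
  // Path quoted in the book for an onboard LED; adjust for other boards.
  const std::string path = "/sys/class/leds/beaglebone:green:usr3/brightness";

  std::ofstream brightness(path);  // sysfs attributes behave like plain files
  if (!brightness.is_open()) {
    std::cerr << "Could not open " << path << " (wrong path, or not root?)" << std::endl;
    return 1;
  }
  brightness << 1;  // any nonzero value turns the LED on; 0 turns it off
  return 0;
}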

The book used sysfs for many examples, and the book also covered at least one example where sysfs was not enough. When dealing with high-bandwidth video data, there’s too much overhead for sysfs so the code examples switched to using ioctl.

My biggest criticism of this book is a lax attitude towards network security. In chapter 11 (The Internet of Things) the instructions casually tell readers to degrade their Gmail account security and to turn off the Windows firewall. No! Bad book! Bad! Even worse, there’s no discussion of the risks that are opened up if a naive reader should blindly follow those instructions. And that’s just the reader’s email account and desktop machine. What about building secure networked embedded devices with a BeagleBone? Nothing. No discussion at all, not even a superficial overview. There’s a running joke that “The S in IoT stands for security” and this book is not helping.

Despite its flaws, I did find the book instructive on many aspects of a BeagleBone. And thanks to the programming chapter and lack of security information, I’m also keenly aware there are many more things not covered by this book at all. After reading this book, I pondered what it meant for my own BeagleBone boards.


UPDATE: I was impressed by this application of sysfs: show known CPU hardware vulnerabilities and status of mitigations: grep -r . /sys/devices/system/cpu/vulnerabilities/


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Taking Another Look at BeagleBone

I like the idea behind BeagleBone boards, a series of embedded Linux devices. BeagleBone hardware is built around modules from Octavo Systems, which are designed for ease of integration into custom embedded hardware. BeagleBone boards are merely one of many Octavo-based devices, but (as far as I know) the only ones focused on building an easy on-ramp for learning and hobbyist use. From that aspect they resemble the Raspberry Pi lineup, but sadly they have not found the same degree of success.

One advantage of a Beagle board is the set of onboard LEDs available for experimentation. A Raspberry Pi has onboard LEDs as well, but they already have jobs indicating power and microSD activity, and it takes work to reallocate them for a quick experiment. But my favorite BeagleBone advantage is a power button for graceful shutdowns, something that’s been missing from the Pi since the first version. Even now that we’re on the Pi 4, the Raspberry Pi foundation seems uninterested in solving this problem. I’ve read claims that SD corruption from ungraceful shutdowns is rare, but it still makes me grumpy.

I personally own two BeagleBone devices. The first was a PocketBeagle I bought with the intention of taking the E-ALE (Embedded Apprentice Linux Engineer) course that premiered at SCaLE 16X. Unfortunately, between my lack of foundational knowledge and the rough nature of their first run, I didn’t absorb very much information from the course. But I still have the PocketBeagle and Bacon Bits cape that went with the course.

The second was a BeagleBone Blue that I bought after a conversation at SCaLE with Jason Krider, one of the people behind BeagleBone. He saw my Sawppy rover and told me about the BeagleBone Blue, which was designed with a focus on robotics. He asserted a Blue should be much more suitable for Sawppy than the Raspberry Pi I had been using. I ordered a board and, as soon as I took it out of the box, I knew I had a problem. The physical size of BeagleBone boards is designed to fit in an Altoids mint tin. In order to follow that precedent and still cram all the robotics-related connections onboard, the Blue uses many fine-pitch connectors that aren’t in my usual toolkit. I looked into either paying for pre-made wiring bundles with the connectors already crimped, or buying the tools to crimp my own, and balked at the cost. I decided to think it over, which stopped my momentum, and it’s been sitting ever since.

Which is a shame, because on paper these are nifty little devices! Now motivated by a local study session meetup, I decided to buy an eBook to help me get a better understanding of BeagleBone. I’m still not comfortable with public gatherings, but I could follow along at home as the study group went through chapters of Exploring BeagleBone, Second Edition by Derek Molloy.

Radeon HD 7950 Video Card (MSI R7950-3GD5/OC BE)

This video card built around a Radeon HD 7950 chip is roughly ten years old. It is so outdated, nobody would pay much for a used unit on eBay. Not even at the height of The Great GPU Shortage. I’ve been keeping it around as a representative for full sized, dual-slot PCIe video cards as I played with custom-built PC enclosures. But I now have other video cards that I can use for the purpose, so this nearly-teenager video card landed on the teardown bench.

Most of its exterior surface is covered by a plastic shroud with a single fan intake, which is no longer representative of modern GPUs with their two or three fans.

Towards the center of this board is a metal bracket for fastening a heat sink that accounted for most of the weight of this card. In the upper left corner are auxiliary PCIe power supply sockets. The circuit board has provision for a 6-pin connector adjacent to an 8-pin connector, even though only two 6-pin connectors are soldered to this board. Between those connectors and the GPU itself, I see six (possibly seven) sets of components. I infer these are power-handling parts working in parallel to feed a power-hungry chip.

This was my first 4K UHD capable video card, which I used via the mini-DisplayPort connectors on the right. As I recall, the HDMI port only supported up to 1080p Full HD and could not drive a 4K display. Finally, a DVI port supported all DVI capabilities (not all of them do): analog VGA on its DVI-A pins, plus dual-link DVI-D for driving larger displays. I don’t recall if the DVI-D plug could output 4K UHD, but I knew it went beyond 1080p Full HD by driving a 2560×1600 monitor.

The plastic shroud was held by six plastic screws to the PCB and two machine screws to a metal plate. Once those eight fasteners were removed, the shroud came off easily. From here we get a better look at the PCIe auxiliary power connectors on the top right, and the seven sets of capacitors/inductors/etc. that work in parallel to handle the power requirements of this chip.

Four small machine screws held the fan shroud to the heat sink. The fan label indicates this fan consumes up to 6 Watts (12V 0.5A), and I recall it can move a lot of air at full blast. (Or at least, it gets very loud trying.) It appears to be a four-wire fan, which I only recently understood how to control if I wanted to. Visible on the fan’s underside is a layer of fine dust that held on despite the blast of compressed air I used to clean out dust bunnies before this teardown.

Some more dust had also clung to these heat sink fins. It seems like a straightforward heat sink with stamped sheet metal fins on an aluminum base, with no heat pipes like we see on many modern GPUs. But if it were all aluminum, with no heat pipes, it should be lighter than it is.

Unfastening four machine screws from the X-shaped rear bracket allowed me to remove the heat sink, and now we can see the heat sink has a copper core for heat distribution. That explains the weight.

The GPU package is a high-density circuit board in its own right, hosting not just the GPU die itself but also a large collection of supporting components. Based on the repeated theme of power handling, I guess these little tan rectangles are surface mount capacitor arrays, but they might be something else.

Here’s a different angle taken after I cleaned up the majority of the thermal paste. An HD 7950 is a big silicon die sitting on a big package.

When I cleaned all the thermal paste off the heatsink, I was surprised by its contact surface. It appears to be the raw as-cast surface texture, with no post-processing. On CPU heatsinks, I usually see a precision machined flat surface, whether milled or ground. Low-power or low-cost devices may skip such treatment for their heatsinks, but I don’t consider this GPU either low power or low cost. I know this GPU dissipated heat on par with a CPU, yet there was no effort to provide a precision flat surface to maximize heat transfer.

I think this is a promising module for reuse, though in addition to the lack of a precision flat surface, there’s another problem: the copper core is slightly recessed. The easiest scenario for reuse is to find something that sticks up ~2mm above its surrounding components and is no larger than the 45x45mm footprint of this GPU package. This physical shape complicates my top two ideas for reuse: (1) absolute overkill cooling for a Raspberry Pi, or (2) retrofitting active cooling to the passively-cooled HP Split X2. If I were to undertake either project, I’d have to add shims or figure out how to remove some of the surrounding aluminum.

Disable Sleep on a Laptop Acting as Server

I’ve played with different ways to install and run Home Assistant. At the moment my home instance is running as a virtual machine under the KVM hypervisor. The physical machine is a refurbished Dell Latitude E6230 running Ubuntu Desktop 22.04. Even though it will be running as a server, I installed the desktop edition for access to tools like Virtual Machine Manager. But there’s a downside to installing the desktop edition for server use: laptop battery-saving features like suspend and sleep, which I did not want on a machine acting as a server.

When I chose to use an old laptop as a server, I had thought its built-in battery would be useful in case of a power failure. But I hadn’t tested that hypothesis until now. Roughly twenty minutes after I unplugged the laptop, it went to sleep. D’oh! The machine still reported 95% of battery capacity, but I couldn’t use that capacity as backup power.

The Ubuntu “Settings” user interface was disappointingly useless for this purpose, with no obvious ability to disable sleep when on battery power. Generally speaking, the revamped “Settings” of Ubuntu 22 has been cleaned up and now has fewer settings cluttering up all those menus. I could see this as a well-meaning effort to make Ubuntu less intimidating to beginners, but right now it’s annoying because I can’t do what I want. To the web search engines!
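
One avenue I did not pursue: the GNOME desktop behind Ubuntu’s Settings app reportedly still exposes its automatic-suspend behavior through gsettings even where the UI does not. A sketch, assuming a stock GNOME session (key names may differ between versions):

# Tell GNOME's power plugin not to auto-suspend, on battery or on AC.
gsettings set org.gnome.settings-daemon.plugins.power sleep-inactive-battery-type 'nothing'
gsettings set org.gnome.settings-daemon.plugins.power sleep-inactive-ac-type 'nothing'

This would only govern the logged-in desktop session, though. The systemd route I ended up taking below works at a lower level, regardless of who is logged in.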

Looking for command-line tools to change Ubuntu power saving settings brought me to many pages of outdated information that no longer applied to Ubuntu 22. My path to success started with this forum thread on Linux.org, which pointed to this page on linux-tips.us. That page has a lot of ads, but it also had applicable information: systemd targets. It listed four potentially applicable targets:

  • suspend.target
  • sleep.target
  • hibernate.target
  • hybrid-sleep.target

Using “systemctl status” I could check which of those were triggered when my laptop went to sleep.

$ systemctl status suspend.target
○ suspend.target - Suspend
     Loaded: loaded (/lib/systemd/system/suspend.target; static)
     Active: inactive (dead)
       Docs: man:systemd.special(7)

Jul 21 22:58:32 dellhost systemd[1]: Reached target Suspend.
Jul 21 22:58:32 dellhost systemd[1]: Stopped target Suspend.
$ systemctl status sleep.target
○ sleep.target
     Loaded: masked (Reason: Unit sleep.target is masked.)
     Active: inactive (dead) since Thu 2022-07-21 22:58:32 PDT; 11h ago

Jul 21 22:54:41 dellhost systemd[1]: Reached target Sleep.
Jul 21 22:58:32 dellhost systemd[1]: Stopped target Sleep.
$ systemctl status hibernate.target
○ hibernate.target - System Hibernation
     Loaded: loaded (/lib/systemd/system/hibernate.target; static)
     Active: inactive (dead)
       Docs: man:systemd.special(7)
$ systemctl status hybrid-sleep.target
○ hybrid-sleep.target - Hybrid Suspend+Hibernate
     Loaded: loaded (/lib/systemd/system/hybrid-sleep.target; static)
     Active: inactive (dead)
       Docs: man:systemd.special(7)

Looks like my laptop reached the “Sleep” then “Suspend” targets, so I’ll disable those two.

$ sudo systemctl mask sleep.target
Created symlink /etc/systemd/system/sleep.target → /dev/null.
$ sudo systemctl mask suspend.target
Created symlink /etc/systemd/system/suspend.target → /dev/null.

After they were masked, the laptop was willing to use most of its battery capacity instead of just a tiny sliver. This should be good for several hours, but what happens after that? When the battery is almost empty, I want the computer to go into hibernation instead of dying unpredictably and possibly in a bad state. This is why I left hibernate.target alone. But I also wanted to do more for battery health: I didn’t want to drain the battery all the way to near-empty. This thread on AskUbuntu led me to /etc/UPower/UPower.conf, which dictates what battery levels trigger hibernation. I raised the levels so the battery shouldn’t be drained much past 15%.

# Defaults:
# PercentageLow=20
# PercentageCritical=5
# PercentageAction=2
PercentageLow=25
PercentageCritical=20
PercentageAction=15

The UPower service needs to be restarted to pick up those changes.

$ sudo systemctl restart upower.service

Alas, that did not have the effect I hoped it would. With the cord left unplugged, the battery dropped straight past 15% and did not go into hibernation. The percentage dropped faster and faster as it got lower, too. That is an indication the battery is not in great shape, or at least is mismatched with what its management system thinks it should be doing.

$ upower -i /org/freedesktop/UPower/devices/battery_BAT0
  native-path:          BAT0
  vendor:               DP-SDI56
  model:                DELL YJNKK18
  serial:               1
  power supply:         yes
  updated:              Fri 22 Jul 2022 03:31:00 PM PDT (9 seconds ago)
  has history:          yes
  has statistics:       yes
  battery
    present:             yes
    rechargeable:        yes
    state:               discharging
    warning-level:       action
    energy:              3.2079 Wh
    energy-empty:        0 Wh
    energy-full:         59.607 Wh
    energy-full-design:  57.72 Wh
    energy-rate:         10.1565 W
    voltage:             9.826 V
    charge-cycles:       N/A
    time to empty:       19.0 minutes
    percentage:          5%
    capacity:            100%
    technology:          lithium-ion
    icon-name:          'battery-caution-symbolic'

I kept it unplugged until it dropped to 2%, at which point the default PercentageAction behavior of PowerOff should have occurred. It did not, so I gave up on this round of testing and plugged the laptop back into its power cord. I’ll have to come back later to figure out why this didn’t work but, hey, at least this old thing was able to run 5 hours and 15 minutes on battery.
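
When I do, a reasonable starting point would be to watch what the power management daemons actually decide as the battery crosses the critical threshold. A sketch of that future debugging session, nothing verified yet:

# Watch UPower's view of the battery change in real time while discharging.
upower --monitor-detail

# After a failed hibernation attempt, see what was logged during that window.
journalctl -b -u upower.service
journalctl -b -u systemd-logind.service

# Hibernation also needs kernel support and enough swap; worth confirming both.
cat /sys/power/state
swapon --show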

And finally: this laptop will be left plugged in most of the time, so it would be nice to limit charging to no more than 80% of capacity to reduce battery wear. I’m OK with a 20% reduction in battery runtime. I’m mostly concerned about brief blinks of power lasting a few minutes; a power failure of 4 hours instead of 5 makes little difference. I have seen “battery charge limit” as an option in the BIOS settings of my newer Dell laptops, but not on this old laptop. And unfortunately, it does not appear possible to accomplish this strictly in Ubuntu software without hardware support. That thread did describe an intriguing option, however: dig into the cable, pull out the Dell power supply communication wire, and hook it up to a switch. When that wire is connected, everything works as it does today. But when it is disconnected, some Dell laptops will run on AC power without charging their battery. I could rig up some sort of external hardware to keep the battery level around 75-80%. That would also be a project for another day.
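
Before resorting to wire cutting, though, it would be worth checking whether the platform exposes charge thresholds to Linux through sysfs; some laptops do, and then no hardware hacking is needed. I don’t expect this old Latitude to have them, but the check is quick:

# If the battery driver supports charge thresholds, these attributes will exist.
ls /sys/class/power_supply/BAT0/ | grep charge_control

# When present, charging can be capped at 80% with a simple write.
echo 80 | sudo tee /sys/class/power_supply/BAT0/charge_control_end_threshold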

ESP8266 Controlling 4-Wire CPU Cooling Fan

I got curious about how the 4 wires of a CPU cooling fan interfaced with a PC motherboard. After reading the specification, I decided to get hands-on.

I dug up several retired 4-wire CPU fans I had kept. All of these were in-box coolers bundled with various Intel CPUs. And despite the common shape and Intel brand sticker, they were made by three different companies listed at the bottom line of each label: Nidec, Delta, and Foxconn.

I will use an ESP8266 running ESPHome to control these fans, because all the relevant code has already been built and is ready to go:

  • Tachometer output can be read with the pulse counter peripheral. Though I do have to divide by two (multiply by 0.5) because the spec said there are two pulses per fan revolution.
  • The ESP8266 PWM peripheral is a software implementation with a maximum usable frequency of roughly 1kHz, slower than the specified 25kHz requirement. If this proves insufficient, I can upgrade to an ESP32, which has a hardware PWM peripheral capable of running at 25kHz.
  • Finally, a PWM fan speed control component, so I can change PWM duty cycle from HomeAssistant web UI.

One upside of the PWM MOSFET built into the fan is that I don’t have to wire one up in my test circuit. The fan header pins were wired as follows:

  1. Black wire to circuit ground.
  2. Yellow wire to +12V power supply.
  3. Green wire is tachometer output. Connected to a 1kΩ pull-up resistor and GPIO12. (D6 on a Wemos D1 Mini.)
  4. Blue wire is PWM control input. Connected to a 1kΩ current-limiting resistor and GPIO14. (D5 on Wemos D1 Mini.)

ESPHome YAML excerpt:

sensor:
  - platform: pulse_counter
    pin: 12
    id: fan_rpm_counter
    name: "Fan RPM"
    update_interval: 5s
    filters:
      - multiply: 0.5 # 2 pulses per revolution

output:
  - platform: esp8266_pwm
    pin: 14
    id: fan_pwm_output
    frequency: 1000 Hz

fan:
  - platform: speed
    output: fan_pwm_output
    id: fan_speed
    name: "Fan Speed Control"

Experimental observations:

  • I was not able to turn off any of these fans with a 0% duty cycle (emulating pulling the PWM pin low). All three kept spinning.
  • The Nidec fan ignored my PWM signal, presumably because 1 kHz PWM was well outside the specified 25kHz. It acted the same as when the PWM line was left floating.
  • The Delta fan slowed linearly as I reduced the duty cycle, down to roughly 35% duty cycle where it ran at roughly 30% of full speed. Below that duty cycle, it remained at 30% of full speed.
  • The Foxconn fan responded down to roughly 25% duty cycle, where it ran at roughly 50% of full speed. I thought it was interesting that this fan responded to a wider range of PWM duty cycles but translated that into a narrower range of actual fan speeds. Furthermore, 100% duty cycle was not actually the maximum speed of this fan. Upon initial power up, this fan would spin up to a very high speed (judging by its sound) before settling down to a significantly slower speed that it treated as its “100% duty cycle” speed. Was this intended as some sort of “blow out the dust” cleaning cycle?
  • These are not closed-loop feedback devices trying to maintain a target speed. When I set a 50% duty cycle and started reducing the power supply voltage below 12V, the fan controller did not compensate; fan speed dropped alongside voltage.

Playing with these 4-pin fans was fun, but the majority of cooling fans on the market do not have built-in power transistors for PWM control. So I went back to learn how to control those fans.

CPU Cooling 4-Wire Fan

Building a PC from parts includes keeping cooling in mind. It started out very simple: every cooling fan had two wires, one red and one black. Put +12V on the red wire, connect black to ground, done. Then things got more complex. Earlier I poked around with a fan that had a third wire, which proved to be a tachometer output for reading current fan speed. The obvious follow-up is to examine cooling fans with four wires. I first saw this on CPU cooling fans and, as a PC builder, all I had to know was how to plug one in with the correct orientation. But now, as an electronics tinkerer, I want to know more about what those wires do.

A little research found the four-wire fan system was something Intel devised. Several sources cited URLs on http://FormFactors.org, which redirects to Intel’s documentation site. Annoyingly, Intel does not make the files publicly available, blocking them behind a registration screen. I registered for a free account, and it still denied me access. (The checkmark next to the user icon means I’ve registered and signed in.)

Quite unsatisfying. But even if I can’t get the document from the official source, there are unofficial copies floating around on the web. I found one such copy, which I am not going to link to because the site liberally slathered the PDF with advertisements and that annoys me. Here is the information from the title page, which should help you find your own copy. Perhaps even a more recent revision!

4-Wire Pulse Width Modulation
(PWM) Controlled Fans
Specification
September 2005
Revision 1.3

Reading through the specification, I learned that the four-wire standard is backwards compatible with three-wire fans as those three wires are the same: GND, +12V supply, and tachometer output. The new wire is for a PWM control signal input. Superficially, this seems very similar to controlling fan speed by PWM modulating the +12V supply, except now the power supply stays fixed at +12V and the PWM MOSFET is built into the fan. How is this better? What real-world problems are solved by using an internal PWM MOSFET? The spec did not explain.

According to the spec, the PWM control signal should run at 25kHz. Fan manufacturers can specify a minimum duty cycle, and fan behavior below that minimum is open to interpretation by different implementations: some ignore lower duty cycles and keep running at the minimum speed, others interpret it as a shutoff signal. The spec forbids a pull-up or pull-down resistor on the PWM signal line external to the fan, but there is a pull-up resistor internal to the fan. I interpret this to mean that if the PWM line is left floating, it will be pulled up to emulate 100% duty cycle.

Reading the specification gave me the theory of operation for this system; now it’s time to play with some of these fans to see how they behave in practice.