Overkill Options: A-Frame, Three.js and D3.js

After getting input controls sorted out on my AS7341 interface project, it’s time for the fun part: visualizing the output! Over the past few years of reading about web technologies online, I’ve collected a list of tools I wanted to play with. My AS7341 project isn’t the right fit for most of them, so they remain on the list awaiting a suitable project.

At this point I’ve taken most of Codecademy’s current roster of courses under their HTML/CSS umbrella. One of the exceptions is “Learn A-Frame (VR)“. I’m intrigued by the possibilities of VR but putting that in a browser definitely feels like something ahead of its time. “VR in a browser” has been ahead of its time since 1997’s VRML, and people have kept working to make it happen ever since. A brief look at A-Frame documentation made my head spin: I need to get more proficient with web technologies and have a suitable project before I dive in.

If I have a project idea that doesn’t involve full-blown VR immersion (my AS7341 project does not) but could use 3D graphics capability (it still does not), I can access 3D graphics hardware from the browser via WebGL, which by now is widely supported across browsers. In the likely case that working directly with the WebGL API is too nuts-and-bolts for my taste, there are multiple frameworks that take care of the low-level details. One of them is Three.js, which has been the foundation for a lot of cool-looking work. In fact, A-Frame is built on top of Three.js. I dipped my toes in Three.js when I used it to build my RGB332 color picker.

Dropping a dimension to the land of 2D, several projects I’ve admired were built using D3.js. This framework for building “Data-Driven Documents” seems like a great way to interactively explore and drill into sets of data. On a similar front, I’ve also learned of Tableau, commercial software covering many scenarios for data visualization and exploration. I find D3.js more interesting for two reasons. First, I like the idea of building a custom-tailored solution. And second, Tableau was acquired by Salesforce in 2019, and historically speaking, acquisitions don’t end well for hobbyists on a limited budget.

All of the above frameworks are overkill for what I need right now for the AS7341 project: there are at most 11 different sensor channels (spectral F1-F8 + clear + near IR + flicker) and I’m focusing on just the color spectra F1-F8. A simple bar chart of eight bars would suffice here, so I went looking for something simpler and found Chart.js.
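As a sketch of how little is needed here: a Chart.js bar chart is driven by a single configuration object. The channel labels come from the sensor; the sample counts and the "spectrum" canvas id below are made up for illustration.

```javascript
// Hypothetical raw counts for the eight AS7341 color channels F1-F8.
const readings = [120, 340, 560, 780, 910, 650, 430, 210];

// Chart.js configuration for a simple bar chart, one bar per channel.
const config = {
  type: "bar",
  data: {
    labels: ["F1", "F2", "F3", "F4", "F5", "F6", "F7", "F8"],
    datasets: [{ label: "AS7341 raw counts", data: readings }],
  },
  options: {
    scales: { y: { beginAtZero: true } },
  },
};

// In the browser, with Chart.js loaded and a <canvas id="spectrum"> on the page:
// new Chart(document.getElementById("spectrum"), config);
```

Updating the chart for new sensor readings is just a matter of replacing the dataset's data array and calling the chart's update method.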

Brief Look at a LinuxCNC Pendant

Trying to build a little CNC is definitely a learn-as-I-go project. Moving the motor control box was a simple (though necessary) mechanical change, but not the only idea prompted by initial test runs. I also thought it would be nice to have a handheld pendant to help with machine setup, instead of going to the laptop all the time. I got a chance to look over a CNC pendant to see how I might integrate one.

This particular unit was purchased from this eBay vendor listing, but there are many other similar listings across different online marketplaces. Judging by this listing’s title, the popular keyword salad includes: CNC Mach3 USB MPG Pendant Handwheel. I knew what CNC, USB, pendant, and handwheel referred to. MPG in this context means “Manual Pulse Generator”, referring to the handwheel that generates pulses signaling the CNC controller to move in individual steps. And finally, Mach3 is a Windows software package that turns a PC into a CNC machine controller.

My first-draft CNC controller was built on an ESP32 without USB host capability, so there’s little hope of integrating this USB pendant there. The most likely path would involve LinuxCNC, a free, open-source alternative to Mach3. Poking around documentation for control pendants, the first hit was this link, which seemed to be about units that connect via parallel port. Follow-up searches kept coming across this link for wireless pendants, which I didn’t think was relevant. After coming across it for the fifth or sixth time, I decided to skim the page and saw that it also included information about a wired USB pendant. It’s not a direct match, though. Here’s information from Ubuntu’s dmesg tool after I plugged in this pendant.

[ 218.491640] usb 1-1: new full-speed USB device number 2 using xhci_hcd
[ 218.524920] usb 1-1: New USB device found, idVendor=10ce, idProduct=eb93
[ 218.524926] usb 1-1: New USB device strings: Mfr=1, Product=0, SerialNumber=0
[ 218.524931] usb 1-1: Manufacturer: KTURT.LTD
[ 218.538131] generic-usb 0003:10CE:EB93.0004: hiddev0,hidraw3: USB HID v1.10 Device [KTURT.LTD] on usb-0000:00:10.0-1/input0

The key here is the pair of USB identifiers, idVendor and idProduct: 0x10CE and 0xEB93. I could change those values in the associated udev rule:

ATTRS{idVendor}=="10ce", ATTRS{idProduct}=="eb93", MODE="666", OWNER="root", GROUP="users"

But that was not enough. I dug deeper into the relevant source code and found it explicitly looks for an idVendor:idProduct of 0x10CE:0xEB70.

dev_handle = libusb_open_device_with_vid_pid(ctx, 0x10CE, 0xEB70);

Oh well, getting this to run would go beyond just configuration files; it would need code changes and recompiles. Looks like some people are already looking into it: a search for eb93 found this thread. I don’t know enough about LinuxCNC to contribute or even understand what they are talking about. I returned this USB pendant to its owner and set the idea aside. There are plenty of CNC pendant offerings out there I can revisit later, some of which are even bundled with an entire CNC control package.
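As an aside, the vendor:product pair that both the udev rule and the libusb call match against can be pulled out of a dmesg line with a regular expression. A quick sketch, using a line copied from the dmesg output above:

```javascript
// One line captured from dmesg when the pendant was plugged in.
const line = "usb 1-1: New USB device found, idVendor=10ce, idProduct=eb93";

// Extract the vendor:product pair that udev rules and libusb match against.
const match = line.match(/idVendor=([0-9a-f]{4}), idProduct=([0-9a-f]{4})/);
const [idVendor, idProduct] = match.slice(1);

console.log(`${idVendor}:${idProduct}`); // prints "10ce:eb93"
```

The same two hex values are what would need to change in the hardcoded libusb_open_device_with_vid_pid() call for this particular pendant to be recognized.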

Using TCL 55S405 TV as Computer Monitor

I just bought a LG OLED55B2AUA for my living room, displacing a TCL 55S405. I have several ideas on what I could do with a retired TV, and the first experiment was to use it as a computer monitor. In short, it required adjusting a few TV settings and even then, there are a few caveats for using it with a Windows PC. Using it with a Mac was more straightforward.

As expected, it is ludicrously large sitting on my desk. And due to the viewing angles of this (I think VA) panel, the edges and corners are difficult to read. I see why some people prefer their large monitors to be curved.

I noticed a delay between moving my mouse and the movement of the onscreen cursor, introduced by the TV’s image-processing hardware. During normal TV programs, the audio can be delayed to stay in sync with the processed video, but that trick doesn’t work for interactive use, which is why TVs have a “Game Mode” to disable such processing. For this TV, it was under “TV settings” / “Picture settings” / “Game mode”. Turning it on made the mouse feel responsive again.

The next problem was brightness. Used as a monitor, the screen sits much closer than a TV would, and there was too much light, causing eyestrain. The first part of the solution was to choose the “Darker” option under “TV settings” / “Picture settings” / “TV brightness”. Then I went to “TV settings” / “Picture settings” / “Fine tune picture”, where I could turn “Backlight” down to zero. Not only did this make the screen more comfortable, it reduced electrical power consumption as well.

According to my Kill-A-Watt meter, this large TV consumed only 35 watts once I turned the backlight down to minimum. This is actually slightly lower than the 32″ Samsung computer monitor I had been using. Surprisingly, half of that power was not required to run the screen at all. When I “turn off” the TV, the screen goes dark but Kill-A-Watt still registered 17 watts, burning power for purposes unknown. Hunting around in the Settings menu, I found “System” / “Power” / “Fast TV Start” which I could turn off. When this TV is no longer set for fast startup, turning the TV off seems to really turn it off. Or at least, close enough that the Kill-A-Watt read zero watts. This is far better than my 32″ Samsung which read 7W even in low-power mode.

Since this is a TV, I did not expect high framerate capabilities. I knew it had a 24 FPS (frames-per-second) mode to match film speed and a 30 FPS mode for broadcast television. When I checked my computer video card settings, I was pleasantly surprised to find that 60Hz refresh rate was an option. Nice! This exceeded my expectations and is perfectly usable.

On the flip side, since this is a TV I knew it had HDCP (High-bandwidth Digital Content Protection) support. But when I started playing protected content (streaming Disney+ on Microsoft Edge for Windows 11), the TV would choke and fail over to its “Looking for signal…” screen. Something crashed hard and the TV could not recover. To restore my desktop, I had to (1) stop Disney+ playback and (2) power cycle the TV. Pressing the power button was not enough (that didn’t work); I had to pull the power plug.

The pixels on this panel were crisp, and 4K UHD resolution actually worked quite well. 3840×2160 at a 55″ diagonal works out to 80 DPI (dots per inch), right within longtime computer monitor norms. For many years I used a 15″ monitor at 1024×768, which worked out to 85 DPI. Of course, 80 DPI is pretty lackluster compared with the “High DPI” displays (Apple “Retina Display”, etc.) now on the market with several hundred dots (or pixels) per inch. Despite crisp pixels at sufficient density, text on this panel isn’t always legible under Windows, because it doesn’t work well with Microsoft’s ClearType subpixel rendering. ClearType takes advantage of the typical panel subpixel orientation, where the red/green/blue elements are laid out horizontally within each pixel. This panel, unfortunately, has its elements laid out vertically, foiling ClearType’s cleverness. To take advantage of ClearType rendering, I would have to rotate the screen 90 degrees to portrait orientation. That isn’t terribly practical, so I turned ClearType off.
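The DPI arithmetic generalizes to any panel: the diagonal pixel count follows from the Pythagorean theorem, which gives the physical width and therefore the pixel density. A quick sketch:

```javascript
// Dots per inch from diagonal size (inches) and native resolution.
function dpi(diagonalInches, hPixels, vPixels) {
  const diagonalPixels = Math.hypot(hPixels, vPixels);
  const widthInches = diagonalInches * (hPixels / diagonalPixels);
  return hPixels / widthInches;
}

console.log(dpi(55, 3840, 2160).toFixed(1)); // 55" 4K TV: ~80 DPI
console.log(dpi(15, 1024, 768).toFixed(1));  // old 15" monitor: ~85 DPI
```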

For comparison, a brief test with my Apple MacBook Air (M1) saw the following behavior:

  • The same 3840×2160 resolution and 60Hz refresh rate were available.
  • I could activate HDR mode, an option that was grayed out and not available with the NVIDIA drivers on my Windows desktop. I lack MacOS HDR content so I don’t know whether it actually works.
  • Streaming Disney+ on Firefox for MacOS showed video at roughly standard-definition DVD quality. This is consistent with behavior for non-HDCP displays, and much preferable to crashing the TV so hard I need to power cycle it.
  • MacOS font rendering does not use color subpixels like Microsoft ClearType, so text looks good without having to turn off anything.

It appears this TV is a better monitor for a MacOS computer than a Windows machine.

LG OLED Look Gorgeous But webOS Is Horrid

Thanks to Black Friday discounts, I acquired an OLED TV which I had coveted for many years. I decided on a LG OLED55B2AUA purchased through Costco (Item #9755022). LG’s “B” line sits between the more affordable “A” and the more expensive “C” lines, and it was a tradeoff I liked. (There are a few additional lines above “C” priced beyond my budget.) The TV replaced a TCL 55S405, and while they are both 55″ TVs, there is a dramatic difference in image quality. There are reviews out there for full information; my blog post concentrates on the items that mattered to me personally.

The Good

  • The main motivation is image quality. OLED panel advantage comes from their self-illuminating pixels leading to great contrast and vibrant colors. The “C” line uses panels with a higher peak brightness, but I haven’t found brightness lacking. When the filmmaker intentionally includes something bright (flashlight in a dark room, etc.) this “B” panel is bright enough to make me squint.
  • HDMI 2.1 with variable refresh rate capability and a higher maximum frame rate (120FPS), so I can see all the extra frames my new Xbox Series X can render. On this year’s “B” units, HDMI 2.1 is supported on two of four HDMI ports, which is enough for me. The “C” line supports it on all four ports, and the “A” line on none, because it lacks high-framerate features entirely.
  • The LG “magic remote” has an accelerometer to let us move an on-screen cursor by tilting the remote. This is far better than the standard up/down/left/right keypads of a TV remote and, combined with responsive UI, makes navigation less of a chore. This is the only good thing about LG’s user interface.

The Bad

For reasons I failed to diagnose, the TOSLINK audio output port could not send sound to my admittedly old Sony STR-DN1000 receiver. Annoyingly, LG designed this TV without analog audio output: neither a headphone jack (as on my TCL) nor the classic white-and-red RCA jacks. In order to use my existing speakers, I ended up buying a receiver with HDMI eARC support. This is money I would have rather not spent.

The Ugly

The internal operating system is LG’s build of webOS, which they have turned into a software platform for relentless, shameless, and persistent monetization. My TCL Roku TV also served ads, but not nearly as intrusively as this LG webOS TV. The powerful processor that gave us a snappy and responsive user interface isn’t going to just sit idle while we watch a movie. Oh no, LG wants to put it to work making money for LG.

Based on the legal terms & conditions they wanted me to agree to, the powerful processor of this TV wants to watch the same things I watch. It wants to listen to the audio for keywords that “help find advertisements that are relevant to you”. That’s creepy enough, but there’s more: it wants to watch the video as well! The agreement implies there are image-recognition algorithms at work looking for objects onscreen for the same advertising purpose. That’s a lot of processing power deployed in a way that provides no benefit to me. I denied them permission to spy on me, but who knows whether they respected my decision.

Ad-centric design continues to the webOS home screen. The top half is a huge banner area for advertisements. I found an option to turn off that ad, but doing so did not free up the space for my use; it just meant a big fixed “webOS” banner occupying it instead. On the next row down, the leftmost item represents the most recently used input port, in my case the aforementioned Xbox Series X. The rest of that row is filled with more advertising, which I haven’t found a way to turn off. The third and smallest row includes all the apps I care about, plus more that I don’t. Overall, only about 1/8 of the home screen surface area is under my control; the rest is space someone paid LG to put on my home screen.

I’m frankly impressed at how brazenly LG turned a TV into an ad-serving spyware device. I understand the financial support role advertisements play, but I’m drawing a line for my own home: as long as the ads stay in the menus and keep quiet while I’m actively watching TV, I will tolerate their presence. But if an LG ad of any type interrupts my chosen programming, or if an LG ad proved they’re spying on me despite lacking permission, I am unplugging that Ethernet cable.

UPDATE (two days later): Well, that did not take long. I was in the middle of watching Andor on Disney+ (Andor is very good) when I was interrupted by a pop-up notification at the bottom of the screen advertising a free trial of a service I will not name. (Because I refuse to give them free advertising.) I will not tolerate ads that pop up in the middle of a show. Struggling to find an upside, I can say this: the advertised service appeared to have no relation to Disney+ or anything said or shown in Andor, so the ad was probably not spying on me.

I was willing to let LG earn a bit of advertising revenue from me, as Roku did for my earlier TV, but LG’s methods were far too aggressive. Now LG will earn no ad revenue from me at all because this TV’s Ethernet cable has been unplugged.

AMS AS7341 11-Channel Multi-Spectral Digital Sensor

An interesting sensor module came to my attention recently thanks to the experiments of my talented friend Emily Velasco. She’s been building a contraption whose sound output depends on color. At first, the sensor module didn’t capture my attention, because I’ve seen color sensors before. They’ve been available for fun projects like an M&M candy sorter, and many robotics/electronics kits like LEGO Mindstorms included their own. However, not all color sensors are equal. Once I looked into the AMS AS7341 sensor she was using, I learned it was far more capable than I had originally thought.

Instead of mapping color hue into a single reading, which is what I had expected, this sensor reports data across eleven channels. Eight of the channels are mapped to various wavelengths in the human visible spectrum, implemented via filters placed over optical sensors. The remaining three channels report color-independent data. One channel has no color-specific filter (“clear”) and would report an overall brightness value. One channel is sensitive to near infrared (NIR), outside visible spectrum. And the final channel is specialized for detecting common flicker frequencies 50Hz and 60Hz.
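For reference, these are the nominal center wavelengths of the eight visible-spectrum channels as given in the AMS datasheet, running from violet (F1) to deep red (F8):

```javascript
// AS7341 visible-spectrum channels and nominal center wavelengths in nm,
// per the AMS datasheet.
const as7341Channels = {
  F1: 415, F2: 445, F3: 480, F4: 515,
  F5: 555, F6: 590, F7: 630, F8: 680,
};

// The three remaining channels (clear, NIR, flicker) have no single
// center wavelength, so they are not listed here.
```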

The AMS product page stated that intended use cases for this sensor include color calibration tools. This sensor is intended to be a fundamental part of precision color instruments! All color sensors answer the “What color is it?” question to varying degrees of precision; this sensor is aimed at the highly precise end of that spectrum. Note that the sensor by itself is not a color calibration tool; that depends on the rest of the supporting electronics, software, and procedures for use. “How to calibrate the calibration tool” is a big field all by itself, and critical for instrument accuracy in addition to precision.

I have very little background in color science, so I will start by looking at the sensor as a precision instrument of unknown accuracy. Even with that disclaimer I think it is a good project candidate.

Google Pixel 7 Camera Off-Axis Blur in Closeups

Thanks to Black Friday sales, I have upgraded my phone to a Google Pixel 7. My primary motivation was its camera, because most of the photographs posted to this blog were taken with my cell phone (Pixel 5a) camera. Even though I have a good Canon camera, I rarely pulled it out because the cell phone photos have been good enough for use here. By upgrading to the Pixel 7, I hoped to narrow the gap between the phone camera and a real Canon. So far it has been a great advancement on many fronts, and there are other phone camera review sites out there for all the details. But I wanted to point out one trait that is worse than on my Pixel 5a, specific to the kind of photos I take for this blog and not usually covered by photography reviews: with close-up shots, image quality quickly degrades as we move off-axis.

I took this picture of an Adafruit Circuit Playground Express with the Pixel 7 roughly fifteen centimeters (~6 inches) away. This was about as close as the Pixel 7 camera was willing to focus.

The detail captured in the center of the image is amazing!

But as we get to the edges, clarity drops off a cliff. My Pixel 5a camera’s quality also dropped off as we moved off-axis, but not this quickly and not this badly.

For comparison, I took another picture with the same parameters. But this time, that GND pad is the center of the image.

Everything is sharp and crisp. We can even see the circuit board texture inside the metal plated hole.

Here are the two examples side by side. I hypothesize this behavior is a consequence of design tradeoffs for a camera lens small enough to fit within a cell phone. This particular usage scenario is not common, so I’m not surprised if it was de-emphasized in favor of other camera improvements. For my purposes I would love to have a macro lens on my phone, but I know I’m in the minority so I’m not holding my breath for that to happen.

In the meantime, I could mitigate this effect by taking the picture from further away. This keeps more of the subject in a narrow angle from the main axis, reducing the off-axis blur. I would sacrifice some detail, but I still expect the quality to be good enough for this blog. And if I need to capture close-up detail, I will have to keep this off-axis blur in mind when I compose the photo. I would love a sharp close-up photo from frame to frame, but I think I can work with this. And everything else about this Pixel 7 camera is better than the Pixel 5a camera, so it’s all good!

Old Xbox One Boots Up in… čeština?

As a longtime Xbox fan, I would have an Xbox Series X by now if it weren’t for the global semiconductor supply chain being in disarray. In the meantime, I continue to play on my Xbox One X, the 4K UHD-capable variation that launched in 2017. It replaced my first-generation Xbox One, which has been collecting dust on a shelf. (Along with its bundled Kinect V2.) But unlike my Xbox 360, that old Xbox One is still part of the current Xbox ecosystem. I should still be able to play my Xbox One game library, though I’d be limited to digital titles because its optical drive is broken. (One of the reasons I retired it.)

I thought I would test that hypothesis by plugging it in and downloading updates; I’m sure there have been many major updates over the past five years. But there was a problem. When I powered it up, it showed me this screen in a language I can’t read.

Typing this text into the Google Translate website, language auto-detection told me it was Czech, and the screen was a menu to start an update. Interesting… why Czech? It can’t be a geographical setting in the hardware, because this is a US-spec Xbox purchased in the state of Washington. It can’t be geolocation based on IP address, either, as I’m connected online via a US-based ISP. And if there were some sort of system reset problem, I would have expected the default to be either English or at least something at the start of an alphabetical list, like Albanian or Arabic. Why Czech?

Navigating the next few menus (which involved lots of typing into Google Translate), I finally completed the required update process and reached the system menu, where I could switch the language to English. There I saw the language was set to “čeština”, which was at the top of the list. Aha! My Xbox had some sort of problem and reset everything, including the language setting to the top of the list of installed languages. I don’t know what the root problem was, but at least that explains how I ended up with Czech.

After I went through all of this typing, I learned I was an idiot. I should have used the Google Translate app on my Android phone instead of the website. I thought using the website on my computer was faster because I had a full-sized keyboard for typing where my phone did not. But the phone has a camera, and the app can translate visually with no typing at all. Here I’m running it on the screen capture I made of the initial bootup screen shown above.

Nice! It looks like the app runs optical character recognition on the original text, determines the language is Czech, performs the translation, and superimposes translated English text on top of the original. The more I thought about what is required to make this work, the more impressed I was. For example, display coordinate transforms had to be tracked between language representations so the translated text could be superimposed at the correct location. I don’t know how much of this work runs on my phone and how much on a Google server. Regardless of the workload split, it’s pretty amazing this option was just sitting in my pocket.

What was I doing? Oh, right: my old Xbox One. It is up and running with latest system update, capable of downloading and running my digitally purchased titles. In US-English, even. But by then I no longer cared about Xbox games, the translation app is much more interesting to play with.

Notes on “Make: Design for CNC” by Filson, Rohrbacher, and Kaziunas France

After skimming through Maker Media’s Bluetooth book, I did the same for their Design for CNC: Furniture Projects & Fabrication Technique (*) published in 2017. The cover listed authors as Anne Filson, Gary Rohrbacher, and Anna Kaziunas France. Bill Young didn’t get on the cover but was included in “About the Authors” section at the end. The focus is on building human-scale furniture by using CNC routers to cut flat pieces out of 4′ x 8′ sheets of plywood. Details are given for some (but not all) of the pieces we see on the authors’ site AtFAB, and readers without all the equipment (which includes me) are encouraged to join the 100kGarages community for help to turn ideas into reality.

The CAD software used for examples was SketchUp 2015; that particular version is no longer available. While there is still a free SketchUp tier, it is limited to their browser-based release. The CAM software in the book is Vectric VCarve, which never had a free tier. The authors’ CNC router is a ShopBot PRSalpha, and the discussion on cutters mostly referenced Onsrud. Obviously, a reader with more in common with the authors’ setup will have an easier time following along. I have none of it, but I skimmed the book to see what I could learn. Here are the bits I thought worth jotting down:

Chapter 2 had several sections valuable to anyone building structures out of flat sheets of material, whether CNC-routing big pieces out of plywood or laser-cutting small things out of acrylic. They describe basic joints, which lead to assemblies, which in turn lead to styles of structures. These were the building blocks for projects later in the book, and they apply to building 3D things out of 2D pieces no matter what tools (software or hardware) we use.

Chapter 3 describes their design process using SketchUp. Some of the concepts apply to all CAD software, some do not, and the explanations are sometimes lacking. The author used something called the Golden Ratio without explaining what it is or why it applies, so we have no idea when it would be appropriate in our own designs. We are shown how CAD keeps various views of the same object in sync, but in certain places the book also says to use “Make Unique” to break this association without explaining why it was necessary. I had hoped to see automated tooling for managing 3D structures and their 2D cutting layout, but no such luck. This workflow used a “Model” layer to work in 3D, a “Flat” layer to lay out the same shapes in 2D space for cutting, and a “Cut” layer with just 2D vectors to export to CAM software. It feels like a motivated software developer could automate this process. (Perhaps someone has in the past five years! I just have to find it.)

I noticed a trend of information becoming less generally applicable as the book went on. By the time we got to CAM in Chapter 7, it was very specific to VCarve with few generalizations that we can take and apply to other CAM software. One missed opportunity was a discussion on climb milling versus conventional milling. The book explains that there are so many variables involved (the material, the cutter, the machine) a specific setup may work better one way versus the other. The only way to know is to try both approaches and use whichever one works better. Problem: they never explained what “better” means in this context. What do we look for? What metrics do we use to decide if one is better than the other? The authors would have a lot of experience seeing various results firsthand. That would have been valuable and applicable no matter what CAM software we use, but they didn’t share that knowledge and just left us hanging. Or perhaps they have seen so much, it never occurred to them that beginners would have no idea how to judge.

Another disappointment was in the area of parametric design. In chapter 5 they covered the fact that plywood is not made to precise dimensions, and we’d need to adjust accordingly. However, the recommended default method of adjustment is to scale the entire project rather than adjust a thickness parameter. Later in chapter 12 they showed how to modify a few of their designs by plugging parameters into an app written in Processing. However, the app is limited to the variables allowed by the authors, and each app is unique to a project. The book doesn’t cover how to do parametric design in SketchUp. (Maybe it can’t?) But more disappointingly, the book doesn’t cover the ins and outs of how to write parametric capability for our own designs. The authors started this book by saying how designing and customizing for our own purposes is a huge part of what makes CNC routed projects preferable to generic designs from IKEA, so it was a huge letdown to see nothing about making our own parametric designs.
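To make my complaint concrete, here is a toy sketch (entirely my own, not from the book) contrasting the two approaches: deriving joint slot widths from the measured plywood thickness, versus scaling the whole project. The dimensions and clearance value are invented for illustration.

```javascript
// Parametric approach: size slots from the measured material thickness
// plus a small clearance for a press fit. Only the joints change.
function slotWidthMm(measuredThicknessMm, clearanceMm = 0.2) {
  return measuredThicknessMm + clearanceMm;
}

// The book's default approach: scale the entire project by the ratio of
// measured to nominal thickness, distorting every dimension along the way.
function scaleFactor(measuredThicknessMm, nominalThicknessMm = 18) {
  return measuredThicknessMm / nominalThicknessMm;
}

console.log(slotWidthMm(17.5)); // slots sized to the actual sheet
console.log(scaleFactor(17.5)); // every dimension shrinks by ~2.8%
```

With the thickness-parameter approach, a table stays the same height no matter which batch of plywood it is cut from; with whole-project scaling, it does not.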

I would have appreciated more information on working with wood grain. Wood grain is discussed mostly as a cosmetic concern and not a structural one. I guess using plywood mitigates most of those worries? I would have also wanted to see more actual finished pieces. Most of the images in this book were 3D renders and not real pictures, another letdown.

Despite these disappointments I felt I learned a lot from this book generally applicable to building 3D structures from 2D shapes. The resources section at the end looked promising for more information on designing for CNC that go beyond wooden furniture. And finally, unrelated to the topic or the book content, the colophon pointed me to AsciiDoc, which is something I might look into later for future Sawppy documentation.

(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Notes on “Make: Bluetooth” by Allan, Coleman, and Mistry

As part of a Humble Bundle package of books published by Maker Media, I had the chance to read through Make: Bluetooth (*) by Alasdair Allan, Don Coleman & Sandeep Mistry. This book covers a series of projects that the Make audience can build by assembling development breakout boards and discrete components on prototype breadboards.

One of the first things this book covers is that these projects all use Bluetooth LE and not “Classic” Bluetooth. They share two things: (1) they both communicate over 2.4GHz range of RF spectrum, and (2) they are both administered by the Bluetooth Special Interest Group. Other than that, they are completely different wireless communication protocols named for maximum customer confusion.

For each project, this book provides a detailed step-by-step guide from beginning to end, covering just what we need for that project. This is the book’s greatest strength and also the source of my biggest criticism. Minimizing extraneous information avoids confusing beginners, but if a beginner wants to advance beyond being a beginner, this book doesn’t provide much to guide their future study. The problem gets worse as the book ages, because we’re not given the background information necessary to adapt. (The book is copyrighted 2016; this post is written in 2022.)

The first example is the Bluetooth LE module they used for most of the book: Adafruit product #1697, Bluefruit LE – Bluetooth Low Energy (BLE 4.0) – nRF8001 Breakout. The book never covers why this particular BLE module was chosen. What if we can’t get one and need to find a substitute? We’re not just talking about a global chip shortage. It’s been years since the book was written and Adafruit has discontinued product #1697. Fortunately, Adafruit is cool, and added a link to their replacement products built around the nRF51822 chip. But if Adafruit hadn’t done that, the reader would have been up a creek trying to figure out a suitable replacement.

Another example was the phone interaction side of this book, which is built using Adobe PhoneGap to produce apps for either iOS or Android phones. And guess what, Adobe has discontinued that product as well. While most of the codebase is also available in the open-source counterpart Apache Cordova, Adobe’s withdrawal from the project means a big cut of funding and support. A web search for Apache Cordova will return many links titled “Is Apache Cordova Dead?” Clearly the sentiment is not optimistic.

The Bluetooth LE protocol at the heart of every project in this book was given similarly superficial coverage. There were mentions of approved official BLE characteristics, and that we are free to define our own characteristic UUID. But nothing about how to find existing BLE characteristics, nor rules on defining our own UUID. This was in line with the simplified style of the rest of the book, but at least we have a “Further Reading” section at the back of the book pointing to two books:

  1. Getting Started with Bluetooth Low Energy (*) by Townsend, Cufí, Akiba, and Davidson.
  2. Bluetooth Low Energy: The Developer’s Handbook (*) by Heydon
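Incidentally, the UUID rules the book glosses over are simple enough to sketch. Officially adopted characteristics are 16-bit assigned numbers expanded into the standard Bluetooth base UUID, while a custom characteristic should use a randomly generated 128-bit UUID. A quick illustration using Python’s standard uuid module (Battery Level, 0x2A19, is a real assigned number; the custom UUID is just an example):

```python
import uuid

# Official 16-bit characteristic IDs (e.g. 0x2A19 = Battery Level) expand
# into the standard Bluetooth base UUID: 0000xxxx-0000-1000-8000-00805f9b34fb
def bluetooth_sig_uuid(short_id):
    return uuid.UUID(f"0000{short_id:04x}-0000-1000-8000-00805f9b34fb")

battery_level = bluetooth_sig_uuid(0x2A19)

# Custom characteristics should instead use a random 128-bit UUID,
# which has a negligible chance of colliding with anyone else's choice
my_custom_characteristic = uuid.uuid4()
```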

I like the idea of a curated step-by-step guide to building projects with Bluetooth LE, but when details are out of date and there’s nothing to help the reader adapt, such a guide is not useful. I decided not to spend the time and money to build any of these projects.

(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Monoprice Monitor Internals: Round 2 (10734)

Leveraging Bitluni’s work, I was able to convert one of my ESP32 into a VGA signal generator that outputs full-screen white. This gave me a low-impact way to convert a malfunctioning monitor into a lighting fixture. But the low-impact way is definitely not the optimal way, because it meant I would need a VGA cable dangling outside of the screen, connected to an ESP32, which needs its own power supply. What are my other options? The first time I opened up this monitor, I didn’t understand very much of what I had looked at. A few years of tinkering lessons have been added to my brain, so I’ll open it up again for another look.

This display was spared from the Great Backlight Liberation because it could still be powered on, but once I had it open, I wanted to examine its backlight in light (ha ha) of new knowledge. I found the likely wire harness for this panel’s backlight, a respectable bundle of twelve wires. Flipping over the circuit board, I see those wires were labeled with:

  • G_LED1-
  • G_LED2-
  • G_LED+
  • B_LED+
  • B_LED1-
  • B_LED2-
  • B_LED3-
  • B_LED4-
  • B_LED+
  • G_LED1
  • G_LED3-
  • G_LED4-

Based on these labels, we can infer there are four “G” LED strings and four “B” LED strings, each with its own “-” wire. There are two wires for “B_LED+”, but the “G” LEDs have separate “G_LED+” and “G_LED1”. I don’t know why they were labelled differently, but my multimeter found electrical continuity between “G_LED+” and “G_LED1”, so they are wired in parallel, as are those two “B_LED+” wires. This leads me to believe the “G” and “B” LEDs each have two “+” wires corresponding to four “-” wires. So far, so good. I then turned on the monitor to probe the voltage levels of these wires. I had expected something in the neighborhood of the 24V DC power supply that feeds this monitor, but my meter said the voltage was actually in the neighborhood of 64V to 68V DC. Yikes! That’s well above the maximum voltage of any boost converter I have on hand, so driving the backlight without this board wouldn’t be my top choice.
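As a rough sanity check on that surprising voltage, the numbers suggest each string has roughly twenty LEDs in series. The per-LED forward voltage below is my assumption (typical for white LEDs), not a measured value:

```python
# Back-of-envelope estimate of LEDs per backlight string.
# Assumes ~3.2V forward voltage per white LED, typical but unverified.
measured_voltage = 66.0   # midpoint of the 64-68V DC measured
forward_voltage = 3.2     # assumed per-LED drop
leds_per_string = measured_voltage / forward_voltage
# Roughly 20 LEDs in series, which explains why the boost converter
# must run well above the monitor's 24V input supply
```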

I see inductors and capacitors that are likely the boost conversion circuit for this backlight, but I didn’t see a promising chip that might be a standalone LED driver like I see in some laptop panel teardowns. I think it is all controlled by that central main chip sitting under a heatsink. I couldn’t make it drive the backlight with a PWM signal like I could the laptop panel, so I have to stay with the ESP32 VGA signal generator.

The next question is then: could I use this board to drive just the backlight? To test this possibility, I unplugged these two cables connecting to the LCD array. Some of these wires carry power, the rest carry LVDS pixel data. When fed with VGA data from my ESP32, this control board happily powered up the backlight even when it couldn’t communicate with the LCD array. This is a very promising find, but I’m not ready to commit to a destructive separation just yet.

By itself, without an incoming video signal, this monitor quickly goes to sleep mode. I know that my ESP32 VGA signal can keep it awake past that initial sleep mode, but I’m not yet confident everything else will continue running for the long term. The only diagnostic channel I have for this system is the on-screen display, and if I should separate the LCD from its backlight, I would no longer be able to read the on-screen display.

It’s very tempting to separate them now, because I know a lot of light is trapped back there. Look at the brightness difference when I compared a bare backlight against an identical intact (non-separated) Chromebook panel. I expect there to be a very bright backlight behind this LCD. But for the sake of doing things incrementally, I’ll leave the LG display module intact for now and focus on integrating my ESP32 VGA signal generator.

Pinout for Asiahorse 120mm Fan (Magic-i 120 V2)

The Asiahorse Magic-i 120 V2 bundle included three 120mm cooling fans with integrated addressable RGB LEDs. These fans have a six-wire connector designed to be plugged into a hub that was included in the bundle, along with a remote control to change the light show performed by those LEDs. Most users just need to plug those fans into the included hub, but some users like myself want to bypass the hub and control each fan directly. For this audience, I present the fan connector pinout derived from an exploratory session on my electronics workbench.

Since this was reverse engineered without original design documents, I don’t know which side is considered “pin #1” by the engineers who designed this system. These connectors appear to be JST-PH, whose datasheet does point to one side as “Circuit #1”. But there’s no guarantee the engineers followed JST convention. To avoid potential confusion, I’ll call them only by name.

  • +12V (Fan): High side of fan motor. Hub connects this wire directly to +12V power input.
  • Motor Low (Fan): Low side of fan motor. Use a power transistor between this wire and ground to control fan speed.
  • Ground (Fan + LED): Power return for the LED circuit; can be used for the fan motor low side as well. Hub connects this wire directly to power input ground.
  • Data In (LED): Input control signal for addressable RGB LEDs. Compatible with WS2812/”NeoPixel” protocol.
  • +5V (LED): Power for LED circuit. Hub connects this wire directly to +5V power input.
  • Data Out (LED): Control signal for addressable RGB LEDs beyond the end of the LED string inside the fan. Useful for chaining multiple units together by connecting this wire to Data In of the next device in line.

Now that I understand its pinout, I will build my own control circuit to replace the default Asiahorse hub.

Exploring 6-Wire Connector of Asiahorse Magic-i 120 V2

I was curious about PC accessories with embedded addressable RGB LEDs, so I bought the cheapest item available on Newegg that day. I have verified it works as originally intended, and now I’m going to dig deeper. This Asiahorse Magic-i 120 V2 is a three-pack of 120mm fans with integrated LEDs. All three fans plug into a hub with a corresponding remote control for me to select from a list of programmed patterns. This bundle is fine if I’m satisfied with those patterns, but I want to display patterns of my own.

Each fan connects to the hub through this six-wire connector. The distance between each pin is 2mm. Judging by the pitch and physical appearance, I guess they are JST-PH or a clone. (I don’t have any 6-conductor JST-PH to verify.) This is mildly inconvenient because my workbench is set up to work with 0.1″ pitch (~2.54mm) connectors, so it’s not very easy for me to probe those signals as-is.

The solution was a quick soldering project to give me an exploration board. I cut the bundle of six wires and inserted a small piece of perforated prototype board. Each of the six wires is then bridged with an exposed length of solid wire, easy for me to clip probes onto.

Trying the easy thing first, I probed for continuity between these six wires and the power input wires. This gave me the locations of the +12V source, +5V source, and ground. Armed with this information, I soldered capacitors to smooth out both power rails, because the AC adapter I’m using is designed for far higher wattage than a few LEDs, and it’s not unusual for switching power supplies to be noisy at low power levels. (And the cheap ones are always noisy at all power levels…)

With three out of six wires identified for power, this left me with three more wires to decipher. Here are my candidates:

  • Fan control: it may be one (or none) of the following:
    • Fan motor high side: the fan may be internally connected to the ground wire, and the high side wire is left exposed here for external PWM or on/off control with a power transistor.
    • Fan motor low side: the same idea but reverse: fan motor is internally connected to +12V and the low side is exposed here for external PWM or on/off control with a power transistor.
    • Fan motor PWM: Neither of the above. Instead of leaving either high or low side unconnected for external power transistor, a suitable power transistor is built into the fan and controlled with a 25kHz PWM signal as used in 4-wire fans.
  • Fan tachometer like the type I found in 3-wire fans.
  • LED Data In: addressable RGB signal input.
  • LED Data Out: signal that has passed through the LED string inside this device and ready to be passed on to other LEDs in other devices down the chain.

To decipher which wires are which of those candidate capabilities, I connected my Saleae Logic 8 to the three unknown wires. I started an analog waveform capture session and used the fan remote control to command that all fans show a solid green.

The top line in white stayed at 0V through the entire session. This may be the tachometer wire, or it may be fan motor low side. To determine which, I disconnected everything other than the +12V and ground wires. The fan did not move. I connected the wire corresponding to this white line to ground, and the fan started spinning. Conclusion: this wire is fan motor low side.

The middle line in brown shows a distinct repeating pattern. The bottom line in red shows the same repeating pattern, but delayed by 12 cycles. Since there are 12 LEDs in a fan, that means the middle brown line is LED Data In and the red line is LED Data Out.
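That 12-cycle delay is exactly what WS2812-style chaining predicts: each LED latches the first 24 bits (one pixel) it receives and retransmits everything after that on its Data Out pin. A toy model of the passthrough behavior (ignoring actual signal timing):

```python
def chain_passthrough(pixels, leds_in_device):
    """Model a WS2812 string: the first `leds_in_device` pixels are consumed
    by the LEDs inside this device; the remainder appears on Data Out."""
    return pixels[leds_in_device:]

# Send 20 pixels' worth of data into a 12-LED fan:
incoming = [("green", i) for i in range(20)]
passed_on = chain_passthrough(incoming, 12)
# Data Out repeats the input pattern, minus the 12 pixels the fan kept,
# which appears on a logic analyzer as the same waveform delayed 12 cycles
```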

To verify LED Data In, I connected +5V, ground, plus this wire to the data pin of a Pixelblaze. After I configured the Pixelblaze to emit control signal for a string of 12 x WS2812 (NeoPixel) LEDs, I saw them light up appropriately on the fan.

To verify LED Data Out, I connected it to the data input pin of an array of 64 WS2812 LEDs. I configured the Pixelblaze for 12 + 64 = 76 pixels. After colorful pixels cycled through the fan, they marched onward to the array, as appropriate for LED Data Out.

With these functions verified, I’m confident enough to describe this Asiahorse fan pinout.

Asiahorse Magic-i 120 V2

I wanted to play with a set of PC case cooling fans with embedded addressable RGB LEDs, with the intent of learning how to control them for a future project. For an extra challenge, I got a multipack that combined both fan and LED controls into a single (probably proprietary) connector that plugged into a bundled hub. Using the selection criterion of “lowest bidder of the day”, I bought a three-pack of fans, the Asiahorse Magic-i 120 V2, and I look forward to seeing how it works.

Before I start cutting things up, I need to verify the product works as originally designed. I won’t need a computer for this, as the multipack came with a remote control for the hub that allows operation without one. This lets me explore its signals without the risk of damaging a computer. I just need to supply power in the 4-pin accessory format popular with pre-SATA hard drives and optical drives. No computer needed here, either, as I had an AC adapter with this plug that originally came in a kit for turning internal HDDs into USB HDDs.

There were no instructions in the box, but things were straightforward. Three fans plugged into the hub, and a power cable connected my AC adapter to the hub. As soon as I turned on the power, all three fans started spinning. The LED light show didn’t start until I pressed the “On” button on the remote.

The RGB LEDs in each fan are mounted in the fan’s hub, on the outside perimeter of the motor control board. I count 12 LEDs, aimed along the motor axis upward into the center portion of the translucent fan blades. These colorful lights are then diffused along the length of each blade, resulting in a colorful spinning disk. While shopping on Newegg I saw other arrangements: some fans have LEDs around the outside perimeter instead, and some illuminate both the hub and the perimeter. Each manufacturer hopes to capture the attention of a buyer with the novelty of their aesthetic.

This remote control allowed me to cycle between various colorful programs or choose from a set of solid colors. I had hoped the colorful programs would ripple across the fans, but all three fans appear to display identical light sequence. I could control LED brightness or turn all the lights off, but I didn’t seem to have any control over fan speed. I guess this is where an instruction manual would have been useful.

If I wanted to build something bright and colorful that circulates air, almost everything is already here and ready to go. I just have to wire up a power switch to turn everything on/off, and the remote can take care of the rest. But I didn’t buy this just to have some lights. I wanted full control and I’m not afraid to start cutting things up to get there.

Shopping for PC Cooling Fans with RGB LED

I’ve decided to investigate controlling the RGB LEDs embedded in aesthetics-based PC accessories. I’m not interested in using them for my PC, but as research for a yet to be determined future electronics project. I wanted something that is a standardized commodity with a large range of variety in the ecosystem and have some usefulness beyond just looking shiny. I settled on 120mm PC cooling fans.

There are many common sizes for cooling fans, but I’ve found 120mm to be the most common for aftermarket cooling. They’re larger than average for CPU cooling, but not too large, especially for heat-pipe based cooling towers. They’re also the typical choice for general cooling in tower cases, whose cooling vents are cut for 120mm fans. Covering both popular use cases means more options.

Looking around on Newegg, I find that fans sold individually typically have two separate connectors: one for the LEDs and one for fan control. To the rest of the computer, these fans look like two separate peripherals, an LED device and a fan, that just happen to coexist in the same device. The fan control connector sometimes has just two wires for +12V and ground. Some have a third tachometer wire for reporting speed, and some have a fourth wire for built-in PWM control. Here’s an example: the Vetroo V5, a CPU cooler whose fan has two connectors, a 4-pin CPU cooler fan control connector and a JRAINBOW RGB LED plug. These should be simple and straightforward to interface.

More challenging are fans that use an intermediate hub. The hub has one connector for power and one for JRAINBOW, consolidating those signals into a proprietary connector. I started contemplating this particular Rosewill RGBF-S12001 three-pack of fans, which uses such a design. I think I can decipher the role of each wire so I could bypass the hub and control each fan directly. This multipack also has a remote control for direct control of the hub without a computer. This appeals to me, because independent control means I don’t need a PC involved as I probe how it works. If I should make a fatal mistake (say, accidentally short-circuit something) it should only kill the hub or the fan and not an entire computer.

As I scrolled down, though, Newegg showed me several other items under “similar products”. I saw an even more discounted three-pack of fans: the Asiahorse Magic-i 120 V2. Three fans for fourteen bucks, well within my impulse buy range. I’ll buy the pack and see what it does.

Repurposing PC RGB LED Accessories

I’m quite comfortable poking around inside the tower case of a DIY PC. I’ve built a few PCs from components, and I’ve bought a few that came prebuilt by a shop. In my PC experience I’ve focused on the functional side of things and haven’t paid much attention to the aesthetics side. There’s a whole segment of the market enchanted with flashy LEDs. As an electronics hobbyist, it had been hard for me to look at those accessories seriously. I know how little addressable RGB LED modules cost in bulk, and it is quite clear those PC accessories were sold at a huge profit margin. I would be more inclined to build my own LED creations like Glow Flow than to pay a premium just for overpriced flashy lights in my PC.

But what if I looked beyond products’ MSRP? Since this particular market is about novelty, just like the clothing fashion industry there is a high turnover of products. The huge profit margin entices startups hoping to make it big, and most don’t. High product turnover means there are those who upgrade to the latest look. Each of these scenarios can lead to products sold well below MSRP: (1) clearance sales on unsold inventory of “old looks” (2) liquidation sales from bankrupt companies, and (3) secondhand markets (eBay/Craigslist) for those who have upgraded. A bargain hunter can find LED-bedazzled gear well below the price of new equipment, and in extreme cases even lower than price of buying new WS2812 modules directly.

Well, now I’m interested! Not for my PC, but for potential future electronics projects. That means looking at these products and trying to figure out how I can repurpose them. I started by looking at the product pages of a few PC hardware component companies and their advertising spiel for RGB LED accessories.

  • Corsair uses the iCUE branding as an umbrella covering aesthetics-based accessories. Some are the addressable LEDs I care about, but not all of them.
  • Gigabyte uses the name RGB Fusion.
  • Asus calls theirs Aura.
  • MSI calls theirs Mystic Light.

I hit a gold mine on MSI’s Mystic Light site, because their FAQ included an entry “What is Mystic Light Extension” that gave the following description:

Mystic Light Extension is a feature of Mystic Light software which allows user to control colors and effects of partner’s product such as RGB LED Strips, RGB PC Fans or RGB PC Case via on-board JRGB / JRainbow / JCorsair pin header.

    JRGB (4-Pin / PIN-definition: 12V/G/R/B): The JRGB pin header provides up to 3A (12V) power supply for non-addressable 5050 RGB LED solution showing single color.
    JRAINBOW (3-Pin / PIN-definition: 5V/D/-/G): The JRainbow pin header provides up to 3A (5V) power supply for addressable WS2812 RGB LED (ARGB) solution showing rainbow color.
    JCORSAIR (3-Pin / PIN-definition: 5V/D/G): The JCorsair pin header provides up to 3A (5V) power supply to Mystic Light software compatible CORSAIR devices.

This tells me products that use the JRGB header are colorful but not individually addressable. Products using JRAINBOW or JCORSAIR are 5V devices that use a single data pin and no clock. This is a very strong hint these products are made of either WS2812 (“NeoPixel”) LEDs or alternatives that understand the same control signals. I will go look for a bargain and try one out.

TMP36 Temperature Sensor + ESP8266 = Not a Great Team

After successfully building a small circuit for 3-pin fan PWM control, I decided to add a temperature sensor, giving me the option to make it smarter about adjusting speed (or stopping entirely) based on temperature. There are many temperature sensor options; I decided to start simply and inexpensively with a sensor that returns an analog voltage representing temperature. A batch of TMP36 (*) seemed like a good starting point.

According to the Analog Devices datasheet, TMP36 output pin voltage is 0.75V at 25 degrees Celsius, rising 0.01V for every additional degree. For my first draft I wired it to my Wemos D1 Mini module’s analog input pin, but I had to adjust the scaling because a D1 Mini includes a voltage divider to scale an input of 0-3.3V down to the ESP8266 ADC range of 0-1V. This voltage divider (and the math necessary to convert it back) added error to the reading. Since I intend to use this sensor for measuring room temperature, I do not expect to measure above 50 degrees Celsius (122 Fahrenheit), which corresponds to 1 volt. Thus, I soldered a wire to connect the TMP36 signal directly to the ESP8266 module’s analog input, bypassing the voltage divider.
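Those datasheet numbers reduce to a simple scale and offset, which is worth writing down explicitly since it reappears later as filter math in the ESPHome configuration:

```python
def tmp36_celsius(volts):
    # TMP36 datasheet: 0.75V at 25 degrees C, 0.01V per degree.
    # Algebraically equivalent to: T = V * 100 - 50
    return (volts - 0.75) * 100 + 25.0

# 0.75V reads as 25 degrees C; the 1V ADC ceiling corresponds to 50 degrees C
```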

I noticed there appears to be a startup period where the temperature reading is 2-3 degrees too high; after about 5-10 minutes, it drops to a steady state temperature and stays there. I’m not sure if this startup behavior comes from the TMP36 sensor or from the ESP8266 ADC. Either way, it means I cannot use this sensor in a sleep/read/sleep/read cycle, because such a quick read will always return a too-high value from this startup behavior.

With the initial breadboard test complete, I built a dedicated temperature sensor node with just an ESP8266 with a TMP36. In the interest of compactness, I decided to solder the sensor directly to ESP8266 module pins.

Upon startup, I saw that it reported a temperature a few degrees too high, but I thought that was just the startup behavior I had already noticed. Instead of dropping, though, the reading kept going up. I thought I had a bad TMP36 until I realized it was accurately reading heat generated by the running ESP8266. According to my USB power meter, the module consumed less than a third of a watt, but that’s plenty of heat for a directly-mounted TMP36 to pick up.

If I wanted to measure a room’s air temperature and not temperature of a running ESP8266, I needed to give the sensor some distance. But even then, the readings weren’t reliable.

A little web research taught me that the ESP8266 ADC isn’t very precise, nor is it calibrated. For most applications, being off by a few hundredths of a volt is a negligible error, but here every hundredth of a volt represents an entire degree of temperature, which is decidedly not negligible. Taking multiple values and averaging them did help with precision, but not accuracy. Knowing what I know now, in hindsight I should have done this with an ESP32: those chips (or at least newer units) have their ADCs calibrated at the Espressif factory. Though it is more likely that I will try a different temperature sensor in a future project. Either way, right now I have TMP36 on hand with a circuit board suitable for controlling 3-wire PC cooling fans. Time to put them together to do something useful.
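The precision-versus-accuracy distinction can be made concrete with a toy model of an uncalibrated ADC. The noise and bias figures below are made up for illustration, not measured from my hardware:

```python
import random

random.seed(0)  # deterministic for illustration

def read_adc(true_volts, noise=0.02, bias=0.03):
    # Toy ADC model: random noise plus a fixed (uncalibrated) bias.
    # A 0.03V bias corresponds to a 3 degree C error with a TMP36.
    return true_volts + bias + random.uniform(-noise, noise)

samples = [read_adc(0.75) for _ in range(1000)]
average = sum(samples) / len(samples)
# Averaging cancels the random noise (precision improves), but the fixed
# bias remains, so the result still reads 3 degrees too high (accuracy)
```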

Even though it doesn’t work very well, here’s an ESPHome YAML excerpt anyway. This will read a TMP36 every second and report average value every five minutes. TMP36 signal is assumed to have been soldered directly to ESP8266 analog input, bypassing Wemos D1 Mini voltage divider.

  - platform: adc
    pin: A0
    name: "Mobile Node Temperature"
    unit_of_measurement: "°C"
    update_interval: 1s
    accuracy_decimals: 2
    filters:
      - multiply: 100
      - offset: -50
      - sliding_window_moving_average:
          window_size: 450
          send_every: 300
          send_first_at: 15

(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

ESP8266 Controlling 4-Wire CPU Cooling Fan

I got curious about how the 4 wires of a CPU cooling fan interfaced with a PC motherboard. After reading the specification, I decided to get hands-on.

I dug up several retired 4-wire CPU fans I had kept. All of these were in-box coolers bundled with various Intel CPUs. And despite the common shape and Intel brand sticker, they were made by three different companies listed at the bottom line of each label: Nidec, Delta, and Foxconn.

I will use an ESP8266 running ESPHome to control these fans, because all the relevant code is already built and ready to go:

  • Tachometer output can be read with the pulse counter peripheral. Though I do have to divide by two (multiply by 0.5) because the spec said there are two pulses per fan revolution.
  • The ESP8266 PWM peripheral is a software implementation with a maximum usable frequency of roughly 1kHz, slower than the specified requirement. If this is insufficient, I can upgrade to an ESP32, which has a hardware PWM peripheral capable of running at 25kHz.
  • Finally, a PWM fan speed control component, so I can change PWM duty cycle from HomeAssistant web UI.

One upside of the PWM MOSFET built into the fan is that I don’t have to wire one up in my test circuit. The fan header pins were wired as follows:

  1. Black wire to circuit ground.
  2. Yellow wire to +12V power supply.
  3. Green wire is tachometer output. Connected to a 1kΩ pull-up resistor and GPIO12. (D6 on a Wemos D1 Mini.)
  4. Blue wire is PWM control input. Connected to a 1kΩ current-limiting resistor and GPIO14. (D5 on Wemos D1 Mini.)

ESPHome YAML excerpt:

  - platform: pulse_counter
    pin: 12
    id: fan_rpm_counter
    name: "Fan RPM"
    update_interval: 5s
    filters:
      - multiply: 0.5 # 2 pulses per revolution

  - platform: esp8266_pwm
    pin: 14
    id: fan_pwm_output
    frequency: 1000 Hz

  - platform: speed
    output: fan_pwm_output
    id: fan_speed
    name: "Fan Speed Control"

Experimental observations:

  • I was not able to turn off any of these fans with a 0% duty cycle. (Emulating pulling PWM pin low.) All three kept spinning.
  • The Nidec fan ignored my PWM signal, presumably because 1 kHz PWM was well outside the specified 25kHz. It acted the same as when the PWM line was left floating.
  • The Delta fan slowed linearly as duty cycle dropped to roughly 35%, at which point it ran at roughly 30% of full speed. Below that duty cycle, it remained at 30% of full speed.
  • The Foxconn fan responded down to roughly 25% duty cycle, where it ran at roughly 50% of full speed. I thought it was interesting that this fan responded to a wider range of PWM duty cycles but translated that to a narrower range of actual fan speeds. Furthermore, 100% duty cycle was not actually the maximum speed of this fan. Upon initial power up, this fan would spin up to a very high speed (judging by its sound) before settling down to a significantly slower speed that it treated as its “100% duty cycle” speed. Was this intended as some sort of “blow out dust” cleaning cycle?
  • These are not closed-loop feedback devices trying to maintain a target speed. If I set 50% duty cycle and started reducing the power supply voltage below 12V, the fan controller did not compensate: fan speed dropped alongside voltage.

Playing with these 4-pin fans was fun, but the majority of cooling fans in this market do not have built-in power transistors for PWM control. I went back to learn how to control those fans.

CPU Cooling 4-Wire Fan

Building a PC from parts includes keeping cooling in mind. It started out very simple: every cooling fan had two wires, one red and one black. Put +12V on the red wire, connect black to ground, done. Then things got more complex. Earlier I poked around with a fan that had a third wire, which proved to be a tachometer wire for reading current fan speed. The obvious follow-up is to examine cooling fans with four wires. I first saw this with CPU cooling fans and, as a PC builder, all I had to know was how to plug it in with the correct orientation. But now, as an electronics tinkerer, I want to know more details about what those wires do.

A little research found that the four-wire fan system was something Intel devised. Several sources cited URLs on http://FormFactors.org, which redirects to Intel’s documentation site. Annoyingly, Intel does not make the files publicly available, blocking them behind a registered login screen. I registered for a free account, and it still denied me access. (The checkmark next to the user icon means I’ve registered and signed in.)

Quite unsatisfying. But even if I can’t get the document from official source, there are unofficial copies floating around on the web. I found one such copy, which I am not going to link to because the site liberally slathered the PDF with advertisements and that annoys me. Here is the information on the title page which will help you find your own copy. Perhaps even a more updated revision!

4-Wire Pulse Width Modulation
(PWM) Controlled Fans
September 2005
Revision 1.3

Reading through the specification, I learned that the four-wire standard is backwards compatible with three-wire fans as those three wires are the same: GND, +12V supply, and tachometer output. The new wire is for a PWM control signal input. Superficially, this seems very similar to controlling fan speed by PWM modulating the +12V supply, except now the power supply stays fixed at +12V and the PWM MOSFET is built into the fan. How is this better? What real-world problems are solved by using an internal PWM MOSFET? The spec did not explain.

According to spec, the PWM control signal should be running at 25kHz. Fan manufacturers can specify a minimum duty cycle. Fan speed for duty cycle less than the minimum is open for interpretation by different implementations. Some choose to ignore lower duty cycles and stay running at minimum, some interpret it as a shutoff signal. The spec forbids pull-up or pull-down resistor on the PWM signal line external to the fan, but internal to the fan there is a pull-up resistor. I interpret this to mean that if the PWM line is left floating, it will be pulled up to emulate 100% duty cycle PWM.
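The minimum-duty-cycle rule can be pictured with a small model. The `min_duty` and `min_speed` values below are illustrative placeholders (real fans vary), and the linear mapping above the minimum is my simplifying assumption, not something the spec mandates:

```python
def fan_speed_fraction(duty, min_duty=0.35, min_speed=0.30, below_min="hold"):
    # Illustrative model of the spec's minimum duty cycle rule.
    # min_duty / min_speed are placeholders; actual values vary by fan.
    if duty < min_duty:
        # Behavior below the minimum is implementation-defined:
        # some fans hold their minimum speed, others shut off entirely.
        return min_speed if below_min == "hold" else 0.0
    # Above the minimum, assume speed scales linearly up to full speed.
    return min_speed + (1.0 - min_speed) * (duty - min_duty) / (1.0 - min_duty)
```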

Reading the specification gave me the theory of operation for this system, now it’s time to play with some of these fans to see how they behave in practice.

Computer Cooling Fan Tachometer Wire

When I began taking apart a refrigerator fan motor, I expected to see the simplest and least expensive construction possible. The reality was surprisingly sophisticated, including a hall effect sensor for feedback on fan speed. Seeing it reminded me of another item on my to-do list: I’ve long been curious about how computer cooling fans report their speed through that third wire. The electrical details weren’t important for building PCs; all I needed to know was to plug it the right way into a motherboard header. But now I want to know more.

I have a fan I modified for a homemade evaporative cooler project, removing its original motherboard connector so I could power it with a 12V DC power plug. The disassembled connector makes it unlikely to be used in future PC builds and also makes its wires easily accessible for this investigation.

We see an “Antec” sticker on the front, but the actual manufacturer had its own sticker on the back. It is a DF1212025BC-3 motor from the DF1212BC “Top Motor” product line of Dynaeon Industrial Co. Ltd. Nominal operating power draw is 0.38A at 12V DC.

Even though 12V DC was specified, the motor spun up when I connected 5V to the red wire and grounded the black wire. (Drawing only 0.08 A according to my bench power supply.) Probing the blue tachometer wire with a voltmeter didn’t get anything useful. Oscilloscope had nothing interesting to say, either.

To see if it might be an open collector output, I added a 1kΩ pull-up resistor between the blue wire and +5V DC on the red wire.

Aha, there it is. A nice square wave with 50% duty cycle and a period of about 31 milliseconds. If this period corresponds to one revolution of the fan, that works out to 1000/31 ~= 32 revolutions per second or just under 2000 RPM. I had expected only a few hundred RPM, so this is roughly quadruple my expectations. If this signal was generated by a hall sensor, it would be consistent with multiple poles on the permanent magnet rotor.

Increasing the input voltage to 12V sped up the fan as expected, which decreased the period down to about 9ms. (The current consumption went up to 0.22 A, lower than the 0.38 A on the label.) The fan is definitely spinning at some speed far lower than 6667 RPM. I think dividing by four (1666 RPM) is in the right ballpark. I wish I had another way to measure RPM, but regardless of actual speed the key observation today is that the tachometer wire is an open-collector output that generates a 50% duty cycle square wave whose period is a function of the RPM. I don’t know what I will do with this knowledge yet, but at least I now know what happens on that third wire!

[UPDATE: After buying a multichannel oscilloscope, I was able to compare the fan tachometer signal against fan behavior and concluded that a fan tachometer wire signals two cycles for each revolution, implying this fan was spinning at about 3333 RPM, which still seems high.]

Notes on “Data Oriented Design” Textbook

I spent a lot of time playing Hardspace: Shipbreaker because I enjoyed the game, and I learned it was an example of Unity’s Data Oriented Technology Stack (DOTS) in action. I was curious about the promised benefits of DOTS and wanted to know more. Unity’s learning portal has published several guides on the topic, including DOTS Best Practices. Among its list of pointers to background primers is the book Data-Oriented Design by Richard Fabian. (*) The physical book and its various digital editions give a full treatment of the topic, but for those who just want to skim the basic concepts, the author has made the raw text available online.

And by “raw text” I mean that almost literally. This is a bare-bones site (“HTML 1.0”) with just content in paragraphs. (Yes, HTML <P> tags.) Tables of information are in actual <TABLE> elements, which see little use on modern sites. The CSS style sheet is used for its original purpose: declaring text styling, with no layout tricks or funny business with <div> mutations. The only images I see are code listings, a.k.a. pictures of more text. I enjoyed reading such a feed of raw knowledge, and here are a few notes.

If I were to summarize the book, I would go with “Why Game Developers Should Learn from Database Gurus.” To be clear, this is not a book about databases, but it discusses many concepts from the world of databases because they are inherently data-oriented. Plus, databases have a tremendous history of developing methods to work in parallel, taking advantage of multicore processors where most modern applications struggle to do the same. That requires certain ways of thinking about problems, and we can learn from those lessons.

While it would be silly to say every game should be built on SQL queries, it makes sense to consider data-oriented approaches for certain domains. After all, databases underlie a lot of what we do with computers. So many applications are merely custom variations on the database CRUD pattern that we now have products like Amazon Honeycode built around letting people create CRUD apps without code. For core game loop data, a general-purpose database would incur too much overhead, but we can tailor game code to adopt database concepts where they improve game performance.

This book’s focus on game development surprised me, given that neither the title nor the abstracts I read mention it. It does talk about Unity in places, but only because Unity is a game engine. The book predates Unity’s currently ongoing DOTS overhaul, so DOTS itself is never mentioned. Code examples are in C/C++ rather than the C# used by Unity, but since they exist mostly to illustrate concepts, the specific programming language matters little.

The text starts with a lot of thick theoretical foundations that I found difficult to chew through, with many words I found ambiguous. Things didn’t start to click for me until some concrete examples were shown. There were examples of how object-oriented programming becomes unwieldy in large projects, and I found myself nodding a lot from my own career experience. Data-oriented design approaches were then presented as solutions to many of those problems, and they certainly sound good in theory! But I have yet to see them in practice on large projects and, more importantly, this book didn’t spend any time on how DOD might stumble into unwieldy problems of its own. I don’t know how to avoid them if I don’t even know what they look like! Another way to present my wariness is this table:

Development Approach    | Greatness in Theory | Hurdles in Reality
Object-Oriented Design  | Discussed           | Discussed
Data-Oriented Design    | Discussed           | …?

After reading this book, I’m convinced enough of its merits to give data-oriented design a try, but I will have to keep my eyes open for where it stumbles, because I’ve been writing code long enough to be sure of one thing: every nifty new solution to existing problems brings its own new and unique problems. In time, with practice, I’m sure I’ll learn where DOD is the wrong tool for the job. The good news is that, thanks to the generalized nature of this book, I don’t necessarily have to try applying these concepts inside Unity. Which is just as well, because Unity is still rolling through a multi-year transition to DOD.

(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.