Vue.js Beginner Learning Checkpoint

I’ve just finished reading the top-level set of Vue.js guides published on their documentation site. This followed a quick hands-on run through their Quick Start application and tutorial. I learned a lot about what Vue is (and, just as importantly, what it is not). As an added bonus, it hasn’t been too long since I went through the Angular counterparts: the shopping cart on StackBlitz, the Tour of Heroes hands-on tutorial, and a few developer guides. Running through these two frameworks back-to-back lets me compare and contrast their design decisions, and see how each solves the common problems web developers encounter.

Vue.js is very flexible in how it can be deployed, making it a great option for developers who find Angular’s all-in-one package too constrictive for their preferences. Vue is also lighter weight: an empty Vue app is a tiny fraction of the size of an equivalent empty Angular app. This is possible because Vue is a newer framework with finer-grained modularization. Newer means there’s less legacy compatibility code to carry forward, and some framework-level features are no longer necessary because they are included in newer browsers. More extensive modularization means some features inherent to Angular (and thus necessarily part of an empty Angular app) are optional components in Vue and can be excluded.

But such flexibility also comes with downsides for a beginner, because every “you are free to do what you want” is also a “you have to figure it out on your own.” This was why I got completely lost looking at Polymer/Lit. I thought Vue occupied a good middle ground between the restrictive “do it our way” Angular design and the disorienting “do whatever you want” of Polymer/Lit. In the short term I think I will continue learning web development within Angular, because it is a well-defined system I can use. If I stick with learning web development, I expect I’ll eventually start feeling that Angular’s rigidity cramps my style. When that happens, I’ll revisit Vue.

Fun with Magnetic Field Viewing Film

It was fun to visualize magnetic fields with an array of retired Android phones. It was, admittedly, a gross underutilization of hardware power, more a technical exercise than anything else. There are much cheaper ways to visualize magnetic fields. I learned from iron filings in school way back when, but those got extremely messy. Ferrofluid in a vial of mineral oil is usually much neater, unless the vial leaks. I decided to try another option: magnetic field viewing films.

Generally speaking, such films are a very thin layer of oil sandwiched between top and bottom sheets, with magnetic particles suspended within. The cheap ones look like they just use fine iron particles, and we see the field as slightly different shades of gray caused by different orientations of uniformly colored particles. I decided to pay a few extra bucks and go a little fancier: films like the unit I bought (*) have much higher visual contrast.

As far as I can tell, the particles used in these films present different colors depending on the orientation of the magnetic field. When the field is perpendicular to the film, as when facing one of the magnet’s poles, the film shows green. When the field is parallel, we see yellow. Orientations between those two extremes show colors in between. When there’s no magnetic field nearby, we see a muddy yellow-green.

Playing with a Hall switch earlier, I established that this hard drive magnet has one pole on one half and the other pole on the other half. Putting the same magnet under this viewing film confirms those findings, and it also confirms this film doesn’t differentiate between north and south poles: they both show as green.

This was the simplest scenario: a small disc salvaged from an iPad cover shows a single pole on the flat face.

Similarly simple is the magnet I salvaged from a Microsoft Arc Touch Mouse.

This unexpectedly complex field was generated by a magnet I salvaged from a food thermometer. I doubt this pattern was intentional, as it does nothing to enhance the magnet’s mission of keeping the thermometer stuck to the side of my refrigerator. I assume this complex pattern was an accidental result of a whatever-was-cheapest magnetization process.

The flat shape of this film was a hindrance when viewing the magnetic field of this evaporator fan rotor, because the rotor shaft got in the way. The rotor is magnetized so each quarter is a magnetic pole. It’s difficult to interpret from a static picture, but moving the rotor around under the film and watching the response interactively makes the relationship more visible. The film is also higher resolution and responds faster than an array of phones.

This disc was one of two that came from a 1998 Toyota Camry’s stock tape deck head unit. I don’t know exactly where it came from because I didn’t perform this particular teardown.

We can see it laid out on the tabletop in this picture, bottom row center next to the white nylon gears.

And finally, the motor that set me on this quest for magnetic viewing film: the rotor from a broken microwave turntable motor. Actually, looking at the plastic hub color, I think this is the broken rotor that got replaced by the teardown rotor sometime later.

And since I’m on the topic, I dug up its corresponding coil from that turntable motor teardown. Curious if I would see a magnetic field, I connected it to my benchtop DC power supply and slowly increased the voltage. I could only get up to 36V DC and didn’t notice anything. This coil was designed for 120V AC, so I wasn’t terribly surprised. I’ll have to return to this experiment when I can safely apply higher voltage to this coil.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Array of Android Magnetometers

Curious about magnetometers, I wrote a small web app that used a Chrome preview feature to access the three-axis magnetometer hardware integrated into many Android phones. I showed that three-dimensional data in the form of a virtual compass needle and had fun watching it react to magnets held near my phone.
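
For reference, the heart of that web app looks something like the following. This is a minimal sketch of the Magnetometer interface from the W3C Generic Sensor API; variable names are mine, and my actual app wraps this in more error handling:

// Requires the experimental sensor feature enabled via chrome://flags.
const sensor = new Magnetometer({ frequency: 10 }); // 10Hz maximum rate
sensor.addEventListener('reading', () => {
  // Field strength in microtesla along the device's three axes.
  console.log(`x=${sensor.x} y=${sensor.y} z=${sensor.z}`);
});
sensor.addEventListener('error', (event) => {
  console.error('Sensor error:', event.error.name);
});
sensor.start();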

Then for more fun, I ran my web app on more phones! I pulled out my entire stockpile of retired Android phones, a collection of my own retired phones and those retired by my friends for one reason or another. (I ask my friends for them after they move on to new phones.) It’s a collection of cracked screens, flaky touch input, exhausted batteries, and other reasons why people decide to get a new phone. I found the subset that could boot up and run my web app. I had to go to “chrome://flags” on each of them to activate the preview feature, of course, but I believe that was still less onerous than if I had written a native Android app. To install my own native app, I would have to put each phone into developer mode and sideload my app via USB, a more complex procedure.

The first round of this experiment, with seven old phones running, exposed some problems, immediately visible in the fact that the virtual compass needles pointed in wildly different directions. They should all be aligned to Earth’s magnetic field! Tracking down various ideas, I found two that made the biggest difference:

  1. The values given to web apps are apparently “calibrated” values, but the calibration routine had not yet been run. This is something that happens behind the scenes. All I had to do was pick up each phone and do the figure-8 twirl. My app continued running while this was occurring and, once I set the phone back down, its virtual compass needle pointed in a different direction than it had a minute earlier.
  2. The phones needed to be spaced further apart. Obvious in hindsight: there are a few strong magnets inside each phone for its speakers and possibly other peripherals. While each phone might properly isolate its magnetometer from its own magnets, it isn’t necessarily isolated from another phone sitting nearby.

Things looked better on the second round of this experiment, after accounting for those two factors and waking up an eighth phone to join the fun. The needles still don’t completely agree, but at least they all point in the same general direction. And when I wave a strong magnet through the air, all of those virtual needles react and point to my magnet. It was more fun than I had expected, even if it ridiculously underutilized the capabilities of these old phones and was nowhere near the best tool for the job. If the goal is to visualize magnetic fields, we have far easier and cheaper ways to do it, like a sheet of magnetic field viewing film.


My exploratory project is publicly available on GitHub to run on your own Android phone (or phones).

Visualizing Magnetometer Data with Three.js

I’m happy that my simple exploratory web app was able to obtain data from my phone’s integrated magnetometer. I recognize there are some downsides to how easy it was, but right now I’m happy I can move forward. Ten times a second (10Hz is the maximum supported update rate) my callback receives x, y, and z values in addition to auxiliary data like a timestamp. That’s great, but the underlying meaning is not very intuitive for me to grasp. What I want next is a visualization of that three-axis magnetometer data.

I turned to the JavaScript 3D graphics library Three.js. The last time I used Three.js was to visualize the RGB332 color space, using a 2D projection to help me make sense of data along three dimensions of color: a cylinder representing HSV color space and a rectangular solid representing RGB. Now I want to visualize a single vector in three-dimensional space representing the local magnetic field as reported by my phone’s magnetometer. I was a little intimidated by the math for calculating 3D transforms. I had tried to make my RGB332 color app transition between HSV and RGB color space, but it never looked right because I didn’t understand the 3D transform math.

Fortunately, this time I didn’t have to do any of my own math at all. Three.js has a built-in function that accepts the x, y, and z components of a target coordinate and calculates the rotation required to have a 3D object look at that point. My responsibility is to create an object that will convey this information. I chose to follow the precedent of an analog compass, which is built around a small magnetic needle shaped like a narrow diamond, one half painted red and the other half painted white. For this 3D visualization I created a shape out of two cones, one red and one white. When this shape looks at the magnetometer vector, it functions very much like the sliver of magnet inside a compass.
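
Here’s a minimal sketch of that needle, under my own choices of names, dimensions, and colors (illustrative only; the app’s real code is on GitHub):

import * as THREE from 'three';

// Standard Three.js boilerplate: scene, camera, renderer.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 100);
camera.position.z = 5;
const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// Two cones joined base-to-base form the diamond-shaped needle.
// Cones point along +Y by default, so rotate each half onto the Z axis,
// because lookAt() aims an object's +Z axis at the target point.
const needle = new THREE.Group();
const redHalf = new THREE.Mesh(
  new THREE.ConeGeometry(0.2, 1, 16),
  new THREE.MeshBasicMaterial({ color: 0xff0000 }));
redHalf.rotation.x = Math.PI / 2;    // apex now points toward +Z
redHalf.position.z = 0.5;
const whiteHalf = new THREE.Mesh(
  new THREE.ConeGeometry(0.2, 1, 16),
  new THREE.MeshBasicMaterial({ color: 0xffffff }));
whiteHalf.rotation.x = -Math.PI / 2; // apex now points toward -Z
whiteHalf.position.z = -0.5;
needle.add(redHalf, whiteHalf);
scene.add(needle);

// With the needle at the origin, pointing it along the field vector
// is a single lookAt() call per sensor reading.
const sensor = new Magnetometer({ frequency: 10 });
sensor.addEventListener('reading', () => {
  needle.lookAt(sensor.x, sensor.y, sensor.z);
});
sensor.start();

renderer.setAnimationLoop(() => renderer.render(scene, camera));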

As a precaution, I added a check for WebGL before firing up Three.js code. I was pretty confident any Android Chrome that supported the magnetometer API would support WebGL as well, but it was good practice to confirm.
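
The check can be as simple as asking the browser for a WebGL context, essentially the test the Three.js documentation suggests:

// Returns true if the browser can create a WebGL context.
function webglAvailable() {
  try {
    const canvas = document.createElement('canvas');
    return !!(window.WebGLRenderingContext &&
      (canvas.getContext('webgl') || canvas.getContext('experimental-webgl')));
  } catch (e) {
    return false;
  }
}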

One thing I’m not doing (but should) is accounting for screen orientation. Chrome developers have added a feature to automatically adjust for screen orientation, but right now I’m just going to deactivate auto-rotate on my phone (or… phones!)
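
If I understand the feature correctly, it’s the referenceFrame option on the sensor constructor, though I haven’t tested this myself:

// 'screen' asks the browser to remap sensor axes to match the current
// screen orientation; the default is 'device'. (Untested assumption.)
const sensor = new Magnetometer({ frequency: 10, referenceFrame: 'screen' });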


Source code for my exploratory project is publicly available on GitHub.

Magnetometer API Privacy Concerns

Many Android phones have an integrated magnetometer available to native apps. Chrome browser for Android also makes that capability available to web apps, but right now it is hidden by default as a feature preview. Once I enabled that feature, I was able to follow some sample code online and quickly obtain access to magnetometer data in my own web app. That was so easy! Why was it blocked by default?

Apparently, the answer (or at least part of it) was that it was too easy. Making magnetometer and other hardware sensor data freely available to web apps would feed into hardware-based browser fingerprinting. Even though magnetometer data by itself might be innocuous, it could be combined with other seemingly innocent data to uniquely identify users, thereby piercing privacy protections. This is bad, and purportedly why Apple has so far declined to support sensor APIs.

That article was from 2020, though, and the web moves fast. When I read up on the magnetometer API on MDN (Mozilla Developer Network), I encountered an entire section on obtaining user permission to read hardware sensor data. Since I didn’t have to do any of that for my own test app to obtain magnetometer data, I guess this requirement is only present in Mozilla’s own Firefox browser. Or perhaps it was merely a proposal hoping to satisfy Apple’s official objection to supporting sensor APIs.
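
MDN’s examples revolve around the Permissions API, something along these lines:

// Check permission state before starting the sensor (per MDN's examples).
const sensor = new Magnetometer({ frequency: 10 });
const status = await navigator.permissions.query({ name: 'magnetometer' });
if (status.state === 'granted') {
  sensor.start();
} else {
  console.log('Magnetometer permission not granted:', status.state);
}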

I found no mention of Mozilla’s permission management framework in the official magnetometer API specification. There’s a “Security and Privacy Considerations” section but it’s pretty thin and I don’t see how it would address fingerprinting concerns. For what it’s worth, “limiting maximum sample frequency” was listed as a potential mitigation, and Chrome 111 only allows up to 10Hz.

Today users like myself have to explicitly activate this experimental feature. And at the top of the “chrome://flags” page where we do so, there’s an explicit warning that enabling experimental features could compromise privacy. In theory, people opting in to the magnetometer API today are aware of the potential for abuse, but that risk has to be addressed before the feature rolls out to everyone. In the meantime, I have opted in and I’m going to have some fun.

Magnetometer Quick Look

Learning about Hall effect switches and sensors led to curiosity about more detailed detection of magnetic fields. It’s always neat to visualize something we cannot see with our eyes. Hall sensors detect a magnetic field along a single axis at a single point in space. What if we could expand beyond those limits to see more? Enter magnetometers.

I thank our cell phones for high-volume magnetometer production, as a desire for better on-device mapping led to magnetometer integration: sensitive magnetometers can detect our planet’s magnetic field and act as a digital compass to better orient the map on our phones. Since a phone is not always laid flat, these are usually three-axis magnetometers that give us a direction as well as a magnitude for the detected magnetic field.
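
As an aside, when the phone does lie flat, the compass part is just trigonometry on two of the three axes. A sketch, ignoring tilt compensation, magnetic declination, and per-device axis conventions:

// Rough compass heading in degrees from a flat-lying 3-axis reading.
function headingDegrees(x, y) {
  const radians = Math.atan2(y, x);
  return (radians * (180 / Math.PI) + 360) % 360;
}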

But that’s still limited to a single point in space. What if we want to see the field across an area, or a volume? I started dreaming of a project where I build a large array of magnetometers and plot their readings, but I quickly got bogged down in details, and it became clear I would lose interest and abandon such a project before I could bring it to completion.

Fortunately, other people have played with this idea as well. My friend Emily pointed me to Ted Yapo’s “3D Magnetic Field Scanner” project, which mounted a magnetometer to a 3D printer’s print head carriage. By issuing G-code motion commands to the 3D printer control board, it allowed precise positioning of the sensor within the accessible print volume of the 3D printer. The results can then be plotted for a magnetic field visualization. This is a great idea, and it only needs a single sensor! The downside is that such a scheme only works for magnetic fields that stay still while the magnetometer is moved around them. I wouldn’t be able to measure, say, the fields generated by an electric motor’s coils while it is running. But it is still a fun and more easily accessible way to see the magnetic world.

I started window shopping magnetometer breakout boards from Adafruit, which has an entire section of its store dedicated to such devices. Product #5579 is built around the MMC5603 chip, whose sensitivity is designed for reading the Earth’s magnetic field. Outside of compass scenarios, such a sensitive chip would quickly saturate near a magnet. Adafruit recommends product #4366, built around the TLV493D chip, for use with strong magnets.

I thought it would be interesting to connect one of these sensors to an ESP8266 and display its results on a phone web interface, the way I did for the AS7341 spectral color sensor. I was almost an hour into this line of thought before I realized I was being silly: why buy a magnetometer to connect to an ESP8266 to serve readings over HTTP to display in a phone browser, when my Android phone has a magnetometer in it already?

Hall Effect Sensors Quick Look

Learning about brushless direct current (BLDC) motors, I keep coming across Hall-effect sensors in different contexts. It was one of the things in common between a pair of cooling fans: one from a computer and another from a refrigerator.

Many systems with BLDC motors can run “sensorless” without a Hall sensor, but I hadn’t known how that was possible. I’ve since learned they depend on an electromagnetic effect (“back EMF”) that only comes into play once the motor is turning. To start from a stop, sensorless BLDC motors depend on an open-loop control system “running blind.” If the motor behaves differently from what that open-loop control expects, the startup sequence fails. This explains the problem that got me interested in BLDC control! From that, I conclude a sensor of some sort is required for reliable BLDC motor startup when motor parameters are unknown and/or the motor is under unpredictable physical load.

Time to finally sit down and learn more about Hall effect sensors! As usual, I started with Wikipedia for general background, then moved on to product listings and datasheets. Most of what I found can be more accurately called Hall effect switches: they report a digital (on/off) response to their designed magnetic parameters. Some of them look for magnetic north, some look for magnetic south, others look for a change in magnetic field rather than a specific value. Some are designed to pick up the weak fields of distant magnets; others are designed for close contact with a rare earth magnet’s strong field. Sensors designed to detect things like a laptop lid closing don’t need to be fast, but sensors designed for machinery (like inside a brushless motor!) need high response rates. All of these potential parameters multiply out to hundreds or thousands of different product listings on an electronic component supplier website like Digi-Key.

With a bit of general background, I dug up a pair of small Hall effect sensor breakout boards (*) in my collection of parts. The actual sensor has “44E” printed on it, and from there I found datasheets telling me it is a digital switch that grounds the output pin when it sees one pole of a magnet. If it sees the other pole, or if there is no magnet in range at all, the output pin is left floating. Which pole? Um… good question. Either I’m misunderstanding the nomenclature, someone made a mistake in one of these conflicting datasheets, or maybe manufacturers of “44E” Hall switches aren’t consistent about which pole pulls down the output pin.

Fortunately, the answer doesn’t matter for me right now. This breakout board was intended for use with microcontrollers like Arduino projects, and it also has an onboard LED to indicate its output status. That is good enough for me to start. I connected 5V to the center pin, ground to the pin labeled “-”, and left the “S” pin unconnected. The onboard LED illuminated when I held the board up against one pole. When held against the opposite pole, or when there’s no magnet nearby, the LED stays dark.

I also knew there was a Hall sensor integrated into the ESP32. That one is not just an on/off switch; it can be connected to one of the ESP32’s ADC channels to return an analog value. Sounds interesting! But ESP32 forums report the sensor is only marginally useful on the type of ESP32 development board I use: the ESP32 chip itself is packed tightly alongside other chips, under a metal RF shield, resulting in a very noisy magnetic environment.

Looking more into similar standalone sensors, I learned some keywords. To get more data about a nearby magnet, I might want an “analog” sensor that reports a range of values instead of on/off relative to some threshold. Preferably the output changes in “linear” response to the magnetic field, and to tell the difference between either pole or no magnet at all, I’d want a “bipolar” sensor. Searching on Digi-Key for these parameters, sorted by lowest cost, found the TI DRV5053: an analog bipolar Hall effect sensor with linear output, available in six different levels of sensitivity and two different packages. And of course, there are other companies offering similar products in their own product lines.

They might be fun to play with, but they only detect magnetic field strength at a single point along a single axis. What if I wanted to detect magnetic fields along more than one axis? That thought led me to magnetometers.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Brushless Motors with Two(?) Phases

During teardowns, I usually keep any motors I come across, even if I have no idea how I’d use them. Recently I got a pair of hard drive motors spinning again, a good step forward in my quest to better understand brushless direct current (BLDC) motors. I’ve also learned enough to spot some differences between motors. Those that I’ve successfully spun up are three-phase motors, with three sets of coils energized 120 degrees out of phase with each other to turn the rotor. But not all of the motors I’ve disassembled fit this description.

There’s a class of motors with only two sets of coils. Based on what I know of three-phase brushless motors, that would imply the two sets of coils are 180 degrees out of phase. A naive implementation would have no control over which direction the rotor spins, but I’ve found these in cooling fans, where the direction of spin is critical, so there must be more to this story. (If it doesn’t matter which direction the motor spins, we only need a single coil.)

What I’ve observed so far is that a Hall-effect sensor is an important part of this mystery: I looked up the control chip found inside a computer cooling fan and read that it had an integrated Hall sensor.

A Hall sensor is also part of this refrigerator evaporator fan motor control circuit.

Searching online for an explanation of how these motors work, I found the thread “How do single phase BLDC motors start in proper direction?” on Electronics StackExchange. I don’t fully understand the explanation yet, but I do understand these motors aren’t as symmetric as they look. A slight asymmetry enforces the correct turning direction. The Hall sensor adds a bit of cost, but I guess it is cheaper than additional coils.

Even better, that thread included a link to Electric Drive Fundamentals on Electropaedia. This page gives an overview of fundamental electromagnetic principles that underpin all electric motors. I knew some of these, but not all of them, and certainly not enough to work through the mathematical formulas. But I hope studying this page will help me better understand the motors I find as I take things apart.

Independent of building an understanding of electromagnetic fundamentals, I also want to better understand Hall sensors critical to running certain motors.

Two Hard Drive Motors on BLHeli_S Controller

I bought some brushless motor controllers meant for multirotor aircraft: a four-pack intended for quadcopter drones. But instead of running a motor turning a propeller, I started with a small motor salvaged from a laptop optical drive. (CD or DVD, I no longer remember.) It spun up with gusto, and I watched the motor control signals under an oscilloscope. That was interesting, and I shall continue these experiments with more of my teardown detritus: computer hard drive motors.

2.5″ Laptop Hard Drive

This hard drive was the performance bottleneck in a Windows 8 era tablet/laptop convertible; the computer was barely usable with modern software until this drive was replaced with an SSD. Once it was taken out of use, I took it apart to see all the intricate mechanical engineering necessary to pack hard drive mechanicals into 5mm of thickness. The important detail today is these three pads on the back, where its original control board made electrical contact with the spindle motor.

They are now an ideal place for soldering some wires.

This motor is roughly the same size as the CD/DVD motor, but because I never figured out how to remove the hard drive platter, it has significantly more inertia. This probably contributed to the inconsistent startup behavior I saw. Sometimes the drone motor controller could not spin up the motor: it would just twitch a few times and give up, and I had to drop the PWM control signal back down to zero throttle and try again. Something (maybe the platter inertia, maybe something else) is outside the target zone of the controller’s spin-up sequence. This has been a recurring issue and my motivation to learn more about brushless motors.

Side note: this motor seemed to have a curious affinity for the “correct” direction. I had thought brushless motors didn’t care much which direction they spun, but when I spun the platter by hand (without power) it would coast for several seconds in the correct direction yet stop almost immediately if I spun it backwards. There’s a chance this is just my wrist applying more power in one direction than the other, or there might be something else. It might be fun to sit down and get scientific about this later.

3.5″ Desktop Hard Drive

I then switched out the 2.5″ laptop hard drive motor for a 3.5″ desktop computer hard drive motor. This isn’t the WD800 I took apart recently, but another desktop drive I took apart even further back.

I dug it out of my pile because it already had wires soldered to the motor from a previous experiment trying to use the motor as a generator. The data storage platters had been removed from this hard drive, so I expected fewer problems here, but I was wrong. It was actually more finicky on startup and, even when it started, it never spun up very fast. If I turned the speed control signal beyond a certain (relatively slow) point, the coil energizing sequence fell out of sync with the rotor, which jerkily came to a halt.

I was surprised at that observation, because this motor is closest in size to the brushless outrunner motors I’ve seen driving propellers. There must be one or more important differences between this 3.5″ hard drive motor and motors used for multirotor aircraft. I’m adding this to a list of observations I hope to come back to, so I can measure and articulate those differences. Lots to learn still, but at least I know enough now to notice something’s very different about brushless motors built from even fewer wires and coils.

CD/DVD Motor on BLHeli_S Controller Under Oscilloscope

I dug up a brushless motor salvaged from a laptop optical drive and wired it up so I could connect it to a cheap brushless motor controller running BLHeli_S firmware. This firmware supports multiple control protocols, but I’ll be sending classic RC servo control pulses generated by an ESP32 I had programmed to be a simple servo tester.

I saw what looked like a WS2812-style RGB LED on the motor controller circuit board and was disappointed when I didn’t see it light up at all. I had expected to see different colors indicating operating status. Instead, the firmware communicates status by pulsing the motor coils to create a buzzing tone. I found an explanation of these beeps on this Drones StackExchange thread titled “When I power up my flight controller and ESC’s, I hear a series of beeps. What do all of the beeps mean?” The answer says the normal sequence is:

  • Powered-Up: A set of three short tones, each higher pitch than the previous.
  • Throttle signal detected (arming sequence start): One longer low pitch tone.
  • Zero throttle detected (arming sequence end): One longer high pitch tone.

(The author of that answer claimed this information came from the BLHeli_S manual, but the current edition I read had no such information that I could find. Was it removed? If so, why?)

Once the “arming sequence end” tone ended, I could increase my throttle PWM setting to spin up the little motor. It spun up quite enthusiastically! It responded exactly as expected, speeding up and slowing down in response to changes in the motor control PWM signal.

Oscilloscope Time

Once I verified the setup worked, I added my oscilloscope probes to the mix. I wanted to see what the motor power waveforms looked like, and how they compare against what I saw from an old Western Digital hard drive.

That might be a bit jumbled, so here are the phases separately. We can see they fire in a sequence, one after another. They all peak at battery voltage, and there’s a transition period between peaks. The transitions don’t look as smooth as the Western Digital controller’s, and I don’t have a center wye tap to compare voltage levels against.

Next experiment: try the same controller with different motors.

Potential Brushless DC Motor (BLDC) Starting Points

Learning more about brushless DC motor control has been on my to-do list for several years, ever since I learned of another maker’s problems with motor control algorithms that did not suit the task. I’m still not ready to dive into this field yet, but I think I’ve collected enough potential sources of information that it’s worth writing them down.

ESP32 MCPWM

When I looked at the ESP32, one of the features that caught my eye was its motor control pulse-width modulation (MCPWM) peripheral. I used it in my Micro Sawppy rover project to control the DC gearmotors in each of the rover’s six wheels, but it is also applicable to controlling two brushless DC motors. Espressif’s ESP32 BLDC control example project looked interesting. Putting it to work directly without modification requires a motor with an integrated Hall sensor plus additional hardware (MOSFETs and gate drivers) I don’t have at the moment. I’ve noticed that MCPWM seems to be missing from more recent Espressif microcontrollers. Does this mean customer demand hasn’t been high enough to justify continued development? If so, that’s a shame, but also a warning that I might not want to get too invested in a feature with an uncertain future.

SimpleFOC

For something more fully featured than an Espressif demo, I could look at SimpleFOC. It’s a library for the Arduino framework that supports many microcontrollers with Arduino cores, ranging from the original ATmega328P to STM32 to ESP32. (The ESP32 port claims to use its MCPWM peripheral.) This is promising, but SimpleFOC documentation seems to be aimed at people who already know a lot about brushless motors and know what they want. I need to do more studying before I can absorb the information within.

There are a few “reference implementations” for power circuits controlled by SimpleFOC software, but it looks like most of them are out of stock due to the global semiconductor shortage. There are also projects that build on SimpleFOC like this custom wheeled robot controller featured on Hackaday.

VESC

Another open-source project for brushless motor control is VESC. Its documentation is even more opaque to me than SimpleFOC’s. (One of them could be built on top of the other for all I know.) What I got out of the VESC site is that its primary focus is brushless motors of sizes suitable for carrying a person around: electric skateboards, scooters, etc.

Theory of Operation

Looking online for beginner-level introductions, I first found How to Make Advanced BLDC Motor Controllers which, despite the name, stayed quite beginner-friendly and did not get very advanced. Here I learned terminology like “trapezoidal drive,” which is easier to implement than the “sinusoidal drive” that field-oriented controllers like SimpleFOC and VESC perform. I also learned of dangers like the “shoot-through condition,” which risks short-circuiting and blowing out our controller circuit and/or our power supply.
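
To make “trapezoidal drive” concrete, here is my own sketch of the textbook six-step sequence, not code from any real controller:

// Each step connects one coil to V+, one to ground, and floats the third.
// Stepping through the table in order (wrapping around) turns the rotor;
// stepping in reverse turns it the other way. Real drivers also insert
// dead time between steps so the high-side and low-side transistors of
// one leg never conduct at once: that's the shoot-through danger.
const sixStep = [
  { A: '+', B: '-', C: '0' },
  { A: '+', B: '0', C: '-' },
  { A: '0', B: '+', C: '-' },
  { A: '-', B: '+', C: '0' },
  { A: '-', B: '0', C: '+' },
  { A: '0', B: '-', C: '+' },
];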

Another beginner-friendly article is How to Power and Control Brushless DC Motors. This is the second time I’ve gotten a beginner’s primer from Digi-Key’s Article Library; the previous lesson involved USB power delivery. With these two successes, I think I should look around that library for more educational resources. Indeed, it didn’t take long before I found Controlling Sensorless, BLDC Motors via Back EMF.

Of course, this study syllabus is only important because I want to understand what’s going on behind the scenes. If I just want to spin a motor, the easiest thing to do is to buy a commercial controller. Which I’ve also done to see what I can learn.

Hard Drive (WD800) Motor Control on Oscilloscope

I took apart a Fantom Drives FDU80 and found within a Western Digital WD800 3.5″ hard disk drive with 80GB capacity. I’m sure that was a lot of space in its day, but it’s quite small by current standards. Combined with the fact that it uses the now-outdated PATA interface, there’s no point trying to put this drive back into service storing data. However, it is still a marvel of engineering and manufacturing, and I want to poke around to learn what I can.

One thing that caught my attention was the motor power interface, showing four contact points. My past hard drive adventures always found three contact points on the brushless motor; why is this one different? My experience with four-conductor motors is with the stepper motors used in 3D printers. Is this an electrical cousin?

Removing four Torx screws allowed circuit board removal, which was easy because the electrical connection between the board and the mechanical drive bits is made with springy metal fingers. Now I can probe electrical resistance between these four points, labeled E50 through E53 on the circuit board.

Resistance (Ohms)   E50   E51   E52   E53
E50                   0   1.1   1.1   1.1
E51                 1.1     0   1.9   1.9
E52                 1.1   1.9     0   1.9
E53                 1.1   1.9   1.9     0

Measuring small resistance values is a little tricky; we’re getting into margin-of-error territory. But it is clear this is not an electrical cousin of a stepper motor. A 4-wire stepper would have two pairs of wires that are electrically independent from each other, but these wires all have connections to each other. E50 has the same resistance to each of the remaining three, and the resistance between any two of those three is roughly double the resistance to E50. This is consistent with a brushless DC motor wound in the “wye” style with E50 as the center.
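
The numbers even hint at how much of each measurement is my meter and leads. Assuming every winding has the same resistance R and the leads add a constant offset L to every reading:

center to end:  L + R  ≈ 1.1 Ω
end to end:     L + 2R ≈ 1.9 Ω

Subtracting the two gives R ≈ 0.8 Ω per winding, which puts the lead-and-contact overhead at L ≈ 0.3 Ω.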

I wanted to see what the motor control signals look like under an oscilloscope, so I soldered wires to each motor point and connected them to my Siglent four-channel oscilloscope. I also soldered a wire to the ground wire of the power input and connected all four probes’ ground reference alligator clips to that ground wire.

  • E50, center of the wye, is connected to yellow channel 1
  • E51 to magenta channel 2
  • E52 to cyan channel 3
  • E53 to green channel 4

With everything hooked up, I powered up the hard drive and the oscilloscope. Siglent has an “Auto Setup” button that can quickly configure the scope for simple tasks. If this had been a simple task, I would expect to see three sinusoidal waves 120 degrees out of phase.

The jumble told me Auto Setup couldn’t handle this task. There’s enough pattern for me to see it’s not random noise, but I didn’t know how to interpret what I saw. Trying to make sense of this plot, I started by giving all four channels identical vertical scales. A repeating pattern emerged, and I zoomed in a little bit.

From here I can see E50 (channel 1, yellow) spends most of its time either at 0V or 6V, and when it is at 0V all the other channels are usually at 0V as well. Beyond that, this trace is still quite noisy, with other channels anywhere from 0V to 12V. There’s a lot happening and, trying to decipher things one at a time, I pressed “Run/Stop” to capture individual snapshots. I mostly got traces I didn’t understand, but occasionally I saw a simple picture consistent with a state I do understand:

The motor control board energizes coils in a sequence to keep the rotor spinning. Sometimes this means peak voltage difference across two of the coils while the third is close to the same voltage as the center of the wye winding (yellow channel 1). This could happen in one of two directions. Three coils times two directions equals six possibilities, and after pressing “Run/Stop” enough times I caught examples of all six.

Most of the time, though, it doesn’t look that obvious, as we’re in some intermediate state transitioning between those six endpoints. I couldn’t see any pattern at this timescale, so I had to zoom out from 5µs to 500µs.

Looking at which coil spends time at 12V, I can see them cycling through a repeating pattern of magenta-cyan-green (channels 2/3/4). This is a cycle, but not the sinusoidal one I had expected. The key is that what matters is not the voltage relative to power supply ground, but the voltage relative to yellow channel 1: the center of the wye. It spends a lot of time near 6V but doesn’t stay at exactly 6V. Looking at how the yellow voltage level varies, we can see how it would approximate a sine wave relative to each of the coils. It may not be a perfect sine wave, but I guess it’s close enough to drive this brushless motor. I had expected the wye center to be connected to ground and each of the coils given a positive or negative sinusoidal voltage. Instead, this controller connects the coils to either ground or 12V and varies the voltage of the wye center. I’m sure the engineering team decided on this approach for good reasons, but I don’t understand motors well enough (yet) to see them.
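
One way to see it: the voltage that matters to each winding is its terminal voltage minus the wye center voltage. In the simple snapshot states above, that works out to roughly:

coil at 12V:    12V - 6V = +6V across that winding
coil at 0V:      0V - 6V = -6V across that winding
floating coil:  tracks the center, so ≈ 0V across it

So terminals switched between only 0V and 12V, combined with a wandering center voltage, can still produce a smoothly varying voltage across each winding.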

Where might I learn more about these oscilloscope traces? I went looking for datasheets corresponding to microchips used on the control board.

Overkill Options: A-Frame, Three.js and D3.js

After getting input controls sorted out on my AS7341 interface project, it’s time for the fun part: visualizing the output! Over the past few years of reading about web technologies online, I’ve collected a list of tools I want to play with. My AS7341 project is not the right fit for these tools, so that list still awaits a project with the right fit.

At this point I’ve taken most of Codecademy’s current roster of courses under their HTML/CSS umbrella. One of the exceptions is “Learn A-Frame (VR)”. I’m intrigued by the possibilities of VR, but putting it in a browser definitely feels like something ahead of its time. “VR in a browser” has been ahead of its time since 1997’s VRML, and people have kept working to make it happen ever since. A brief look at A-Frame documentation made my head spin: I need to get more proficient with web technologies and have a suitable project before I dive in.

If I have a project idea that doesn’t involve full-blown VR immersion (my AS7341 project does not) but could use 3D graphics capability (it still does not), I can access 3D graphics hardware from the browser via WebGL, which by now is widely supported across browsers. In the likely case that working directly with the WebGL API is too nuts-and-bolts for my taste, there are multiple frameworks that take care of the low-level details. One of them is Three.js, which has been the foundation for a lot of cool-looking work. In fact, A-Frame is built on top of Three.js. I’ve dipped my toes in Three.js, using it to build my RGB332 color picker.

Dropping a dimension to the land of 2D, several projects I’ve admired were built using D3.js. This framework for building “Data-Driven Documents” seems like a great way to interactively explore and drill into sets of data. On a similar front, I’ve also learned of Tableau, commercial software covering many scenarios for data visualization and exploration. I find D3.js more interesting for two reasons. First, I like the idea of building a custom-tailored solution. And second, Tableau was acquired by Salesforce in 2019, and historically speaking, acquisitions don’t end well for hobbyists on a limited budget.

All of the above frameworks are overkill for what I need right now for my AS7341 project: there are at most 11 different sensor channels (spectral F1-F8 + clear + near IR + flicker), and I’m focusing on just the color spectra F1-F8. A simple bar chart of eight bars would suffice, so I went looking for something simpler and found Chart.js.
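
A minimal sketch of what I have in mind; the element ID and data values here are made up:

import Chart from 'chart.js/auto';

// Placeholder readings for the eight spectral channels F1-F8.
const sensorCounts = [120, 340, 560, 410, 980, 770, 650, 430];

new Chart(document.getElementById('spectrum'), {
  type: 'bar',
  data: {
    labels: ['F1', 'F2', 'F3', 'F4', 'F5', 'F6', 'F7', 'F8'],
    datasets: [{ label: 'AS7341 raw counts', data: sensorCounts }],
  },
  options: { animation: false }, // skip animation for fast live updates
});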

Brief Look at a LinuxCNC Pendant

Trying to build a little CNC is definitely a learn-as-I-go project. Moving the motor control box was a simple (though necessary) mechanical change, but not the only idea prompted by initial test runs. I also thought it would be nice to have a handheld pendant to help with machine setup, instead of walking over to the laptop all the time. I got a chance to look over a CNC pendant to see how I might integrate one.

This particular unit was purchased from this eBay vendor listing, but there are many other similar listings across different online marketplaces. Judging by this listing’s title, the popular keyword salad includes: CNC Mach3 USB MPG Pendant Handwheel. I knew what CNC, USB, pendant, and handwheel referred to. MPG in this context means “Manual Pulse Generator,” referring to the handwheel that generates pulses signaling the CNC controller to move in individual steps. And finally, Mach3 is a Windows software package that turns a PC into a CNC machine controller.

My first-draft CNC controller was built on an ESP32 without USB host capability, so there’s little hope of integrating this USB pendant there. The most likely path would involve LinuxCNC, a free, open-source alternative to Mach3. Poking around its documentation for control pendants, the first hit was this link, which seems to be talking about units that connect via parallel port. Follow-up searches kept returning this link about wireless pendants, which I didn’t think was relevant. After coming across it for the fifth or sixth time, I decided to skim the page and saw that it also included information about a wired USB pendant. It’s not a direct match, though. Here’s information from Ubuntu’s dmesg tool after I plugged in this pendant.

[ 218.491640] usb 1-1: new full-speed USB device number 2 using xhci_hcd
[ 218.524920] usb 1-1: New USB device found, idVendor=10ce, idProduct=eb93
[ 218.524926] usb 1-1: New USB device strings: Mfr=1, Product=0, SerialNumber=0
[ 218.524931] usb 1-1: Manufacturer: KTURT.LTD
[ 218.538131] generic-usb 0003:10CE:EB93.0004: hiddev0,hidraw3: USB HID v1.10 Device [KTURT.LTD] on usb-0000:00:10.0-1/input0

The keys here are the USB identifiers idVendor and idProduct: 0x10CE and 0xEB93. I could change those values in the associated udev rule:

ATTRS{idVendor}=="10ce", ATTRS{idProduct}=="eb93", MODE="666", OWNER="root", GROUP="users"

But that was not enough. I dug deeper into the relevant source code and found it explicitly looking for an idVendor:idProduct of 0x10CE:0xEB70.

dev_handle = libusb_open_device_with_vid_pid(ctx, 0x10CE, 0xEB70);

Oh well, getting this to run would go beyond configuration files; it would need code changes and recompiles. It looks like some people are already working on it: a search for eb93 found this thread. I don’t know enough about LinuxCNC to contribute, or even to understand what they are talking about. I returned this USB pendant to its owner and set the idea aside. There are plenty of CNC pendant offerings out there I can revisit later, some of which are even bundled with an entire CNC control package.

Using TCL 55S405 TV as Computer Monitor

I just bought an LG OLED55B2AUA for my living room, displacing a TCL 55S405. I have several ideas on what I could do with a retired TV, and the first experiment was to use it as a computer monitor. In short, it required adjusting a few TV settings, and even then, there are a few caveats for using it with a Windows PC. Using it with a Mac was more straightforward.

As expected, it is ludicrously large sitting on my desk. And due to the viewing angles of this (I think VA) panel, the edges and corners are difficult to read. I see why some people prefer their large monitors to be curved.

I noticed a delay between moving my mouse and movement of the onscreen cursor. This delay was introduced by the TV’s image processing hardware. During normal TV programs, the audio can be delayed to stay in sync with the processed video, but that trick doesn’t work for interactive use, which is why TVs have a “Game Mode” to disable such processing. For this TV, it was under “TV settings” / “Picture settings” / “Game mode.” Turning it on made the mouse feel responsive again.

The next problem was brightness. Used as a monitor, the screen sits much closer to me than a TV would, and too much light causes eyestrain. The first part of the solution was to choose the “Darker” option under “TV settings” / “Picture settings” / “TV brightness.” Then I went to “TV settings” / “Picture settings” / “Fine tune picture,” where I could turn “Backlight” down to zero. Not only did this make the screen more comfortable, it reduced electrical power consumption as well.

According to my Kill-A-Watt meter, this large TV consumed only 35 watts once I turned the backlight down to minimum, actually slightly lower than the 32″ Samsung computer monitor I had been using. Surprisingly, half of that power was not required to run the screen at all. When I “turned off” the TV, the screen went dark but the Kill-A-Watt still registered 17 watts, burning power for purposes unknown. Hunting around in the Settings menu, I found “System” / “Power” / “Fast TV Start,” which I could turn off. With fast startup disabled, turning the TV off seems to really turn it off, or at least close enough that the Kill-A-Watt reads zero watts. This is far better than my 32″ Samsung, which reads 7W even in low-power mode.

Since this is a TV, I did not expect high framerate capabilities. I knew it had a 24 FPS (frames-per-second) mode to match film speed and a 30 FPS mode for broadcast television. When I checked my computer video card settings, I was pleasantly surprised to find that 60Hz refresh rate was an option. Nice! This exceeded my expectations and is perfectly usable.

On the flip side, since this is a TV, I knew it had HDCP (High-bandwidth Digital Content Protection) support. But when I started playing protected content (streaming Disney+ on Microsoft Edge for Windows 11), the TV would choke and fail over to its “Looking for signal…” screen. Something crashed hard and the TV could not recover. To restore my desktop, I had to (1) stop my Disney+ playback and (2) power cycle the TV. Not just pressing the power button (that didn’t work): I had to pull the power plug.

The pixels on this panel are crisp, and 4K UHD resolution actually works quite well. 3840×2160 resolution at 55″ diagonal works out to 80 DPI (dots per inch), which is right within longtime computer monitor norms. For many years I used a 15″ monitor at 1024×768 resolution, which worked out to 85 DPI. Of course, 80 DPI is pretty lackluster compared with the “high DPI” displays (Apple “Retina Display,” etc.) now on the market with several hundred dots (or pixels) per inch. Despite crisp pixels at sufficient density, text on this panel isn’t always legible under Windows, because it doesn’t work well with Microsoft’s ClearType subpixel rendering. ClearType takes advantage of typical panel subpixel orientation, where the red/green/blue elements are laid out horizontally for each pixel. This panel, unfortunately, has its elements laid out vertically for each pixel, foiling ClearType’s attempt to be clever. For this panel to take advantage of ClearType rendering, I would have to rotate the screen 90 degrees to portrait orientation. That isn’t terribly practical, so I turned ClearType off.
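(Checking that arithmetic: √(3840² + 2160²) ≈ 4406 pixels along the 55″ diagonal, and 4406 / 55 ≈ 80 DPI. Likewise √(1024² + 768²) = 1280 pixels along the 15″ diagonal, and 1280 / 15 ≈ 85 DPI.)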

For comparison, a brief test with my Apple MacBook Air (M1) saw the following behavior:

  • The same 3840×2160 resolution and 60Hz refresh rate were available.
  • I could activate HDR mode, an option that was grayed out and not available with the NVIDIA drivers on my Windows desktop. I lack MacOS HDR content so I don’t know whether it actually works.
  • Streaming Disney+ on Firefox for MacOS showed video at roughly standard-definition DVD quality. This is consistent with behavior for non-HDCP displays, and much preferable to crashing the TV so hard I need to power cycle it.
  • MacOS font rendering does not use color subpixels like Microsoft ClearType, so text looks good without having to turn off anything.

It appears this TV is a better monitor for a MacOS computer than a Windows machine.

LG OLED Looks Gorgeous But webOS Is Horrid

Thanks to Black Friday discounts, I acquired an OLED TV I had coveted for many years. I decided on an LG OLED55B2AUA purchased through Costco (Item #9755022). LG’s “B” line sits between the more affordable “A” and the more expensive “C” lines, a tradeoff I liked. (There are a few additional lines above “C,” priced above my budget.) The TV replaced a TCL 55S405, and while they are both 55″ TVs, there is a dramatic difference in image quality. There are reviews out there for full information; my blog post here concentrates on the items that mattered to me personally.

The Good

  • The main motivation is image quality. The OLED panel’s advantage comes from its self-illuminating pixels, leading to great contrast and vibrant colors. The “C” line uses panels with a higher peak brightness, but I haven’t found brightness lacking. When the filmmaker intentionally includes something bright (a flashlight in a dark room, etc.) this “B” panel is bright enough to make me squint.
  • HDMI 2.1 with variable refresh rate capability and a higher maximum frame rate (120FPS), so I can see all the extra frames my new Xbox Series X can render. On this year’s “B” units, HDMI 2.1 is supported on two of the four HDMI ports, which is enough for me. HDMI 2.1 is supported on all four ports of the “C” line, and on none of the “A” line, which is missing high framerate features entirely.
  • The LG “magic remote” has an accelerometer to let us move an on-screen cursor by tilting the remote. This is far better than the standard up/down/left/right keypads of a TV remote and, combined with responsive UI, makes navigation less of a chore. This is the only good thing about LG’s user interface.

The Bad

For reasons I failed to diagnose, the TOSLINK audio output port could not send sound to my admittedly old Sony STR-DN1000 receiver. Annoyingly, LG designed this TV without analog audio output: neither a headphone jack (as on my TCL) nor the classic white and red RCA audio jacks. In order to use my existing speakers, I ended up buying a receiver with HDMI eARC support. This is money I would rather not have spent.

The Ugly

The internal operating system is LG’s build of webOS, which they have turned into a software platform for relentless, shameless, and persistent monetization. My TCL Roku TV also served ads, but not nearly as intrusively as this LG webOS TV. That powerful processor which gave us a snappy and responsive user interface isn’t going to just sit idle while we watch a movie. Oh no, LG wants to put it to work making money for LG.

Based on the legal terms and conditions they wanted me to agree to, the powerful processor of this TV wants to watch the same things I watch. It wants to listen to the audio for keywords that “help find advertisements that are relevant to you.” That’s creepy enough, but there’s more: it wants to watch the video as well! The agreement implies there are image recognition algorithms at work looking for onscreen objects for the same advertising purpose. That’s a lot of processing power deployed in a way that provides no benefit to me. I denied them permission to spy on me, but who knows whether they respected my decision.

The ad-centric design continues on the webOS home screen. The top half is a huge banner area for advertisements. I found an option to turn off that ad, but doing so did not free up the space for my use; it just meant a big fixed “webOS” banner taking up the same space. On the next row down, the leftmost item represents the most recently used input port, which in my case is the aforementioned Xbox Series X. The rest of that row is filled with more advertising, which I haven’t found a way to turn off. The third and smallest row includes all the apps I care about, and even more that I don’t. Overall, only about 1/8 of the home screen surface area is under my control; the rest paid LG to be on my home screen.

I’m frankly impressed at how brazenly LG turned a TV into an ad-serving spyware device. I understand the financial support role advertisements play, but I’m drawing a line for my own home: as long as the ads stay in the menus and keep quiet while I’m actively watching TV, I will tolerate their presence. But if an LG ad of any type interrupts my chosen programming, or if an ad proves they’re spying on me despite lacking permission, I am unplugging that Ethernet cable.

UPDATE (two days later): Well, that did not take long. I was in the middle of watching Andor on Disney+ (Andor is very good) when I was interrupted by a pop-up notification at the bottom of the screen advertising a free trial to a service I will not name. (Because I refuse to give them free advertising.) I will not tolerate ads that pop up in the middle of a show. Struggling to find an upside, I can say this: that advertised service appeared to have no relation to Disney+ or anything said or shown in Andor, so the ad was probably not spying on me.

I was willing to let LG earn a bit of advertising revenue from me, as Roku did for my earlier TV, but LG’s methods were far too aggressive. Now LG will earn no ad revenue from me at all because this TV’s Ethernet cable has been unplugged.

AMS AS7341 11-Channel Multi-Spectral Digital Sensor

An interesting sensor module came to my attention recently thanks to the experiments of my talented friend Emily Velasco, who has been building a contraption whose sound output depends on color. At first, the sensor module didn’t capture my attention, because I’ve seen color sensors before. They’ve been available for fun projects like an M&M candy sorter, and many robotics/electronics kits like LEGO Mindstorms included their own. However, not all color sensors are equal. Once I looked into the AMS AS7341 sensor she was using, I learned it was far more capable than I had originally thought.

Instead of mapping color hue into a single reading, which is what I had expected, this sensor reports data across eleven channels. Eight of the channels map to various wavelengths in the human visible spectrum, implemented via filters placed over optical sensors. The remaining three channels report color-independent data. One channel has no color-specific filter (“clear”) and reports an overall brightness value. One channel is sensitive to near infrared (NIR), outside the visible spectrum. And the final channel is specialized for detecting common light flicker frequencies of 50Hz and 60Hz.
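
Written out as data, here’s my understanding of the channel layout. The center wavelengths are from my reading of the datasheet; treat them as approximate:

// AS7341 channels with approximate center wavelengths in nanometers.
const as7341Channels = {
  F1: 415, F2: 445, F3: 480, F4: 515, // violet through cyan
  F5: 555, F6: 590, F7: 630, F8: 680, // green through red
  Clear: 'unfiltered broadband',
  NIR: 910, // near infrared
  Flicker: 'detects 50Hz/60Hz ambient flicker',
};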

The AMS product page stated that intended use cases for this sensor include color calibration tools. This sensor is intended to be a fundamental part of precision color instruments! All color sensors answer the “what color is it?” question to varying degrees of precision; this sensor is aimed at the highly precise end of that spectrum. Note that the sensor by itself is not a color calibration tool; that depends on the rest of the supporting electronics, software, and procedures for use. “How to calibrate the calibration tool” is a big field all by itself, and critical for instrument accuracy in addition to precision.

I have very little background in color science, so I will start by looking at the sensor as a precision instrument of unknown accuracy. Even with that disclaimer I think it is a good project candidate.

Google Pixel 7 Camera Off-Axis Blur in Closeups

Thanks to Black Friday sales, I have upgraded my phone to a Google Pixel 7. My primary motivation was its camera, because most of the photographs posted to this blog were taken with my cell phone (Pixel 5a) camera. Even though I have a good Canon camera, I rarely pull it out because the cell phone photos have been good enough for use here. By upgrading to the Pixel 7, I hope to narrow the gap between the phone camera and a real Canon. So far it has been a great advancement on many fronts. There are phone camera review sites out there for all the details, but I wanted to point out one trait that is worse than my Pixel 5a, one specific to the kind of photos I take for this blog and not usually covered by photography reviews: with close-up shots, image quality quickly degrades as we move off-axis.

I took this picture of an Adafruit Circuit Playground Express with the Pixel 7 roughly fifteen centimeters (~6 inches) away. This was about as close as the Pixel 7 camera was willing to focus.

The detail captured in the center of the image is amazing!

But as we get to the edges, clarity drops off a cliff. My Pixel 5a camera’s quality also dropped off as we moved off-axis, but not this quickly and not this badly.

For comparison, I took another picture with the same parameters. But this time, that GND pad is at the center of the image.

Everything is sharp and crisp. We can even see the circuit board texture inside the metal-plated hole.

Here are the two examples side by side. I hypothesize this behavior is a consequence of design tradeoffs for a camera lens small enough to fit within a cell phone. This particular usage scenario is not common, so I’m not surprised it was de-emphasized in favor of other camera improvements. For my purposes I would love to have a macro lens on my phone, but I know I’m in the minority, so I’m not holding my breath for that to happen.

In the meantime, I can mitigate this effect by taking the picture from further away. This keeps more of the subject within a narrow angle of the main axis, reducing the off-axis blur. I would sacrifice some detail, but I still expect the quality to be good enough for this blog. And if I need to capture close-up detail, I will have to keep this off-axis blur in mind when I compose the photo. I would love close-up photos that stay sharp across the entire frame, but I think I can work with this. And everything else about this Pixel 7 camera is better than the Pixel 5a camera, so it’s all good!
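To put rough numbers on that idea (these figures are my own illustration, not measurements of the actual lens): if the edge of a subject sits five centimeters from the center of the frame, doubling the shooting distance roughly halves its off-axis angle.

```python
import math

# Illustrative numbers only: how far off-axis does the edge of a subject
# sit when it is 5 cm from the center of the frame?
subject_half_width_cm = 5.0  # assumed subject size, not a measurement

for distance_cm in (15.0, 30.0):
    angle = math.degrees(math.atan(subject_half_width_cm / distance_cm))
    print(f"At {distance_cm:.0f} cm away, the edge is {angle:.1f} degrees off-axis")

# At 15 cm away, the edge is 18.4 degrees off-axis
# At 30 cm away, the edge is 9.5 degrees off-axis
```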

Old Xbox One Boots Up in… čeština?

As a longtime Xbox fan, I would have an Xbox Series X by now if it weren’t for the disarray in the global semiconductor supply chain. In the meantime, I continue to play on my Xbox One X, the 4K UHD-capable variation that launched in 2017. It replaced my first-generation Xbox One, which has been collecting dust on a shelf. (Along with its bundled Kinect V2.) But unlike my Xbox 360, that old Xbox One is still part of the current Xbox ecosystem. I should still be able to play my Xbox One game library, though I’d be limited to digital titles because its optical drive is broken. (One of the reasons I retired it.)

I thought I would test that hypothesis by plugging it in and downloading updates; I’m sure there have been many major updates over the past five years. But there was a problem: when I powered it up, it showed me this screen in a language I can’t read.

Typing this text into the Google Translate website, language auto-detection told me it is Czech, and the screen is a menu to start an update. Interesting… why Czech? It can’t be a geographical setting in the hardware, because this is a US-spec Xbox purchased in the state of Washington. It can’t be geolocation based on IP address, either, as I’m connected online via a US-based ISP. And if there was some sort of system reset problem, I would have expected the default to be either English or at least something at the start of an alphabetical list, like Albanian or Arabic. Why Czech?

Navigating the next few menus (which involved lots of typing into Google Translate), I finally completed the required update process and reached the system menu where I could switch the language to English. Here I saw the language was set to “čeština”, which was at the top of the list. Aha! My Xbox had some sort of problem and reset everything, including the language setting, which fell back to the first entry in its list of installed languages. I don’t know what the root problem was, but at least that explains how I ended up with Czech.

After I went through all of this typing, I learned I was an idiot: I should have used the Google Translate app on my Android phone instead of the website. I thought using the website on my computer was faster because I had a full-sized keyboard for typing, where my phone did not. But the phone has a camera, and the app can translate visually with no typing at all. Here I’m running it on the screen capture I made of the initial bootup screen shown above.

Nice! It looks like the app runs optical character recognition on the original text, determines the language is Czech, performs the translation, and superimposes the translated English text on top of the original. The more I thought about what is required to make this work, the more impressed I was. For example, display coordinates have to be tracked through the translation so the English text can be superimposed at the correct location. I don’t know how much of this work runs on my phone and how much runs on a Google server. Regardless of the workload split, it’s pretty amazing this option was just sitting in my pocket.
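If I were to guess at the structure of that pipeline, it would look something like the following sketch. Every function here (run_ocr, detect_language, translate_text, overlay_text) is a hypothetical stand-in of my own invention, not Google’s actual API:

```python
# A conceptual sketch of what the app appears to be doing; all helper
# functions are hypothetical stand-ins, not Google's actual API.

def translate_screen(image):
    # 1. OCR: recognize text plus the screen coordinates of each text region
    regions = run_ocr(image)  # hypothetical: returns [(text, bounding_box), ...]

    # 2. Detect the source language from the recognized text (here: "cs")
    source_language = detect_language(" ".join(text for text, _ in regions))

    # 3. Translate each region's text, then 4. superimpose the translation
    #    at that region's original coordinates, so the English text lands
    #    exactly where the Czech text was on screen.
    for text, box in regions:
        english = translate_text(text, source_language, "en")
        image = overlay_text(image, english, box)
    return image
```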

What was I doing? Oh, right: my old Xbox One. It is up and running with the latest system update, capable of downloading and running my digitally purchased titles. In US English, even. But by then I no longer cared about Xbox games; the translation app was much more interesting to play with.

Notes on “Make: Design for CNC” by Filson, Rohrbacher, and Kaziunas France

After skimming through Maker Media’s Bluetooth book, I did the same for their Design for CNC: Furniture Projects & Fabrication Technique (*), published in 2017. The cover lists the authors as Anne Filson, Gary Rohrbacher, and Anna Kaziunas France; Bill Young didn’t make the cover but is included in the “About the Authors” section at the end. The focus is on building human-scale furniture by using CNC routers to cut flat pieces out of 4′ x 8′ sheets of plywood. Details are given for some (but not all) of the pieces we see on the authors’ site AtFAB, and readers without all the equipment (which includes me) are encouraged to join the 100kGarages community for help turning ideas into reality.

The CAD software used for the examples was SketchUp 2015; that particular version is no longer available. While there is still a free SketchUp tier, it is limited to their browser-based release. The CAM software in the book is Vectric VCarve, which never had a free tier. The authors’ CNC router is a ShopBot PRSalpha, and the discussion of cutters mostly referenced Onsrud. Obviously, a reader with more in common with the authors’ setup will have an easier time following along. I have none of it, but I skimmed the book to see what I could learn. Here are the bits I thought worth jotting down:

Chapter 2 had several sections that are valuable to anyone building structures out of flat sheets of material, whether CNC-routing big pieces out of plywood or laser-cutting small things out of acrylic. They describe basic joints, which lead to assemblies, which in turn lead to styles of structures. These were the building blocks for projects later in the book, and they are applicable to building 3D things out of 2D pieces no matter what tools (software or hardware) we use.

Chapter 3 describes their design process using SketchUp. Some of the concepts are applicable to all CAD software, some are not. Explanations are sometimes lacking. The authors used something called the Golden Ratio without explaining what it is or why it is applicable, so we have no idea when it would be appropriate to use in our own designs. We are shown how CAD helps keep various views of the same object in sync, but in certain places the book also says to use “Make Unique” to break this association without explaining why it was necessary. I had hoped to see automated tooling to support managing 3D structures and their 2D cutting layout, but no such luck. This workflow used a “Model” layer to work in 3D and a “Flat” layer to lay out the same shapes in 2D space for cutting, followed by a “Cut” layer with just the 2D vectors to export to CAM software. It feels like a motivated software developer could automate this process. (Perhaps someone has in the past five years! I just have to find it.)
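As a thought experiment, here is a toy sketch of the kind of automation I was hoping for: each part is drawn once as a 2D outline, and a script lays those outlines out flat for cutting. The data model is entirely my own invention, not anything from SketchUp or the book:

```python
# A toy sketch: generate a flat cutting layout from a list of parts.
# Data model and numbers are my own illustration, not from the book.
from dataclasses import dataclass

@dataclass
class Part:
    name: str
    width: float   # bounding size of the 2D outline, in inches
    height: float

def layout_flat(parts, sheet_width=96.0, gap=0.5):
    """Place each part's outline left to right on a 4'x8' sheet,
    wrapping to a new row when we run out of width."""
    x = y = row_height = 0.0
    placements = []
    for p in parts:
        if x + p.width > sheet_width:  # wrap to the next row
            x, y = 0.0, y + row_height + gap
            row_height = 0.0
        placements.append((p.name, x, y))
        x += p.width + gap
        row_height = max(row_height, p.height)
    return placements

parts = [Part("side", 30, 18), Part("side", 30, 18), Part("top", 48, 18)]
for name, x, y in layout_flat(parts):
    print(f"{name}: place at ({x:.1f}, {y:.1f})")
```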

I noticed a trend of information becoming less generally applicable as the book went on. By the time we got to CAM in Chapter 7, it was very specific to VCarve, with few generalizations we could take and apply to other CAM software. One missed opportunity was the discussion of climb milling versus conventional milling. The book explains that there are so many variables involved (the material, the cutter, the machine) that a specific setup may work better one way or the other, and the only way to know is to try both approaches and use whichever works better. Problem: they never explained what “better” means in this context. What do we look for? What metrics do we use to decide if one is better than the other? The authors must have a lot of experience seeing various results firsthand. That knowledge would have been valuable no matter what CAM software we use, but they didn’t share it and just left us hanging. Or perhaps they have seen so much that it never occurred to them beginners would have no idea how to judge.

Another disappointment was in the area of parametric design. In Chapter 5 they covered the fact that plywood is not made to precise dimensions, and we’d need to adjust accordingly. However, the recommended default method of adjustment is to scale the entire project rather than adjust a thickness parameter. Later, in Chapter 12, they showed how to modify a few of their designs by plugging parameters into an app written in Processing. However, each app is unique to a project and limited to the variables the authors chose to expose. The book doesn’t cover how to do parametric design in SketchUp. (Maybe it can’t?) But more disappointingly, it doesn’t cover how to build parametric capability into our own designs. The authors opened this book by saying that designing and customizing for our own purposes is a huge part of what makes CNC-routed projects preferable to generic designs from IKEA, so it was a huge letdown to see nothing about making our own parametric designs.
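For what it’s worth, the core idea is simple enough to sketch in a few lines. This is my own illustration of treating plywood thickness as a parameter, not anything from the book; the names and numbers are made up:

```python
# A minimal sketch of parametric slot sizing: measure the actual plywood
# and feed the thickness in as a parameter, instead of scaling the whole
# project. Names and numbers are illustrative only.

NOMINAL_THICKNESS = 0.75  # inches, "3/4 inch" plywood as designed

def slot_width(measured_thickness, clearance=0.005):
    """Width of a slot that a mating plywood tab should fit into."""
    return measured_thickness + clearance

measured = 0.71  # the sheet we actually bought, slightly undersized
print(f"Cut slots {slot_width(measured):.3f} in wide, not {NOMINAL_THICKNESS} in")
```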

I would have appreciated more information on working with wood grain, which is discussed mostly as a cosmetic concern and not a structural one. I guess using plywood mitigates most of those worries? I would also have wanted to see more actual finished pieces: most of the images in this book were 3D renders rather than photographs of real builds, another letdown.

Despite these disappointments, I felt I learned a lot from this book that is generally applicable to building 3D structures from 2D shapes. The resources section at the end looked promising for more information on designing for CNC that goes beyond wooden furniture. And finally, unrelated to the book’s topic, the colophon pointed me to AsciiDoc, which I might look into later for future Sawppy documentation.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.