Installing Node-RED on Samsung 500T

When I saw that Node-RED community contributions included a general battery state-of-charge reporting node, I thought it would be a practical and useful first step into playing with contribution nodes. I wanted to give that specific node a try because, given my past miserable user experience on my Samsung 500T tablet, I was not terribly motivated to put in the effort required to write my own code to extract battery information. But I was willing to see if that Node-RED node could work its magic here, and the first step, obviously, was to see if we could even install Node-RED on the unloved machine.

There were many reasons to be pessimistic. Windows was and is still seen as a second-class citizen in many web-centric products. And not only is the tablet running Windows, it is stuck on an old version of Windows because of the whole Intel Clover Trail mess preventing modern Windows 10 support. Furthermore, Clover Trail is a 32-bit only chip without support for x86-64 a.k.a. amd64 instructions that are starting to become a requirement on modern software frameworks. And it only has 1GB of RAM. And we’re still dealing with that molasses-slow eMMC storage. The list goes on.

Fortunately Node.JS still has a 32-bit Windows installer, though installation failed after nearly an hour of grinding away. I was first disappointed but then relieved to see it was “merely” a timeout installing one of the auxiliary components. This was not a huge surprise given how slow this machine is, so I waited for the system to settle down (lots of backlog waiting on that eMMC) before retrying. Node.JS installation succeeded the second time and after that I installed Node-RED without drama.

Launching Node-RED faced another unnecessary hurdle in the form of Windows Firewall. Annoyingly, this was one of the differences between Windows 10 build 1607 and modern builds, so I had none of the menu options I’ve become familiar with. But eventually I was able to let Node.JS open up a port to serve the Node-RED user interface, where I could install node-red-contrib-battery.

I dropped three nodes into the blank flow: an inject node, a debug node, and the newly installed battery node between them. I hit “Deploy” and clicked to see if that battery node can extract any information and… it can! Not a lot, as most of the fields are blank or null, but it did return the current battery voltage and the charge percentage. This is all I need to observe this tablet’s charge and discharge behavior.

Arduino Interface for Mitutoyo SPC Data Port

I started looking for an inexpensive electronic indicator with a digital output port, and ended up splurging on a genuine Mitutoyo. Sure, it is over five times the cost of the Harbor Freight alternative, but I thought it would be worth the price for two reasons. One: Mitutoyo is known for high quality precision instruments, and two: they are popular enough that the data output port should be documented somewhere online.

The second point turned out to be moot because the data output port was actually documented by a pamphlet in the box, no need to go hunting online. But I went online anyway for a second opinion, and found this project on Instructables. Most of the information matched up, but the wiring pinout specifically did not. Their cable schematic had a few apparent inconsistencies. (Example: one end of the cable had two ground pins and the other end did not.) They also had a “Menu” button that I lacked. These may just be the result of different products, but in any case it is information on the internet to be taken with a grain of salt. I took a meter to my own cable to verify it matched the pinout described by the pamphlet that came with my instrument.

Their Arduino code matched the pamphlet description, so I took that code as a starting point. I then released my derivative publicly on GitHub with the following changes:

  • Calculate distance within numeric domain instead of converting to string and back.
  • Decimal point placement with a single math expression instead of a list of six if statements.
  • Added units to the output, since their code didn’t indicate whether a value was in inches or millimeters.
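
The decimal point math can be sketched as a single expression. This is an illustrative reconstruction rather than the exact code in my repository; the digit order and nibble meanings are assumed from the pamphlet description:

```cpp
#include <cmath>

// Illustrative sketch: digits[] holds the six decimal digits from the
// SPC packet (most significant first), decimalPlaces is the decimal
// point nibble, and negative is the sign nibble.
double spcValue(const int digits[6], int decimalPlaces, bool negative) {
  long raw = 0;
  for (int i = 0; i < 6; i++) {
    raw = raw * 10 + digits[i];  // stay in the numeric domain, no strings
  }
  // One math expression replaces the original list of six if statements:
  double value = raw / std::pow(10.0, decimalPlaces);
  return negative ? -value : value;
}
```

For example, digits 0,1,2,3,4,5 with three decimal places yield 12.345.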

A limitation of their code (one I did not fix) is the lack of a recovery path should the Arduino fall out of sync. The Mitutoyo protocol was designed with a recovery provision: if the communication gets out of sync, we can sync back up using the opening 0xFFFF. But since there’s no code watching for that situation, once out of sync our code would stay permanently confused until reset by the user.

For debugging I added the capability to output in raw hex. I was going to remove it once I had the distance calculation and decimal point code figured out, but I left it in place as a compile-time parameter just in case that would become handy in the future. Sending just hexadecimal data and skipping conversion to human-readable text would allow faster loops.

Note that this Mitutoyo Statistical Process Control (SPC) protocol has no interactive control — it just reports whatever is on the display. Switching units, switching direction, zeroing, all such functions are done through device buttons.

Once everything appeared to work on the prototyping breadboard, I again soldered up a compact version and put it inside a custom 3D-printed enclosure.


Mitutoyo 543-783B Indicator with SPC Data Port

Freshly encouraged by data gathering via Node-RED with serial communications, I investigated getting another set of data points. In the packing bubble squish experiment I could see pressure data from the load cell, which showed me the bubble relaxing (and thus reducing pressure) over time. What I could not see was the physical displacement corresponding to that reduction in pressure. I assume the Z-axis carriage did not move, so the reduction likely took the form of flex in the acrylic plate. How might I measure that kind of data for future experiments?

An answer could be found in the field of machining, where indicators are used to measure linear displacement precisely. How precise? In the world of machining, I can have any precision I want, but precision costs money. How much precision do I want to afford for this project? I started by looking at cheap electronic indicators like Harbor Freight item #63613 ($30). But while the manual hinted at a data output port, there’s no further information about it.

I started looking further and further up the food chain and, while I could find indicators with digital output, they had a similar problem of either a poorly documented or proprietary undisclosed format. Eventually I passed the $100 mark and started getting discouraged. I was not willing to spend that kind of money on an instrument made by a company I did not know for quality precision.

And that’s when a brand I have known for quality precision popped up in my search: Mitutoyo. I know that name well from my machining course and other precision contexts, but they have all been very expensive at several hundred dollars and up. I didn’t know they made a low-end model (with corresponding lower precision) available at around $150. Certainly many times more than the Harbor Freight item, but it is a name I trust to be precise, and popular enough that details of their data port (called SPC or Statistical Process Control port) would be documented somewhere.

For extra reassurance I decided to pay a little extra to buy from a known vendor, McMaster-Carr, and when it arrived I got my first surprise: the data port interface instructions were in the box! This was a good start to a successful project connecting to the Mitutoyo SPC data port.

Failed Attempt At Carriage Tool Bracket

My project to squish a packing material air bubble was a simple Hello World type of exercise done with what I already had on hand. Part of this meant pushing on the bubble with the bare metal plate of the retired Geeetech A10 X-axis carriage. This (probably stamped) piece of metal used to hold a 3D printing nozzle, but that component was already absent when I received this gift. I don’t know the history, only that a few pieces of plastic remain.

While using this plate directly was enough for an air bubble exercise, I knew I’d eventually need to attach something more functional to this carriage. What would that something be? I have no idea! It will likely depend on the specific project at hand, and is thus highly variable.

Which naturally led to the thought of a modular system where I have a fixed base bolted to this carriage and a set of quick-switch accessories for a wide variety of tasks that can be easily swapped out as needed. I thought I could accomplish this by a little dovetail that accessories could grip onto.

Things did not go well. I made a mistake in measurement, so the bottom screw holes didn’t fit. But even ignoring that, the dovetail turned out to be far too small and my test placeholder accessories were too wobbly. There’s a lot more to an interchangeable tool head than just printing a dovetail; perhaps I should adapt an existing open source tool changer for the next draft rather than try to reinvent this particular wheel.

Packing Bubble Squish Test Data

I didn’t expect much out of a silly “Hello World” test of a machine that squishes packing material, but I underestimated how much of a geek I am for data. Raw numbers out of the load cell didn’t mean much, partly because it was so noisy. But since it was trivial to send raw HX711 readings to a Node-RED chart for visualization, I plotted load cell pressure data over time and was surprised at what I could see in that graph!

The most obvious thing is that we can definitely see each downward stroke of the machine represented as a sharp downward spike in the graph. After that initial shock, though, the air bubble started to relax and we can see a reduction in pressure transferred to the plate. This is a trend that I couldn’t see just looking at raw numbers flying by, and a good visual (numerical?) representation of what happens with “items may have settled in shipping”.

What I did not expect ahead of time, but was pretty obvious in hindsight, is the visible trend from one stroke to the next. The bubble bounced back incompletely when the machine released. Therefore each stroke resulted in a lower transmitted force than the last, with a degradation curve across multiple strokes that echoes the pressure reduction visible within each stroke.

So this packing bubble squish data actually turned out to be far more interesting than I initially expected, all from the happy accident of sending noisy load cell data to a Node-RED graph just because it was easily available. If I had to write my own code to graph the data, I probably would not have done it, and missed that interesting insight into the pressures of life as a packing bubble. This is a win for Node-RED.

The next challenge is to figure out how I could have captured, analyzed, and extracted that data programmatically. Human visual insight is very useful, but it requires that we think of the right way to graph data in a way that is useful. This is hard when we don’t necessarily know what we are looking for. I stumbled across this happy accident today, how might I make sure I don’t miss interesting insights tomorrow? Something to ponder…

In the meantime I have a more mundane question to answer: how do I maintain a record of work I’ve done in a Node-RED program?

Packing Bubble Squish Test

I wrote down my first impressions of Node-RED Dashboard earlier; here I describe the project I used to explore and exercise my new tools in the Node-RED toolbox. It is a silly little thing that tests the squishiness of a plastic air bubble used as packing material. The bubble isn’t important, the main objective was to build my first nontrivial Node-RED flow and have it interface with my two recent hardware projects: the 3-axis motion control chassis of a retired Geeetech A10 printer, and the load cell kit mounted where the printer’s build plate used to be.

Both of these hardware components use USB connections that show up on the computer as serial ports, which made it easy to interface with Node-RED via node-red-node-serialport. Once installed, I had three new nodes on my palette. “Serial in” is what I needed to read the stream of data coming in from the Arduino handling the HX711 reading the load cell strain gauges. “Serial request” is what I needed for bidirectional data transfer with the 3D printer control board: sending G-code commands and reading status like position. (The third one, “Serial out”, is not applicable to this project.)

To keep the project simple, my X- and Y-axis motion are hard-coded and I only hooked up a few Dashboard buttons to control my 3D printer motion in the Z-axis. This allowed me to fine-tune the height of the carriage. I added buttons to remember two set heights A and B, and a toggle to automatically cycle between those positions.
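
Behind those dashboard buttons, the “Serial request” node only needs to send standard Marlin-style G-code strings. A hypothetical sequence for one press cycle (the heights and feed rate here are placeholders, not my actual settings):

```gcode
G90           ; absolute positioning
G1 Z25 F300   ; move to remembered height A
G1 Z20 F300   ; press down to remembered height B
M114          ; ask the controller to report current position
```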

I set my plastic bubble test subject down on the platform and used my flow to make the machine repeatedly press down on the bubble. Pressure reported by the load cell is sent to a Node-RED chart to graph its change over time. I was more interested in the exercise than any real results, but the graph actually turned out to be interesting.

Compacting Load Cell Electronics

After being pleasantly surprised by the performance of a low cost strain gauge load cell built from a kit sold on Amazon, I decided it was worth the effort of making a more compact version of the circuit. Small enough so it can be installed on the Y-axis carriage of a Geeetech A10 chassis alongside the strain gauges being read.

First of all, the prototyping breadboard had to go. It is far too large and bulky, serves no purpose once the wiring scheme has been confirmed, and would actually be a source of failure if jump wires fall out. I don’t need the Arduino Nano mounted on that breadboard, either: it has two full rows of soldered pins which I won’t need. I could spend the time to desolder those pins, but it is much easier to pull a new unit out of its box, as they come without the pins. I can solder wires directly to the vias matching what I need for power, ground, data, and clock.

I did, however, need to desolder the four pins on the HX711 interface board, as they are no longer necessary. Once they were removed, I could put the Arduino Nano and the HX711 board side by side and run the four short wires between them.

Finally, I printed a small bracket whose only purpose is to hold the two PCBs together, removing any strain from the four wires connecting them. The idea is that I may want to explore different ways to mount this assembly, but I will always need the two boards next to each other. Thus the motivation for a separate bracket for actual mounting to the Y-axis carriage.

The Y-axis carriage clip didn’t work as well as I had hoped, but for the moment I’m not going to worry about redoing it. A little tape is enough for me to move on to the next step: feeding its data output to a computer system.

Surprising Precision and Consistency from Load Cell

I got far enough with my low cost load cell project to start receiving readings. It is advertised to measure up to 200kg, so I doubted it would be very precise when measuring the light weights I expect to be placed on my former 3D printer. I would not have been surprised if it consistently returned “less than 1kg” and no further. Every measurement instrument has an optimal range where it works best. Grossly exceeding that range can sometimes result in irreparable damage, but the situation usually isn’t as dire for going under. However, we shouldn’t expect very useful answers.

The test setup was far from helpful for this. I 3D-printed four rectangular blocks of plastic to hold the four strain gauges, and a thin sheet of acrylic was placed on top of them to act as the surface. It was crude, but like everything else in the setup, it was just to get an idea of feasibility.

I didn’t even worry very much about accuracy. The HX711 library has various capabilities for calibration and scaling, but I skipped all of that and just dumped the raw value without conversion to any actual units. This is all I need to start characterizing the behavior of this load cell. Plotting those values out on a graph, I was not surprised to see they were pretty noisy, as we are measuring analog values within a very narrow range. For the most part, though, the readings do stay within a certain band.
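
Those raw values are the HX711’s 24-bit two’s-complement conversion results, so the only math needed before plotting is a sign extension into a type the rest of the code can use. A minimal sketch of that step:

```cpp
#include <cstdint>

// Sign-extend the HX711's 24-bit two's-complement reading into a
// 32-bit signed integer.
int32_t hx711SignExtend(uint32_t raw24) {
  if (raw24 & 0x800000UL) {
    raw24 |= 0xFF000000UL;  // negative reading: fill the top byte with ones
  }
  return static_cast<int32_t>(raw24);
}
```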

I placed the small PCB ruler I used earlier on the surface, wondering if its addition would be lost in the noise. I was surprised to see its presence was clearly visible. This item weighs only a few grams and I did not expect it to be so clearly visible on a 200kg scale! Bringing my kitchen scale into the picture, I tried various household items to get a better feel for its sensitivity.

Empirically, weights must be at least 5 grams before they stand out from the noise, and differences should be at least 15 grams before they are reliable. This is not bad at all. However, I foresee a lot of challenges in trying to correlate raw ADC values to real units because of the sensor’s sensitivity to other factors. Strain gauge readings are affected by temperature, and this is especially noticeable in the middle of a heat wave. As my home heats up in the afternoon and cools down in the evening, the strain gauge average value moves in sync.

That problem might be solvable, but it’s only the first of many problems with this low cost load cell. I also observed that, at unpredictable times, the reading would be wildly (several orders of magnitude) out of range. I don’t have a good explanation, but I’m willing to tolerate it given the low price point. This is helped by how extreme those values are, making it a simple matter to ignore them.
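
Since the bogus readings are several orders of magnitude out of range, ignoring them can be as simple as a plausibility window around the last good reading. A hypothetical filter, with an arbitrary window size that would need tuning against real data:

```cpp
#include <cstdlib>

// Reject readings that jump implausibly far from the last good value.
// The window size is an assumption for illustration, not a measured figure.
bool isPlausible(long reading, long lastGood, long window = 50000) {
  return std::labs(reading - lastGood) <= window;
}
```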

Happy with the performance of the load cell, and satisfied that the problems I see up front are manageable, I proceeded to move the circuit off of a prototyping breadboard and onto something smaller and more permanent.

HX711 Library on Arduino Nano via PlatformIO

I’m building a strain gauge load cell kit that uses an HX711 chip, and found publicly available code to interface with an HX711 in the form of a PlatformIO project. This motivated me to investigate PlatformIO. Installation was straightforward from within Visual Studio Code. I brought up the extensions marketplace, searched for PlatformIO, clicked install, and a few minutes later I was ready to go. This was a very promising start.

But while I’ve found PlatformIO to largely live up to its advertised ease of use, there were a few bumps climbing the learning curve. I typed in the simple Arduino introductory tutorial to blink the on-board LED and hit “Build All”. That took almost half an hour as PlatformIO downloaded a whole bunch of tools and then executed them, even though they seemed completely irrelevant to my project.

After the excessively long procedure completed, I scrolled back to investigate what happened. I eventually figured out that building everything meant building my Arduino sketch for every piece of hardware PlatformIO supported for use with the Arduino framework. So it didn’t just build for the ATmega328P chip on my Arduino Nano, it also downloaded tools and built for SAMD-based Arduinos. Then downloaded and built for ESP32 Arduino. Then NXP, and so on and so forth.

And to add insult to injury, it didn’t even build for the specific Arduino I wanted to use. A little web sleuthing found this forum thread, where I learned I needed to add a board entry for an Arduino Nano with the “new” bootloader. But once I figured it out, I could build just for my board (taking only a few seconds this time) and upload for a blinking LED.
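
For reference, the entry amounts to a few lines in platformio.ini. The board ID spelling below is my recollection of PlatformIO’s identifier for the Nano with the “new” bootloader, so double-check it against current PlatformIO documentation:

```ini
; Build only for an Arduino Nano flashed with the "new" bootloader
[env:nanoatmega328new]
platform = atmelavr
board = nanoatmega328new
framework = arduino
```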

With that procedure figured out, I moved on to the HX711 project. Adding an entry for Arduino Nano with “new” bootloader, I was able to get it up and running to read values from my load cell.

There are a lot of other PlatformIO features I want to come back and explore later in more depth. The most exciting of these is debugger support, something sorely lacking in the Arduino IDE. It also has support for the ESP32, a dev board I want to spend more time exploring. Not just compile and upload, either, but infrastructure like unit testing and debugging, the latter as long as I have a JTAG adapter and don’t use the associated pins for something else.

But that is in the future. For now, this is enough of a detour into PlatformIO. With the HX711 talking to the Arduino, attention returns to the machine work surface project because I want to better understand all this data now flowing from the HX711.

HX711 Interface Library as Introduction to PlatformIO

Whenever the goal is to find something quick and easy to talk with a piece of hardware, the standard operating procedure is to search for an Arduino library for that hardware. Hence, after I soldered connectors for an HX711 board, my search landed at the page for an Arduino HX711 library.

There was, however, a minor twist: this Arduino library is not in the form of an INO sketch or a downloadable ZIP file for the “Libraries” folder of the standard Arduino IDE. It uses an alternative in the form of a PlatformIO project. Normally requiring a new piece of software would make me hesitate and maybe continue searching for an alternative, but PlatformIO had been on my to-do list for some time and I thought this was a good place to dip my toes in.

PlatformIO is available in several different forms; the most appealing for me is the extension for Visual Studio Code. I’ve already been using VSCode for a few projects, even a few Arduino projects! In a strictly workaround sort of way, that is. There have been a few instances where an Arduino project got too annoying to use in the limited Arduino IDE so I copied the source file into VSCode, did my work, then copied it back into Arduino IDE for compilation and upload.

With PlatformIO installed as a VSCode extension, I shouldn’t need to do that convoluted workaround. I can build and upload directly from within VSCode. That sounded really promising earlier, but not quite enough for me to pause my project the first time. Now that PlatformIO and I have crossed paths again, I’ll take a pause for a closer look.

Connecting HX711 Amplifier ADC Board

After I finished wiring the strain gauge array for a load cell, I pulled the bundled circuit board out of its anti-static bag. According to the product listing, this is built around a HX711 amplifier and analog-to-digital converter (ADC). All that information I read earlier about putting excitation voltages into a Wheatstone bridge to interpret small changes in strain gauge resistance? All that magic is done inside this chip.

The bundle included some classic 0.1″ pitch pins to solder to the circuit board, but I thought I had a better idea. I pulled out my JST-XH connector kit and used the six-position wire-to-board unit for my strain gauge array connection. JST-XH is polarized to help ensure I don’t plug it in backwards. However, it is bigger than plain unadorned headers so it didn’t fit with the surface mount components already on the board, requiring that I mount it to the flat backside instead.

I didn’t perform the same JST-XH replacement for the digital data connection, because I wanted the flexibility to use jumper wires to connect this board to something I can use to read data from a HX711. Looking around for software libraries online, I found a HX711 library for Arduino so I pulled out my prototyping breadboard with an Arduino Nano already on board. This is as good of a starting point as any.

Four jumper wires were needed: power, ground, data, and clock. The hardware is ready to go, so I switched gears to software and today’s little plot twist of PlatformIO.

Start Simple With Low Cost Load Cell

I now have ambition to give my project machine’s work surface the ability to act as a weight/pressure sensitive scale. The first stop, as always, is Wikipedia, which filled in the fundamental knowledge of a strain gauge load cell plus links to associated topics like the electronic design called a Wheatstone bridge used to read the values of such contraptions. Seeking a little more detail, a web search found a page on All About Circuits that clarified a few fuzzy points.
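
The key relationship from that reading: with excitation voltage V_ex across the bridge and the four gauge resistances R1 through R4, the bridge output under one common labeling convention is

```latex
V_{out} = V_{ex} \left( \frac{R_3}{R_3 + R_4} - \frac{R_2}{R_1 + R_2} \right)
```

When all four resistances are equal the output is zero; strain unbalances the bridge and produces the small voltage that the rest of the circuit must amplify.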

With a basic understanding of what such a project would entail, I headed over to Amazon to see my load cell options (*). I knew industrial-grade precision load cells can sell for thousands of dollars, but electronic bathroom scales are sold for less than $20 USD. With such a wide range I wasn’t sure what to expect to see for a hobbyist friendly load cell kit. The answer: under $10 USD! (*)

This price is low enough that I wasn’t tempted to try salvaging components from an electronic bathroom scale, so I bought that kit and went to work. The advertised weight range of up to 200 kilograms is appropriate for a bathroom scale. That is far more than I expect to (intentionally) put on my machine, but it is the cheapest option. I suspect the measurement granularity/precision would be pretty crude at the range I expect to encounter, but hey, it’s cheap and a good first step into this world of sensors.

Soldering the four strain gauges together into a Wheatstone bridge network wasn’t difficult, just tedious, and involved a lot of double-checking to ensure I’d connected the wires properly. But I was not able to verify the circuit with my multimeter. The strain gauges change resistance in response to load, but the change is far too small for my multimeter to pick up. All I could do was verify I hadn’t accidentally short-circuited anything or left an open circuit somewhere.

So while I prefer to verify a circuit before powering it on, in this particular case I didn’t have the right tools to do so. The cheapest option was to proceed with hooking it up to the companion circuit board and hope for the best.

(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

A Weight And Pressure Sensitive Work Surface

I’m thankful I was gifted a retired and incomplete Geeetech A10 3D Printer, and I regard the missing pieces as features not bugs. Their absence meant I’m more likely to think of interesting ideas beyond the world of 3D printing. This was certainly the case when I looked at the Y-axis carriage where a heated print bed used to be.

The machine will still need a work surface of some sort, and whatever it is will still need to be mounted, so I felt there was an opportunity here. Some of my first thoughts were to make it motorized so I could dynamically control its angle. And while a fancy auto bed leveling (or deliberately non-level) mechanism might be an interesting challenge, it was not bold enough of a step away from the shadow of its previous life as a 3D printer.

My thoughts then moved away from mechanical actuation output and towards some sort of sensing input. I have many ideas for a 3-axis motion control chassis, and the visual measurement device was just the first idea I tried. What kind of smarts are worth exploring for a flat horizontal surface?

Being a technology geek, my thoughts went immediately towards touchscreen monitors and how I might turn the whole surface into a touch pad that can report events and their X,Y coordinates. While this would indeed be exciting, a little research indicated this wouldn’t be very easy and I should scale down my ambitions for a first draft.

The answer came to me during my morning routine, stepping on to the bathroom scale: I can incorporate a strain gauge load cell like those inside electronic body scales. (Or postage scales, or kitchen scales…) This collapses the grand concept of a large touch pad reporting X,Y coordinates down to a single reading distributed across the entire area, but it should be an interesting starting point to have a work surface that can report weight or pressure upon it.

Old Machine Needs A Work Surface

My first prototype of a video-based measurement instrument was a bit of a bust, as I discovered my motion control precision was poor and my camera can’t resolve to the level of detail I wanted. But there’s another problem: I don’t have a working surface on this former 3D printer. For the initial test, I taped a ruler to the Y-axis carriage and that was enough to get some data, but the Y-axis carriage is not an adequate working surface given how it has protrusions from bolts holding Y-axis rollers and the ends of the Y-axis belt.

In a fully functioning Geeetech A10 3D printer, there is a heated print bed bolted to the Y-axis carriage. However, I got this machine in a partially disassembled state and I do not have that print bed or any of the associated hardware. Since I don’t intend to use this as a 3D printer, I don’t need a full-on replacement heated bed. So my replacement surface doesn’t need an electric heater. I don’t yet know if it’ll be useful to have adjustments to level the bed, it will likely depend on the precision required by whatever project happens to be at hand.

To take the next step, I need something relatively flat that is approximately the correct size. Digging through my graveyard of past projects, I found my candidate: the front panel from the first draft of my FreeNAS box project. Easily removed, as it was only taped in place. And since it had a rectangular slot cut in the middle already, it was easy to break off at the slot to form a roughly rectangular piece of acrylic at approximately the right size.

The pandemic has cut off my access to a laser cutter, so I could not cut holes for mounting bolts. Drilling brittle acrylic requires specialized drill bits; without them I risk shattering the piece. While contemplating alternative ways to support this work surface, I started thinking it would be cool if these supports could be more than just spacers. What might I do to make it a little smarter and more interesting?

Virtual Lunar Rovers May Help Sawppy Rovers

Over a year ago I hand-waved a grandiose wish that robots should become smarter to compensate for their imperfections instead of chasing perfection with ever more expensive hardware. This was primarily motivated by a period of disillusionment as I wanted to make use of work by robotics researchers only to find that their published software tends to require hardware orders of magnitude more expensive than what’s feasible for my Sawppy.

Since then, I’ve noticed imperfection is something that’s coming up more and more frequently. I had my eyes on the DARPA Subterranean Challenge (SubT) for focusing researcher attention towards rugged imperfect environments. They’ve also provided a very promising looking set of tutorials for working with the ROS-based SubT infrastructure. This is a great resource on my to-do list.

Another interesting potential that I wasn’t previously aware of is NASA Space Robotics Phase 2 competition. While phase 1 is a simulation of a humanoid robot on Mars, phase 2 is about simulated rovers on the moon. And just like SubT, there will be challenges with perception making sense of rugged environments and virtual robots trying to navigate their way through. Slippery uneven surfaces, unreliable wheel odometry, all the challenges Sawppy has to deal with in real life.

And good news, at least some of the participants in this challenge are neither big bucks corporations nor secretive “let me publish it first” researchers. One of them, Rud Merriam, is asking questions on ROS Discourse and, even more usefully for me, breaking down the field jargon into language outsiders can understand on his blog. If all goes well, there’ll be findings from this challenge useful for Sawppy rovers here on earth! This should be fun to watch.

First Project Prototype Has Poor Precision

I’ve been bringing pieces together to build a machine to take distance measurements visually, with the primary motivation of measuring dimensions of circuit board features. Mechanically the machine is the three-axis motion control of a retired 3D printer, with a webcam sitting where the print nozzle used to be. It is controlled from a PC attached via USB, running software that I wrote as an exercise to learn UWP development. Once I figured out enough of UWP layout engine, I could put some user interface controls and take the thing on its first test drive.

Verdict: The idea has promise, but this first draft implementation is a bucket of fail.

For the first test, I taped a Digi-Key PCB ruler onto the Y-axis carriage where the print bed used to be installed. The ruler has clearly labeled dimensions alongside features representative of circuit board components. The first and easiest test is to make sure my movement distances match the ruler distances, and this machine flunked its first test.

I have added a little circle in the middle of the camera field of view to serve as a reference. I placed that circle at the 10 cm mark and commanded a move of 1 cm along the negative X axis. I expected the little circle to sit above the 9 cm mark as a result, but it actually sat at roughly 8.95 cm: a travel of 1.05 cm, roughly 5% longer than commanded.

Camera control test: reference mark at 8.95 cm
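The error arithmetic behind that observation is simple; here's a quick sketch in Python using the measurement values described above:

```python
# Reference circle started at the 10 cm mark; after commanding a
# 1 cm move it landed at roughly the 8.95 cm mark.
commanded_cm = 1.0
start_mark_cm = 10.0
observed_mark_cm = 8.95

actual_cm = start_mark_cm - observed_mark_cm   # distance actually traveled
error_pct = (actual_cm - commanded_cm) / commanded_cm * 100

print(f"moved {actual_cm:.2f} cm, {error_pct:.0f}% longer than commanded")
# → moved 1.05 cm, 5% longer than commanded
```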

My first hypothesis was that the camera's USB cable was tugging on the camera as the print carriage moved, changing the viewing angle. The camera is, after all, merely held by tape on this first draft. So I repeated the experiment along the Y axis, which does not move the camera carriage and would eliminate flexible tape as a variable. Again I saw a 5-6% overshoot.

When two measurement tools disagree, bring in a third opinion to break the tie. I pulled out my digital caliper and measured the ruler markings, and they matched, indicating the problem is indeed with the printer mechanicals. For whatever reason, this motion control carriage is moving further than commanded. Perhaps the belts had stretched out? Whatever the reason, this behavior could very well be why the printer was retired. I think I can compensate by changing the steps-per-millimeter setting in printer firmware; all I need is a precise measurement of actual movement.
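If the overshoot really is in the mechanicals, the steps-per-millimeter compensation is a simple ratio. A sketch in Python, assuming a hypothetical current value of 80 steps/mm (I'd read the real number out of the printer firmware before changing it):

```python
def corrected_steps_per_mm(current_steps_per_mm, commanded_mm, measured_mm):
    """Scale the firmware steps-per-millimeter so a commanded move
    produces the commanded distance: fewer steps if it overshoots."""
    return current_steps_per_mm * commanded_mm / measured_mm

# Hypothetical example: firmware at 80 steps/mm, 10 mm commanded,
# 10.5 mm actually traveled (the roughly 5% overshoot seen above).
new_value = corrected_steps_per_mm(80.0, 10.0, 10.5)
print(f"{new_value:.2f} steps/mm")  # → 76.19 steps/mm
```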

Which brings up the other half of my problem: I can only get plus or minus half a millimeter precision with this webcam configuration. I can’t bring the camera any closer to the workpiece, because this webcam’s autofocus fails to focus at such close ranges.

I see two paths forward to address the optical precision shortcomings:

  1. Use another camera, such as a “USB microscope”. Most of the cheap ones are basically the electronics of a webcam paired with optics designed for short distance focus.
  2. Keep the camera but augment the optics with a clip-on macro lens. These are sold with the intent they let cell phone cameras focus on objects closer than they normally could.

Either should allow me to measure more accurately and tune the steps-per-millimeter value. While I contemplate my options, I went back into my UWP application to play around with a few other features.

Quick Notes on UWP Layout

Layout is another big can of worms in UWP application development, but having spent far too much time on keyboard navigation I’m postponing the big lessons until later. Today I’m going to learn just enough to get what I want on screen.

The first is controlling that shape I drew earlier. By default, simple shapes (Ellipse, Rectangle, etc.) dynamically adjust to layout size, but there doesn't seem to be a way to make complex shapes similarly dynamic. They are specified via X,Y coordinates, and I didn't find a way to say "X is 25% of ActualWidth" in markup.

The fallback is to listen to the SizeChanged event and recalculate coordinates based on ActualHeight and ActualWidth. That got my little camera preview overlay graphics on screen, but that's only the start. I wanted to draw other on-screen directional controls to augment the arrow keys.
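The recalculation itself is just proportional scaling. Here's the idea sketched in Python (the real handler lives in C# inside the SizeChanged callback; the function name and coordinates here are my own invention):

```python
def scale_marks(normalized_points, actual_width, actual_height):
    """Map coordinates expressed as fractions of the control's size
    (e.g. 0.25 means "25% of ActualWidth") into pixel coordinates,
    the way a SizeChanged handler would recompute shape geometry."""
    return [(x * actual_width, y * actual_height)
            for (x, y) in normalized_points]

# Center reference mark plus a mark a quarter of the way across,
# for a 640x480 preview area.
print(scale_marks([(0.5, 0.5), (0.25, 0.5)], 640, 480))
# → [(320.0, 240.0), (160.0, 240.0)]
```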

While working to position the shape and on-screen controls, I ran into a frustrating problem: there are two different ways to specify an on-screen color for rendering. We have Windows.UI.Color and then we have an entirely different System.Drawing.Color. I'm sure there's a good explanation in the history here, but right now it's just annoying to have the name "Color" be ambiguous.

Rendering the user controls outside of the camera got a tiny bit trickier, because now I have to track what happens when, including when an element is loaded so I can kick off other events relating to serial communication. Thanks to this Stack Overflow thread, I learned there are three different candidates depending on need: Loaded, LayoutUpdated, or SizeChanged. People are constantly surprised when one of them does something unexpected; it seems none of the three does exactly what people want. This is just one of many parts making UWP layout confusing.

When I added controls by hand, they were fully adjacent to each other with no space in between. I knew I needed to specify either a margin or a padding, but couldn't figure out which is which. I still can't… they do slightly different things under different circumstances. To ensure elements inside a grid don't butt up against each other, I have to use Padding. To ensure the video preview doesn't butt up against the edge of the window frame, I have to use Margin. I have yet to build an intuition for which is the right tool for the job, which I hope will come with practice.

But never mind little layout details… I have my G-code controlled camera, and I want to know if it works like I wanted. (Spoiler: it didn’t.)

Quick Notes on UWP Drawing

Because I wanted to handle keyboard events, I created a UserControl that packages the CaptureElement displaying the camera preview. Doing this allowed an easy solution to another problem I foresaw but didn't immediately know how to solve: how do I draw reference marks over the camera preview? I'd definitely need something to mark the center, maybe additional marks for horizontal/vertical alignment, and, if I'm ambitious, an on-screen ruler to measure distance.

With a UserControl, drawing these things became trivial: I can include graphical drawing elements as peers of the CaptureElement in my UserControl template, and we are off to the races.

Or so I thought. It is more accurate to say I was off to an entirely different set of problems. The first was making marks legible regardless of the camera image. That means I can't just use a bright color, because that would blend into a bright background. Likewise, a dark color would be lost in a dark background. What I need is a combination of high-contrast colors to ensure the marks are visible regardless of background characteristics. I had thought: easy! Draw two shapes with different stroke thicknesses. I first drew a rectangle with a thicker stroke, like this blue rectangle:


I then drew a yellow rectangle with half the stroke thickness, expecting it to sit in the center of the blue stroke. But it didn't! The yellow stroke covered the outer half of the blue stroke, leaving only the inner half visible, instead of my expected yellow line with blue on either side. Even though this was unexpected, it was still acceptable, because it gave me the contrasting colors I wanted.


This only barely scratches the surface of UWP drawing capability, but I have what I need for today. I’ve spent far too much time on UWP keyboard navigation and I’m eager to move forward to make more progress. Drawing a single screen element is fun, but to be useful they need to coexist with other elements, which means layout comes into the picture.

User Interface Taking Control

Once I finally figured out that keyboard events require objects derived from UWP’s Control class, the rest was pretty easy. UWP has a large library of common controls to draw from, but none really fit what I’m trying to present to the user.

What came closest is ScrollViewer, designed to present information that does not fit on screen by letting the user scroll around the full extents, much as my camera on the 3D printer carriage can move around the printer's XY extents. However, the rendering mechanism is different. ScrollViewer is designed to let me drop in a large piece of content (for example, a very large or high-resolution image) and handle the rest independently. That's not what I have here: in order for scrolling to be mirrored in physical motion of the 3D printer carriage, I need to be involved in the process.

Lacking a suitable fit in the list of stock controls, I proceeded to build a simple custom control (based on the UserControl class) that is a composite of other existing elements, starting with the CaptureElement displaying the webcam preview. And unlike on CaptureElement, the OnKeyDown and OnKeyUp event handlers do get called when defined on a UserControl. We are in business!

Once those handlers are called, I have the option to handle the event, in this case translating directional intent into G-code to be sent to the 3D printer. My code's behavior fits under the umbrella of "inner navigation", where a control can take over keyboard navigation semantics inside its scope.

I also have the ability to define special keys inside my scope, called accelerator ([Control] + [some key]) or access ([Alt] + [some key]) keys. I won't worry about them for this first pass, but they can be very powerful when well designed and a huge productivity booster for power users. They also have a big role in making an application keyboard accessible. While that's a very important topic for retail software, it's one of the details I can afford to push aside for a quick prototype. It'll be interesting to dive in sometime in the future; it's a whole topic in and of itself. There's literally a book on it!

In the meantime, I have a custom UserControl and I want to draw some of my own graphics on screen.

My Problem Was One Of Control

For my computer-controlled camera project, I thought it would be good to let the user control position via arrow keys on the keyboard. My quick-and-dirty first attempt failed, so I dived into UWP documentation. After spending a lot of time reading about the nuts and bolts of keyboard navigation, I finally found my problem, and it's one of those cases where the answer has been in my face the whole time.

When my key press event handlers failed to trigger, the first page I went to was the Keyboard Events page. This page has a lot of information front and center about the eligibility requirements to receive keyboard events; here's an excerpt from the page:

For a control to receive input focus, it must be enabled, visible, and have IsTabStop and HitTestVisible property values of true. This is the default state for most controls.

My blindness came from reading the word "control" in the general sense of a visual element the user interacts with. That's why I kept overlooking the lesson it was trying to teach me: if I want keyboard events, I have to use something derived from the UWP Control object. In other words, not "control" in the generic sense, but "Control" as a specific proper name in the object hierarchy. I would have been better informed of the distinction if they had capitalized Control, or linked to the page for the formal Control object, or done any of a number of other things to differentiate it as a specific term rather than a generic word. But for whatever reason they chose not to, and I failed to comprehend the significance of the word again and again. It wasn't until I was on the Keyboard Accessibility page that I saw this requirement clearly and very specifically spelled out:

Only classes that derive from Control support focus and tab navigation.

The CaptureElement control (generic name) used in the UWP webcam example is not derived from Control (proper name), and that's why I had not been receiving keyboard events. Once I finally understood my mistake, it was easy to fix.