One Thousand Posts

I just learned WordPress puts up a special milestone notification when a blog site has one thousand posts, because I triggered that notification with yesterday’s post about vaguely attainable somewhat humanoid robots.

NewScrewdriver 1000 posts

It’s pretty common for a personal blog to have only a handful of posts — sometimes just one — before it goes dormant. My first attempt ended after less than a dozen posts. The second attempt had more than a dozen, but not by much. Fortunately for me, they have stopped taunting me: both were erased through no action of my own, as both were hosted on small startup blog hosting services that have since gone out of business. Maybe fragments have survived in Google caches and whatnot, but I haven’t felt inclined to go searching for them.

I had no reason to expect the results would be any different with this third attempt, so again I started with the free tier of service. Except this time I started with a more established host: WordPress.com, the commercial hosting counterpart whose revenue helps support the free open-source blog software available from WordPress.org. When I felt that I had found my groove and could keep this going, I upgraded to the “Personal” plan so I could have my own domain and remove WordPress ads.

So far I have felt no need to upgrade beyond the Personal tier. Most of the higher tier features are tailored to people trying to make money in one way or another, but I have no revenue goals for this blog. This is mostly documentation for my own purposes, and if my notes are useful for someone else, that’s just a happy coincidence. One way I’ve described this site to friends is “a diary with zero expectation of privacy”. My content is not tailored to maximize traffic and, in fact, is the wrong medium to do so: consumer traffic (and corresponding ad revenue) is migrating towards video and away from text.

But I want text. I like to read and learn at my own pace. While I’m glad YouTube (and other video sites) have implemented the ability to adjust the playback speed of a video, having to go and change that setting is still a hassle. And finally: as documentation for myself, I want to be able to search through my notes, and that’s a lot easier with text than video.

But there are some things more suited to a video than the written word, and for them I’ve shot video footage and created a New Screwdriver YouTube channel to host them. Right now I see the YouTube channel as roughly analogous to my first few aborted blogging efforts: an exploration into the medium looking for a way to make this work. Hopefully it won’t go dormant, but the YouTube channel certainly won’t be my focus for the foreseeable future.

Attainable(ish) Humanoid(ish) Robots

There are lots of people who are interested in robotics software but lack the resources or the interest to build their own robot from scratch. There is no shortage of robot hardware platforms that love software attention, but most of them are focused on mechanical functionality and thus are shaped like tools. The field is much smaller when we want robots with at least a vaguely humanoid appearance.

Hobbyists need not apply for NASA’s R5 Valkyrie robot, with its several-million-dollar value. Most of Valkyrie’s fellow competitors in the 2013 DARPA Robotics Challenge were similarly custom built for the competition and unavailable to anyone else. One of the exceptions is the ThorMang chassis, built by the same people behind the Dynamixel AX-12A serial bus servos. Naturally, the motors of a ThorMang are their highest-end components, at the opposite end of the lineup from their entry-level AX-12A. Not surprisingly, it falls into the “please call for a quote” category of pricing, but hey, at least it’s theoretically possible to buy one.

The junior members of that lineup are their OP2 and OP3 robots, which appear to be roughly the size of a toddler and use smaller and more affordable motors. Handling computation inside the chest is an Intel NUC, which might be the closest we get to a powerful commodity robot brain computer. Still, “affordable” here is a five-digit proposition at $11,000 USD or so.

There are multiple offerings at this price level, using servos that are similar to the AX-12A. However they all appear roughly equally crude. For something more refined, we’d have to step up to something like a NAO robot. It seems like a modern-day QRIO but actually available for purchase for around $16,000.

A large part of the cost is the difficulty of building a self-balancing, self-contained, two-legged robot. Legs are a big part of a humanoid appearance, but their cost is out of proportion to the parts that make a robot well suited to human interaction. Giving up legged locomotion for a wheeled platform allows something far cheaper that still has an expressive head and face plus two arms.

The people who make the NAO also make the Pepper. Roughly the size of a human child, it still has a fully expressive head and arms but uses a wheeled base platform. The company seems to be trying to find niches outside of education and development, but they all seem rather far-fetched to me. Or at least, I don’t see enough to justify the cost of ownership at roughly $30,000.

Simplifying further, we can have smaller robots on wheels that still have an expressive head but limited arms. Out of the offerings in this arena, Misty II is the most developer-friendly platform I’m aware of. Since my first introduction to Misty II, the company has launched several variants including a cost-reduced basic version that lacks the 3D depth camera. (Similar to a Microsoft Kinect.) Misty is still not cheap at a starting price of $2,000, but not so bad in the context of all these other robots.

(Image source: Misty Robotics)

NASA R5 Valkyrie Humanoid Robot

When I was researching my Hackaday post about the DARPA Subterranean Challenge, I learned that there’s a virtual track to the competition using just digital robots inside Gazebo. I also learned it was not the first time a virtual competition with prize money took place within Gazebo: there was also the NASA Space Robotics Challenge, where competitors submitted software to control a humanoid robot in a Mars habitat.

What I didn’t know at the time was that the virtual humanoid robot was actually based on a physical robot, the NASA R5. Also called Valkyrie, this robot is the size of a full human adult, with a 7-digit price tag putting it quite far out of my reach. This robot was originally built for the 2013 DARPA Robotics Challenge. The robot had no shortage of ingenious mechanical design (I like the pair of series elastic actuators for that ankle joint), and it was not lacking in sophisticated sensors or electric power. What it lacked was the software to tie them all together, and an unfortunate network configuration issue hampered performance on the actual day of the DARPA competition.

After the competition, Valkyrie visited several research institutions interested in advancing the state of the art in humanoid robotics. I assume some of that research ended up as published papers, though I have not yet gone looking for them. Their experience likely fed into how the NASA Space Robotics Challenge was structured.

That competition was where Valkyrie got its next round of fame, albeit in a digital form inside Gazebo. Competitors were given a simulation environment to perform the list of tasks required. Using a robot simulator meant people didn’t need a huge budget and a machine shop to build robots to participate. NASA said the intent was to open up the field to nontraditional sources, to welcome new ideas by new thinkers they termed “citizen inventors”. This proved to be a valid approach, as the winner was a one-person team.

As for the physical robot, I found a code repository seemingly created by NASA to support research institutions that have borrowed Valkyrie, but it feels rather incomplete and has not been updated in several years. Perhaps Valkyrie has been retired and there’s a successor (R6?) underway? A writer at IEEE Spectrum noticed a job listing that implied as such.

(Image source: NASA)

Vertical Stand for Asus Router

After almost 7 years of reliable service, my Asus RT-N66R started failing. I bought an Asus RT-AC66U B1 as a replacement. The two routers look nearly identical from the outside, but the new one is actually slightly larger, so it would not fit in exactly the same place. Which was fine, because I felt maybe my previous placement didn’t have enough ventilation, and that contributed to the old router’s demise.

For better space utilization, I wanted the router to stand vertically. But in the interest of providing more cooling, I didn’t want it to be wall-mounted against an airflow-constricting surface. Making a vertical stand became a quick-and-dirty design and 3D printing project.

As soon as it started printing I realized I had overlooked something important: the base of the stand is too thin for proper print bed adhesion. This was compounded by the fact that it sat near the print bed corners, which tend to be a little cooler than the center of the bed. A few layers into the print, one corner started to lift as expected. Looking at the design, I guessed a base with a lifted edge would still be sufficient, so I decided to let the print continue rather than abort it and waste the filament.

I was rather surprised at how far it continued to lift! I thought after a few millimeters there would have been enough plastic to hold things rigid, and that expectation was true for one corner. (Left side in the picture below.) But the other corner just kept lifting and lifting, even starting to peel the main body off the bed. I was starting to get worried the whole thing would pop off. Fortunately it finally stabilized after lifting a little over 21mm.

Router stand bed lift

This was outside my experience, as I usually abort a print before the lift got nearly that bad. But my original guess was correct: the stand worked just fine even with rear corners asymmetrically lifted from the print bed. What I have in hand is good enough for my purposes so I’ll use it as-is, but the public Onshape document is here if anyone wants to evolve this design to make it less prone to lifting.

Window Shopping: ElectronJS

The Universal Windows Platform allows Windows application developers to create UI that can dynamically adapt to different screen sizes and resolutions, as well as adapting to different input methods like mouse vs. touchscreen. The selling point is to make it as easy and robust as a web page.

So… why not have a web page? Web developers were the pioneers in solving these problems, and we might want to adapt existing solutions instead of Microsoft’s effort to replicate them on Windows. But a web page has limitations relative to native applications, and hardware access is definitely one such category. (For USB specifically, there is WebUSB, but that is not a general hardware access solution.)

Thus developers familiar with web technology occasionally have a need to build platform-native applications. Some of them decided to build their own native application framework to support web-style interfaces across multiple platforms. This is why we have Electron. (Sometimes called ElectronJS to differentiate it from its namesake.)

All the major x86_64 operating systems are supported: Windows, MacOS, and Linux are first-tier platforms. There’s no fundamental reason Electron won’t work elsewhere, but apparently users need to be prepared to deal with various headaches to run it on platforms like a Raspberry Pi. And that’s just getting it to run; it doesn’t even touch on the most interesting part of running on a Raspberry Pi: its GPIO pins.

As with UWP, given the graphical capabilities of modern websites, I have no doubt I can display arbitrary data visualizations under Electron. My favorite demo of what modern WebGL is capable of is this fluid dynamics simulation.

The attention then turns to serial communication, and a web search quickly pointed me to the electron-serialport GitHub repo. At first glance this looks promising, though I have to be careful when building it into an Electron app. The tricky part is that this serial support is native code, and must be compiled to match the Node version embedded in a particular release of Electron. It appears the tool electron-rebuild can take care of this particular case. However, it sets the expectation that any Electron app dealing with hardware would likely also require a native code component.

If I ever need to build a graphically dynamic application that needs to run across multiple operating systems, plus hardware access that is restricted to native applications, I’ll come back and take a closer look at Electron.

Window Shopping: Universal Windows Platform Fluent Design

Looking over National Instruments’ Measurement Studio reinforced the possibility that there really isn’t anything particularly special about what I want to do for a computer front-end to control my electronics projects. I am confident that whatever I want to do in such a piece of software, I can put it in a Windows application.

The only question is what kind of trade-offs are involved for different approaches, because there is certainly no shortage of options. There have been many application frameworks over the long history of Windows. I criticised LabWindows for faithfully following the style of an older generation of Windows applications and failing to keep updated since. So if I’m so keen on the latest flashy gizmo, I might as well look over the latest in Windows application development: the Universal Windows Platform.

People not familiar with Microsoft platform branding might get unduly excited about “Universal” in the name, as it would be amazing if Microsoft released a platform that worked across all operating systems. The next word dispels that fantasy: “Universal Windows” just means across multiple Microsoft platforms: PC, Xbox, and HoloLens. UWP was also going to cover phones as well, but, well, you know.

Given the reduction in scope and the lack of adoption, some critics are calling UWP a dead end. History will show if they are right or not. However that shakes out, I do like Fluent Design that was launched alongside UWP. A similar but competitive offering to Google’s Material Design, I think they both have potential for building some really good user interactivity.

Given the graphical capabilities, I’m not worried about displaying my own data visualizations. But given UWP’s intent to be compatible across different Windows hardware platforms, I am worried about my ability to communicate with my own custom built hardware. If something is difficult to rationalize into a standard API across PC, Xbox, and HoloLens, it might not be supported at all.

Fortunately that worry is unfounded. There is a UWP section of the API for serial communication which I expect to work for USB-to-serial converters. Surprisingly, it actually went beyond that: there’s also an API for general USB communication even with devices lacking standard Windows USB support. If this is flexible enough to interface arbitrary USB hardware other than USB-to-serial converters, it has a great deal of potential.

The downside, of course, is that UWP would be limited to Windows PCs and exclude Apple Macintosh and Linux computers. If the objective is to build a graphically rich and dynamically adaptable user interface across multiple desktop application platforms (not just Windows) we have to use something else.

A Quick Look At NI Measurement Studio

While digging through National Instruments online documentation to learn about LabVIEW and LabWindows/CVI, I also came across something called Measurement Studio. This trio of products makes up their category of Programming Environments for Electronic Test and Instrumentation. Since I’ve looked at two out of three, I might as well look at the whole set and jot down some notes.

Immediately we see a difference in the product description. Measurement Studio is not a standalone application, but an extension to Microsoft Visual Studio. By doing so, National Instruments takes a step back and allows Microsoft Visual Studio to handle most of the common overhead of writing an application, stepping in only when necessary to deliver functionality valuable to their target market. What are these functions? The product page lists three bullet points:

  • Connect to Any Hardware – support for the electronics equipment industry’s standard communication protocols: GPIB, VISA, etc.
  • Engineering UI Controls – on-screen representation of tasks an electronics engineer would want to perform.
  • Advanced Analysis Libraries – data processing capabilities valuable to electronics engineers.

Basically, all the parts of LabVIEW and LabWindows/CVI that I did not care about for my own projects! Thus if I build a computer control application in Microsoft Visual Studio, I’m likely to just use Visual Studio by itself without the Measurement Studio extension. I am not quite the target market for LabVIEW or LabWindows, and I am completely the wrong market for Measurement Studio.

Even if I needed Measurement Studio for some reason, the price of admission is steep. Because Measurement Studio is not compatible with the free Community Edition of Visual Studio, developing with Measurement Studio requires buying a license for a paid tier of Microsoft Visual Studio in addition to the license for Measurement Studio.

And finally, it has been noted that the National Instruments products require low level Win32 API access, which prevents them from being part of the new generation of Windows apps that can be distributed via the Microsoft Store. These newer apps promise a better installation and removal experience, automatic updates, and better isolation from each other to avoid incompatibilities like “DLL Hell”. None of those benefits are available if an application pulls in National Instruments software components, which is a pity.

Earlier I said “if I build a computer control application in Microsoft Visual Studio, I’ll just use Visual Studio by itself without the Measurement Studio extension” which got me thinking: that’s a good point! What if I went ahead and wrote a standard Windows application with Visual Studio?

Digging Further Into LabWindows/CVI

Following initial success of RS-232 serial communication in a LabWindows/CVI sample program, I dived deeper into the documentation. This RS-232 library accesses the serial port at a very low level. On the good side, it allows me to communicate with non-VISA peripherals like a consumer 3D printer. On the bad side, it means I’ll be responsible for all the overhead of running a serial port.

The textbook solution is to leave the main program thread to maintain a responsive UI, and spin up another thread to keep an eye on the serial port so we know when data comes in. The good news here is that the LabWindows/CVI help files say the RS-232 library code is thread safe; the bad news is that I’m responsible for thread management myself. I failed to find much in the help files, but I did find something online for LabWindows/CVI multi-threading. Not super easy to use, but powerful enough to handle the scenario. I can probably make this work.
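As a sketch of that textbook pattern, here is a minimal stdlib-only Python analogue. Everything here is illustrative: `FakeSerial` is my stand-in for a real port handle, and a LabWindows/CVI program would use its RS-232 library and threading functions instead.

```python
# Reader-thread pattern: a background thread blocks on the (fake) serial
# port and hands complete lines to the main thread through a queue, so
# the main/UI thread never blocks on I/O.
import io
import queue
import threading

class FakeSerial(io.BytesIO):
    """Stand-in for a serial port; readline() acts like a blocking read."""
    pass

def reader_loop(port, inbox):
    # Runs on the background thread.
    while True:
        line = port.readline()
        if not line:           # empty read: port closed / end of stream
            inbox.put(None)    # sentinel tells the main thread we're done
            return
        inbox.put(line.decode().strip())

port = FakeSerial(b"ok T:200\nok X:0 Y:0\n")
inbox = queue.Queue()
t = threading.Thread(target=reader_loop, args=(port, inbox), daemon=True)
t.start()
t.join()

# The "UI thread" drains the queue at its leisure, staying responsive.
received = []
while True:
    item = inbox.get()
    if item is None:
        break
    received.append(item)
print(received)
```

The queue is the important part: it is the thread-safe hand-off point, so the two threads never touch the same buffer directly.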

Earlier I noted that LabWindows/CVI design seems to reflect the state of the art about ten years ago and not advanced since. This was most apparent in the visual appearance of both the tool itself and of the programs it generated. Perhaps the target paying audience won’t put much emphasis on visual design, but I like to put in some effort in my own projects.

Which is why it really pained me when I realized the layout of a LabWindows/CVI program is fixed. Once the controls are laid out in the drag-and-drop tool, that’s it, forever. Maximizing the window will only make the window larger; all the controls stay the same and we just get more blank space. I searched for an option to scale windows and found this article in National Instruments support, but it only meant scaling in the literal sense: when this option is used and I maximize a window, all the controls still keep the same layout, they just get proportionally larger. There’s no easy way to take advantage of additional screen real estate in a productive way.

This means a default LabWindows/CVI program will be unable to adapt to a screen with different aspect ratio, or be stacked side-by-side with another window, or any of the dynamic layout capabilities I’ve come to expect of applications today. This makes me sad, because the low-level capabilities are quite promising. But due to the age of the design and the high cost, I’m likely to look elsewhere for my own projects. But before I go, a quick look at one other National Instruments product: Measurement Studio.

LabWindows/CVI Serial Communication Test

Once I was done with LabWindows’ Hello World tour, it was time for some independent study, focused on fields I’m personally interested in. Top of the list was serial port communication. Researching it ahead of time indicated it was capable of arbitrary protocols. Was that correct? I dived into the RS-232 API to find out.

Before we can open a serial port for communication, we must first find it. And the LabWindows/CVI RS-232 library function for enumerating serial ports is… nothing. There isn’t one. A search on user forums indicates this is the consensus: if someone wants to enumerate serial ports, they have to go straight to the underlying Win32 API.

Puzzled at how a program is supposed to know which COM port to open without an enumeration API, I went into the sample applications directory and found a generic serial terminal program. How did they solve this problem? They did not: they punted it to the user. There is a slider control for the user to select the COM port to open. If the user doesn’t know which device is mapped to which COM port, it is not LabWindows’ problem. So much for user-friendliness.
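For reference, the Win32 fallback the forums point at amounts to reading the `HARDWARE\DEVICEMAP\SERIALCOMM` registry key, which lists the active COM ports. Here is a hedged Python sketch of that idea; the helper names are mine, the registry walk only runs on Windows, and a LabWindows program would make the equivalent registry calls from C.

```python
# Enumerate serial ports the Win32 way: each value under
# HKLM\HARDWARE\DEVICEMAP\SERIALCOMM maps a device name to a port
# name like "COM3". The registry access is guarded by platform so the
# pure sorting helper works anywhere.
import sys

def sort_com_ports(names):
    """Sort 'COMn' names numerically, so COM10 comes after COM9."""
    return sorted(names, key=lambda n: int(n[3:]))

def enumerate_com_ports():
    if sys.platform != "win32":
        return []            # nothing to enumerate off-Windows
    import winreg
    ports = []
    key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                         r"HARDWARE\DEVICEMAP\SERIALCOMM")
    try:
        i = 0
        while True:
            _name, value, _type = winreg.EnumValue(key, i)
            ports.append(value)   # e.g. "COM3"
            i += 1
    except OSError:
        pass  # EnumValue raises once we run out of entries
    finally:
        winreg.CloseKey(key)
    return sort_com_ports(ports)

print(sort_com_ports(["COM10", "COM3", "COM9"]))
```

A sample program built on this could populate a drop-down list of real port names instead of punting a bare numeric slider to the user.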

I had a 3D printer handy for experimentation, so I tried to use the sample program to send some Marlin G-code commands. The first obstacle was baud rate: USB serial communication can handle much faster speeds than old-school RS-232, so my printer defaults to 250,000 baud. The sample program’s baud selection control only went up to 57,600 baud, so the sample program had to be modified to add a 250,000 baud option. After that was done, everything worked: I could command the printer to home its axes, move to a position, etc.
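The exchange itself is simple enough to sketch: Marlin acknowledges each G-code line with a reply starting with “ok”, so a minimal sender is a send-and-wait loop. This illustrative Python sketch substitutes a fake printer object for the real 250,000 baud serial connection; all the names here are mine, not part of any LabWindows or Marlin API.

```python
# Minimal send-and-acknowledge loop for Marlin-style G-code.
class FakePrinter:
    """Stand-in for the serial connection to the printer."""
    def __init__(self):
        self.log = []
    def send(self, line):
        self.log.append(line)
    def recv(self):
        return "ok"   # a real printer replies "ok" when a command completes

def run_gcode(printer, commands):
    """Send each command, blocking until the printer acknowledges it."""
    for cmd in commands:
        printer.send(cmd + "\n")
        while not printer.recv().startswith("ok"):
            pass  # real code would also watch for error lines and time out
    return len(commands)

printer = FakePrinter()
# G28 homes the axes; G1 moves to a position at the given feed rate.
sent = run_gcode(printer, ["G28", "G1 X50 Y50 F3000"])
print(sent)
```

Waiting for each “ok” before sending the next line matters because the printer’s input buffer is tiny; flooding it with commands is a classic way to lose moves.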

First test: success! Time to dig deeper.

LabWindows/CVI Getting Started Guide

A quick look through the help files for LabWindows/CVI found it to be an interesting candidate for further investigation. It’s not exactly designed for my own project goals, but there is enough alignment to justify a closer look.

Download is accomplished through the National Instruments Package Manager. Once that was installed and updated, I could scroll through all of the National Instruments software and select LabWindows/CVI for installation. As is typical of development tools, it’s not just one package but many (~20) separate packages that get installed, ranging from the actual IDE to runtime library redistributable binaries.

Once up and running I find that my free trial period lasts only a week, but that’s fine as I only wanted to run through their Hello World tutorial in LabWindows/CVI Getting Started Guide (PDF). The tutorial walks through generating a simple application with a few buttons and a graphing control that displays a generated sine wave. I found the LabWindows/CVI interface to be familiar, with a strong resemblance to Microsoft Visual Studio which is probably not a complete coincidence. The code editor, file browser, debug features, and drag-and-drop UI editor are all features I’ve experienced before.

The biggest difference worth calling out is the UI-based tool “Function Panel” for generating library calls. While library calls can be typed up directly in text like any other C API, there’s the option to do it with a visual representation. The function panel is a dialog box that has a description of the function and all its parameters listed in text boxes that the developer can fill in. When applicable, the panel also shows an example of the resulting configuration. Once we are satisfied with a particular setup, selecting “Code”/”Insert Function Call” puts all parameters in their proper order in C source code. It’s a nifty way to go beyond a help page of text, making it a distinct point of improvement over the Visual Studio I knew.

Not the modern Microsoft Visual Studio, though; more like the Visual Studio of many years ago. The dated visual appearance of the tool itself is consistent with the old appearance of the available user controls. They are also consistent with the documentation, as that Getting Started PDF was dated October 2010 and I couldn’t find anything more recent. The latest edition of the more detailed LabWindows/CVI Programmer’s Reference Manual (PDF) is even older, at June 2003.

All of these data points make LabWindows appear to be a product of an earlier generation. But never mind the age – how well does it work?

Window Shopping LabWindows/CVI

I’ve taken a quick look over Keysight VEE and LabVIEW, both tools that present software development in a format that resembles physical components and wires: software modules are virtual instruments, data flow are virtual wires. This is very powerful for expressing certain problem domains and naturally imposes a structure. From a software perspective, explicit description of data flow also makes it easier to take advantage of parallel execution possible on modern multicore processors.

But imposing certain structures also make it hard to venture off the beaten path, which is why attention now turns to LabVIEW’s stablemate, LabWindows/CVI. They both offer access to industry standard communication protocols plus data analysis and visualization tools, but the data flow and program structure is entirely different. Instead of LabVIEW’s visual “G” language, LabWindows/CVI uses ANSI C to connect all its components and control flow of data and execution. I am optimistic it will be more aligned with my software experience.

Like LabVIEW, the program help files for LabWindows/CVI is also available for download and perusal. Things look fairly promising at first glance.

I found a serial communication API that can read and write raw bytes under:

  • Library Reference
    • RS-232 Library
      • function tree

For user display, I found something that resembles LabVIEW’s “2D Picture Control” here called a “Canvas Control”. An overview of drawing with Canvas Control’s basic drawing primitives can be found under:

  • Library Reference
    • User Interface Library
      • Controls
        • Control Types
          • Canvas Controls
            • Programming with Canvas Controls

I’m encouraged by what I found looking through LabWindows/CVI help files, enough to download the actual development tool and get hands-on with it.

Window Shopping: LabVIEW 2019

After taking a quick look over Keysight VEE, I switched focus to LabVIEW by National Instruments. I don’t know how directly these two products compete in the broader market, but I do know they have some overlap relating to instrument control. I had some exposure to LabVIEW many years ago thanks to LEGO Mindstorms, which used a version of LabVIEW for programming the NXT brick. Back then the Mindstorms-specific version was very closely guarded and, when I lost track of my CD-ROM, I was out of luck because neither NI nor LEGO made it available for download. Thankfully that has since changed, and the Mindstorms flavor of LabVIEW is now available for download.

But I’m not focused on LEGO right now; today’s aim is to see how I might fulfill my general computer control goals with this tool. For that information I was thankful National Instruments made the help files for LabVIEW available for download, so I could investigate without downloading and installing the full tool suite. It took a bit of hunting around to find them, though: the download page was titled LabVIEW 2018, but it had a download link for the 2019 help files.

I found a help page “Serial Port Communication” under the section:

  • Controlling Instruments
    • Types of Instruments

And it assumes the user would only be controlling devices that communicate with the VISA protocol, not general serial communication. There was more serial communication information in the section:

  • VISA Resource
    • I/O Session
      • Serial Instr

There’s also an online tutorial for instrument communication. This page has a flowchart that implied there’s a “Direct I/O” mode we can fall back to if all else fails, but I found no mention of how to perform this direct I/O in the help files.

The graphics rendering side was more straightforward. There’s no mention of ActiveX control here, but under:

  • Fundamentals
    • Graphs and Charts
      • Graphics and Sound VIs

There are multiple pages of information for a “2D Picture Control” with drawing primitives like points, lines, arcs, etc. Details on this drawing API are found under:

  • VI and Function Reference
    • Programming VIs and Functions
      • Graphics & Sound VIs
        • Picture Plot VIs

However, it’s not clear this functionality scales to complex drawings with thousands (or more) of primitives. It certainly wouldn’t be the first time I used an API that stumbled as the data size grew.

So the drawing side looks workable pending a question mark on how well it scales, but the serial communication side is blocked. Until I find a way to perform that mystical direct I/O, I’m going to set LabVIEW aside and look at its sibling LabWindows/CVI.

Window Shopping: Keysight VEE Custom Data Display

Looking over Keysight VEE’s support for device communication, I found there is only support for a limited subset of USB serial communication patterns. And even for the supported transaction model, it seems to be quite labor intensive to craft. It left me with the impression venturing outside VEE’s supported list of equipment is something to be avoided.

Attention then turned to VEE’s support for arbitrary display of data. Like its competitors in the space of test instrumentation software, there is an extensive library for data analysis common to the problem domain. This is useful for their paying customers, but again quite restrictive if we want to venture outside their supported list.

As far as I can tell by just reading their Advanced Techniques PDF, the method to add a custom data visualization component is to create an ActiveX control. This is a technology I haven’t thought about in years! I first learned of it in the context of Microsoft Visual Basic decades ago, where people could drag and drop UI components to build their application. Each of these visual components were built with technology that eventually became named ActiveX controls. This technology is so old not even Microsoft is investing in it now. They have moved on, giving stewardship to an open standards body.

The fact ActiveX is the state-of-the-art technology for extending VEE is telling. Looking over the recent history of VEE software releases, it has all the signs of a piece of software living on continuing life support. They are still releasing new versions on a regular basis, but the advances between releases are mostly in the form of new instrument support (GPIB and otherwise) and certification it will run on latest edition of Windows. I have seen very little in the way of new feature development or general evolution.

VEE seems to be perfectly suited to its target market: electronics engineers trying to automate a collection of instruments, every one of which supports industry standard protocols (especially those made by Keysight), and then to perform analysis of that data as typically needed by electrical engineers. But since my goal is to control arbitrary equipment communicating over USB serial, then process and display that data in ways unrelated to electrical engineering, I should set VEE aside and look at other options.

Window Shopping: Keysight VEE Serial Communication

When I was learning about industry standards for automating electronics test and measurement equipment, I quickly came across GPIB, which has its roots in something Hewlett-Packard developed for their equipment. It made sense, then, that they would have a software suite to run on a PC and talk to these instruments via GPIB. This turned out to be something called VEE, but it is no longer an HP product. It has had multiple custodians: from HP it moved to Agilent, and now it is in the hands of Keysight Technologies.

So it was no surprise the focus would be on professional equipment with GPIB or the closely related USB-based successor USBTMC. There is also built-in support for a few other instrument standards, all packaged together in the IO Libraries Suite of a full VEE installation. However, I had a hard time finding any mention of how to communicate with custom-built equipment outside of supported protocols. It certainly wasn’t covered in their Quick Start Guide (PDF), so I moved on to Advanced Techniques (PDF).

I thought perhaps I would have to create what they call a “panel driver” for installation into VEE in order to support custom equipment, but a search for “How to write a VEE Panel Driver” failed to retrieve useful links. How do instrument manufacturers create and release VEE panel drivers for their equipment? So far that is still a mystery.

In the absence of a custom panel driver, the next option is to specify a custom communication protocol directly in VEE, which can be built from their Transaction I/O mechanism, suitable at least for query/response styles of interaction. The exact commands to be sent and the replies expected are crafted step by step in the VEE GUI. This seems very labor intensive, but it has the advantage of avoiding the annoying and common byte-processing bugs typical when such serial byte stream handling is written in C.
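For comparison, the same query/response transaction pattern can be sketched in a few lines of ordinary code. This is a minimal Python sketch under assumptions of my own: the command set (a `TEMP?` query answered by a line like `TEMP 23.5`) is hypothetical, and with pyserial the framed bytes would go out via `port.write()` with the reply read back by `port.readline()`.

```python
# A sketch of the query/response "transaction" pattern in plain Python.
# The command set ("TEMP?" answered by "TEMP 23.5") is hypothetical;
# with pyserial, build_query()'s bytes would go out via port.write() and
# the reply would come back from port.readline().

def build_query(command: str) -> bytes:
    """Frame an outgoing command with a newline terminator."""
    return (command + "\n").encode("ascii")

def parse_reply(raw: bytes, expected_prefix: str) -> float:
    """Check the reply against the expected pattern and extract its value.
    Raises ValueError when incoming data doesn't match -- the failure
    mode the VEE manual leaves unspecified."""
    text = raw.decode("ascii").strip()
    if not text.startswith(expected_prefix):
        raise ValueError(f"unexpected reply: {text!r}")
    return float(text[len(expected_prefix):])

# Simulated transaction: send build_query("TEMP?"), then parse this reply.
print(parse_reply(b"TEMP 23.5\r\n", "TEMP "))  # 23.5
```

Writing it as code also makes the mismatch behavior explicit: a reply that doesn’t fit the pattern raises an error instead of leaving the outcome undefined.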

It’s not obvious from reading the document what happens if the VEE transaction specification is wrong and incoming serial data doesn’t match. This is not encouraging, and neither were there any mechanisms to help support development of transaction I/O. There’s just trial and error. Seriously, this is a direct quote from the manual:

Many times the best way to develop the transactions you need is by using trial and error

(Chapter 4: Using Transaction I/O / Creating and Reading Transactions / Editing the Data Field / Suggestions for Developing Transactions)

I didn’t find any mention of how to deal with a continuous data stream from devices that do not perform transaction-based communication: for example, a thermometer that continuously reports temperature without prompting, or the Neato LIDAR. VEE does have a data polling feature, but that seems to be restricted to devices on a specific subset of supported protocols, not arbitrary serial communication.
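Handling an unprompted stream takes buffering logic rather than transactions, which may be why it falls outside the transaction model. Here is a minimal Python sketch under my own assumptions: the streaming thermometer emitting newline-terminated readings is hypothetical, and the chunk boundaries simulate how serial data arrives in arbitrary pieces.

```python
# Sketch of handling an unprompted continuous stream, the pattern VEE's
# transaction model doesn't cover. The "thermometer" emitting newline-
# terminated readings is hypothetical; chunks arrive in arbitrary pieces
# just as they would from successive serial port reads.

def feed(buffer: bytearray, chunk: bytes):
    """Append newly received bytes, then yield each completed reading."""
    buffer.extend(chunk)
    while b"\n" in buffer:
        line, _, rest = bytes(buffer).partition(b"\n")
        buffer[:] = rest                  # keep any partial line for later
        yield float(line.decode("ascii"))

buf = bytearray()
readings = list(feed(buf, b"23.5\n23.6\n23."))  # partial "23." stays buffered
readings += list(feed(buf, b"7\n"))             # next chunk completes 23.7
print(readings)  # [23.5, 23.6, 23.7]
```

The key point is that a reading can straddle two reads, so the parser must carry leftover bytes between calls instead of assuming one read equals one message.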

From this brief survey, it appears VEE support for arbitrary USB serial communication is quite limited. The next step is to look at how VEE supports displaying arbitrary data.

New Project: Computer Control via USB Serial

Since starting this blog, I’ve explored several problem domains with a common thread: equipment automation by computer control. Most recently, my CNC project running GRBL needed a computer G-code sender. 3D printers are similar, though most modern printers can run independently using G-code stored on a memory card. And finally, there was the aborted thermoforming machine project, which had the ambition to use a Raspberry Pi as its brain.

In all of these cases, we had a piece of equipment that could communicate with a PC via a USB serial adapter, letting the hardware focus on low-level tasks while the PC performed higher-level tasks. It would be useful to dig deeper into this world and get a better feel for the tools available for solving problems.

For this initial pass, my focus will be on PC software that interfaces with arbitrary pieces of equipment communicating via USB serial, where the primary motivation for PC involvement is its ability to display data on a full-sized computer monitor. These two considerations will be my “North Star” for prioritization.

As a starting point, I thought I’d look over a few examples from an existing ecosystem: that of computer controlled hardware for electronics measurement and testing automation. I didn’t think they would be directly applicable, but I didn’t know how to articulate why until I learned a little more.

As of this writing, my understanding is that such software’s value to professional engineers lies in two areas that are near, but not quite, my own priorities for this investigation:

  • They are built around the IEEE-488 standard, a formalized version of the GPIB interface that started as a proprietary solution (HP-IB) for Hewlett-Packard equipment but has grown into a de facto standard in this industry. The standard encompasses the physical connector, electrical interface, and software protocol. More recently, the popularity of USB means USB has taken over the physical and electrical portions, but the GPIB communication protocol lives on as USB488, which I understand to be a part of USBTMC. This is different from USB serial adapters, which usually identify as USB CDC and/or ACM.
  • They offer an extensive collection of data analysis and visualization tools specialized to the field of electronics test and measurement. This does not necessarily mean they support custom rendering of arbitrarily large data sets.
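That CDC-versus-TMC distinction lives in USB interface descriptors, so a PC can tell the two flavors apart before opening a connection. Here is a minimal Python sketch: the classifier helper is hypothetical and of my own making, but the class/subclass/protocol codes come from the USB CDC and USBTMC specifications as I understand them.

```python
# Telling the two USB "instrument transport" flavors apart by interface
# descriptor codes. Per the USB specs: CDC uses interface class 0x02,
# while USBTMC uses class 0xFE (application-specific) with subclass 0x03;
# protocol 0x01 marks the USB488 variant carrying GPIB-style messages.
# The classifier function itself is my own sketch, not part of any library.

def classify_interface(cls: int, subclass: int = 0, protocol: int = 0) -> str:
    if cls == 0x02:
        return "USB CDC (virtual serial port)"
    if cls == 0xFE and subclass == 0x03:
        return "USB488 (GPIB over USB)" if protocol == 0x01 else "USBTMC"
    return "other"

print(classify_interface(0x02))              # USB CDC (virtual serial port)
print(classify_interface(0xFE, 0x03, 0x01))  # USB488 (GPIB over USB)
```

In practice a library like pyserial only enumerates the CDC/ACM devices; the TMC class devices need a different driver stack entirely, which is exactly the gap I keep running into.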

As a first example of how those differences manifest in practice, let’s do a quick window-shopping pass of Keysight VEE.

Old School Engraving With Gravoply

There’s a certain aesthetic I associate with older labels, signs, and equipment control panels: two contrasting colors and a three-dimensional feel. It has mostly faded away by now, replaced by crisp flat printing with multiple solid colors or even full-color halftone. I hadn’t thought much about those old panels until I had the opportunity to look over some dusty retired equipment for making them.

This particular material was “Gravoply,” and it is still available for order in a wide selection of colors, though the core (rear) color selection is a little more limited than the surface (front) selection. The dimensional feel is a function of how the material is used to create signage: a rotary engraving tool cuts away the surface layer and exposes the core layer, producing a display with two contrasting colors.

This rotary tool was held in a pantograph to trace templates on Gravoply. Laying out a particular design meant working with individual letter templates, a simplified version of how typesetters of the past did their jobs. While in concept a pantograph could allow arbitrary scaling, scaling appears limited in this particular implementation; otherwise there wouldn’t need to be multiple sizes of letter templates.

Gravoply Templates

This technique is unforgiving of mistakes. If the rotary tool went off track, it would cut away surface material that was not meant to be removed, and the user would have no choice but to start over. That explains why these pieces haven’t been used in years: they moved away from this system as soon as a cost-effective and less frustrating alternative became available.

Visiting the Gravograph web site today, I see the products on their front page are computer motion-controlled machines. Though they still make pantographs for doing things the old-fashioned way, materials for mechanical removal like Gravoply are now typically cut with small CNC vertical mills, and they also offer material designed for laser engraving. Technology has moved on, and the company behind Gravoply has evolved with it.

I found the pantograph an interesting mechanism, and I might ask to use it in the future for the sake of getting some hands-on time with a mechanical anachronism. But I’m not likely to create anything significant with a pantograph, at least not as long as I have a CNC engraver at my disposal.

Preparing Retired Laptops For Computing Beginners

I’ve just finished looking over several old laptop computers with an eye for using them as robot brains running ROS, a research project made possible by NUCC. Independent of my primary focus, the exercise also gave me some ideas about old laptops used for another common purpose: as cheap “starter” computers for people getting their feet wet in the world of computers. This intent involves a different set of requirements and constraints than robot building.

In this arena, ease of use becomes paramount, which excludes most distributions of Linux. Even Raspbian, the distribution intended for people to learn in a simplified environment on a Raspberry Pi, can be intimidating for complete beginners. If someone who receives a hand-me-down computer knows and prefers Linux, it’s a fair assumption they can install it themselves, as I had done with my refurbished Dell Latitude E6230.

Next, a hand-me-down laptop usually includes personal data from its previous owner. Ideally it is inaccessible and hidden behind password protection, but even if not, the safest way to protect against disclosure is to completely erase the storage device and reinstall the operating system from scratch.

Historically, for Windows laptops, such cleaning also meant the loss of the Windows license, since the license key had almost certainly been separated from the computer during its lifespan. Fortunately, starting with Windows 8, a Windows license key is embedded in the hardware, so a clean install will still activate and function properly. For these Windows laptops, and for MacOS machines, it is best to preserve that value and run the original operating system. This was the case for the HP Split X2 I examined.

If a Windows or MacOS license is not available, the most beginner-friendly operating system is Chrome OS, which is available for individuals to install as Neverware CloudReady: Home Edition. Putting this on a system before giving it to a beginner will let them explore much of modern computing while sparing them many of the headaches. And if they dig themselves into a hole, it is easy to restart from scratch with a “Powerwash.” This was what I had done with the Toshiba Chromebook 2 I examined.

But modern computing has left 32-bit CPUs behind, limiting options for older computers lacking support for the 64-bit x86_64 instruction set; Neverware CloudReady is not an option for them either. It is possible the user can be served by a stateless web kiosk, in which case we can install Ubuntu Core with the basics of a web kiosk.
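A quick way to check which side of the 64-bit cutoff a given machine falls on is the CPU flag list Linux exposes: the "lm" (long mode) flag in /proc/cpuinfo indicates x86_64 capability even when a 32-bit OS is currently installed. A minimal Python sketch, with the caveat that the sample flag lines below are canned examples of my own, not captured from any of these laptops:

```python
# Sketch: checking whether an x86 CPU can run a 64-bit OS by looking for
# the "lm" (long mode) flag in Linux's /proc/cpuinfo output. The flag
# lines below are canned examples, not from the laptops in this post.

def supports_64bit(cpuinfo: str) -> bool:
    """Return True if any 'flags' line includes the x86 long-mode flag."""
    for line in cpuinfo.splitlines():
        if line.startswith("flags") and ":" in line:
            if "lm" in line.split(":", 1)[1].split():
                return True
    return False

old_netbook = "flags\t\t: fpu vme de pse tsc msr pae ssse3"     # 32-bit only
newer_core  = "flags\t\t: fpu vme de pse tsc msr pae ssse3 lm"  # 64-bit capable
print(supports_64bit(old_netbook), supports_64bit(newer_core))  # False True

# On a live Linux machine: supports_64bit(open("/proc/cpuinfo").read())
```

If the flag is absent, the machine falls into the web-kiosk-or-nothing bucket described above.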

And if we have exhausted all of those options, as was the case for the HP Mini netbook I examined, I believe the machine is not suitable as a hand-me-down starter computer for a beginner. Computers unable to meet the minimum requirements for all of the above are only suitable for basic command-line usage, and whether computing veterans like it or not, current conventional wisdom says a command line is not the recommended starting point for today’s computing beginners.

So in order of preference, the options for a beginner-friendly laptop after wiping a disk of old data are:

  1. Windows (if license is in hardware) or MacOS (for Apple products)
  2. Either original Chromebook/Chromebox or Chrome OS via Neverware CloudReady.
  3. Ubuntu Snappy Core in Web Kiosk mode.
  4. Sorry, it is not beginner friendly.

A Tale of Three Laptops

This is a summary of my research project enabled by the National Upcycling Computing Collective (NUCC), who allowed me to examine three retired laptop computers of unknown condition, evaluating them as potential robot brains for running Robot Operating System (ROS).

For all three machines, I found a way to charge their flat batteries and get them up and running to evaluate their condition. I also took them apart to understand how I might mechanically integrate them into a robot chassis. Each of them got a short first-pass evaluation report, and all three are likely to feature in future projects both robotic and otherwise.

In the order they were examined, the machines were:

  1. HP Split X2 (13-r010dx): This was a tablet/laptop convertible designed for Windows 8, an operating system that was itself designed for such a dual-use life. Of the three machines, this one had the longest feature list, including the most modern and powerful CPU, an Intel Core i3. As a tradeoff, it was also the bulkiest of the bunch. So while the machine will have no problem running ROS, mechanical integration will be a challenge. Its first-pass evaluation report is here. For full details, see all posts tagged 13-r010dx, including future projects.
  2. Toshiba Chromebook 2 (CB35-B3340): This machine was roughly the same age as the HP, but as a Chromebook it had a far less ambitious feature list, which also gave it a correspondingly lighter and slimmer profile. It is possible to run a form of Ubuntu (and therefore ROS) on a Chromebook, but there are various limitations to doing so, and its suitability as a robot brain is still unknown. In the meantime, the first-pass evaluation report is here, and all related posts (past and future) are tagged CB35-B3340.
  3. HP Mini (110-1134CL): This was a ~10-year-old netbook, making it the oldest and least capable machine of the bunch. A netbook was a modest machine even when new, and its age means the hardware lacks the processing power to handle modern software. While technically capable of running ROS Kinetic, the low-power processor could only run the simplest of robots, unable to take advantage of the more powerful aspects of ROS. The first-pass evaluation report is here, and all related posts are tagged 110-1134CL.

While not the focus of my research project, looking over four old laptops in rapid succession (these three from NUCC plus the refurbished Dell Latitude E6230 I bought) also gave me a perspective on preparing old laptops for computing beginners.

HP Mini (110-1134CL): First Pass Evaluation

This HP Mini netbook was the oldest of three laptops in this NUCC-sponsored research project. As a netbook, it was a very limited and basic machine even when new, and that was around ten years ago. A lot has changed in the computing world since then.

Today, its 32-bit-only CPU limits robot brain applications, as only the older ROS Kinetic LTS released prebuilt 32-bit binaries. Outside of robot brain applications, any modern graphical user interface is sluggish on this machine, from Chrome OS up through Windows 10 and everything in between. When running Ubuntu MATE, it actually felt worse than a Raspberry Pi running the same operating system, which came as a surprise, since both had ~1GHz CPUs and 1GB of RAM. Even though the 10-year-old Atom could outperform a modern ARM CPU, the 10-year-old Intel integrated graphics processor has fallen well behind a modern ARM’s graphics core.

So it appears the best fit for this machine is running command-line computing or data processing tasks that work well on old low-end 32-bit Intel chips. It would be a decent contender for the type of projects we would think of running on a Raspberry Pi today. With the caveat of weaker graphics, it offers the following advantages over a Raspberry Pi:

  • Intel x86 (32-bit) instruction set.
  • Higher resolution screen than the standard Raspberry Pi touchscreen.
  • Keyboard (minus the N key in this particular example)
  • Touchpad
  • Battery for portable use
  • Actual data storage device in the form of a SATA drive, not a microSD card.

It is also the only one of the three NUCC machines to have a hardwired Ethernet port. As someone who’s been burned by wireless communication issues more than once, I consider this a pretty significant advantage over the rest of the machines.


HP Mini (110-1134CL): Command Line Adept

So far I’ve determined that a ~10-year-old netbook lacks the computing power for a modern desktop graphical user interface, even one considered lightweight by today’s standards. Was it always this sluggish, even in its prime? It’s a little hard to tell from here, because even though computers have undoubtedly gotten faster, our expectations have risen as well.

But there’s more to a computer’s capability than pushing pixels around, so we fall back to the next round of experiments: command-line interface systems. And since we’ve already established that a solid state drive was not a great performance booster on this platform, I put the original spinning-platter hard drive back in for the next round.

This time, instead of Ubuntu Desktop, I installed Ubuntu Server. This minimalist distribution lacks the user friendliness of a graphical user interface, but it also lacks the graphics processing workload of displaying one. As a result, this machine is quite snappy and responsive. I found it quite usable, especially now that I’ve learned about virtual consoles and can use Alt plus F1 through F6 to switch between up to six different sessions. Simple tasks like running Python scripts and running a basic server were done easily and quickly.

I started experimenting with Ubuntu 16, because Ubuntu did not release prebuilt 32-bit installation binaries for Ubuntu 18. However, once Ubuntu 16 server was up and running, I was able to run do-release-upgrade to move up to Ubuntu 18. From minor tinkering, I didn’t notice any significant difference between them.

Then I remembered I had played with an even more minimalist Ubuntu earlier, on an even older machine. Ubuntu 18 Snappy Core is available for 32-bit i386 processors, and it installed successfully on this laptop. Now I have one more incentive to learn how to build my own snaps to install on such a system. I just have to remember that I can only connect to an Ubuntu Core machine via SSH, and the list of valid keys associated with an account does not auto-update. I typically generate a new SSH key every time I reset a machine, and I no longer had the keys to access my previous Snappy Core experiment, so I ended up reinstalling Snappy Core to pick up the current set of SSH keys.