Seeed Studio Odyssey X86J4105 Has Good ROS2 Potential

If I were to experiment with upgrading my Sawppy to ROS2 right now, with what I have on hand, I would start by putting Ubuntu ARM64 on a Raspberry Pi 3 for a quick “Hello World”. However, I would also expect to quickly run into limitations of a Pi 3. If I wanted to buy myself a little more headroom, what would I do?
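
To make “Hello World” concrete: the kind of smoke test I have in mind is just a minimal ROS2 node. Here’s a sketch in Python, assuming a working ROS2 Foxy installation with rclpy; the node and topic names are placeholders of my own, nothing Sawppy-specific yet.

```python
# Minimal ROS2 "Hello World" publisher sketch (assumes ROS2 Foxy with rclpy installed).
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class HelloSawppy(Node):
    def __init__(self):
        super().__init__('hello_sawppy')
        self.publisher = self.create_publisher(String, 'chatter', 10)
        self.timer = self.create_timer(1.0, self.tick)  # fire once per second
        self.count = 0

    def tick(self):
        msg = String()
        msg.data = f'Hello from the Pi, message {self.count}'
        self.count += 1
        self.publisher.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(HelloSawppy())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```

If running "ros2 topic echo chatter" in another terminal shows the messages arriving, the platform is ready for more serious work.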

The Pi 4 is an obvious step up from the 3, but if I’m going to spend money, the Seeed Studio Odyssey X86J4105 is a very promising candidate. Unlike the Pi, it has an Intel Celeron processor on board, so I can build x86_64 binaries on my desktop machine and copy them straight over. I hope that will eventually be an equally painless option for ROS2 cross compilation to ARM, but we’re not there yet.

This board is larger than a Raspberry Pi, but still well within Sawppy’s carrying capacity. It’s also very interesting that they copied the GPIO pin layout from the Raspberry Pi; the idea that some HATs can just plug right in is very enticing, although that’s not a capability that would be immediately useful for Sawppy specifically.

The onboard Arduino co-processor is only useful for this application if it can fit within a ROS2 ecosystem, and the good news is that it is based on the SAMD21, which makes it powerful enough to run micro-ROS. That option is not available to the old school ATmega32U4 on the LattePanda boards.

And finally, the electrical power supply requirements are very robot friendly. The spec sheet lists the DC input voltage requirement as 12V-19V, implying we can just put 4S LiPo power straight into the barrel jack and let the onboard voltage regulators do the rest. A 4S pack ranges from roughly 13V near empty to 16.8V fully charged, comfortably within that window.

The combination of computing power, I/O, and power flexibility makes this board even more interesting than an Up Board. Definitely something to keep in mind for Sawppy contemplation and maybe I’ll click “Add to Cart” on this nifty little board (*) sometime in the near future.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Update on ARM64: ROS2 on Pi

When I last looked at running ROS on a Raspberry Pi robot brain, I noticed Ubuntu now releases images for Raspberry Pi in both 32-bit and 64-bit flavors but I didn’t know of any compelling reason to move to 64-bit. The situation has now changed, especially if considering a move to the future of ROS2.

The update came courtesy of an announcement on ROS Discourse notifying the community that supporting 32-bit ARM builds has become troublesome, and furthermore, telemetry indicated that very few ROS2 robot builders were using 32-bit anyway. Thus support for that platform was demoted to Tier 3 for the current release, Foxy Fitzroy.

This was made official in REP 2000, ROS 2 Releases and Target Platforms, which now shows arm32 as a Tier 3 platform. As per that document, Tier 3 means:

Tier 3 platforms are those for which community reports indicate that the release is functional. The development team does not run the unit test suite or perform any other tests on platforms in Tier 3. Installation instructions should be available and up-to-date in order for a platform to be listed in this category. Community members may provide assistance with these platforms.

Looking at the history of ROS 2 releases, we can see 64-bit has always been the focus. The first release Ardent Apalone (December 2017) only supported amd64 and arm64. Support for arm32 was only added a year ago for Dashing Diademata (May 2019) and only at tier 2. They kept it at tier 2 for another release Eloquent Elusor (November 2019) but now it is getting dropped to tier 3.

Another contributing factor is the release of the Raspberry Pi 4 with 8GB of memory, which exceeds the 4GB limit of 32-bit addressing. This was accompanied by an update to the official Raspberry Pi operating system, renamed from Raspbian to Raspberry Pi OS. It is still 32-bit, but includes mechanisms to address 8GB of RAM across the operating system even though each individual process is limited to 3GB. The real way forward is to move to a 64-bit operating system, and there’s a beta 64-bit build of Raspberry Pi OS.

Or we can go straight to Ubuntu’s release of 64-bit operating system for Raspberry Pi.

And the final note on ROS2: a bunch of new tutorials have been posted! The barrier for transitioning to ROS2 is continually getting dismantled, one brick at a time. And it’s even getting some new attention in long problematic areas like cross-compilation.

Window Shopping ARCore: API Documentation

Investigating Google ARCore for potential robotics usage, it was useful to review their Fundamental Concepts and Design Guidelines because those documents explain the motivation behind various details and the priorities of the project. That gives us context for what we see in the nuts and bolts of the actual APIs.

But the APIs are where “the rubber meets the road” and where we leave all the ambitions and desires behind: the actual APIs implemented in shipping phones define the limitations of reality.

We get a dose of reality pretty immediately: estimation of phone pose in the world comes with basically no guarantees on global consistency.

World Coordinate Space
As ARCore’s understanding of the environment changes, it adjusts its model of the world to keep things consistent. When this happens, the numerical location (coordinates) of the camera and Anchors can change significantly to maintain appropriate relative positions of the physical locations they represent.

These changes mean that every frame should be considered to be in a completely unique world coordinate space. The numerical coordinates of anchors and the camera should never be used outside the rendering frame during which they were retrieved. If a position needs to be considered beyond the scope of a single rendering frame, either an anchor should be created or a position relative to a nearby existing anchor should be used.

Since it is on a per-frame basis, we could get Pose and PointCloud from a Frame. And based on that text, these would then need to be translated through anchors somehow? The first line of the Anchor page makes it sound that way:

Describes a fixed location and orientation in the real world. To stay at a fixed location in physical space, the numerical description of this position will update as ARCore’s understanding of the space improves.

However, I saw no way to retrieve any kind of identifying context for these points. Ideally I would want “Put an anchor on that distinctive corner of the table” or some such. Still, “Working with anchors” has basic information on how anchors are useful. But as covered at many points throughout the ARCore documentation, use of anchors must be kept to a minimum due to computational expense. Each Anchor is placed relative to a Trackable, and there are many ways to get one. The biggest hammer seems to be getAllTrackables from Session, which has a shortcut of createAnchor. There are more narrowly scoped ways to query for Trackable points depending on scenario.
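
The ARCore SDK itself is Java/Kotlin, so this isn’t SDK code, but the bookkeeping that documentation implies can be illustrated with a bit of plain Python and numpy: never cache world coordinates, cache positions relative to an anchor, and recompute world coordinates from whatever pose the anchor reports this frame. The pose numbers below are made up for illustration.

```python
import numpy as np

def pose(rotation=np.eye(3), translation=(0, 0, 0)):
    """Build a 4x4 homogeneous transform from rotation + translation."""
    t = np.eye(4)
    t[:3, :3] = rotation
    t[:3, 3] = translation
    return t

# Frame N: anchor reported at x=1.0m; we place a virtual object 0.2m in front of it.
anchor_pose = pose(translation=(1.0, 0.0, 0.0))
object_world = np.array([1.0, 0.0, -0.2, 1.0])                 # homogeneous point
object_in_anchor = np.linalg.inv(anchor_pose) @ object_world   # store THIS, not world coords

# Frame N+1: ARCore refined its world model, anchor coordinates shifted to x=1.05m.
anchor_pose = pose(translation=(1.05, 0.0, 0.0))

# Recompute world coordinates from the anchor's latest pose; the object moves with it.
print(anchor_pose @ object_in_anchor)   # approximately [1.05, 0, -0.2, 1]
```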

Given what I see of ARCore APIs right now, I’m still a huge fan of future potential. Unfortunately its current state is not a slam dunk for robotics application, and that is not likely to change in the near future due to explicit priorities set by the product team.

But while I had my head buried in studying ARCore documentation, another approach popped up on the radar: the OpenCV AI Kit.

Window Shopping Google ARCore: Design Guidelines

After I got up to speed on the fundamental concepts of the Google ARCore SDK, I moved on to their design recommendations. There are two parts to their design guidelines: a user experience focused document, and a software development focused variant. They cover many of the same points, but from slightly different perspectives.

Augmented reality is fascinating because it has the potential to create some very immersive interactive experiences. The downside is that a user may get so immersed in the interaction they lose track of their surroundings. Much of the design document described things to avoid in an AR app that boiled down to: please don’t let the user hurt themselves. Many potential problems were illustrated by animated cartoon characters, like this one of a user walking backwards, so focused on their phone they trip over an obstacle. Hence one of the recommendations is to avoid making users walk backwards.

Image source: Google

Some of the user experience guidelines help designers avoid weaknesses in ARCore capabilities, like the admission that vertical surfaces can be challenging because they usually have fewer identifiable features than floors and tabletops. I found this interesting because some of the advertised capabilities, such as augmented images, are primarily targeted at vertical surfaces, yet that isn’t something they’ve really figured out yet.

What I found most interesting was the discouragement of haptic feedback in both the UX design document and the developer document. Phone haptic feedback is usually implemented as a small electric motor spinning an unbalanced weight, causing vibration. This harms both parts of the Structure from Motion calculations at the heart of phone AR: vibration adds noise to the IMU (inertial measurement unit) tracking motion, and vibration blurs the video captured by the camera.

From a robotics adaptation viewpoint, this is discouraging. A robot chassis will have motors and their inevitable vibrations, some of which would be passed on to a phone bolted to the chassis. The characteristics of this vibration noise would be different from shaky human hands, and given the ARCore team’s priorities, they would work to damp out human shakiness but robotic motion would not be a priority.

These tidbits of information have been very illuminating, leading to the next step: find more details in the nitty gritty API documentation.

Window Shopping Google ARCore: Tracking

I started learning about Google ARCore SDK by reading the “Fundamental Concepts” document. I’m not in it for augmented reality, but to see if I can adapt machine vision capabilities for robotics. So while there are some interesting things ARCore could tell me about a particular point in time, the real power is when things start moving and ARCore works to track them.

The Trackable object types in the SDK represent data points in three-dimensional space that ARCore, well, tracks. I’m inferring these are visible features that are unique enough in the visible scene to be picked out across multiple video frames, and whose movement across those frames was sufficient for ARCore to calculate their positions. Since those conditions won’t always hold, individual points of interest will come and go as the user moves around in the environment.

From there we can infer such an ephemeral nature would require a fair amount of work to make such data useful for augmented reality apps. We’d need to follow multiple feature points so that we can tolerate individual disappearances without losing our reference. And when new interesting features come onto the scene, we’d need to decide if they should be added to the set of information followed. Thankfully, the SDK offers the Anchor object to encapsulate this type of work in a form usable by app authors, letting us designate a particular trackable point as important, and telling ARCore it needs to put in extra effort to make sure that point does not disappear. This anchor designation apparently brings in a lot of extra processing, because ARCore can only support a limited number of simultaneous anchors and there are repeated reminders to release anchors that are no longer necessary.

So anchors are a limited but valuable resource for tracking specific points of interest within an environment, and that led to the even more interesting possibilities opened up by ARCore Cloud Anchor API. This is one of Google’s cloud services, remembering an anchor in general enough terms that another user on another phone can recognize the same point in real world space. In robot navigation terms, it means multiple different robots can share a set of navigation landmarks, which would be a fascinating feature if it can be adapted to serve as such.

In the meantime, I move on to the ARCore Design Guidelines document.

Window Shopping Google ARCore: Concepts

I thought Google’s ARCore SDK offered interesting capabilities for robots. So even though the SDK team is explicitly not considering robotics applications, I wanted to take a look.

The obvious starting point is ARCore’s “Fundamental Concepts” document. Here we can confirm the theory of operation is consistent with an application of Structure from Motion algorithms. Out of all the possible types of information that can be extracted via SfM, a subset is exposed to applications using the ARCore SDK.

Under “Environmental Understanding” we see the foundation supporting AR applications: an understanding of the phone’s position in the world, and of surfaces that AR objects can interact with. ARCore picks out horizontal surfaces (tables, floor) upon which an AR object can be placed, or vertical surfaces (walls) upon which AR images can be hung like a picture. All other features build on top of this basic foundation, which also feels useful for robotics: most robots only navigate on horizontal surfaces, and try to avoid vertical walls. Knowing where those surfaces are relative to the robot’s current position in the world would help collision detection.

The depth map is the new feature that caught my attention in the first place, and it is used for object occlusion. There is also light estimation, helping to shade objects to fit in with their surroundings. Both of these allow a more realistic rendering of a virtual object in real space. The depth map has obvious applications for collision detection and avoidance, more useful than merely detecting vertical wall surfaces. Light estimation isn’t obviously useful for a robot, but maybe interesting ideas will pop up later.

In order for users to interact with AR objects, the SDK includes the ability to map the user’s touch coordinate in 2D space into the corresponding location in 3D space. I have a vague feeling it might be useful for a robot to know where a particular point in view is in 3D space, but again no immediate application comes to mind.
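
Again, the actual hit-test API lives in Java/Kotlin, but the geometry behind mapping a 2D touch into 3D is simple enough to sketch in Python: cast a ray from the camera through the touched pixel and intersect it with a surface, in this case a flat floor below the camera. The camera intrinsics and height here are made-up numbers for illustration.

```python
import numpy as np

def touch_to_floor_point(u, v, fx, fy, cx, cy, cam_height):
    """Project a 2D touch (pixel u, v) onto a horizontal floor plane below the camera.

    Toy geometry, not the ARCore API: assumes a camera looking straight ahead,
    cam_height meters above the floor, with y pointing down in camera coordinates.
    """
    # Ray direction through the touched pixel, in camera coordinates.
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    if ray[1] <= 0:
        return None  # ray points at or above the horizon, never hits the floor
    scale = cam_height / ray[1]   # stretch the ray until it drops cam_height
    return ray * scale            # 3D point on the floor, in camera coordinates

# Example: 640x480 image, 500-pixel focal length, camera 1.4m above the floor.
print(touch_to_floor_point(u=320, v=400, fx=500, fy=500, cx=320, cy=240, cam_height=1.4))
# -> a point on the floor roughly 4.4 meters ahead of the camera
```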

ARCore also offers “Augmented Images” that can overlay 3D objects on top of 2D markers. One example offered: “for instance, they could point their phone’s camera at a movie poster and have a character pop out and enact a scene.” I don’t see this as a useful capability in a robotics application.

But as interesting as these capabilities are, they are focused on a static snapshot of a single point in time. Things get even more interesting once we are on the move and correlate data across multiple points in space or even more exciting, multiple devices.

Robotic Applications for “Structure From Motion” and ARCore

I was interested to explore whether I could adapt the capabilities of augmented reality on mobile devices to an entirely different problem domain: robot sensing. First I had to do a little study to verify that it (or more specifically, the Structure from Motion algorithms underneath) isn’t fundamentally incompatible with robots in some way. Once I gained some confidence I wasn’t barking up the wrong tree, a quick search online using keywords like “ROS SfM” returned several resources for applying SfM to robotics, including several built on OpenCV. A fairly consistent theme is that such calculations are very computationally intensive. I found that curious, because that trait is inconsistent with the fact they run on cell phone CPUs for ARCore and ARKit. A side trip explored whether these calculations were assisted by specialized hardware like the “AI Neural Coprocessor” that phone manufacturers like to tout on their spec sheets, but I decided that was unlikely for two reasons. (1) If deep learning algorithms were at play here, I should be able to find something about doing this fast on the Google AIY Vision kit, Google Coral dev board, or NVIDIA Jetson, but I came up empty-handed. (2) ARCore can run on some fairly low-frills mid range phones like my Moto X4.
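
To get a feel for what those OpenCV-based SfM resources are doing, here is a minimal two-view sketch in Python. It recovers the relative camera motion (up to an unknown scale) from matched features between two overlapping photos; a real pipeline would add IMU data to resolve scale and run continuously. The file names and camera matrix are placeholders.

```python
# Minimal two-view structure-from-motion sketch with OpenCV (pip install opencv-python).
# File names and camera intrinsics are placeholders for illustration.
import cv2
import numpy as np

img1 = cv2.imread('view1.jpg', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('view2.jpg', cv2.IMREAD_GRAYSCALE)

# Detect and match ORB features between the two views.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Approximate camera intrinsics: focal length and principal point, in pixels.
K = np.array([[700.0, 0, 320.0],
              [0, 700.0, 240.0],
              [0, 0, 1.0]])

# Estimate camera motion between the views. Translation comes back only as a
# direction, which is exactly why phone AR leans on the IMU to pin down scale.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print('Rotation:\n', R, '\nTranslation direction:\n', t)
```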

Finding a way to do SfM on a cell-phone-class processor would be useful, because that means we could potentially put it on a Raspberry Pi, the darling of hobbyist robotics. Even better if I can leverage neural net hardware like those listed above, but that’s not required. So far my searches have come up empty, but something might turn up later.

Turning focus back to ARCore, a search for previous work applying ARCore to robotics returned a few hits. The first hit is the most discouraging: ARCore for Robotics is explicitly not a goal for Google and the issue closed without resolution.

But that didn’t prevent a few people from trying:

  • An Indoor Navigation Robot Using Augmented Reality by Corotan, et al. is a paper on doing exactly this. Unfortunately, it’s locked behind the IEEE paywall. The Semantic Scholar page at least lets me see the figures and tables, where I can see a few tantalizing details that just make me want to find this paper even more.
  • Indoor Navigation Using AR Technology (PDF) by Kumar et al. is about human rather than robot navigation, making it less applicable to my interest. Their project used ARCore to implement an indoor navigation aid, but it required the environment to be known and already scanned into a 3D point cloud. It mentions the Corotan paper above as part of its “Literature Survey”; sadly, none of the other papers in that section were specific to ARCore.
  • Localization of a Robotic Platform using ARCore (PDF) sounded great but, when I brought it up, I was disappointed to find it was a school project assignment and not results.

I wish I could bring up that first paper, I think it would be informative. But even without that guide, I can start looking over the ARCore SDK itself.

Augmented Reality Built On “Structure From Motion”

When learning about a new piece of technology in a domain I don’t know much about, I like to do a little background research to understand the fundamentals. This is not just for idle curiosity: understanding theoretical constraints could save a lot of grief down the line if that knowledge spares me from trying to do something that looked reasonable at the time but is actually fundamentally impossible. (Don’t laugh, this has happened more than once.)

For the current generation of augmented reality technology that can run on cell phones and tablets, the fundamental area of research is “Structure from Motion”. Motion is right in the name, and that key component explains how a depth map can be calculated from just a 2D camera image. A cell phone does not have a distance sensor like Kinect’s infrared projector/camera combination, but it does have motion sensors. Phones and tablets started out with only a crude low resolution accelerometer for detecting orientation, but that’s no longer the case thanks to rapid advancements in mobile electronics. Recent devices have high resolution, high speed sensors that integrate accelerometer, gyroscope, and compass across the X, Y, and Z axes. These 9-DOF sensors (3 types of data * 3 axes = 9 degrees of freedom) allow the phone to accurately detect motion. And given motion data, an algorithm can correlate movement against the camera video feed to extract parallax motion. That then feeds into code which builds a digital representation of the structure of the phone’s physical surroundings.
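
The core relationship is simple enough to show in a few lines of Python. For a sideways move of known distance (supplied by the IMU), the distance to a feature falls out of how far that feature shifts in the image. This is a toy version of the idea; real SfM estimates full camera poses and triangulates many features at once.

```python
def depth_from_parallax(focal_px, baseline_m, shift_px):
    """Distance to a feature, given how far the camera moved sideways (baseline)
    and how far the feature shifted in the image between the two views."""
    return focal_px * baseline_m / shift_px

# Example: 1000-pixel focal length, phone moved 5cm sideways, and a feature
# shifted 25 pixels between frames -> that feature is about 2 meters away.
print(depth_from_parallax(focal_px=1000, baseline_m=0.05, shift_px=25))  # 2.0
```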

This method of operation also explains how such technology could not replace a Kinect sensor, which is designed to sit on the fireplace mantle and watch game players jump around in the living room. Because the Kinect sensor bar does not move, there is no motion from which to calculate structure, making SfM useless for such tasks. This educational side quest has thus accomplished the “understand what’s fundamentally impossible” item I mentioned earlier.

But mounted on a mobile robot moving around in its environment? That should have no fundamental incompatibilities with SfM, and might be applicable.

Google ARCore Depth Map Caught My Attention

Once I decided to look over an augmented reality SDK with an intent for robotics applications, I went to look at Google’s ARCore instead of Apple’s ARKit for a few reasons. The first is hardware: I have been using Android phones so I have several pieces of ARCore compatible hardware on hand. I also have access to computers that I might be able to draft into Android development duty. In contrast, Apple ARKit development requires MacOS desktop machines and iOS hardware, which is more expensive and rare in my circles.

The second reason was their announcement that ARCore now has a Depth API. Their announcement included two animated GIFs that caught my immediate attention. The first shows that they can generate a depth map, with color corresponding to distance from camera.

ARCore depth map
Image source: Google

This is the kind of data I had previously seen from an Xbox 360 Kinect sensor bar, except the Kinect used an infrared beam projector and infrared camera to construct that depth information on top of its RGB camera. In comparison, Google’s demo implies that they can derive similar information from just an RGB camera. And given such a depth map, it should be theoretically possible to use it in a similar fashion to a Kinect. Except now the sensor would be far smaller, battery powered, and would work in bright sunlight, unlike the Kinect.

ARCore occlusion
Image source: Google

Here is that data used in ARCore context: letting augmented reality objects be properly occluded by obstacles in the real world. I found this clip comforting because its slight imperfections assured me this is live data of a new technology, and not a Photoshop rendering of what they hope to accomplish.

It’s always the first question we need to ask of anything we see on the internet: is it real? The depth map animation isn’t detailed enough for me to tell if it’s too perfect to be true. But the occlusion demo is definitely not too perfect: there are flaws in the object occlusion as the concrete wall moved in and out of the line of sight between us and the animated robot. This is most apparent in the second half of the clip: as the concrete wall retreated, we could see bits of the stairs that should have been covered up by the robot but were still visible because the depth map hadn’t caught up yet.

Incomplete occlusion

So this looks nifty, but what was the math magic that made it possible?

Might A Robot Utilize Google ARCore?

Machine vision is a big field, because there are a lot of useful things we can do when a computer understands what it sees. In narrow machine-friendly niches it has become commonplace; for example, the UPC bar code on everyday merchandise was created for machines to read, and a bar code reader is a very simplified and specific niche of machine vision.

But that is a long, long way from a robot understanding its environment through cameras, with many sub-sections along the path which are entire topics in their own right. Again we have successes in narrow machine-friendly domains such as a factory floor set up for automation. Outside of environments tailored for machines, it gets progressively harder. Roomba and similar robot home vacuums like Neato can wander through a human home, but their success depends on a neat, tidy, and spacious home. As a home becomes more cluttered, the success rate of robot vacuums declines.

But they’re still using specialized sensors and not a camera with vision comparable to human sight. Computers have no problems chugging through a 2D array of pixel data, but extracting useful information is hard. The recent breakthrough in deep learning algorithms opened up more frontiers. The typical example is a classifier, and it’s one of the demos that shipped with Google AIY Vision kit. (Though not the default, which was the “Joy Detector.”) With a classifier the computer can say “that’s a cat” which is a useful step toward something a robot needs, which is more like “there’s a house pet in my path and I need to maneuver around it, and I also need to be aware it might get up and move.” (This is a very advanced level of thinking for a robot…)

The skill to pick out relevant physical structure from camera image is useful for robots, but not exclusively to robots. Both Google and Apple are building augmented reality (AR) features into phones and tablets. Underlying that feature is some level of ability to determine structure from image, in order to overlay an AR object over the real world. Maybe that capability can be used for a robot? Time for some research.

Window Shopping Firmata: Connect Microcontrollers To Computers

Reading about LabVIEW LINX stirred up memory of something with a similar premise. I had forgotten its name and it took a bit of research to re-discover Firmata. Like LINX, Firmata is a protocol for communicating between microcontrollers and desktop computers. Like LINX, there are a few prebuilt implementations available for direct download, such as their standard Arduino implementation of Firmata.

There’s one aspect of the Firmata protocol I found interesting: its relationship to MIDI messages. I had originally thought it was merely inspired by MIDI messages, but the Firmata protocol documentation says it is actually a proper subset of MIDI. This means Firmata messages have the option to coexist with MIDI messages on the same channel, conveying data that is mysterious to MIDI instruments but well-formed enough not to cause problems. This was an interesting assertion, even with the disclaimer that in practice Firmata typically runs at a higher serial speed on its own bus.
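
To make that subset relationship concrete, here is my reading of how a Firmata analog value report is framed: it reuses MIDI’s pitch bend message, a status byte above 0x80 followed by two 7-bit data bytes. A few lines of Python show the byte layout.

```python
# Sketch of a Firmata analog message, framed like a MIDI pitch bend message.
# Per my reading of the Firmata protocol docs: status byte 0xE0 + pin number,
# then the value split into two 7-bit data bytes (data bytes must stay below 0x80).
def firmata_analog_message(pin, value):
    status = 0xE0 | (pin & 0x0F)   # MIDI pitch bend status byte, channel = pin number
    lsb = value & 0x7F             # low 7 bits of the reading
    msb = (value >> 7) & 0x7F      # high 7 bits of the reading
    return bytes([status, lsb, msb])

# Analog pin 2 reading 1023 (full scale on a 10-bit Arduino ADC):
print(firmata_analog_message(2, 1023).hex())  # 'e27f07'
```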

Like LINX, Firmata is intended to be easily implemented by simple hardware. The standard Arduino implementation can be customized for specific projects, and anything else that can communicate over a serial port is a candidate hardware endpoint for Firmata.

But on the computer side, Firmata is very much unlike LINX in its wide range of potential software interfaces. LINX is a part of LabVIEW, and that’s the end of the LINX story. Firmata can be implemented by anything that can communicate over a serial port, which should cover almost anything.

Firmata’s own Github hosts some Python sample code, and it is but one of five options for Python client libraries listed on the protocol web site. Those pages carry along some useful tips, like using Python’s ord()/chr() to convert data to and from Firmata packets. Beyond Python, every programming language I know of is invited to the Firmata party: Processing, Ruby, JavaScript, etc.
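
As a taste of what those Python client libraries look like in practice, here is a minimal sketch using pyFirmata, one of the libraries listed on the protocol site. It assumes an Arduino running the StandardFirmata sketch is connected at /dev/ttyACM0.

```python
# Minimal pyFirmata sketch (pip install pyfirmata); assumes StandardFirmata on /dev/ttyACM0.
import time
from pyfirmata import Arduino, util

board = Arduino('/dev/ttyACM0')

# Background iterator thread keeps consuming incoming Firmata messages.
it = util.Iterator(board)
it.start()

analog0 = board.get_pin('a:0:i')   # analog pin 0, input
analog0.enable_reporting()

for _ in range(10):
    board.digital[13].write(1)     # blink the onboard LED
    time.sleep(0.5)
    board.digital[13].write(0)
    time.sleep(0.5)
    print('A0 reading:', analog0.read())   # normalized 0.0-1.0, or None before first report

board.exit()
```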

Since I had been playing with C# and .NET recently, I took a quick glance at their section of Firmata. These are older than UWP and use older non-async APIs. The first one on the list used System.IO.Ports.SerialPort, and needed some workaround for Mono. The second one isn’t even C#: It’s aimed at Visual Basic. I haven’t looked at the third one on the list.

If I wanted to write a UWP application that controls hardware via Firmata, writing a client library with the newer async Windows.Devices.SerialCommunication.SerialDevice API might be a fun project.

Window Shopping LINX: Connecting LabVIEW To Maker Hardware

When I looked over LabVIEW earlier with the eyes of a maker, my biggest stumbling block was trying to connect to the kind of hardware a maker would play with. LabVIEW has a huge library of hardware interface components for sophisticated professional level electronics instrumentation. But I found nothing for simple one-off projects like the kind I have on my workbench. I got as far as finding a reference to a mystical “Direct I/O” mechanism, but no details.

In hindsight, that was perfectly reasonable. I was browsing LabVIEW information presented on their primary site targeted to electronics professionals. I thought the lack of maker-friendly information meant National Instruments didn’t care about makers, but I was wrong. It actually meant I was not looking at the right place. LabVIEW’s maker-friendly face is on an entirely different site, the LabVIEW MakerHub.

Here I learned about LINX, an architecture for interfacing with maker-level hardware, starting with the ubiquitous Arduino and Raspberry Pi and extensible to others. From the LINX FAQ and the How LINX Works page, I got the impression it allows individual LabVIEW VIs (virtual instruments) to correspond to individual pieces of functionality on an Arduino. But very importantly, it implies that representation is distinct from the physical transport layer, where there’s only one serial (or WiFi, or Ethernet) connection between the computer running LabVIEW and the microcontroller.

If my interpretation is true, this is a very powerful mechanism. It allows the bulk of a LabVIEW program to be set up without worrying about the underlying implementation. Here’s one example that came to mind: a project can start small with a single Arduino handling all hardware interfacing. Then as the project grows and the serial link becomes saturated, functions can be split off into separate Arduinos with their own serial links plugged into the computer. Yet doing so would not change the LabVIEW program.

That design makes LabVIEW much more interesting. What dampens my enthusiasm is the lack of evidence of active maintenance on LabVIEW MakerHub. I see support for the BeagleBone Black, but not any of the newer BeagleBone boards (the PocketBeagle is the obvious candidate). The list of supported devices lists the Raspberry Pi only up to the 2, the Teensy only up to 3.1, the Espressif ESP8266 but not the ESP32, etc. Balancing that discouraging sight is that the code is on Github, and we see more recent traffic there as well as on the MakerHub forums. So it’s probably not dead?

LINX looks very useful when the intent is to interface with LabVIEW on the computer side. But when we want something on the computer other than LabVIEW, we can use Firmata which is another implementation of the concept.

UPDATE: And just after I found it (a few years after it launched) NI is killing MakerHub with big bold red text across the top of the site: “This site will be deprecated on August 1, 2020”

Window Shopping: Progressive Web App

When I wrote up my quick notes on ElectronJS, I had the nagging feeling I forgot something. A few days later I remembered: I forgot about Progressive Web Apps (PWA), created by some people at Google who agree with ElectronJS that their underlying Chromium engine can make a pretty good host for local offline applications.

But even though PWA and ElectronJS share a lot in common, I don’t see them as direct competitors. What I’ve seen of ElectronJS is focused on creating applications in the classic sense. They are primarily local apps, just built using technologies that were born in the web world. Google’s PWA demos showcase extensions of online web sites, where the primary focus is the web site but PWA lets it have a local offline supplement.

Given that interpretation, a computer control panel for an electronics hardware project is better suited to ElectronJS than a PWA. At least, as long as the hardware’s task is standalone and independent of others. If a piece of hardware is tied to a network of other similar or complementary pieces, then the network aspect may favor a PWA interfacing with the hardware via Web USB. Google publishes a tutorial showing how to talk to a BBC micro:bit using a Chrome serial port API. I’m not yet familiar enough with the various APIs to know whether this tutorial uses the web standard or the Chrome proprietary predecessor to the standard, but its last updated date of 2020/2/27 implies the latter.

Since PWA started as a Google initiative, they’ve enabled it in as many places as they could starting with their own platforms like Android and ChromeOS. They are also supported via Chrome browser on major desktop operating systems. The big gap in support are Apple’s iOS platforms, where Apple forbids a native code Chrome browser and more generally any application platforms. There are some technical reasons but the biggest hurdle is financial: installing a PWA bypasses Apple’s iOS app store, a huge source of revenue for the company, so Apple has a financial disincentive.

In addition to Google’s PWA support via Chrome, Microsoft supports PWA on Windows via their Edge browser with both the old EdgeHTML and new Chromium-based versions, though with different API feature levels. While there’s a version of Edge browser for Xbox One, I saw no mention of installing PWAs on an Xbox like a standard title.

PWAs would be worth a look for network-centric projects that also have some offline capabilities, as long as iOS support is not critical.

A Quick Look At NI Measurement Studio

While digging through National Instruments online documentation to learn about LabVIEW and LabWindows/CVI, I also came across something called Measurement Studio. This trio of products make up their category of Programming Environments for Electronic Test and Instrumentation. Since I’ve looked at two out of three, might as well look at the whole set and jot down some notes.

Immediately we see a difference in the product description. Measurement Studio is not a standalone application, but an extension to Microsoft Visual Studio. By doing so, National Instruments takes a step back and allows Microsoft Visual Studio to handle most of the common overhead of writing an application, stepping in only when necessary to deliver functionality valuable to their target market. What are these functions? The product page lists three bullet points:

  • Connect to Any Hardware – Electronics equipment industry standard communication protocols GPIB, VISA, etc.
  • Engineering UI Controls – on-screen representation of tasks an electronics engineer would want to perform.
  • Advanced Analysis Libraries – data processing capabilities valuable to electronics engineers.

Basically, all the parts of LabVIEW and LabWindows/CVI that I did not care about for my own projects! Thus if I build a computer control application in Microsoft Visual Studio, I’m likely to just use Visual Studio by itself without the Measurement Studio extension. I am not quite the target market for LabVIEW or LabWindows, and I am completely the wrong market for Measurement Studio.

Even if I needed Measurement Studio for some reason, the price of admission is steep. Because Measurement Studio is not compatible with the free Community Edition of Visual Studio, developing with Measurement Studio requires buying a license for a paid tier of Microsoft Visual Studio in addition to the license for Measurement Studio.

And finally, it has been noted that the National Instruments products require low level Win32 API access, which prevents them from being part of the new generation of Windows apps that can be distributed via the Microsoft Store. These newer apps promise a better installation and removal experience, automatic updates, and better isolation from each other to avoid incompatibilities like “DLL Hell”. None of those benefits are available if an application pulls in National Instruments software components, which is a pity.

Earlier I said “if I build a computer control application in Microsoft Visual Studio, I’ll just use Visual Studio by itself without the Measurement Studio extension” which got me thinking: that’s a good point! What if I went ahead and wrote a standard Windows application with Visual Studio?

Window Shopping LabWindows/CVI

I’ve taken a quick look over Keysight VEE and LabVIEW, both tools that present software development in a format that resembles physical components and wires: software modules are virtual instruments, and data flows are virtual wires. This is very powerful for expressing certain problem domains and naturally imposes a structure. From a software perspective, explicit description of data flow also makes it easier to take advantage of the parallel execution possible on modern multicore processors.

But imposing certain structures also makes it hard to venture off the beaten path, which is why attention now turns to LabVIEW’s stablemate, LabWindows/CVI. They both offer access to industry standard communication protocols plus data analysis and visualization tools, but the data flow and program structure is entirely different. Instead of LabVIEW’s visual “G” language, LabWindows/CVI uses ANSI C to connect all its components and control the flow of data and execution. I am optimistic it will be more aligned with my software experience.

Like LabVIEW, the help files for LabWindows/CVI are also available for download and perusal. Things look fairly promising at first glance.

I found a serial communication API that can read and write raw bytes under:

  • Library Reference
    • RS-232 Library
      • function tree

For user display, I found something that resembles LabVIEW’s “2D Picture Control” here called a “Canvas Control”. An overview of drawing with Canvas Control’s basic drawing primitives can be found under:

  • Library Reference
    • User Interface Library
      • Controls
        • Control Types
          • Canvas Controls
            • Programming with Canvas Controls

I’m encouraged by what I found looking through LabWindows/CVI help files, enough to download the actual development tool and get hands-on with it.

Window Shopping: LabVIEW 2019

After taking a quick look over Keysight VEE, I switched focus to LabVIEW by National Instruments. I don’t know how directly these two products compete in the broader market, but I do know they have some overlap relating to instrument control. I had some exposure to LabVIEW many years ago thanks to LEGO Mindstorms, which had used a version of LabVIEW for programming the NXT brick. Back then the Mindstorms-specific version was very closely guarded and, when I lost track of my CD-ROM, I was out of luck because neither NI nor LEGO made it available for download. Thankfully that has since changed, and the Mindstorms flavor of LabVIEW is now available for download.

But I’m not focused on LEGO right now; today’s aim is to see how I might fulfill my general computer control goals with this tool. For that I was thankful National Instruments made the help files for LabVIEW available for download, so I could investigate without installing the full tool suite. It took a bit of hunting around to find them, though: the download page was titled LabVIEW 2018 but it has a download link for the 2019 help files.

I found a help page “Serial Port Communication” under the section:

  • Controlling Instruments
    • Types of Instruments

And it assumes the user would only be controlling devices that communicate via the VISA protocol, not general serial communication. There was more serial communication information in the section:

  • VISA Resource
    • I/O Session
      • Serial Instr

There’s also an online tutorial for instrument communication. This page has a flowchart that implied there’s a “Direct I/O” mode we can fall back on if all else fails, but I found no mention of how to perform this direct I/O in the help files.

The graphics rendering side was more straightforward. There’s no mention of ActiveX control here, but under:

  • Fundamentals
    • Graphs and Charts
      • Graphics and Sound VIs

There are multiple pages of information for a “2D Picture Control” with drawing primitives like points, lines, arcs, etc. Details on this drawing API are found under:

  • VI and Function Reference
    • Programming VIs and Functions
      • Graphics & Sound VIs
        • Picture Plot VIs

However, it’s not clear this functionality scales to complex drawings with thousands (or more) of primitives. It certainly wouldn’t be the first time I used an API that stumbled as the data size grew.

So the drawing side looks workable pending a question mark on how well it scales, but the serial communication side is blocked. Until I find a way to perform that mystical direct I/O, I’m going to set LabVIEW aside and look at its sibling LabWindows/CVI.

[UPDATE: I’ve since found LabVIEW MakerHub and LINX, which allows LabVIEW to communicate with maker level hardware over serial.]

Window Shopping: Keysight VEE Custom Data Display

Looking over Keysight VEE’s support for device communication, I found there is only support for a limited subset of USB serial communication patterns. And even for the supported transaction model, it seems to be quite labor intensive to craft. It left me with the impression venturing outside VEE’s supported list of equipment is something to be avoided.

Attention then turned to VEE’s support for arbitrary display of data. Like its competitors in the space of test instrumentation software, there is an extensive library for data analysis common to the problem domain. This is useful for their paying customers, but again quite restrictive if we want to venture outside their supported list.

As far as I can tell by just reading their Advanced Techniques PDF, the method to add a custom data visualization component is to create an ActiveX control. This is a technology I haven’t thought about in years! I first learned of it in the context of Microsoft Visual Basic decades ago, where people could drag and drop UI components to build their application. Each of these visual components was built with technology that eventually became named ActiveX controls. This technology is so old not even Microsoft is investing in it now. They have moved on, giving stewardship to an open standards body.

The fact that ActiveX is the state-of-the-art technology for extending VEE is telling. Looking over the recent history of VEE software releases, it has all the signs of a piece of software living on continuing life support. They are still releasing new versions on a regular basis, but the advances between releases are mostly in the form of new instrument support (GPIB and otherwise) and certification that it will run on the latest edition of Windows. I have seen very little in the way of new feature development or general evolution.

VEE seems to be perfectly suited to their target market: electronics engineers trying to automate a collection of instruments, every one of which supports industry standard protocols (especially those made by Keysight), then performing analysis of that data as typically needed by electrical engineers. But since my goal is to control arbitrary equipment communicating over USB serial, then process and display that data in ways unrelated to electrical engineering, I should set VEE aside and look at other options.

Window Shopping: Keysight VEE Serial Communication

When I was learning about industry standards for electronics test and measurement equipment automation, I quickly came across GPIB, which has its roots in something Hewlett-Packard developed for their equipment. It made sense, then, that they would have a software suite to run on a PC and talk to these instruments via GPIB. This turned out to be something called VEE, but it is no longer an HP product. It has had multiple custodians: from HP it moved to Agilent, and now it is in the hands of Keysight Technologies.

So it was no surprise the focus would be on professional equipment with GPIB or the closely related USB-based successor USBTMC. There is also built-in support for a few other instrument standards, all packaged together in the IO Libraries Suite of a full VEE installation. However, I had a hard time finding any mention of how to communicate with custom-built equipment outside of supported protocols. It certainly wasn’t covered in their Quick Start Guide (PDF), so I moved on to Advanced Techniques (PDF).

I thought perhaps I would have to create what they call a “panel driver” for installation into VEE in order to support custom equipment, but a search for “How to write a VEE Panel Driver” failed to retrieve useful links. How do instrument manufacturers create and release VEE Panel Driver for their equipment? So far that is still a mystery.

In the absence of a custom panel driver, the next option is to specify a custom communication protocol directly in VEE, and such a thing can be built from their Transaction I/O mechanism, suitable at least for query/response types of interaction. The exact commands being sent out and expected to be received are crafted step by step using the VEE GUI. This seems very labor intensive but has the advantage of avoiding the annoying and common byte processing bugs typical when such serial byte stream processing is written in C.

It’s not obvious from reading the document what happens if the VEE transaction specification is wrong and incoming serial data doesn’t match. This is not encouraging, and neither were there any mechanisms to help support development of transaction I/O. There’s just trial and error. Seriously. This is a direct quote from the manual:

Many times the best way to develop the transactions you need is by using trial and error

(Chapter 4: Using Transaction I/O / Creating and Reading Transactions / Editing the Data Field / Suggestions for Developing Transactions)

I didn’t find any mention of how to deal with a continuous data stream from devices that do not perform transaction-based communication, for example a thermometer that continuously reports temperature without prompting, or the Neato LIDAR. VEE does have a data polling feature, but that seems to be restricted to devices on a specific subset of supported protocols and not arbitrary serial communication.
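
For contrast, here is what both of those interaction patterns look like in a few lines of general-purpose Python with pyserial. This is only to illustrate the gap: the port names and command string are placeholders, and a real device like the Neato LIDAR would need its binary packets decoded rather than read line by line.

```python
# Both serial interaction patterns in general-purpose Python with pyserial
# (pip install pyserial). Port names and the command string are placeholders.
import serial

# Query/response transaction: send a command, wait for a newline-terminated reply.
with serial.Serial('/dev/ttyUSB0', baudrate=115200, timeout=1.0) as port:
    port.write(b'READ?\n')
    reply = port.readline()
    print('Reply:', reply.decode(errors='replace').strip() if reply else '(timed out)')

# Continuous stream: some devices just keep talking without prompting; keep reading.
with serial.Serial('/dev/ttyUSB1', baudrate=115200, timeout=1.0) as port:
    for _ in range(100):
        line = port.readline()
        if line:
            print('Streamed:', line.decode(errors='replace').strip())
```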

From this brief survey, it appears VEE support for arbitrary USB serial communication is quite limited. The next step is to look at how VEE support displaying arbitrary data.

Toyota Mirai Water Release Switch

I have always been a fan of novel engineering and willing to spend my own money to support adventurous products. This is why, back in 2003, I was cross shopping two cars that had nothing in common except novel engineering: the Mazda RX-8 with its Wankel rotary engine and the Toyota Prius gas-electric hybrid.

Side note: it is common for car salesmen to ask what other cars a particular shopper is also considering. When I told them, it was fun to watch their faces as they worked to process the answer.

Eventually I decided on a Mazda RX-8, which I still own. Since then I have also leased a Chevrolet Volt plug-in hybrid for three years; in fact, it was the exact Volt shown at the top of my Hackaday post memorializing the car. Both of those cars are no longer being manufactured. Meanwhile Toyota’s gas-electric hybrids have become mainstream, making them less personally interesting to me.

But Toyota has an entirely different car to showcase novel engineering: the hydrogen fuel cell Mirai. I had the chance to join a friend evaluating the car. He was serious about getting one; I just wanted to check it out and was not contemplating one of my own. While we were waiting for his appointment, we got in the showroom model and started looking around.

And since we were engineers, this also included digging into the owner’s manual sitting in the glovebox. The Mirai ownership experience is a fascinating blend of the familiar and the unusual, and the strangest item that caught our attention was this water release switch. The manual only said it was for ‘certain situations’ but did not elaborate. We asked the sales rep and learned it was so water can be dumped before entering places where water could cause problems.

Two potential examples were actually in front of us: the Mirai parked in their showroom was sitting on a carpeted surface, where water could leave a stain. Elsewhere in the showroom, cars are parked on tile or polished concrete where water could leave a slippery surface causing people to fall. The button allows a Mirai to drain its water before moving into the showroom.

Right now the Mirai is in a tough spot commercially. It is at the end of the current product cycle, where three-year-old units from the same generation can be purchased off lease at significant depreciation while a far better looking next generation is on the horizon. Toyota has a lot of incentives on offer for potential Mirai shoppers. When leasing for three years, in addition to a discount up front, all regular checkups and maintenance are free (no oil and filter changes here, but things like checking for hydrogen leaks instead), plus there is a $12,000 credit for hydrogen fuel.

It was not enough to entice my friend, and I was not interested either. I believe my next car will be a battery electric vehicle.

Window Shopping Chirp For Arduino… Actually, ESP32

Lately local fellow maker Emily has been tinkering with the Mozzi sound synthesis library for building an Arduino-based noise making contraption. I pitched in occasionally on the software side of her project, picking up bits and pieces of Mozzi along the way. Naturally I started thinking about how I might use Mozzi in a project of my own. I floated the idea of using Mozzi to create a synthetic robotic voice for Sawppy, similar to the voices created for silver screen robots R2-D2, WALL-E, and BB-8.

“That’d be neat,” she said, “but there’s this other thing you should look into.”

I was shown a YouTube video by Alex of Hackster.io (also embedded below), where a system was prototyped to create a voice for her robot companion Archimedes. And Archie’s candidate new voice isn’t just a set of noises for fun’s sake: the sounds encode data and thus form an actual sensible verbal language for a robot.

This “acoustic data transmission” magic is the core offering of Chirp.io, which was created for purposes completely unrelated to cute robot voices. The idea is to allow communication of short bursts of data without the overhead of joining a WiFi network or pairing Bluetooth devices. Almost every modern device — laptop, phone, or tablet — already has a microphone and a speaker for Chirp.io to leverage. Their developer portal lists a wide variety of platforms with Chirp.io SDK support.
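
I haven’t dug into the Chirp SDK itself yet, but the underlying idea of encoding data in sound is easy to sketch: map each symbol to an audio tone and play the tones in sequence. The toy frequency-shift keying example below in Python is only an illustration of the concept, not the actual Chirp protocol.

```python
# Toy "data over sound" sketch: encode bytes as a sequence of audio tones (FSK-style).
# This illustrates the concept only; the real Chirp protocol is far more robust.
import numpy as np

SAMPLE_RATE = 44100       # samples per second
SYMBOL_SECONDS = 0.05     # duration of each tone
BASE_FREQ = 1000.0        # frequency for symbol value 0, in Hz
STEP_FREQ = 50.0          # frequency spacing between adjacent symbol values

def encode(payload: bytes) -> np.ndarray:
    """Turn each 4-bit nibble of the payload into a short sine tone."""
    t = np.linspace(0, SYMBOL_SECONDS, int(SAMPLE_RATE * SYMBOL_SECONDS), endpoint=False)
    tones = []
    for byte in payload:
        for nibble in (byte >> 4, byte & 0x0F):
            freq = BASE_FREQ + STEP_FREQ * nibble
            tones.append(np.sin(2 * np.pi * freq * t))
    return np.concatenate(tones)

audio = encode(b'hello rover')
print(f'{audio.size} samples, {audio.size / SAMPLE_RATE:.2f} seconds of audio')
```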

Companion robot owls and motorized Mars rover models weren’t part of the original set of target platforms, but that is fine. We’re makers and we can make it work. I was encouraged when I saw a link for the Chirp for Arduino SDK. Then a scan through the documentation of the current release revealed it would be more accurately called the Chirp for Espressif ESP32 SDK, as it doesn’t support original genuine Arduino boards. The target platform is actually the ESP32 hardware (connected to audio input and output peripherals) running in its Arduino IDE compatible mode. That didn’t matter to me, since the ESP32 is the platform I’ve been meaning to gain some proficiency with anyway, but it might be annoying to someone who actually wanted to use it on other Arduino and compatible boards.

Getting Chirp.io on an ESP32 up and running sounds like fun, and it’s free to start experimenting. So thanks to Emily, I now have another project for my to-do list.