Virtual Lunar Rovers May Help Sawppy Rovers

Over a year ago I hand-waved a grandiose wish that robots should become smarter to compensate for their imperfections instead of chasing perfection with ever more expensive hardware. This was primarily motivated by a period of disillusionment: I wanted to make use of work published by robotics researchers, only to find that their software tends to require hardware orders of magnitude more expensive than what’s feasible for my Sawppy.

Since then, I’ve noticed imperfection coming up more and more frequently. I’ve had my eye on the DARPA Subterranean Challenge (SubT) for focusing researcher attention on rugged, imperfect environments. They’ve also provided a very promising looking set of tutorials for working with the ROS-based SubT infrastructure. This is a great resource on my to-do list.

Another interesting project I wasn’t previously aware of is the NASA Space Robotics Challenge phase 2 competition. While phase 1 was a simulation of a humanoid robot on Mars, phase 2 is about simulated rovers on the moon. And just like SubT, there will be challenges in perception: making sense of rugged environments as virtual robots try to navigate their way through. Slippery uneven surfaces, unreliable wheel odometry, all the challenges Sawppy has to deal with in real life.

And good news: at least some of the participants in this challenge are neither big-bucks corporations nor secretive “let me publish it first” researchers. One of them, Rud Merriam, is asking questions on ROS Discourse and, even more usefully for me, breaking down the field jargon into language outsiders can understand on his blog. If all goes well, there’ll be findings useful for Sawppy here on earth! This should be fun to watch.

Micro-ROS Now Supports ESP32

When I looked wistfully at rosserial and how it doesn’t seem to have a future in the world of ROS2, I mentioned micro-ROS appeared to be the spiritual successor, but it requires more powerful microcontrollers, leaving the classic 8-bit chips behind. Micro-ROS doesn’t quite put microcontrollers on a ROS2 system as first-class peers to the other software nodes running on the computer, as there still needs to be a corresponding “agent” node on the computer to act as proxy. But it comes closer than rosserial ever did, and looks great on paper.
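To make that architecture concrete, here’s a minimal sketch of what a micro-ROS publisher might look like using the rclc convenience API. This is my own illustration rather than code from any official demo: the node name, topic name, and task wrapper are made up, and error checking is omitted. Everything it publishes only reaches the rest of the ROS2 graph after the agent on the computer relays it.

    #include <rcl/rcl.h>
    #include <rclc/rclc.h>
    #include <std_msgs/msg/int32.h>

    // Hypothetical microcontroller task: publish a counter to a ROS2 topic.
    void microros_publisher_task(void *arg)
    {
        rcl_allocator_t allocator = rcl_get_default_allocator();
        rclc_support_t support;
        rclc_support_init(&support, 0, NULL, &allocator); // handshake with agent

        rcl_node_t node;
        rclc_node_init_default(&node, "sawppy_mcu", "", &support);

        rcl_publisher_t publisher;
        rclc_publisher_init_default(
            &publisher, &node,
            ROSIDL_GET_MSG_TYPE_SUPPORT(std_msgs, msg, Int32),
            "wheel_ticks");

        std_msgs__msg__Int32 msg;
        msg.data = 0;
        for (;;) {
            rcl_publish(&publisher, &msg, NULL);
            msg.data++;
            // sleep here via the RTOS of choice, e.g. vTaskDelay() on FreeRTOS
        }
    }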

Based on the micro-ROS architectural overview diagram, I understand it can support multiple independent software components running in parallel on a microcontroller. This is a boost in capability from rosserial, which can only focus on a single task. However, it’s not yet clear to me whether a robot with a microcontroller running micro-ROS can function independently of a computer running full-fledged ROS2. On a robot with ROS using rosserial, there still needs to be a computer running the ROS master among other foundational features. Are there similar requirements for a robot with micro-ROS?

I suppose I wouldn’t really know until I set aside the time to dig into it, which has yet to happen. But the likelihood just increased. I knew official support for micro-ROS started with various ARM Cortex boards, but reading the system requirements I saw nothing that would have prevented the non-ARM ESP32 from joining the group, especially since it is a popular piece of hardware that already runs FreeRTOS by default. I have a few modules already on hand, and I expected it was only a matter of time before someone ported micro-ROS to the ESP32, likely before I build up the expertise and find the time to try it myself.

It appears that expectation was correct: a few days ago an announcement was posted to ROS Discourse that the ESP32 is now officially part of the micro-ROS ecosystem. And thus another barrier against ROS2 adoption has been removed.

Scott Locklin’s Take on Robotics

As someone who writes about robots on WordPress, I am frequently shown what other people have written about robots on WordPress. One such post is titled “Open Problems in Robotics” by Scott Locklin, and I agree with his conclusion: state of the art robotics still struggles to perform tasks that an average one year old human child can do with ease.

He is honest with a disclaimer that he is not a Serious Robotics Researcher, merely a technically competent spectator taking a quick survey of the current state of the art. That’s pretty much the same position I am in, and I agree with his list of big problems that are known and generally unsolved. But more importantly, he was able to explain these unsolved problems in generally understandable terms without falling into the field jargon that longtime practitioners (or wanna-be posers like myself) would be tempted to use. If someone not well versed in the field is curious to see how a new perspective might be able to contribute, Scott’s list is not a bad place to start. Robotics research still has a ton of room for newcomers to bring new ideas and new solutions.

Another important aspect of Scott’s writing is making it clear that unsolved does not mean unsolvable, a tone I see all too frequently from naysayers claiming robotics research is doomed to failure and a waste of time and money. Robotics research has certainly been time consuming and expensive, but I think it’s a stretch to say it’ll stay that way forever.

However, Scott is pessimistic that algorithms running on computers as we know them today will ever solve these problems, hypothesizing that robots would not be successful until they take a different approach to cognition: “more like a simulacrum of a simple nervous system than writing python code in ROS.” Here our opinions differ. I agree current computing systems built on silicon aren’t able to duplicate brains built on proteins, but I don’t agree that is a requirement for success.

We have many examples in our daily lives where a human invention works nothing like their natural world inspiration, but have been proven useful regardless of that dissimilarity. Hydraulic cylinders are nothing like muscles, bearings and shafts are nothing like joints, and a Boeing 747 flies nothing like an eagle. I believe robots can effectively operate in our world without having brains that think the way human brains do.

But hey, what do I know? I’m not a big shot researcher, either. So the most honest thing to do is to end my WordPress post here with the exact same words Scott did:

But really, all I know about robotics is that it’s pretty difficult.

Randomized Dungeon Crawling Levels for Robots

I’ve spent more time than I should have on Diablo III, a video game where our hero adventures through an endless series of challenges. Each level in the game has a randomly generated layout, so it’s not possible to memorize where the most rewarding monsters live or where the best treasures are hidden. This keeps the game interesting, because every level is an exploration of an environment I’ve never seen before and will never see again in exact duplicate.

This is what came to my mind when I learned of WorldForge, a new feature of AWS RoboMaker. For those who don’t know: RoboMaker is an AWS offering built around ROS (Robot Operating System) that lets robot builders leverage the advantages of AWS. One example most closely relevant to WorldForge is the ability to run multiple virtual robot simulations in parallel across a large number of AWS machines. It’ll cost money, of course, but less than buying a large number of actual physical computers to run those simulations.

But running a lot of simulations isn’t very useful when they all run the same robot through the same test environment, and this is where WorldForge comes in. It’s a tool that accepts a set of parameters, then generates a set of simulation worlds that randomly place or replace features according to those parameters. Then virtual robots can be set loose to do their thing across AWS machines running in parallel. Consistent successful completion across different environments builds confidence that our robot logic is properly generalized and not just memorizing where the best treasures are buried. So basically, a randomized dungeon crawler adventure for virtual robots.

WorldForge launched with the ability to generate randomized residential environments, useful for testing robots intended for home use. To broaden the appeal of WorldForge, other types of environments are coming in the future. So robots won’t get bored with the residential tileset: industrial and business tilesets are planned, with more to come.

I hope they appreciate the effort to keep their games interesting.

Seeed Studio Odyssey X86J4105 Has Good ROS2 Potential

If I were to experiment with upgrading my Sawppy to ROS2 right now, with what I have on hand, I would start by putting Ubuntu ARM64 on a Raspberry Pi 3 for a quick “Hello World”. However, I would also expect to quickly run into limitations of a Pi 3. If I wanted to buy myself a little more headroom, what would I do?

The Pi 4 is an obvious step up from the 3, but if I’m going to spend money, the Seeed Studio Odyssey X86J4105 is a very promising candidate. Unlike the Pi, it has an Intel Celeron processor on board, so I can build x86_64 binaries on my desktop machine and copy them straight over, something I hope will eventually be a painless option for ROS2 cross-compilation to ARM, but we’re not there yet.

This board is larger than a Raspberry Pi, but still well within Sawppy’s carrying capacity. It’s also very interesting that they copied the GPIO pin layout from the Raspberry Pi; the idea that some HATs can just plug right in is very enticing, although that’s not a capability that would be immediately useful for Sawppy specifically.

The onboard Arduino co-processor is only useful for this application if it can fit within a ROS2 ecosystem, and the good news is that it is based on the SAMD21, which makes it powerful enough to run micro-ROS, an option not available to the old school ATmega32U4 on LattePanda boards.

And finally, the electrical power supply requirements are very robot friendly. The spec sheet lists DC input voltage requirement at 12V-19V, implying we can just put 4S LiPo power straight into the barrel jack and onboard voltage regulators will do the rest.
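(A 4S lithium polymer pack runs at 3.0V to 4.2V per cell, so roughly 12.0V nearly empty and 16.8V fully charged, which fits entirely within that 12V-19V window.)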

The combination of computing power, I/O, and power flexibility makes this board even more interesting than an Up Board. Definitely something to keep in mind for Sawppy contemplation and maybe I’ll click “Add to Cart” on this nifty little board (*) sometime in the near future.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Still Constantly Asked: Is ROS2 Ready Yet?

One more note on this series of “keeping a pulse on ROS2” before returning to my own projects: we’re still very much in the stage where people are constantly asking if ROS2 is ready yet, and the answer has moved from “probably not” to “maybe, depending on what you are doing.” I noticed that since the first LTS was declared, the number of people who have dived in and started using ROS2 has grown. Thanks to a community willing to share their work, the pool of developer resources keeps growing. It also means a lot of reports based on first-hand experience; one pops up on my radar every few days. The most recent one is this article: 5 features ROS 2 needs in 2020.

The most encouraging thing about this particular post comes towards the end: the author listed several problems that he had wanted to write about when he started, but by the time the article was published, the community had already found answers! This highlights how rapidly the field is still growing and maturing. Whenever anyone reads a published article on “Is ROS 2 ready yet?” they must understand that some subset of its facts will already be obsolete. Check the publication date, and discount accordingly.

Out of the issues on this list, there were two I hadn’t known about, so I’m thankful I could learn about them here. I knew ROS 2 introduced the concept of different QoS (Quality of Service) guarantees for messages. This allows a robot builder to make sure that when the going gets tough, the robot knows which less important messages can be dropped to make sure the important ones get through. What I didn’t know was that QoS also introduced confusion into ROS2 debugging and diagnostic tools. I can see how it becomes a matter of perspective: to keep a robot running, the important messages need to get to their recipients, and at that very second it’s not as important to deliver them to a debugger. But if the developer is to understand what went wrong, they need to see those messages in the debugger! It’ll be interesting to see how those tools reconcile these perspectives.
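For anyone who hasn’t seen ROS2 QoS in code, here’s a rough sketch of the choice involved, using rclcpp. The topics and message type are arbitrary examples of my own: a high-rate telemetry stream might tolerate drops, while important commands should not.

    #include <rclcpp/rclcpp.hpp>
    #include <std_msgs/msg/string.hpp>

    int main(int argc, char **argv)
    {
        rclcpp::init(argc, argv);
        auto node = rclcpp::Node::make_shared("qos_demo");

        // Best effort: when the going gets tough, stale messages get dropped.
        auto telemetry_pub = node->create_publisher<std_msgs::msg::String>(
            "telemetry", rclcpp::QoS(rclcpp::KeepLast(5)).best_effort());

        // Reliable: delivery is retried, for messages that must get through.
        auto command_pub = node->create_publisher<std_msgs::msg::String>(
            "important_commands", rclcpp::QoS(rclcpp::KeepLast(10)).reliable());

        rclcpp::spin(node);
        rclcpp::shutdown();
        return 0;
    }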

The other one I hadn’t known about was the lack of visibility into subscriber connections and disconnections. Theoretically, in an ideal world, robot modules don’t have to care about who is listening to whatever they have to say, but this is definitely one of the areas where the real world hasn’t matched up well with the theory. Again, it’ll be interesting to see how this one evolves.
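(The closest stopgap I’m aware of is polling a publisher’s get_subscription_count() in rclcpp, which reports how many subscribers are currently matched but says nothing about when they came or went.)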

The next phase of “Is ROS2 ready yet?” will be “yeah, unless you’re doing something exotic” and judging by the pace so far, that’s only a year or two away. I’m definitely seeing a long term trend towards the day when the answer is “Ugh, why are you still using ROS 1?”

Notes on ROS2 and rosserial

I’m happy to see ROS2 improve over the past several releases, each release more mature and suitable for adoption than the last, tackling long standing problems like cross compilation as well as new frontiers. I know a lot of my current reservations about ROS2 are on the to-do list, but there’s a fairly significant item that appears to be deliberately absent: rosserial.

In ROS, the rosserial module is the default way for something simple to communicate with the rest of a ROS robot. It’s been a popular way for robot builders to add small dedicated modules that serve their little niche simply and effectively with only an Arduino or similar 8-bit microcontroller. By following its conventions for translating ROS messages into simple serial byte sequences, robot builders don’t have to constantly reinvent this wheel. However, it is only really applicable when we control both the computer and the Arduino end of the communication. When one side is outside of our control, as is the case for the LX-16A servos used on Sawppy, we can’t use the rosserial protocol and a custom node has to be created.

But while I couldn’t use rosserial for communication with the servos on my own Sawppy, I’ve seen it deployed for other Sawppy builds in different contexts. Rhys Mainwaring’s Curio rover ROS stack uses rosserial to communicate with its Arduino Mega, and Steve [jetdillo] has just installed a battery voltage monitor that reports via rosserial.
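For anyone who hasn’t seen rosserial in action, a battery voltage monitor along those lines needs very little Arduino code. The sketch below is my own illustration, not Steve’s actual code: the topic name, analog pin, and voltage divider scale factor are placeholders.

    #include <ros.h>
    #include <std_msgs/Float32.h>

    ros::NodeHandle nh;
    std_msgs::Float32 voltage_msg;
    ros::Publisher voltage_pub("battery_voltage", &voltage_msg);

    void setup() {
        nh.initNode();              // open the serial link to the computer
        nh.advertise(voltage_pub);  // declare our one topic
    }

    void loop() {
        // Placeholder scale: depends on ADC reference and divider resistors.
        voltage_msg.data = analogRead(A0) * (5.0f / 1023.0f) * 3.0f;
        voltage_pub.publish(&voltage_msg);
        nh.spinOnce();              // let rosserial service the serial link
        delay(1000);
    }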

With its proven value to budget robotics, I’m sad to see it’s not currently in the cards for ROS2. Officially, the replacement for rosserial is micro-ROS, built on the Micro XRCE-DDS Client. DDS is the standardized communication protocol used by ROS2, and XRCE stands for “eXtremely Resource Constrained Environment.” It’s an admirable goal to keep the protocol running with low resource consumption, but “low” is relative. Micro XRCE-DDS proudly lists its resource requirements as follows:

From the point of view of memory footprint, the latest version of this library has a memory consumption of less than 75 KB of Flash memory and 2.5 KB of RAM for a complete publisher and subscriber application.

If we look at the ATmega328P chip at the heart of a basic Arduino, we see it has 32KB of Flash and 2KB of RAM, and that’s just not going to work. A straightforward port of rosserial to ROS2 was aborted due to its intrinsic ties to ROS, but that GitHub issue still sees traffic because people want the thing that does not exist.

I found a ros2arduino library built on Micro XRCE-DDS, and was eager to see how it managed to fit on a simple ATmega328. Disappointingly, it doesn’t. The “Arduino” in the name refers to newer high-end boards like the Arduino MKR ZERO, leaving the humble ATmega328 Arduino boards out in the cold.

As far as I can tell, this is by design. Much as ROS2 has focused on 64-bit computing platforms over 32-bit CPUs, its “low end microcontroller support” is graduating from old school 8-bit chips to newer designs like the ESP32 and various ARM Cortex flavors such as the STM32 family. Given how those microcontrollers have fallen to single digit dollar costs, it’s hard to argue for the economic advantage of old 8-bit chips. (The processing power per dollar argument was lost long ago.) So even though the old 8-bit chips still hold some advantages, I can see the reasoning, and have resigned myself to accepting it as the way of the future.

ROS2 Receives Cross Compile Love

While I’m on the topic of information on Raspberry Pi running ROS2, another one that I found interesting was that there is now an actual working group focused on tooling. And their pathfinder project is to make ROS2 cross compilation less painful than it was in ROS.

Cross compilation has always been an option for ROS developers, but it was never as easy as it could be and there were many places where it, to put things politely, fell short of expectations. The reason cross-compilation was interesting was, again, the inexpensive Raspberry Pi allowing us to put a full Linux computer on board our ROS robots.

A Raspberry Pi could run ROS, but there were performance limitations, and those limitations had an even more severe impact when compiling ROS code on board the Raspberry Pi itself. The quad cores sounded nice on paper, but before the Pi 4, there was only 1GB of RAM to go around and compilation quickly ate it all up. Every developer is tempted to run four compilers in parallel to make use of the four cores, and most ROS developers (including this one) have tried it at least once. We quickly learned it was folly. As soon as RAM was exhausted, the Pi went to virtual memory, which was a microSD card, and performance dropped off a cliff because microSD cards are not designed for random reads and writes. I frequently get better overall performance by limiting compilation to a single core and staying well within available RAM.
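(With ROS 1’s catkin_make, that just means passing the usual make parallelism flag through, e.g. catkin_make -j1.)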

Thus the temptation to use cross compilation: use our powerful 64-bit desktop machines to compile ARM32 binaries. Or more specifically, ARMHF, the instruction set for 32-bit ARM processors with (H)ardware (F)loating-point like those on the Raspberry Pi. But the pain of doing so has never proven to be worth the effort.

While Raspberry Pi 4 is now available with up to 8GB of RAM along with a corresponding 64-bit operating system, that’s still a lot less than the memory available on a typical developer workstation. And a modern PC’s solid state drive is still far faster than a Pi’s microSD storage. So best wishes to the ROS2 tooling working group, I hope you can make cross compilation effective for the community.

Update on ARM64: ROS2 on Pi

When I last looked at running ROS on a Raspberry Pi robot brain, I noticed Ubuntu now releases images for Raspberry Pi in both 32-bit and 64-bit flavors but I didn’t know of any compelling reason to move to 64-bit. The situation has now changed, especially if considering a move to the future of ROS2.

The update came courtesy of an announcement on ROS Discourse notifying the community that supporting 32-bit ARM builds has become troublesome, and furthermore, telemetry indicated that very few ROS2 robot builders were using 32-bit anyway. Thus support for that platform has been demoted to tier 3 for the current release, Foxy Fitzroy.

This was made official in REP 2000, ROS 2 Releases and Target Platforms, which shows arm32 as a Tier 3 platform. Per that document, tier 3 means:

Tier 3 platforms are those for which community reports indicate that the release is functional. The development team does not run the unit test suite or perform any other tests on platforms in Tier 3. Installation instructions should be available and up-to-date in order for a platform to be listed in this category. Community members may provide assistance with these platforms.

Looking at the history of ROS 2 releases, we can see 64-bit has always been the focus. The first release, Ardent Apalone (December 2017), only supported amd64 and arm64. Support for arm32 was only added a year ago for Dashing Diademata (May 2019), and only at tier 2. It stayed at tier 2 for another release, Eloquent Elusor (November 2019), but now it is dropping to tier 3.

Another contributing factor is the release of the Raspberry Pi 4 with 8GB of memory, which exceeds the 4GB limit of 32-bit addressing. This was accompanied by an update to the official Raspberry Pi operating system, renamed from Raspbian to Raspberry Pi OS. It is still 32-bit, but with mechanisms that allow the operating system as a whole to address 8GB of RAM even though individual processes are limited to 3GB. The real way forward is to move to a 64-bit operating system, and there’s a beta 64-bit build of Raspberry Pi OS.

Or we can go straight to Ubuntu’s release of 64-bit operating system for Raspberry Pi.

And the final note on ROS2: a bunch of new tutorials have been posted! The barrier for transitioning to ROS2 is continually getting dismantled, one brick at a time. And it’s even getting some new attention in long problematic areas like cross-compilation.

Learning DOT and Graph Description Languages Exist

One of the conventions of ROS is the /cmd_vel topic. Short for “command velocity,” it is commonly how high-level robot planning logic communicates “I want to move in this direction at this speed” to the lower-level chassis control nodes of a robot. In ROS tutorials, this is usually the first practical topic that gets discussed. This convention helps with one of the core promises of ROS: portability of modules. High level logic can be created to output to /cmd_vel without worrying about low level motor control details, and robot chassis builders know teaching their hardware to understand /cmd_vel allows them to support a wide range of different robot modules.
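The message type carried on /cmd_vel is geometry_msgs/Twist. As a hedged sketch of the convention (written against ROS2’s rclcpp here, though the ROS1 version looks much the same), high-level logic drives the robot in just a few lines; the node name and speeds are arbitrary:

    #include <rclcpp/rclcpp.hpp>
    #include <geometry_msgs/msg/twist.hpp>

    int main(int argc, char **argv)
    {
        rclcpp::init(argc, argv);
        auto node = rclcpp::Node::make_shared("high_level_planner");
        auto cmd_pub =
            node->create_publisher<geometry_msgs::msg::Twist>("cmd_vel", 10);

        geometry_msgs::msg::Twist cmd;
        cmd.linear.x = 0.25;    // forward, in meters per second
        cmd.angular.z = 0.10;   // turn, in radians per second
        cmd_pub->publish(cmd);  // the chassis node turns this into motor moves

        rclcpp::shutdown();
        return 0;
    }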

Sounds great in theory, but there are limitations in practice, and every once in a while a discussion arises on how to improve things. I was reading one such discussion when I noticed one message had an illustrative graph accompanied by a “source” link. That went to a GitHub Gist with just a few simple lines of text describing that graph, and it took me down a rabbit hole learning about graph description languages.

In my computer software experience, I’ve come across graphics description languages like OpenGL, PostScript, and SVG, but those are complex and designed for general purpose computer graphics. I had no idea there were entire languages designed just for describing graphs. This particular file was DOT, with more information available on Wikipedia, including the limitations of the language.
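To show how little text it takes, here’s a toy DOT description of my own making, in the same spirit as that Gist (not the actual graph from the discussion):

    digraph ros_nodes {
        planner -> "/cmd_vel";
        "/cmd_vel" -> base_controller;
        base_controller -> "/odom";
        "/odom" -> planner;
    }

Feed that to Graphviz (for example, dot -Tsvg nodes.dot -o nodes.svg) and a fully laid-out diagram comes out the other end, with no manual positioning of boxes and arrows.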

I’m filing this under the “TIL” (Today I Learned) section of the blog, but it’s more accurately a “How did I never come across this before?” post. It seems like an obvious and useful tool but not adopted widely enough for me to have seen it before. I’m adding it to my toolbox and look forward to the time when it would be the right tool for the job, versus something heavyweight like firing up Inkscape or Microsoft Office just to create a graph to illustrate an idea.

First Few Issues of ROS on Ubuntu on Crouton on Chrome OS

Some minor wrong turns aside, I think I’ve successfully installed ROS Melodic on Ubuntu 18 running within a Crouton chroot inside a Toshiba Chromebook 2 (CB35-B3340). The first test is to launch roscore, verify it is up and running without errors, then run rostopic list to verify the default set of topics is listed.

With that done, the next challenge is to see if ROS works across machines. First I tried running roscore on another machine, and set ROS_MASTER_URI to point to that remote machine. With this configuration, rostopic list showed the expected list of topics.

Then I tried the reverse: I started roscore on the Chromebook and pointed another machine’s ROS_MASTER_URI to the IP address my WiFi router assigned to the Chromebook. In this case rostopic list failed to communicate with the master. There’s probably some sort of network translation or tunneling between Chrome OS and an installation of Ubuntu running inside a Crouton chroot, and that’s something I’ll need to dig into and figure out. Or it might be a firewall issue similar to what I encountered when running ROS under Windows Subsystem for Linux.
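For reference, the working direction used configuration along these lines on the non-Chromebook machine (IP addresses made up for illustration). Setting the same kind of ROS_IP hint on the Chromebook side is one thing worth trying, in case the chroot advertises an unroutable address to its peers:

    export ROS_MASTER_URI=http://192.168.1.42:11311   # Chromebook's address
    export ROS_IP=192.168.1.17                        # this machine's address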

In addition to the networking issue, if I want to embed this Chromebook into a robot as its brain, I’ll also need to figure out the power-up procedure.

First: upon power-up, a Chromebook in developer mode puts up a dialog box notifying the user of that fact, letting normal users know a Chromebook in developer mode is not trustworthy for their personal data. This screen is held for about 30 seconds, with an audible beep, unless the user presses the key combination prescribed onscreen. How might this work when embedded in a robot?

Second: when Chrome OS boots up, how do I also launch Ubuntu 18 inside the Crouton chroot? The good news is that this procedure is covered in the Crouton wiki; the bad news is that it is pretty complex and involves removing a few more Chromebook security provisions.

Third: Once Ubuntu 18 is up and running inside Crouton chroot, how do I launch ROS automatically? My current favorite “run on bootup” procedure for Linux is to create a service, but systemctl does not run inside chroot so I’ll need something else.

And that’s only what I can foresee right now; I’m sure there are others I haven’t even thought about yet. There’ll be several more challenges to overcome before a Chrome OS machine can be a robot brain. Perhaps instead of wrestling with Chrome OS, I should consider bypassing Chrome OS entirely?

Ubuntu 18 and ROS on Toshiba Chromebook 2 (CB35-B3340)

Following default instructions, I was able to put Ubuntu 16 on a Chromebook in developer mode. But the current LTS (Long Term Support) release for ROS (Robot Operating System) is their “M” or Melodic Morenia release, whose corresponding Ubuntu LTS is 18 (Bionic Beaver).

As of this writing, Ubuntu 18 is not officially supported for Crouton. It’s not explicitly forbidden, but it does come with a warning: “May work with some effort.” I didn’t know exactly what the problem might be, but given how easy it is to erase and restart on a Chromebook I decided to try it and see what happens.

It failed with a hash sum error during download. This wasn’t the kind of failure I thought might occur with an unsupported build; a download hash sum failure seems more like a flawed or compromised download server. I didn’t understand enough about the underlying infrastructure to know what went wrong, never mind fix it. So in an attempt to tackle a smaller problem with a smaller surface area, I backed off to the minimalist “cli-extra” install of Bionic, which skips the graphical user interface components. This path succeeded without errors, and I now have a command line interface that reports itself to be Ubuntu 18 Bionic.

As a quick test to see if hardware is visible to software running inside this environment, I plugged in a USB to serial adapter. I was happy to see dmesg reported the device was visible and accessible via /dev/ttyUSB0. Curiously, the device’s group showed up as serial instead of the usual dialout I see on Ubuntu installations.
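(If that group ever blocks access, the standard fix should apply: sudo usermod -a -G serial $USER, then log out and back in.)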

A visible serial peripheral was promising enough for me to proceed and install ROS Melodic. I thought I’d try installation with Python 3 as the Python executable, but that went awry. I then repeated installation with the default Python 2. Since I have no GUI, I installed the ros-melodic-ros-base package. Its installation completed with no errors, allowing me to poke around and see how ROS works in this environment.

Window Shopping: ElectronJS

The Universal Windows Platform allows Windows application developers to create UI that can dynamically adapt to different screen sizes and resolutions, as well as adapting to different input methods like mouse vs. touchscreen. The selling point is to make it as easy and robust as a web page.

So… why not just use a web page? Web developers were the pioneers in solving these problems, and we might want to adapt their existing solutions instead of using Microsoft’s effort to replicate them on Windows. But a web page has limitations relative to native applications, and hardware access is definitely one such category. (For USB specifically, there is WebUSB, but that is not a general hardware access solution.)

Thus developers familiar with web technology occasionally have a need to build platform native applications. Some of them decided to build their own native application framework to support web-style interfaces across multiple platforms. This is why we have Electron. (Sometimes called ElectronJS to differentiate it from its namesake.)

All the major x86_64 desktop operating systems are supported: Windows, macOS, and Linux are first tier platforms. There’s no fundamental reason Electron won’t work elsewhere, but apparently users need to be prepared to deal with various headaches to run it on platforms like a Raspberry Pi. And that’s just getting it to run, which doesn’t even touch on the most interesting part of running on a Raspberry Pi: its GPIO pins.

Like UWP, given graphical capabilities of modern websites, I have no doubt I can display arbitrary data visualization under Electron. My favorite demo of what modern WebGL is capable of is this fluid dynamics simulation.

The attention then turns to serial communication, and a web search quickly pointed me to the electron-serialport GitHub repo. At first glance this looks promising, though I have to be careful when building it into an Electron app. The tricky part is that this serial support is native code and must be compiled to match the version in a particular release of Electron. It appears the tool electron-rebuild can take care of this particular case. However, it sets the expectation that any Electron app dealing with hardware would likely also require a native code component.

If I ever need to build a graphically dynamic application that needs to run across multiple operating systems, plus hardware access that is restricted to native applications, I’ll come back and take a closer look at Electron. But it’s not the only game in town for an offline local application based on web technology. For applications whose purpose is less about local hardware and more about online connectivity, we also have the option of Progressive Web Applications.

Window Shopping: Universal Windows Platform Fluent Design

Looking over National Instruments’ Measurement Studio reinforced the possibility that there really isn’t anything particularly special about what I want to do for a computer front-end to control my electronics projects. I am confident that whatever I want to do in such a piece of software, I can put it in a Windows application.

The only question is what kind of trade-offs are involved in different approaches, because there is certainly no shortage of options. There have been many application frameworks over the long history of Windows. I criticised LabWindows for faithfully following the style of an older generation of Windows applications and failing to keep up since. So if I’m so keen on the latest flashy gizmo, I might as well look over the latest in Windows application development: the Universal Windows Platform.

People not familiar with Microsoft platform branding might get unduly excited about “Universal” in the name, as it would be amazing if Microsoft released a platform that worked across all operating systems. The next word dispels that fantasy: “Universal Windows” just means across multiple Microsoft platforms: PC, Xbox, and HoloLens. UWP was going to cover phones as well, but well, you know how that went.

Given the reduction in scope and the lack of adoption, some critics are calling UWP a dead end. History will show if they are right or not. However that shakes out, I do like the Fluent Design system that launched alongside UWP. A similar and competing offering to Google’s Material Design, I think they both have potential for building some really good user interactivity.

Given the graphical capabilities, I’m not worried about displaying my own data visualizations. But given UWP’s intent to be compatible across different Windows hardware platforms, I am worried about my ability to communicate with my own custom built hardware. If it was difficult to rationalize a standard API for something across PC, Xbox, and HoloLens, that something might not be supported at all.

Fortunately that worry is unfounded. There is a UWP section of the API for serial communication which I expect to work for USB-to-serial converters. Surprisingly, it actually went beyond that: there’s also an API for general USB communication even with devices lacking standard Windows USB support. If this is flexible enough to interface arbitrary USB hardware other than USB-to-serial converters, it has a great deal of potential.
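As a sketch of what that serial API looks like from C++/WinRT (my own minimal example, not Microsoft’s sample code; device selection and error handling omitted):

    #include <winrt/Windows.Foundation.h>
    #include <winrt/Windows.Devices.Enumeration.h>
    #include <winrt/Windows.Devices.SerialCommunication.h>

    using namespace winrt;
    using namespace Windows::Devices::Enumeration;
    using namespace Windows::Devices::SerialCommunication;

    // Open the first serial device found and configure it for 115200 8N1.
    Windows::Foundation::IAsyncAction OpenFirstSerialDevice()
    {
        hstring selector = SerialDevice::GetDeviceSelector();
        DeviceInformationCollection devices =
            co_await DeviceInformation::FindAllAsync(selector);
        if (devices.Size() == 0)
            co_return;

        SerialDevice device =
            co_await SerialDevice::FromIdAsync(devices.GetAt(0).Id());
        device.BaudRate(115200);
        device.DataBits(8);
        device.Parity(SerialParity::None);
        device.StopBits(SerialStopBitCount::One);
        // Reads and writes go through device.InputStream() / OutputStream().
    }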

The downside, of course, is that UWP would be limited to Windows PCs and exclude Apple Macintosh and Linux computers. If the objective is to build a graphically rich and dynamically adaptable user interface across multiple desktop application platforms (not just Windows) we have to use something else.

A Quick Look At NI Measurement Studio

While digging through National Instruments online documentation to learn about LabVIEW and LabWindows/CVI, I also came across something called Measurement Studio. This trio of products makes up their category of Programming Environments for Electronic Test and Instrumentation. Since I’ve looked at two out of three, I might as well look at the whole set and jot down some notes.

Immediately we see a difference in the product description. Measurement Studio is not a standalone application, but an extension to Microsoft Visual Studio. By doing so, National Instruments takes a step back and allows Microsoft Visual Studio to handle most of the common overhead of writing an application, stepping in only when necessary to deliver functionality valuable to their target market. What are these functions? The product page lists three bullet points:

  • Connect to Any Hardware – industry standard communication protocols for electronics equipment: GPIB, VISA, etc.
  • Engineering UI Controls – on-screen representations of tasks an electronics engineer would want to perform.
  • Advanced Analysis Libraries – data processing capabilities valuable to electronics engineers.

Basically, all the parts of LabVIEW and LabWindows/CVI that I did not care about for my own projects! Thus if I build a computer control application in Microsoft Visual Studio, I’m likely to just use Visual Studio by itself without the Measurement Studio extension. I am not quite the target market for LabVIEW or LabWindows, and I am completely the wrong market for Measurement Studio.

Even if I needed Measurement Studio for some reason, the price of admission is steep. Because Measurement Studio is not compatible with the free Community Edition of Visual Studio, developing with it requires buying a license for a paid tier of Microsoft Visual Studio in addition to the license for Measurement Studio itself.

And finally, it has been noted that these National Instruments products require low level Win32 API access, which prevents them from being part of the new generation of Windows apps that can be distributed via the Microsoft Store. These newer apps promise a better installation and removal experience, automatic updates, and better isolation from each other to avoid incompatibilities like “DLL Hell.” None of those benefits are available if an application pulls in National Instruments software components, which is a pity.

Earlier I said “if I build a computer control application in Microsoft Visual Studio, I’ll just use Visual Studio by itself without the Measurement Studio extension” which got me thinking: that’s a good point! What if I went ahead and wrote a standard Windows application with Visual Studio?

Digging Further Into LabWindows/CVI

Following initial success of RS-232 serial communication in a LabWindows/CVI sample program, I dived deeper into the documentation. This RS-232 library accesses the serial port at a very low level. On the good side, it allows me to communicate with non-VISA peripherals like a consumer 3D printer. On the bad side, it means I’ll be responsible for all the overhead of running a serial port.

The textbook solution is to leave the main program thread to maintain a responsive UI, and spin up another thread to keep an eye on the serial port so we know when data comes in. The good news here is that the LabWindows/CVI help files say the RS-232 library code is thread safe; the bad news is that I’m responsible for thread management myself. I failed to find much in the help files, but I did find something online for LabWindows/CVI multi-threading. Not super easy to use, but powerful enough to handle the scenario. I can probably make this work.
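Here’s a sketch of how I imagine that would look with the CVI Utility Library’s default thread pool; the thread pool calls are from the multithreading documentation, while the serial-watching logic itself is hypothetical:

    #include <utility.h>   /* CVI Utility Library: thread pool functions */
    #include <rs232.h>

    static int CVICALLBACK SerialWatcher(void *callbackData)
    {
        char buffer[64];
        for (;;) {
            /* Wait for data on port 1, then hand it off to the UI thread */
            /* (e.g. via PostDeferredCall) instead of touching UI here.   */
            int bytesRead = ComRd(1, buffer, sizeof(buffer));
            (void)bytesRead;
        }
        return 0;
    }

    /* Called once at startup, after the serial port has been opened. */
    void StartSerialWatcher(void)
    {
        CmtThreadFunctionID watcherId;
        CmtScheduleThreadPoolFunction(DEFAULT_THREAD_POOL_HANDLE,
                                      SerialWatcher, NULL, &watcherId);
    }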

Earlier I noted that the LabWindows/CVI design seems to reflect the state of the art of about ten years ago, without advancing since. This was most apparent in the visual appearance of both the tool itself and of the programs it generated. Perhaps the target paying audience doesn’t put much emphasis on visual design, but I like to put in some effort in my own projects.

Which is why it really pained me when I realized the layout of a LabWindows/CVI program is fixed. Once the controls are laid out in the drag-and-drop tool, that’s it, forever. Maximizing the window will only make the window larger; all the controls stay the same and we just get more blank space. I searched for an option to scale windows and found this article in National Instruments support, but it only meant scaling in the literal sense. When this option is used and I maximize a window, all the controls keep the same layout, they just get proportionally larger. There’s no easy way to take advantage of additional screen real estate in a productive way.

This means a default LabWindows/CVI program will be unable to adapt to a screen with different aspect ratio, or be stacked side-by-side with another window, or any of the dynamic layout capabilities I’ve come to expect of applications today. This makes me sad, because the low-level capabilities are quite promising. But due to the age of the design and the high cost, I’m likely to look elsewhere for my own projects. But before I go, a quick look at one other National Instruments product: Measurement Studio.

LabWindows/CVI Serial Communication Test

Once I was done with LabWindows’ Hello World tour, it was time for some independent study focused on areas I’m personally interested in. Top of the list was serial port communication. Researching it ahead of time had indicated the library was capable of arbitrary protocols. Was that correct? I dived into the RS-232 API to find out.

Before we can open a serial port for communication, we must first find it. And the LabWindows/CVI RS-232 library’s support for enumerating serial ports is… nothing. There isn’t any. A search on user forums indicates this is the consensus: if someone wants to enumerate serial ports, they have to go straight to the underlying Win32 API.
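For the record, the Win32 fallback is only a few lines. A sketch of the QueryDosDevice approach, which is one of several ways to do it:

    #include <windows.h>
    #include <stdio.h>

    /* Probe COM1 through COM256; names that exist resolve to a device path. */
    void ListComPorts(void)
    {
        char name[16], target[256];
        for (int i = 1; i <= 256; i++) {
            sprintf(name, "COM%d", i);
            if (QueryDosDeviceA(name, target, sizeof(target)) != 0)
                printf("%s -> %s\n", name, target);
        }
    }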

Puzzled at how a program is supposed to know which COM port to open without an enumeration API, I went into the sample applications directory and found a generic serial terminal program. How did they solve this problem? They did not: they punted it to the user. There is a slider control for the user to select the COM port to open. If the user doesn’t know which device is mapped to which COM port, it is not LabWindows’ problem. So much for user-friendliness.

I had a 3D printer handy for experimentation, so I tried to use the sample program to send some Marlin G-code commands. The first obstacle was baud rate: USB serial communication can handle much faster speeds than old school RS-232, so my printer defaults to 250,000 baud. The sample program’s baud selection control only went up to 57,600 baud, so the sample had to be modified to add a 250,000 baud option. After that was done, everything worked: I could command the printer to home its axes, move to position, etc.
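The core of that exercise boils down to a couple of RS-232 Library calls. A hedged reconstruction of my own (not NI’s sample code; the COM port is arbitrary, and whether 250,000 baud is accepted depends on the underlying driver):

    #include <rs232.h>

    /* Open COM3 at 250,000 baud, 8N1, then send one Marlin G-code command. */
    void HomePrinter(void)
    {
        /* Args: CVI port index, device name, baud, parity (0 = none),   */
        /* data bits, stop bits, input queue size, output queue size.    */
        OpenComConfig(1, "COM3", 250000, 0, 8, 1, 512, 512);
        ComWrt(1, "G28\n", 4);   /* G28: home all axes */
        CloseCom(1);
    }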

First test: success! Time to dig deeper.

LabWindows/CVI Getting Started Guide

A quick look through the help files for LabWindows/CVI found it to be an interesting candidate for further investigation. It’s not exactly designed for my own project goals, but there is enough alignment to justify a closer look.

Download is accomplished through the National Instruments Package Manager. Once it was installed and updated, I could scroll through all of the National Instruments software and select LabWindows/CVI for installation. As is typical of development tools, it’s not just one package but many (~20) separate packages, ranging from the actual IDE to runtime library redistributable binaries.

Once up and running I found that my free trial period lasts only a week, but that’s fine as I only wanted to run through the Hello World tutorial in the LabWindows/CVI Getting Started Guide (PDF). The tutorial walks through generating a simple application with a few buttons and a graphing control that displays a generated sine wave. I found the LabWindows/CVI interface to be familiar, with a strong resemblance to Microsoft Visual Studio, which is probably not a complete coincidence. The code editor, file browser, debug features, and drag-and-drop UI editor are all features I’ve experienced before.

The biggest difference worth calling out is the UI-based tool “Function Panel” for generating library calls. While library calls can be typed up directly in text like any other C API, there’s the option to do it with a visual representation. The function panel is a dialog box that has a description of the function and all its parameters listed in text boxes that the developer can fill in. When applicable, the panel also shows an example of the resulting configuration. Once we are satisfied with a particular setup, selecting “Code”/”Insert Function Call” puts all parameters in their proper order in C source code. It’s a nifty way to go beyond a help page of text, making it a distinct point of improvement over the Visual Studio I knew.

Not the modern Microsoft Visual Studio, though; more like the Visual Studio of many years ago. The dated visual appearance of the tool itself is consistent with the old appearance of the available user controls. They are also consistent with the documentation: that Getting Started PDF was dated October 2010, and I couldn’t find anything more recent. The latest edition of the more detailed LabWindows/CVI Programmer’s Reference Manual (PDF) is even older, dated June 2003.

All of these data points make LabWindows appear to be a product of an earlier generation. But never mind the age – how well does it work?

Window Shopping: LabWindows/CVI

I’ve taken a quick look over Keysight VEE and LabVIEW, both tools that present software development in a format that resembles physical components and wires: software modules are virtual instruments, data flow are virtual wires. This is very powerful for expressing certain problem domains and naturally imposes a structure. From a software perspective, explicit description of data flow also makes it easier to take advantage of parallel execution possible on modern multicore processors.

But imposing certain structures also makes it hard to venture off the beaten path, which is why attention now turns to LabVIEW’s stablemate, LabWindows/CVI. They both offer access to industry standard communication protocols plus data analysis and visualization tools, but the approach to data flow and program structure is entirely different. Instead of LabVIEW’s visual “G” language, LabWindows/CVI uses ANSI C to connect all its components and control the flow of data and execution. I am optimistic it will be more aligned with my software experience.

Like LabVIEW, the help files for LabWindows/CVI are also available for download and perusal. Things look fairly promising at first glance.

I found a serial communication API that can read and write raw bytes under:

  • Library Reference
    • RS-232 Library
      • function tree

For user display, I found something that resembles LabVIEW’s “2D Picture Control” here called a “Canvas Control”. An overview of drawing with Canvas Control’s basic drawing primitives can be found under:

  • Library Reference
    • User Interface Library
      • Controls
        • Control Types
          • Canvas Controls
            • Programming with Canvas Controls
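From those pages, the drawing model looks like ordinary immediate-mode primitives. A small sketch of my own, with hypothetical panel and control handles that would come from the UI editor:

    #include <userint.h>

    /* Draw a red line on a Canvas Control, batched to avoid flicker. */
    void DrawTelemetry(int panelHandle, int canvasID)
    {
        CanvasStartBatchDraw(panelHandle, canvasID);
        SetCtrlAttribute(panelHandle, canvasID, ATTR_PEN_COLOR, VAL_RED);
        CanvasDrawLine(panelHandle, canvasID,
                       MakePoint(10, 10), MakePoint(200, 150));
        CanvasEndBatchDraw(panelHandle, canvasID);
    }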

I’m encouraged by what I found looking through LabWindows/CVI help files, enough to download the actual development tool and get hands-on with it.

Window Shopping: LabVIEW 2019

After taking a quick look over Keysight VEE, I switched focus to LabVIEW by National Instruments. I don’t know how directly these two products compete in the broader market, but I do know they have some overlap relating to instrument control. I had some exposure to LabVIEW many years ago thanks to LEGO Mindstorms, which used a version of LabVIEW for programming the NXT brick. Back then the Mindstorms-specific version was very closely guarded and, when I lost track of my CD-ROM, I was out of luck because neither NI nor LEGO made it available for download. Thankfully that has since changed, and the Mindstorms flavor of LabVIEW is now available for download.

But I’m not focused on LEGO right now; today’s aim is to see how I might fulfill my general computer control goals with this tool. For that information I was thankful National Instruments made the help files for LabVIEW available for download, so I could investigate without downloading and installing the full tool suite. It took a bit of hunting around to find them, though: the download page was titled LabVIEW 2018, but it had a download link for the 2019 help files.

I found a help page “Serial Port Communication” under the section:

  • Controlling Instruments
    • Types of Instruments

It assumes the user will only be controlling devices that speak the VISA protocol, not performing general serial communication. There was more serial communication information in the section:

  • VISA Resource
    • I/O Session
      • Serial Instr

There’s also an online tutorial for instrument communication. This page has a flowchart implying there’s a “Direct I/O” mode we can fall back on if all else fails, but I found no mention of how to perform this direct I/O in the help files.

The graphics rendering side was more straightforward. There’s no mention of ActiveX control here, but under:

  • Fundamentals
    • Graphs and Charts
      • Graphics and Sound VIs

There are multiple pages of information for a “2D Picture Control” with drawing primitives like points, lines, arcs, etc. Details on this drawing API are found under:

  • VI and Function Reference
    • Programming VIs and Functions
      • Graphics & Sound VIs
        • Picture Plot VIs

However, it’s not clear this functionality scales to complex drawings with thousands (or more) of primitives. It certainly wouldn’t be the first time I used an API that stumbled as the data size grew.

So the drawing side looks workable pending a question mark on how well it scales, but the serial communication side is blocked. Until I find a way to perform that mystical direct I/O, I’m going to set LabVIEW aside and look at its sibling LabWindows/CVI.

[UPDATE: I’ve since found LabVIEW MakerHub and LINX, which allows LabVIEW to communicate with maker level hardware over serial.]