Window Shopping Firmata: Connect Microcontrollers To Computers

Reading about LabVIEW LINX stirred up a memory of something with a similar premise. I had forgotten its name, and it took a bit of research to re-discover Firmata. Like LINX, Firmata is a protocol for communicating between microcontrollers and desktop computers. Like LINX, there are a few prebuilt implementations available for direct download, such as the standard Arduino implementation of Firmata.

There’s one aspect of the Firmata protocol I found interesting: its relationship to MIDI messages. I had originally thought it was merely inspired by MIDI messages, but the Firmata protocol documentation says it is actually a proper subset of MIDI. This means Firmata messages have the option to coexist with MIDI messages on the same channel, conveying data that is mysterious to MIDI instruments but well-formed enough not to cause problems. This was an interesting assertion, even with the disclaimer that in practice Firmata typically runs at a higher serial speed on its own bus.
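To make that concrete, here is a small Python sketch based on my reading of the protocol documentation: it packs an analog reading the way Firmata does, and the resulting three bytes also happen to be a structurally valid MIDI pitch bend message, which is what allows the two to coexist.

```python
def firmata_analog_message(pin: int, value: int) -> bytes:
    """Pack an analog value (0-16383) into a Firmata analog I/O message.

    Firmata borrows MIDI framing: 0xE0|pin is the status byte (MIDI would
    read it as 'pitch bend on channel <pin>'), followed by two 7-bit data
    bytes, least significant first.
    """
    if not 0 <= value <= 0x3FFF:
        raise ValueError("value must fit in 14 bits")
    status = 0xE0 | (pin & 0x0F)   # command nibble + pin/channel nibble
    lsb = value & 0x7F             # lower 7 bits
    msb = (value >> 7) & 0x7F      # upper 7 bits
    return bytes([status, lsb, msb])

# A reading of 1023 on analog pin 2 becomes E2 7F 07 --
# well-formed MIDI that an instrument would simply treat as pitch bend data.
print(firmata_analog_message(2, 1023).hex(" "))
```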

Like LINX, Firmata is intended to be easily implemented by simple hardware. The standard Arduino implementation can be customized for specific projects, and anything else that can communicate over a serial port is a candidate hardware endpoint for Firmata.

But on the computer side, Firmata is very much unlike LINX in its wide range of potential software interfaces. LINX is a part of LabVIEW, and that’s the end of the LINX story. Firmata can be implemented by anything that can communicate over a serial port, which should cover almost anything.

Firmata’s own Github hosts some Python sample code, which is but one of five options for Python client libraries listed on the protocol web site. Those pages also carry some useful tips, like using Python’s ord()/chr() to convert data to and from Firmata packet bytes. Beyond Python, every programming language I know of is invited to the Firmata party: Processing, Ruby, JavaScript, etc.
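As a taste of what the Python side looks like, here is a minimal sketch using one of those client libraries, pyFirmata. It assumes an Arduino running StandardFirmata is attached at /dev/ttyACM0; the port name is an assumption to adjust for your own system.

```python
import time
from pyfirmata import Arduino, util

# Assumes an Arduino running StandardFirmata is attached at this port.
board = Arduino('/dev/ttyACM0')

# A background iterator thread keeps the serial buffer drained so analog
# reports keep flowing.
it = util.Iterator(board)
it.start()

led = board.get_pin('d:13:o')    # digital pin 13 as output
knob = board.get_pin('a:0:i')    # analog pin 0 as input
knob.enable_reporting()

for _ in range(10):
    led.write(1)
    time.sleep(0.5)
    led.write(0)
    time.sleep(0.5)
    print("A0 reads:", knob.read())   # normalized 0.0-1.0, or None before the first report

board.exit()
```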

Since I had been playing with C# and .NET recently, I took a quick glance at the C# section of the Firmata client library list. These entries are older than UWP and use older non-async APIs. The first one on the list used System.IO.Ports.SerialPort and needed some workaround for Mono. The second one isn’t even C#: it’s aimed at Visual Basic. I haven’t looked at the third one on the list.

If I wanted to write a UWP application that controls hardware via Firmata, writing a client library with the newer async Windows.Devices.SerialCommunication.SerialDevice API might be a fun project.

Window Shopping LINX: Connecting LabVIEW To Maker Hardware

When I looked over LabVIEW earlier with the eyes of a maker, my biggest stumbling block was trying to connect to the kind of hardware a maker would play with. LabVIEW has a huge library of hardware interface components for sophisticated professional level electronics instrumentation. But I found nothing for simple one-off projects like the kind I have on my workbench. I got as far as finding a reference to a mystical “Direct I/O” mechanism, but no details.

In hindsight, that was perfectly reasonable. I was browsing LabVIEW information presented on their primary site targeted at electronics professionals. I thought the lack of maker-friendly information meant National Instruments didn’t care about makers, but I was wrong. It actually meant I was not looking in the right place. LabVIEW’s maker-friendly face is on an entirely different site, the LabVIEW MakerHub.

Here I learned about LINX, an architecture for interfacing with maker-level hardware, starting with the ubiquitous Arduino and Raspberry Pi and extensible to others. From the LINX FAQ and the How LINX Works page, I got the impression it allows individual LabVIEW VIs (virtual instruments) to correspond to individual pieces of functionality on an Arduino. Very importantly, it implies that this representation is distinct from the physical transport layer, where there’s only one serial (or WiFi, or Ethernet) connection between the computer running LabVIEW and the microcontroller.

If my interpretation is correct, this is a very powerful mechanism. It allows the bulk of a LabVIEW program to be set up without worrying about the underlying implementation. Here’s one example that came to mind: a project can start small with a single Arduino handling all hardware interfacing. Then, as the project grows and the serial link becomes saturated, functions can be split off into separate Arduinos with their own serial links plugged into the computer. Yet doing so would not change the LabVIEW program.

That design makes LabVIEW much more interesting. What dampens my enthusiasm is the lack of evidence of active maintenance on LabVIEW MakerHub. I see support for the BeagleBone Black, but not any of the newer BeagleBone boards (the Pocket is the obvious candidate). The list of supported devices covers the Raspberry Pi only up to the 2, the Teensy only up to 3.1, the Espressif ESP8266 but not the ESP32, and so on. Balancing that discouraging sight is that the code is on Github, and we see more recent traffic there as well as on the MakerHub forums. So it’s probably not dead?

LINX looks very useful when the intent is to interface with LabVIEW on the computer side. But when we want something on the computer other than LabVIEW, we can use Firmata, another implementation of the same concept.

UPDATE: And just after I found it (a few years after it launched), NI is killing MakerHub, with big bold red text across the top of the site: “This site will be deprecated on August 1, 2020”

Window Shopping: Progressive Web App

When I wrote up my quick notes on ElectronJS, I had the nagging feeling I forgot something. A few days later I remembered: I forgot about Progressive Web Apps (PWA), created by some people at Google who agree with ElectronJS that their underlying Chromium engine can make a pretty good host for local offline applications.

But even though PWA and ElectronJS share a lot in common, I don’t see them as direct competitors. What I’ve seen on ElectronJS is focused on creating applications in the classic sense: they are primarily local apps, just built using technologies that were born in the web world. Google’s PWA demos showcase extensions of online web sites, where the primary focus remains the web site but PWA lets it have a local offline supplement.

Given that interpretation, a computer control panel for an electronics hardware project is better suited to ElectronJS than a PWA. At least, as long as the hardware’s task is standalone and independent of others. If a piece of hardware is tied to a network of other similar or complementary pieces, then the network aspect may favor a PWA interfacing with the hardware via Web USB. Google publishes a tutorial showing how to talk to a BBC micro:bit using a Chrome serial port API. I’m not yet familiar enough with the various APIs to know whether this tutorial uses the web standard or the Chrome proprietary predecessor to the standard, but its last updated date of 2020/2/27 implies the latter.

Since PWA started as a Google initiative, they’ve enabled it in as many places as they could, starting with their own platforms like Android and ChromeOS. PWAs are also supported via the Chrome browser on major desktop operating systems. The big gap in support is Apple’s iOS platform, where Apple forbids a native code Chrome browser and, more generally, any third-party application platform. There are some technical reasons, but the biggest hurdle is financial: installing a PWA bypasses Apple’s iOS app store, a huge source of revenue for the company, so Apple has a financial disincentive.

In addition to Google’s PWA support via Chrome, Microsoft supports PWA on Windows via their Edge browser with both the old EdgeHTML and new Chromium-based versions, though with different API feature levels. While there’s a version of Edge browser for Xbox One, I saw no mention of installing PWAs on an Xbox like a standard title.

PWAs would be worth a look for network-centric projects that also have some offline capabilities, as long as iOS support is not critical.

A Quick Look At NI Measurement Studio

While digging through National Instruments online documentation to learn about LabVIEW and LabWindows/CVI, I also came across something called Measurement Studio. This trio of products make up their category of Programming Environments for Electronic Test and Instrumentation. Since I’ve looked at two out of three, might as well look at the whole set and jot down some notes.

Immediately we see a difference in the product description. Measurement Studio is not a standalone application, but an extension to Microsoft Visual Studio. By doing so, National Instruments takes a step back and allows Microsoft Visual Studio to handle most of the common overhead of writing an application, stepping in only when necessary to deliver functionality valuable to their target market. What are these functions? The product page lists three bullet points:

  • Connect to Any Hardware – support for electronics industry standard communication protocols: GPIB, VISA, etc.
  • Engineering UI Controls – on-screen representation of tasks an electronics engineer would want to perform.
  • Advanced Analysis Libraries – data processing capabilities valuable to electronics engineers.

Basically, all the parts of LabVIEW and LabWindows/CVI that I did not care about for my own projects! Thus if I build a computer control application in Microsoft Visual Studio, I’m likely to just use Visual Studio by itself without the Measurement Studio extension. I am not quite the target market for LabVIEW or LabWindows, and I am completely the wrong market for Measurement Studio.

Even if I needed Measurement Studio for some reason, the price of admission is steep. Because Measurement Studio is not compatible with the free Community Edition of Visual Studio, developing with Measurement Studio requires buying a license for a paid tier of Microsoft Visual Studio in addition to the license for Measurement Studio itself.

And finally, it has been noted that National Instruments products require low level Win32 API access, which prevents them from being part of the new generation of Windows apps that can be distributed via the Microsoft Store. These newer apps promise a better installation and removal experience, automatic updates, and better isolation from each other to avoid incompatibilities like “DLL Hell”. None of those benefits are available to an application that pulls in National Instruments software components, which is a pity.

Earlier I said “if I build a computer control application in Microsoft Visual Studio, I’ll just use Visual Studio by itself without the Measurement Studio extension” which got me thinking: that’s a good point! What if I went ahead and wrote a standard Windows application with Visual Studio?

Window Shopping LabWindows/CVI

I’ve taken a quick look over Keysight VEE and LabVIEW, both tools that present software development in a format that resembles physical components and wires: software modules are virtual instruments, and data flows are virtual wires. This is very powerful for expressing certain problem domains and naturally imposes a structure. From a software perspective, explicit description of data flow also makes it easier to take advantage of the parallel execution possible on modern multicore processors.

But imposing certain structures also makes it hard to venture off the beaten path, which is why attention now turns to LabVIEW’s stablemate, LabWindows/CVI. They both offer access to industry standard communication protocols plus data analysis and visualization tools, but the program structure and the flow of data are entirely different. Instead of LabVIEW’s visual “G” language, LabWindows/CVI uses ANSI C to connect all its components and control the flow of data and execution. I am optimistic it will be more aligned with my software experience.

As with LabVIEW, the program help files for LabWindows/CVI are also available for download and perusal. Things look fairly promising at first glance.

I found a serial communication API that can read and write raw bytes under:

  • Library Reference
    • RS-232 Library
      • function tree

For user display, I found something that resembles LabVIEW’s “2D Picture Control” here called a “Canvas Control”. An overview of drawing with Canvas Control’s basic drawing primitives can be found under:

  • Library Reference
    • User Interface Library
      • Controls
        • Control Types
          • Canvas Controls
            • Programming with Canvas Controls

I’m encouraged by what I found looking through LabWindows/CVI help files, enough to download the actual development tool and get hands-on with it.

Window Shopping: LabVIEW 2019

After taking a quick look over Keysight VEE, I switched focus to LabVIEW by National Instruments. I don’t know how directly these two products compete in the broader market, but I do know they have some overlap relating to instrument control. I had some exposure to LabVIEW many years ago thanks to LEGO Mindstorms, which had used a version of LabVIEW for programming the NXT brick. Back then the Mindstorm-specific version was very closely guarded and, when I lost track of my CD-ROM, I was out of luck because neither NI nor LEGO made it available for download. Thankfully that has since changed and the Mindstorm flavor of LabVIEW is available for download.

But I’m not focused on LEGO right now; today’s aim is to see how I might fulfill my general computer control goals with this tool. For that information I was thankful National Instruments made the help files for LabVIEW available for download, so I could investigate without installing the full tool suite. It took a bit of hunting around to find them, though: the download page was titled LabVIEW 2018, but it has a download link for the 2019 help files.

I found a help page “Serial Port Communication” under the section:

  • Controlling Instruments
    • Types of Instruments

And it assumes the user would only be controlling devices that communicate via the VISA protocol, not general serial communication. There was more serial communication information in the section:

  • VISA Resource
    • I/O Session
      • Serial Instr

There’s also an online tutorial for instrument communication. This page has a flowchart that implies there’s a “Direct I/O” mode we can fall back to if all else fails, but I found no mention of how to perform this direct I/O in the help files.

The graphics rendering side was more straightforward. There’s no mention of ActiveX control here, but under:

  • Fundamentals
    • Graphs and Charts
      • Graphics and Sound VIs

There are multiple pages of information for a “2D Picture Control” with drawing primitives like points, lines, arcs, etc. Details on this drawing API are found under:

  • VI and Function Reference
    • Programming VIs and Functions
      • Graphics & Sound VIs
        • Picture Plot VIs

However, it’s not clear this functionality scales to complex drawings with thousands (or more) of primitives. It certainly wouldn’t be the first time I used an API that stumbled as the data size grew.

So the drawing side looks workable pending a question mark on how well it scales, but the serial communication side is blocked. Until I find a way to perform that mystical direct I/O, I’m going to set LabVIEW aside and look at its sibling LabWindows/CVI.

[UPDATE: I’ve since found LabVIEW MakerHub and LINX, which allows LabVIEW to communicate with maker level hardware over serial.]

Window Shopping: Keysight VEE Custom Data Display

Looking over Keysight VEE’s support for device communication, I found there is only support for a limited subset of USB serial communication patterns. And even for the supported transaction model, it seems to be quite labor intensive to craft. It left me with the impression that venturing outside VEE’s supported list of equipment is something to be avoided.

Attention then turns to VEE’s support for arbitrary display of data. Like its competitors in the space of test instrumentation software, it has an extensive library for data analysis common to the problem domain. This is useful for their paying customers, but again quite restrictive if we want to venture outside their supported list.

As far as I can tell by just reading their Advanced Techniques PDF, the method to add a custom data visualization component is to create an ActiveX control. This is a technology I haven’t thought about in years! I first learned of it in the context of Microsoft Visual Basic decades ago, where people could drag and drop UI components to build their application. Each of these visual components was built with technology that eventually became named ActiveX controls. This technology is so old that not even Microsoft is investing in it now; they have moved on, giving stewardship to an open standards body.

The fact that ActiveX is the state-of-the-art technology for extending VEE is telling. Looking over the recent history of VEE software releases, it has all the signs of a piece of software living on continuing life support. They are still releasing new versions on a regular basis, but the advances between releases are mostly in the form of new instrument support (GPIB and otherwise) and certification that it will run on the latest edition of Windows. I have seen very little in the way of new feature development or general evolution.

VEE seems to be perfectly suited to its target market: electronics engineers trying to automate a collection of instruments, every one of which supports industry standard protocols (especially those made by Keysight), then performing analysis of that data as typically needed by electrical engineers. But since my goal is to control arbitrary equipment communicating over USB serial, then process and display that data in ways unrelated to electrical engineering, I should set VEE aside and look at other options.

Window Shopping: Keysight VEE Serial Communication

When I was learning about industry standards for electronics test and measurement equipment automation, I quickly came across GPIB, which has its roots in something Hewlett-Packard developed for their equipment. It made sense, then, that they would have a software suite to run on a PC and talk to these instruments via GPIB. This turned out to be something called VEE, but it is no longer an HP product. It has had multiple custodians: from HP it moved to Agilent, and now it is in the hands of Keysight Technologies.

So it was no surprise the focus would be on professional equipment with GPIB or the closely related USB-based successor USBTMC. There is also built-in support for a few other instrument standards, all packaged together in the IO Libraries Suite of a full VEE installation. However, I had a hard time finding any mention of how to communicate with custom-built equipment outside of supported protocols. It certainly wasn’t covered in their Quick Start Guide (PDF), so I moved on to Advanced Techniques (PDF).

I thought perhaps I would have to create what they call a “panel driver” for installation into VEE in order to support custom equipment, but a search for “How to write a VEE Panel Driver” failed to retrieve useful links. How do instrument manufacturers create and release VEE Panel Driver for their equipment? So far that is still a mystery.

In the absence of a custom panel driver, the next option is to specify a custom communication protocol directly in VEE, and such a thing can be built from their Transaction I/O mechanism, suitable at least for query/response types of interaction. The exact commands to be sent out and the responses expected in return are crafted step by step using the VEE GUI. This seems very labor intensive, but it has the advantage of avoiding the annoying and common byte processing bugs that are typical when such serial byte stream handling is written in C.
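For comparison, here is roughly what one query/response transaction looks like when hand-coded in Python with pyserial instead of being built in VEE’s GUI. The instrument here is hypothetical (it answers a SCPI-style *IDN? query with one terminated line), and the port name and baud rate are assumptions; the bugs VEE guards against live in details like terminators and timeouts.

```python
import serial

# Hypothetical instrument on a USB serial port; the port name and baud
# rate are assumptions, not anything taken from VEE documentation.
with serial.Serial('/dev/ttyUSB0', 9600, timeout=2) as port:
    port.reset_input_buffer()
    port.write(b'*IDN?\n')             # the "write text" half of the transaction
    reply = port.read_until(b'\n')     # the "read text" half, ended by a terminator
    if not reply.endswith(b'\n'):
        raise TimeoutError("instrument did not answer within 2 seconds")
    print(reply.decode('ascii', errors='replace').strip())
```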

It’s not obvious from reading the document what happens if the VEE transaction specification is wrong and the incoming serial data doesn’t match. That is not encouraging, nor were there any mechanisms to help support development of transaction I/O. There’s just trial and error. Seriously. This is a direct quote from the manual:

Many times the best way to develop the transactions you need is by using trial and error

(Chapter 4: Using Transaction I/O / Creating and Reading Transactions / Editing the Data Field / Suggestions for Developing Transactions)

I didn’t find any mention of how to deal with a continuous data stream from devices that do not perform transaction-based communication, for example a thermometer that continuously reports temperature without prompting, or the Neato LIDAR. VEE does have a data polling feature, but that seems to be restricted to devices on a specific subset of supported protocols and not arbitrary serial communication.
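To illustrate the kind of device I mean, here is a minimal Python sketch reading a hypothetical thermometer that emits one reading per line without ever being asked. There is no query to pair each response with, so the transaction model does not fit.

```python
import serial

# Hypothetical free-running sensor: it prints a line like "23.7" forever,
# with no prompting required. Port name and baud rate are assumptions.
with serial.Serial('/dev/ttyUSB0', 115200, timeout=1) as port:
    while True:
        line = port.readline()          # one unsolicited reading
        if not line:
            continue                    # timeout with no data; keep listening
        try:
            temperature = float(line.strip())
        except ValueError:
            continue                    # garbled or partial line; skip it
        print(f"temperature: {temperature:.1f} C")
```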

From this brief survey, it appears VEE support for arbitrary USB serial communication is quite limited. The next step is to look at how VEE supports displaying arbitrary data.

Toyota Mirai Water Release Switch

I have always been a fan of novel engineering and willing to spend my own money to support adventurous products. This is why, back in 2003, I was cross shopping two cars that had nothing in common except novel engineering: the Mazda RX-8 with its Wankel rotary engine and the Toyota Prius gas-electric hybrid.

Side note: it is common for car salesmen to ask what other cars a particular shopper is considering. When I told them, it was fun to watch their faces as they worked to process the answer.

Eventually I decided on the Mazda RX-8, which I still own. Since then I have also leased a Chevrolet Volt plug-in hybrid for three years; in fact, it was the exact Volt shown at the top of my Hackaday post memorializing the car. Both of those cars are no longer being manufactured. Meanwhile, Toyota’s gas-electric hybrids have become mainstream, making them less personally interesting to me.

But Toyota has an entirely different car to showcase novel engineering: the hydrogen fuel cell Mirai. I had the chance to join a friend evaluating the car. He was serious about getting one; I just wanted to check it out and was not contemplating one of my own. While we were waiting for his appointment, we got in the showroom model and started looking around.

And since we were engineers, this also included digging into the owner’s manual sitting in the glovebox. The Mirai ownership experience is a fascinating blend of the familiar and the unusual, and the strangest item that caught our attention was this water release switch. The manual only said it was for ‘certain situations’ but did not elaborate. We asked the sales rep and learned it lets the driver dump water before entering places where water could cause problems.

Two potential examples were actually in front of us: the Mirai parked in the showroom was sitting on a carpeted surface, where water could leave a stain. Elsewhere in the showroom, cars were parked on tile or polished concrete, where water could leave a slippery surface and cause people to fall. The button allows a Mirai to drain its water before moving into the showroom.

Commercially, the Mirai is in a tough spot right now. It is at the end of the current product cycle, where three-year-old units from the same generation can be purchased off lease at significant depreciation, while a far better looking next generation is on the horizon. Toyota has a lot of incentives on offer for potential Mirai shoppers. When leasing for three years, in addition to a discount up front, all regular checkups and maintenance are free (no oil and filter changes here, but things like checking for hydrogen leaks instead), plus a $12,000 credit for hydrogen fuel.

It was not enough to entice my friend, and I was not interested either. I believe my next car will be a battery electric vehicle.

Window Shopping Chirp For Arduino… Actually, ESP32

Lately local fellow maker Emily has been tinkering with the Mozzi sound synthesis library for building an Arduino-based noise making contraption. I pitched in occasionally on the software side of her project, picking up bits and pieces of Mozzi along the way. Naturally I started thinking about how I might use Mozzi in a project of my own. I floated the idea of using Mozzi to create a synthetic robotic voice for Sawppy, similar to the voices created for silver screen robots R2-D2, WALL-E, and BB-8.

“That’d be neat,” she said, “but there’s this other thing you should look into.”

I was shown a YouTube video by Alex of Hackster.io (also embedded below), where a system was prototyped to create a voice for her robot companion Archimedes. And Archie’s candidate new voice isn’t just a set of noises for fun’s sake: the sounds encode data, and thus form an actual sensible verbal language for a robot.

This “acoustic data transmission” magic is the core offering of Chirp.io, which was created for purposes completely unrelated to cute robot voices. The idea is to allow communication of short bursts of data without the overhead of joining a WiFi network or pairing Bluetooth devices. Almost every modern device — laptop, phone, or tablet — already has a microphone and a speaker for Chirp.io to leverage. Their developer portal lists a wide variety of platforms with Chirp.io SDK support.

Companion robot owls and motorized Mars rover models weren’t part of the original set of target platforms, but that is fine. We’re makers and we can make it work. I was encouraged when I saw a link for the Chirp for Arduino SDK. Then a scan through the documentation of the current release revealed it would be more accurately called the Chirp for Espressif ESP32 SDK, as it doesn’t support original genuine Arduino boards. The target platform is actually ESP32 hardware (connected to audio input and output peripherals) running in its Arduino IDE compatible mode. That didn’t matter to me, since the ESP32 is the platform I’ve been meaning to gain some proficiency with anyway, but it might be annoying to someone who actually wanted to use it on other Arduino and compatible boards.

Getting Chirp.io on an ESP32 up and running sounds like fun, and it’s free to start experimenting. So thanks to Emily, I now have another project for my to-do list.

BeagleBone Blue And Robot Control Library Drives eduMIP

My motivation to learn about the BeagleBone Blue came from my rover Sawppy driving by the BeagleBoard Foundation booth at SCaLE 17x. While this board might not be the best fit for a six wheel drive, four wheel steering, rocker-bogie Mars rover model, it has a great deal of potential for other projects.

But what motivated the BeagleBone Blue? When brainstorming about what I could do with something cool, it’s always instructive to learn a little bit about where it came from. A little research usually pays off because the better my idea aligns with its original intent, the better my chances are of a successful project.

I found that I could thank the Coordinated Robotics Lab at the University of California, San Diego for this creation. As a teaching tool for one of the courses at UCSD, they created the Robotics Cape add-on for the BeagleBone Black. It is filled with goodies useful for the robot projects they cover in class. More importantly, with quadrature input to go along with DC motor output, plus a 9-axis IMU on top of other sensors, this board is designed for robots that react to their environment, not just simple automata that flail their limbs.

The signature robot chassis for this brain is the eduMIP. MIP stands for Mobile Inverted Pendulum, and the “edu” prefix makes it clear it’s about teaching the principles behind such systems and inviting exploration and experimentation. It’s not just a little self-balancing Segway-like toy, but one where we can dig into and modify its internals. I like where they are coming from.

Photo of eduMIP by Renaissance Robotics.

BeagleBone Blue, then, is an offering to make robots like the eduMIP easier to build. By merging a BeagleBone Black with the Robotics Cape into a single board, and removing components that aren’t as useful for a mobile robot (such as the Ethernet port), we arrive at the BeagleBone Blue.

Of course, the brawn of a robotics chassis isn’t much use without the smarts to make it all work together. Befitting its university coursework origins and the BeagleBoard Foundation’s standard practice, the software for its peripherals, now called the Robot Control Library, is documented and its source code is available on Github.

I could buy an eduMIP of my own to help me explore the BeagleBone Blue, and at $50 it is quite affordable. But I think I want to spend some time with the BeagleBone Blue itself before I spend more money.

Window Shopping BeagleBone Blue

Sawppy was a great ice breaker as I roamed the expo hall of SCaLE 17x. It was certainly the right audience to appreciate such a project, even though there were few companies with products directly relevant to a hobbyist Mars rover. One notable exception, however, was the BeagleBoard Foundation booth. As Sawppy drove by, the reception was: “Is that a Raspberry Pi? Yes it is. That should be a BeagleBone Blue!”

BeagleBone Blue picture from Make.

With this prompt, I looked into BBBlue in more detail. At $80 it is significantly more expensive than a bare Raspberry Pi, but it incorporates a lot of robotics-related features that a Pi would require several HATs to reach parity.

All BeagleBoards offer a few advantages over a Raspberry Pi, which the BBBlue inherits:

  • Integrated flash storage, Pi requires a separate microSD card.
  • Onboard LEDs for diagnosis information.
  • Onboard buttons for user interaction – including a power button! It has always personally grated me that a Raspberry Pi has no graceful shutdown button.

Above and beyond standard BeagleBoards, the Blue adds:

  • A voltage regulator, which I know well is an extra component on a Pi.
  • On top of that, BBBlue can also handle charging a 2S LiPo battery! Being able to leave the battery inside a robot would be a huge convenience. And people who don’t own smart battery chargers wouldn’t need to buy one if all they do is use their battery with a BBBlue.
  • 8 PWM headers for RC-style servo motors.
  • 4 H-bridge to control 4 DC motors.
  • 4 Quadrature encoder inputs to know what those motors are up to.
  • 9-axis IMU (XYZ acceleration + XYZ rotation)
  • Barometer

Sadly, a BBBlue is not a great fit for Sawppy, because the rover’s serial bus servos make all those hardware control features (8 PWM headers, 4 motor controllers, 4 quadrature inputs) redundant. But I can definitely think of a few projects that would make good use of a BeagleBone Blue. It is promising enough for me to order one to play with.

Window Shopping RobotC For My NXT

I remember my excitement when LEGO launched their Mindstorms NXT product line. I grew up with LEGO and was always a fan of the Technic line, which let a small child experiment with mechanical designs without the physical dangers of fabrication shop tools. Building an intuitive grasp of the power of gear reduction was the first big step on my learning curve of mechanical design.

Starting with simple machines that were operated by hand cranks and levers, LEGO added actuators like pneumatic cylinders and electric motors. This trend eventually grew to programmable electronic logic. Some of the more affordable Mindstorm products only allowed the user to select between a few fixed behaviors, but with the NXT it became possible for users to write their own fully general programs.

Lego Mindstorm NXT Smart Brick

At this point I was quite comfortable with programming in languages like C, but that was not suitable for LEGO’s intended audience. So they packaged a LabVIEW-based programming environment that is a visual block-based system like today’s Scratch & friends. It lowered the barrier to entry but exacted a cost in performance. The brick is respectably powerful inside, and many others thought it was worthwhile to unlock its full power. I saw enough efforts underway that I thought I’d check back later… and I finally got around to it.

Over a decade later, I see Wikipedia has a long list of alternative methods of programming a LEGO Mindstorms NXT. My motivation to look at this list came from Jim Dinunzio, a member of the Robotics Society of Southern California, presenting his project TotalCanzRecall at the February 2019 RSSC meeting. Mechanically his robot was built from the stock set of components in a NXT kit, but the software was written in RobotC. Jim reviewed the capabilities he had in RobotC that were not available with the default LEGO software, the most impressive being the ability to cooperatively multitask.

A review of the information on the RobotC web site told me it is almost exactly what I had wanted when I picked up a new NXT off the shelf circa 2006: a full IDE with debugging tools, a long list of interesting features, and documentation to help people learn those features.

Unfortunately, we are no longer in 2006. My means of mechanical construction have evolved beyond LEGO to 3D printing, and I have a wide spectrum of electronic brainpower at my disposal, from low-end 8-bit PIC microcontrollers (mostly the PIC16F18345) to the powerful Raspberry Pi 3, both of which can already be programmed in C.

There may be a day when I will need to build something using my dusty Mindstorm set and program it using RobotC. When that day comes I’ll go buy a license of RobotC and sink my teeth into the problem, but that day is not today.

Window Shopping JeVois Machine Vision Camera

In the discussion period that followed my Sawppy presentation at RSSC, the topic of machine vision came up. When discussing problems and potential solutions, the JeVois camera was mentioned as one of the potential tools for machine vision problems. I wrote down the name and resolved to look it up later. I have done so, and I like what I see.

First thing that made me smile was the fact it was a Kickstarter success story. I haven’t committed any of my own money to any Kickstarter project, but I’ve certainly read more about failed projects than successful ones. It’s nice when the occasional success story comes across my radar.

The camera module is of the type commonly used in cell phones, and behind the camera is a small machine vision computer, again built mostly of portable electronics components. The idea is to have a completely self-contained vision processing system that requires only power input and delivers processed data output. Various machine vision tasks can be handled completely inside the little module, as long as the user is realistic about the limited processing power available. It is less powerful but also less expensive and smaller than Google’s AIY Vision module.

The small size is impressive, and it led to my next note of happiness: the camera looks pretty well documented. When I looked at its size, I had wondered how best to mount the camera on a project. It took less than 5 minutes to decipher the documentation hierarchy and find details on physical dimensions and how to mount the camera case. Similarly, my curiosity about power requirements was quickly answered with confirmation that its power draw does indeed exceed the baseline USB 500mW.

Ease of programming was the next investigation. Some of the claims around this camera made it sound like its open source software stack can run on a developer’s PC and be debugged before publishing to the camera. However, the few tutorials I skimmed through (one example here) all required an actual JeVois camera to run vision code. I interpret this to mean that the JeVois software stack is indeed specific to the camera, and that “develop on your PC first” only means developing vision algorithms on a PC in the general sense before porting them to the JeVois software stack for deployment on the camera itself. If I find out I’m wrong, I’ll come back and update this paragraph.
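Either way, the host side of the arrangement stays simple: the camera does the vision work and the computer just consumes its results. A rough Python sketch of that host side, assuming the camera is configured to report detections as plain text lines over its serial-over-USB port (the port name and message format here are illustrative, not taken from JeVois documentation):

```python
import serial

# Illustrative only: assumes the camera is set up to emit one text line per
# detection on its serial-over-USB port. The port name is an assumption.
with serial.Serial('/dev/ttyACM0', 115200, timeout=1) as cam:
    while True:
        line = cam.readline().decode('ascii', errors='replace').strip()
        if line:
            # Hand the tokens off to whatever is steering the robot.
            print("detection message:", line.split())
```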

When I looked on Hackaday, I saw that one of the writers thought the JeVois camera’s demo mode was a very effective piece of software. It should be quite effective at its job: getting users interested in digging deeper. Project-wise, I see a squirrel detector and a front door camera already online.

The JeVois camera has certainly earned a place on my “might be interesting to look into” list for more detailed later investigation.

Window Shopping AWS DeepRacer

At AWS re:Invent 2018 a few weeks ago, Amazon announced their DeepRacer project. At first glance it appears to be a more formalized version of DonkeyCar, complete with an Amazon-sponsored racing league to take place both online digitally and physically at future Amazon events. Since the time I wrote up a quick snapshot for Hackaday, I went through and tried to learn more about the project.

While it would have been nice to get hands-on time, it is still in pre-release, and my application to join the program received an acknowledgement that boils down to “don’t call us, we’ll call you.” There have been no updates since, but I can still learn a lot by reading their pre-release documentation.

Based on the (still subject to change) Developer Guide, I’ve found interesting differences between DeepRacer and DonkeyCar. While they are both built on a 1/18th scale toy truck chassis, there are differences almost everywhere above that, starting with the onboard computer: a standard DonkeyCar uses a Raspberry Pi, but the DeepRacer has a more capable onboard computer built around an Intel Atom processor.

The software behind DonkeyCar is focused just on driving a DonkeyCar. In contrast, DeepRacer’s software infrastructure is built on ROS, a more generalized system that just happens to have preset resources to help people get up and running on a DeepRacer. The theme continues with the simulator: DonkeyCar has a task-specific simulator, while DeepRacer uses Gazebo, which can simulate an environment for anything from a DeepRacer to a humanoid robot on Mars. Amazon provides a preset Gazebo environment to make it easy to start DeepRacer simulations.

And of course, for training the neural networks, DonkeyCar uses your desktop machine while DeepRacer wants you to train on AWS hardware. And again there are presets available for DeepRacer. It’s no surprise that Amazon wants people to build skills that are easily transferable to robots other than DeepRacer while staying in their ecosystem, but it’s interesting to see them build a gentle on-ramp with DeepRacer.

Both cars boil down to a line-following robot controlled by a neural network. In the case of DonkeyCar, the user trains the network to drive like a human driver. In DeepRacer, the network is trained via reinforcement learning. This is a subset of deep learning where the developer provides a way to score robot behavior, the higher the better, in the form of a reward function. Reinforcement learning trains a neural network to explore different behaviors and remember the ones that earn a higher score from the developer-provided reward function. The AWS developer guide starts people off with a “stay on the track” function which won’t work very well, but it is a simple starting point for further enhancements.
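The reward function itself is a small piece of Python that AWS calls with a dictionary describing the car’s current state. Here is a minimal sketch in the spirit of that “stay on the track” starting point, with a simple center-line bonus added as one possible enhancement; the parameter names used here are the commonly documented basics, so check the current developer guide for the full list.

```python
def reward_function(params):
    """Reward staying on the track, with a bonus for hugging the center line.

    AWS DeepRacer calls this function with a dict describing the car's
    current state; 'all_wheels_on_track', 'track_width', and
    'distance_from_center' are among the documented inputs.
    """
    if not params['all_wheels_on_track']:
        return 1e-3                      # effectively zero reward for leaving the track

    track_width = params['track_width']
    distance_from_center = params['distance_from_center']

    # Full reward on the center line, tapering off toward the edges.
    if distance_from_center <= 0.1 * track_width:
        reward = 1.0
    elif distance_from_center <= 0.25 * track_width:
        reward = 0.5
    elif distance_from_center <= 0.5 * track_width:
        reward = 0.1
    else:
        reward = 1e-3

    return float(reward)
```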

Based on reading through documentation, but before any hands-on time, it looks to me like DonkeyCar and DeepRacer serve different audiences with different priorities.

  • Using AWS machine learning requires minimal up-front investment but can add up over time. Training a DonkeyCar requires higher up-front investment in computer hardware for machine learning with TensorFlow.
  • DonkeyCar is trained to emulate behavior of a human, which is less likely to make silly mistakes but will never be better than the trainer. DeepRacer is trained to optimize reward scoring, which will start by making lots of mistakes but has the potential to drive in a way no human would think of… both for better and worse!
  • DonkeyCar has simpler software which looks easier to get started. DeepRacer uses generalized robot software like ROS and Gazebo that, while presets are available to simplify use, still adds more complexity than strictly necessary. On the flipside, what’s learned by using ROS and Gazebo can be transferred to other robot projects.
  • The physical AWS DeepRacer car is a single pre-built and tested unit. DonkeyCar is a DIY project. Which is better depends on whether a person views building their own car as a fun project or a chore.

I’m sure there are other differences that will surface with some hands-on time. I plan to return and look at AWS DeepRacer in more detail after they open it up to the public.

Sawppy Odometry Candidate: Flow Breakout Board

When I presented Sawppy the Rover at the Robotics Society of Southern California, one of the things I brought up as an open problem is how to determine Sawppy’s movement through its environment. Wheel odometry is sufficient for a robot traveling on flat ground like Phoebe, but when Sawppy travels on rough terrain things can get messy in more ways than one.

In the question-and-answer session some people brought up the idea of calculating odometry by visual means, much in the way a modern optical computer mouse determines its movement on a desk. This is something I could whip up with a downward pointing webcam and open source software, but there are also pieces of hardware designed specifically to perform this task. One example is the PMW3901 chip, which I could experiment with using breakout boards like this item on Tindie.

However, that visual calculation is only part of the challenge, because translating what that camera sees into a physical dimension requires one more piece of data: the distance from the camera to the surface it is looking at. Depending on application, this distance might be a known quantity. But for robotic applications where the distance may vary, a distance sensor would be required.

As a follow-up to my presentation, RSSC’s online discussion forum brought up the Flow Breakout Board: a small lightweight module that puts the aforementioned PMW3901 chip alongside a VL53L0x distance sensor. This is an interesting candidate for helping Sawppy gain awareness of how it is moving through its environment (or failing to do so, as the case may be).


The breakout board only handles the electrical connections – an external computer or microcontroller will be necessary to make the whole system sing. That external module will need to communicate with the PMW3901 via SPI and, separately, with the VL53L0x via I2C. Then it will need to perform the math to calculate the actual X-Y distance traveled. This in itself isn’t a problem.
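The rough form of that math, sketched in Python: the flow sensor only reports apparent motion in sensor counts, and the height reading is what scales those counts into real-world displacement. The counts-per-radian constant below is a placeholder to be calibrated against the actual sensor, not a datasheet value.

```python
import math

# Placeholder scaling: counts of reported flow per radian of apparent motion.
# This must be calibrated against the actual sensor; it is not a datasheet value.
COUNTS_PER_RADIAN = 500.0

def flow_to_displacement(delta_x_counts, delta_y_counts, height_mm):
    """Convert one optical flow reading into estimated ground displacement (mm).

    The flow sensor only sees angular motion of the ground texture; the
    range sensor's height measurement is what turns that angle into a
    physical distance, which is why the two must be fused.
    """
    dx_rad = delta_x_counts / COUNTS_PER_RADIAN
    dy_rad = delta_y_counts / COUNTS_PER_RADIAN
    # Small-angle approximation: ground displacement is roughly height * angle.
    return height_mm * math.tan(dx_rad), height_mm * math.tan(dy_rad)

# Example: a reading of (25, -10) counts while 120 mm above the floor.
print(flow_to_displacement(25, -10, 120.0))
```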

The problem comes from the fact that the PMW3901 was designed to be used on small multirotor aircraft to help them hold position. Design decisions that make sense for that intended purpose turn out to be problems for Sawppy.

  1. This chip is designed to help hold position, which is why it is not concerned with knowing the height above the surface or the physical dimension of the translation: the sensor only needs to detect movement so the aircraft can be brought back into position.
  2. Multirotor aircraft all have built-in gyroscopes to stabilize themselves, so they already detect rotation about their Z axis. Sawppy has no such sensor and would not be able to calculate its position in global space if it doesn’t know how much it has turned in place.
  3. Multirotor aircraft fly in the air, so the designed working range of 80mm to infinity is perfectly fine. However, Sawppy has only 160mm between the bottom of the equipment bay and nominal floor distance. When traversing obstacles more than 80mm tall, or rough terrain that brings the surface within 80mm of the sensor, this sensor would become disoriented.

This is a very cool sensor module that has a lot of possibilities, and despite its potential problems it has been added to the list of things to try for Sawppy in the future.

Intel RealSense T265 Tracking Camera

In the middle of these experiments with an Xbox 360 Kinect as a robot depth sensor, Intel announced a new product along similar lines and a tempting venue for robotic exploration: the Intel RealSense T265 Tracking Camera. Here’s a picture from Intel’s website announcing the product:

Intel RealSense Tracking Camera T265 (image from Intel)

T265 is not a direct replacement for the Kinect, at least not as a depth sensing camera. For that, we need to look at Intel’s D415 and D435. They would be fun to play with, too, but I already had the Kinect so I’m learning on what I have before I spend money.

So if the T265 is not a Kinect replacement, how is it interesting? It can act as a complement to a depth sensing camera. The point of the thing is not to capture the environment – it is to track the motion and position within that environment. Yes, there is the option for an image output stream, but the primary data output of this device is a position and orientation.

This type of camera-based “inside-out” tracking is used by Windows Mixed Reality headsets to determine the user’s head position and orientation. These sensors require low latency and high accuracy to avoid VR motion sickness, and the same capability has obvious applications in robotics. Now Intel’s T265 offers it in a standalone device.

According to Intel, the implementation is based on a pair of video cameras and an inertial measurement unit (IMU). The data feeds into internal electronics running a V-SLAM (visual simultaneous localization and mapping) algorithm aided by a Movidius neural network chip. This process generates the position+orientation output. It seems pretty impressive to me that this is done in such a small form factor and at high speed (or at least low latency), within 1.5 watts of power.

At $200, it is a tempting toy for experimentation. Before I spend that money, though, I’ll want to read more about how to interface with this device. The USB 2 connection is not surprising, but there’s a phrase that I don’t yet understand: “non volatile memory to boot the device” makes it sound like the host is responsible for some portion of the device’s boot process, which isn’t like any other sensor I’ve worked with before.
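Despite those open questions, the software side looks approachable: Intel’s librealsense SDK includes Python bindings, and based on skimming their examples rather than any hands-on time, reading the pose stream should look roughly like this sketch (treat the details as provisional):

```python
import pyrealsense2 as rs

# Configure a pipeline that delivers only the T265 pose stream.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.pose)
pipeline.start(config)

try:
    for _ in range(100):
        frames = pipeline.wait_for_frames()
        pose = frames.get_pose_frame()
        if pose:
            data = pose.get_pose_data()
            # Translation is in meters, rotation is a quaternion, both in the
            # device's own coordinate frame.
            print(f"xyz=({data.translation.x:.3f}, {data.translation.y:.3f}, "
                  f"{data.translation.z:.3f}) confidence={data.tracker_confidence}")
finally:
    pipeline.stop()
```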

Dell Alienware Area-51m vs. Luggable PC

On the Hackaday.io project page of my Luggable PC, I wrote the following as part of my reason for undertaking the project:

The laptop market has seen a great deal of innovation in form factors. From super thin-and-light convertible tablets to heavyweight expensive “Gamer Laptops.” The latter pushes the limits of laptop form factor towards the desktop segment.

In contrast, the PC desktop market has not seen a similar level of innovation.

It was true when I wrote it, and to the best of my knowledge it has continued to be the case. CES (Consumer Electronics Show) 2019 is underway and there are some pretty crazy gamer laptops getting launched, and I have heard nothing similar to my Luggable PC from a major computer maker.

So what’s new in 2019? A representative of the current leading edge in gamer laptops is the newly launched Dell Alienware Area-51m. It is a beast of a machine pushing ten pounds, almost half the weight of my luggable. Though granted, that weight includes a battery for some duration of operation away from a plug, something my luggable lacks. It’s not clear if that weight includes the AC power adapter, or possibly adapters plural, since I see two power sockets in pictures. As the machine has not yet officially launched, there isn’t yet an online manual for me to go read what that’s about.

It offers impressive upgrade options for a laptop. Unlike most laptops, it uses a desktop CPU complete with a desktop motherboard processor socket. The memory and M.2 SSD are not huge surprises – they’re fairly par for the course even in mid-tier laptops. What is a surprise is the separate detachable video card that can be upgraded, at least in theory. Unlike my luggable, which takes standard desktop video cards, this machine takes a format I do not recognize. Ars Technica said it is the “Dell Graphics Form Factor”, which I had never heard of and found no documentation for. I share Ars Technica’s skepticism of the upgrade claims. Almost ten years ago I bought a 17-inch Dell laptop with a separate video card, and I never saw an upgrade option for it.

There are many different upgrade options for the 17″ screen, but they are all 1080p displays. I found this curious – I would have expected a 4K option in this day and age. Or at least something like the 2560×1440 resolution of the monitor I used in Mark II.

And finally – that price tag! It’s an impressive piece of engineering, and obviously a low volume niche, but the starting price of over $2,500 still came as a shock. While current market prices may make buying a mid-tier computer more sensible than building one, I could definitely build a high end luggable with specifications similar to the Alienware Area-51m for less.

I am clearly not the target demographic for this product, but it was still fun to look at.