Window Shopping Chirp For Arduino… Actually, ESP32

Lately local fellow maker Emily has been tinkering with the Mozzi sound synthesis library for building an Arduino-based noise making contraption. I pitched in occasionally on the software side of her project, picking up bits and pieces of Mozzi along the way. Naturally I started thinking about how I might use Mozzi in a project of my own. I floated the idea of using Mozzi to create a synthetic robotic voice for Sawppy, similar to the voices created for silver screen robots R2-D2, WALL-E, and BB-8.

“That’d be neat,” she said, “but there’s this other thing you should look into.”

I was shown a YouTube video by Alex of Hackster.io (also embedded below), where she prototyped a system to create a voice for her robot companion Archimedes. And Archie’s candidate new voice isn’t just a set of noises for fun’s sake: it encodes data, forming an actual sensible verbal language for a robot.

This “acoustic data transmission” magic is the core offering of Chirp.io, which was created for purposes completely unrelated to cute robot voices. The idea is to allow communication of short bursts of data without the overhead of joining a WiFi network or pairing Bluetooth devices. Almost every modern device — laptop, phone, or tablet — already has a microphone and a speaker for Chirp.io to leverage. Their developer portal lists a wide variety of platforms with Chirp.io SDK support.
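
The data-over-sound idea itself is simple to demonstrate, even though Chirp’s actual protocol is proprietary and far more robust against real-world noise. Here is a toy sketch of my own invention that just maps each four-bit nibble of a message to one of sixteen audio tones and writes the result to a WAV file; it is purely an illustration of the concept, not anything Chirp-compatible.

```python
# Toy "data over sound" encoder: NOT Chirp's real protocol, just an
# illustration of the concept. Each 4-bit nibble becomes a short tone.
import math
import struct
import wave

SAMPLE_RATE = 44100      # samples per second
TONE_SECONDS = 0.1       # duration of each tone
BASE_HZ = 1000.0         # frequency assigned to nibble value 0
STEP_HZ = 100.0          # spacing between adjacent nibble tones

def nibble_tones(message: bytes):
    """Yield one frequency per 4-bit nibble of the message."""
    for byte in message:
        for nibble in (byte >> 4, byte & 0x0F):
            yield BASE_HZ + STEP_HZ * nibble

def encode_to_wav(message: bytes, filename: str) -> None:
    """Write the message as a sequence of tones into a mono 16-bit WAV file."""
    samples = []
    for freq in nibble_tones(message):
        for n in range(int(SAMPLE_RATE * TONE_SECONDS)):
            value = math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
            samples.append(int(value * 32767))
    with wave.open(filename, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)              # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(struct.pack("<" + "h" * len(samples), *samples))

encode_to_wav(b"HI SAWPPY", "toy_chirp.wav")
```

A real system layers error correction, synchronization, and noise tolerance on top of something like this, which is exactly the hard part Chirp.io is selling.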

Companion robot owls and motorized Mars rover models weren’t part of the original set of target platforms, but that is fine. We’re makers and we can make it work. I was encouraged when I saw a link for the Chirp for Arduino SDK. Then a scan through documentation of the current release revealed it would be more accurately called the Chirp for Espressif ESP32 SDK, as it doesn’t support original genuine Arduino boards. The target platform is actually ESP32 hardware (connected to audio input and output peripherals) running in its Arduino IDE compatible mode. That didn’t matter to me, since ESP32 is the platform I’ve been meaning to gain some proficiency in anyway, but it might be annoying to someone who actually wanted to use it on other Arduino and compatible boards.

Getting Chirp.io on an ESP32 up and running sounds like fun, and it’s free to start experimenting. So thanks to Emily, I now have another project for my to-do list.

BeagleBone Blue And Robot Control Library Drives eduMIP

My motivation to learn about the BeagleBone Blue came from my rover Sawppy driving by the BeagleBoard foundation booth at SCaLE 17x. While this board might not be the best fit for a six wheel drive, four wheel steering rocker-bogie Mars rover model, it has a great deal of potential for other projects.

But what motivated the BeagleBone Blue? When brainstorming about what I could do with something cool, it’s always instructive to learn a little bit about where it came from. A little research usually pays off because the better my idea aligns with its original intent, the better my chances are of a successful project.

I found that I could thank the Coordinated Robotics Lab at University of California, San Diego for this creation. As a teaching tool for one of the courses at UCSD, they created the Robotics Cape add-on for a BeagleBone Black. It is filled with goodies useful for the robot projects they could cover in class. More importantly, with quadrature input to go along with DC motor output and a 9-axis IMU on top of other sensors, this board is designed for robots that react to their environment, not just simple automata that flail their limbs.

The signature robot chassis for this brain is the eduMIP. MIP stands for Mobile Inverted Pendulum, and the “edu” prefix makes it clear it’s about teaching the principles behind such systems and inviting exploration and experimentation. It’s not just a little self-balancing Segway-like toy, but one where we can dig into and modify its internals. I like where they are coming from.

Photo of eduMIP by Renaissance Robotics.
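
To make the mobile inverted pendulum idea concrete, here is a minimal sketch of the kind of control loop at its heart: read the tilt angle, then drive the wheels to push the wheelbase back under the center of mass. The `read_tilt_radians()` and `set_wheel_duty()` functions are hypothetical placeholders of my own, not the Robot Control Library API, and a real eduMIP controller adds sensor filtering and an outer velocity loop on top of this.

```python
# Minimal sketch of the mobile inverted pendulum idea: a PID loop that
# steers wheel effort to drive the tilt angle back toward vertical.
# read_tilt_radians() and set_wheel_duty() are hypothetical placeholders
# for IMU input and motor output; they are NOT the Robot Control Library API.
import time

KP, KI, KD = 12.0, 0.5, 0.8   # gains: must be tuned for the specific chassis
LOOP_HZ = 100                  # balance loops typically run at 100 Hz or faster

def balance_loop(read_tilt_radians, set_wheel_duty):
    integral = 0.0
    previous_error = 0.0
    dt = 1.0 / LOOP_HZ
    while True:
        error = 0.0 - read_tilt_radians()       # target is zero tilt (upright)
        integral += error * dt
        derivative = (error - previous_error) / dt
        previous_error = error
        # Drive the wheels toward the direction of the fall so the wheelbase
        # moves back underneath the center of mass (sign depends on hardware).
        duty = KP * error + KI * integral + KD * derivative
        set_wheel_duty(max(-1.0, min(1.0, duty)))   # clamp to motor range
        time.sleep(dt)
```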

BeagleBone Blue, then, is an offering to make robots like the eduMIP easier to build. By merging a BeagleBone Black with the Robotics Cape into a single board, and removing components that aren’t as useful for a mobile robot (such as the Ethernet port), we arrive at a BeagleBone Blue.

Of course, the brawn of a robotics chassis isn’t much use without the smarts to make it all work together. Befitting its university coursework origins and the BeagleBoard Foundation’s standard practice, the software for its peripherals, now called the Robot Control Library, is documented and its source code is available on GitHub.

I could buy an eduMIP of my own to help me explore the BeagleBone Blue, and at $50 it is quite affordable. But I think I want to spend some time with the BeagleBone Blue itself before I spend more money.

Window Shopping BeagleBone Blue

Sawppy was a great ice breaker as I roamed through the expo hall of SCaLE 17x. It was certainly the right audience to appreciate such a project, even though there were few companies with products directly relevant to a hobbyist Mars rover. One notable exception, however, was the BeagleBoard Foundation booth. As Sawppy drove by, the reception was: “Is that a Raspberry Pi? Yes it is. That should be a BeagleBone Blue!”

BeagleBone Blue picture from Make.

With this prompt, I looked into the BBBlue in more detail. At $80 it is significantly more expensive than a bare Raspberry Pi, but it incorporates a lot of robotics-related features that a Pi would need several HATs to match.

All BeagleBoards offer a few advantages over a Raspberry Pi, which the BBBlue inherits:

  • Integrated flash storage; a Pi requires a separate microSD card.
  • Onboard LEDs for diagnostic information.
  • Onboard buttons for user interaction – including a power button! It has always grated on me that a Raspberry Pi has no graceful shutdown button.

Above and beyond standard BeagleBoards, the Blue adds:

  • A voltage regulator, which as I know well is an extra component needed for a Pi.
  • On top of that, BBBlue can also handle charging a 2S LiPo battery! Being able to leave the battery inside a robot would be a huge convenience. And people who don’t own smart battery chargers wouldn’t need to buy one if all they do is use their battery with a BBBlue.
  • 8 PWM headers for RC-style servo motors.
  • 4 H-bridges to control 4 DC motors.
  • 4 Quadrature encoder inputs to know what those motors are up to.
  • 9-axis IMU (acceleration, rotation, and magnetometer)
  • Barometer

Sadly, a BBBlue is not a great fit for Sawppy, because Sawppy uses serial bus servos that make all the hardware control features (8 PWM headers, 4 motor channels, 4 quadrature inputs) redundant. But I can definitely think of a few projects that would make good use of a BeagleBone Blue. It is promising enough for me to order one to play with.

Window Shopping RobotC For My NXT

I remember my excitement when LEGO launched their Mindstorm NXT product line. I grew up with LEGO and was always a fan of the Technic line, which let a small child experiment with mechanical designs without the physical dangers of fabrication shop tools. Building an intuitive grasp of the power of gear reduction was the first big step on my learning curve of mechanical design.

Starting with simple machines that were operated by hand cranks and levers, LEGO added actuators like pneumatic cylinders and electric motors. This trend eventually grew to programmable electronic logic. Some of the more affordable Mindstorm products only allowed the user to select between a few fixed behaviors, but with the NXT it became possible for users to write their own fully general programs.

Lego Mindstorm NXT Smart Brick

At this point I was quite comfortable with programming in languages like C, but that was not suitable for LEGO’s intended audience. So they packaged a LabVIEW-based programming environment, a visual block-based system like today’s Scratch and friends. It lowered the barrier to entry but exacted a cost in performance. The brick is respectably powerful inside, and many others thought it was worthwhile to unlock its full power. I saw enough efforts underway that I thought I’d check back later… and I finally got around to it.

Over a decade later now, I see Wikipedia has a long list of alternative methods of programming a LEGO Mindstorm NXT. My motivation to look at this list came from Jim Dinunzio, a member of Robotics Society of Southern California, presenting his project TotalCanzRecall at the February 2019 RSSC meeting. Mechanically his robot was built from the stock set of components in a NXT kit, but the software was written with RobotC. Jim reviewed the capabilities he had with RobotC that were not available with default LEGO software, the most impressive one being the ability to cooperatively multitask.
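
Cooperative multitasking is easy to illustrate outside of RobotC. The sketch below is not RobotC code at all; it’s a Python asyncio version of the same idea, with two made-up robot tasks that voluntarily yield to each other so a slow sensor poll never blocks the drive loop.

```python
# Illustration of cooperative multitasking (the concept RobotC offers),
# written in Python asyncio rather than RobotC. Both tasks yield control
# at each await, so neither one starves the other.
import asyncio

async def drive_loop():
    for step in range(5):
        print(f"drive loop: update motor outputs (step {step})")
        await asyncio.sleep(0.05)    # yield to other tasks for 50 ms

async def sensor_loop():
    for step in range(3):
        print(f"sensor loop: poll a distance sensor (step {step})")
        await asyncio.sleep(0.1)     # slower poll, still never blocks driving

async def main():
    await asyncio.gather(drive_loop(), sensor_loop())

asyncio.run(main())
```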

A review of information on the RobotC web site told me it is almost exactly what I had wanted when I picked up a new NXT off the shelf circa 2006: a full IDE with debugging tools, a long list of other interesting features, and documentation to help people learn those features.

Unfortunately, we are no longer in 2006. My means of mechanical construction has evolved beyond LEGO to 3D printing, and I have a wide spectrum of electronic brainpower at my disposal, from low-end 8-bit PIC microcontrollers (mostly PIC16F18345) to the powerful Raspberry Pi 3, both of which can already be programmed with C.

There may be a day when I will need to build something using my dusty Mindstorm set and program it using RobotC. When that day comes I’ll go buy a license of RobotC and sink my teeth into the problem, but that day is not today.

Window Shopping JeVois Machine Vision Camera

In the discussion period that followed my Sawppy presentation at RSSC, the topic of machine vision came up. When talking through problems and potential solutions, the JeVois camera was mentioned as one of the potential tools for machine vision problems. I wrote down the name and resolved to look it up later. I have done so and I like what I see.

First thing that made me smile was the fact it was a Kickstarter success story. I haven’t committed any of my own money to any Kickstarter project, but I’ve certainly read more about failed projects than successful ones. It’s nice when the occasional success story comes across my radar.

The camera module is of the type commonly used in cell phones, and behind the camera is a small machine vision computer, again built mostly of portable electronics components. The idea is to have a completely self-contained vision processing system, requiring only power input and delivering processed data as output. Various machine vision tasks can be handled completely inside the little module as long as the user is realistic about the limited processing power available. It is less powerful but also less expensive and smaller than Google’s AIY Vision module.

The small size is impressive, and led to my next note of happiness: it looks pretty well documented. When I saw how small it was, I had wondered how to best mount the camera on a project. It took less than five minutes to decipher the documentation hierarchy and find details on physical dimensions and how to mount the camera case. Similarly, my curiosity about power requirements was quickly answered with confirmation that its power draw does indeed exceed the baseline USB allowance of 500mA.

Ease of programming was the next investigation. Some of the claims around this camera made it sound like its open source software stack can run on a developer’s PC and be debugged there before publishing to the camera. However, the few tutorials I skimmed through (one example here) all required an actual JeVois camera to run vision code. I interpret this to mean that the JeVois software stack is indeed specific to the camera, and that “develop on your PC first” only means developing vision algorithms on a PC in a general sense before porting them to the JeVois software stack for deployment on the camera itself. If I find out I’m wrong, I’ll come back and update this paragraph.
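
As an illustration of that workflow, here’s the sort of thing I would prototype on a PC webcam with plain OpenCV before worrying about the JeVois module format at all: a simple color-blob tracker. Nothing here is JeVois-specific; the HSV thresholds and camera index are arbitrary placeholder values.

```python
# Prototype a vision algorithm on a PC webcam with plain OpenCV, before
# porting the logic to whatever module format the JeVois stack expects.
# HSV thresholds and camera index 0 are arbitrary placeholder values.
import cv2
import numpy as np

LOWER_HSV = np.array([20, 100, 100])   # rough lower bound for a yellow-ish blob
UPPER_HSV = np.array([35, 255, 255])   # rough upper bound for a yellow-ish blob

capture = cv2.VideoCapture(0)           # default PC webcam
while True:
    ok, frame = capture.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    # OpenCV 4 return signature: (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("blob tracker prototype", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
capture.release()
cv2.destroyAllWindows()
```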

When I looked on Hackaday, I saw that one of the writers thought the JeVois camera’s demo mode was a very effective piece of software, and it should be quite effective at its job: getting users interested in digging deeper. Project-wise, I see a squirrel detector and a front door camera already online.

The JeVois camera has certainly earned a place on my “might be interesting to look into” list for more detailed later investigation.

Window Shopping AWS DeepRacer

At AWS re:Invent 2018 a few weeks ago, Amazon announced their DeepRacer project. At first glance it appears to be a more formalized version of DonkeyCar, complete with an Amazon-sponsored racing league to take place both online digitally and physically at future Amazon events. Since the time I wrote up a quick snapshot for Hackaday, I have tried to learn more about the project.

While it would have been nice to get hands-on time, it is still in pre-release and my application to join the program received an acknowledgement that boils down to “don’t call us, we’ll call you.” There have been no updates since, but I can still learn a lot by reading their pre-release documentation.

Based on the (still subject to change) Developer Guide, I’ve found interesting differences between DeepRacer and DonkeyCar. While they are both built on a 1/18th scale toy truck chassis, there are differences almost everywhere above that. Starting with the on board computer: a standard DonkeyCar uses a Raspberry Pi, but the DeepRacer has a more capable onboard computer built around an Intel Atom processor.

The software behind DonkeyCar is focused just on driving a DonkeyCar. In contrast, DeepRacer’s software infrastructure is built on ROS, a more generalized system that just happens to have preset resources to help people get up and running on a DeepRacer. The theme continues with the simulator: DonkeyCar has a task-specific simulator, while DeepRacer uses Gazebo, which can simulate an environment for anything from a DeepRacer to a humanoid robot on Mars. Amazon provides a preset Gazebo environment to make it easy to start DeepRacer simulations.

And of course, for training the neural networks, DonkeyCar uses your desktop machine while DeepRacer wants you to train on AWS hardware. And again there are presets available for DeepRacer. It’s no surprise that Amazon wants people to build skills that are easily transferable to robots other than DeepRacer while staying in their ecosystem, but it’s interesting to see them build a gentle on-ramp with DeepRacer.

Both cars boil down to a line-following robot controlled by a neural network. In the case of DonkeyCar, the user trains the network to drive like a human driver. In DeepRacer, the network is trained via reinforcement learning. This is a machine learning approach where the developer provides a way to score robot behavior, the higher the better, in the form of a reward function. Reinforcement learning trains a neural network to explore different behaviors and remember the ones that earn a higher score from the developer-provided reward function. The AWS developer guide starts people off with a “stay on the track” function which won’t work very well, but it is a simple starting point for further enhancements.
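
To make that concrete, here is a sketch of such a reward function in the spirit of the developer guide’s “stay near the center” examples. The parameter names (‘track_width’, ‘distance_from_center’, ‘all_wheels_on_track’) are based on my reading of the pre-release guide, so treat them as assumptions to verify against the current documentation.

```python
# Sketch of a DeepRacer-style reward function that favors staying near the
# track centerline. Parameter names ('track_width', 'distance_from_center',
# 'all_wheels_on_track') are taken from my reading of the pre-release
# developer guide and should be verified against current documentation.
def reward_function(params):
    if not params.get("all_wheels_on_track", True):
        return 1e-3                      # near-zero reward for leaving the track

    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Reward shrinks as the car drifts from the centerline toward the edge.
    half_width = track_width / 2.0
    if distance_from_center <= 0.1 * half_width:
        return 1.0
    elif distance_from_center <= 0.5 * half_width:
        return 0.5
    else:
        return 0.1
```

The interesting part of the exercise is that the network, not the developer, figures out what steering and throttle behavior maximizes this score.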

Based on reading through documentation, but before any hands-on time, my impression is that DonkeyCar and DeepRacer serve different audiences with different priorities.

  • Using AWS machine learning requires minimal up-front investment but can add up over time. Training a DonkeyCar requires higher up-front investment in computer hardware for machine learning with TensorFlow.
  • DonkeyCar is trained to emulate behavior of a human, which is less likely to make silly mistakes but will never be better than the trainer. DeepRacer is trained to optimize reward scoring, which will start by making lots of mistakes but has the potential to drive in a way no human would think of… both for better and worse!
  • DonkeyCar has simpler software and looks easier to get started with. DeepRacer uses generalized robot software like ROS and Gazebo which, even with presets available to simplify use, still adds more complexity than strictly necessary. On the flip side, what’s learned by using ROS and Gazebo can be transferred to other robot projects.
  • The physical AWS DeepRacer car is a single pre-built and tested unit. DonkeyCar is a DIY project. Which is better depends on whether a person views building their own car as a fun project or a chore.

I’m sure there are other differences that will surface with some hands-on time. I plan to return and look at AWS DeepRacer in more detail after they open it up to the public.

Sawppy Odometry Candidate: Flow Breakout Board

When I presented Sawppy the Rover at Robotics Society of Southern California, one of the things I brought up as an open problem was how to determine Sawppy’s movement through its environment. Wheel odometry is sufficient for a robot traveling on flat ground like Phoebe, but when Sawppy travels on rough terrain things can get messy in more ways than one.

In the question-and-answer session some people brought up the idea of calculating odometry by visual means, much in the way a modern optical computer mouse determines its movement across a desk. This is something I could whip up with a downward-pointing webcam and open source software, but there are also pieces of hardware designed specifically to perform this task. One example is the PMW3901 chip, which I could experiment with using breakout boards like this item on Tindie.

However, that visual calculation is only part of the challenge, because translating what that camera sees into a physical dimension requires one more piece of data: the distance from the camera to the surface it is looking at. Depending on application, this distance might be a known quantity. But for robotic applications where the distance may vary, a distance sensor would be required.

As a follow-up to my presentation, RSSC’s online discussion forum brought up the Flow Breakout Board. This is an interesting candidate for helping Sawppy gain awareness of how it is moving through its environment (or failing to do so, as the case may be). It is a small, lightweight module that puts the aforementioned PMW3901 chip alongside a VL53L0x distance sensor.


The breakout board only handles the electrical connections – an external computer or microcontroller will be necessary to make the whole system sing. That external module will need to communicate with the PMW3901 via SPI and, separately, with the VL53L0x via I2C. Then it will need to perform the math to calculate the actual X-Y distance traveled. This in itself isn’t a problem.
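
The math in question is a small-angle triangle: the flow chip reports image motion as dimensionless counts, and multiplying by the measured height (plus a per-sensor scale factor that has to come from the datasheet or calibration) converts those counts into ground distance. Here’s a rough sketch of that conversion, with placeholder functions standing in for the actual SPI and I2C drivers.

```python
# Sketch of turning raw optical flow counts plus a height measurement into
# physical X-Y displacement. read_flow_counts() and read_height_m() are
# placeholders for actual PMW3901 (SPI) and VL53L0x (I2C) drivers, and
# FLOW_SCALE is a per-sensor constant that must come from calibration.
FLOW_SCALE = 0.0025   # radians of scene motion per flow count (placeholder value)

def update_position(x_m, y_m, read_flow_counts, read_height_m):
    """Accumulate one odometry step; returns the new (x, y) in meters."""
    dx_counts, dy_counts = read_flow_counts()   # image motion since last read
    height_m = read_height_m()                  # distance to the ground

    # Small-angle approximation: ground displacement is angular image motion
    # (counts * scale) multiplied by the distance to the ground.
    x_m += dx_counts * FLOW_SCALE * height_m
    y_m += dy_counts * FLOW_SCALE * height_m
    return x_m, y_m
```

Note this sketch assumes the robot is not rotating, which leads directly into the design limitations below.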

The problem comes from the fact that the PMW3901 was designed to be used on small multirotor aircraft to aid them in holding position. A few design decisions that make sense for its intended purpose turn out to be problems for Sawppy.

  1. The chip is designed to help hold position, which is why it is not concerned with knowing the height above the surface or the physical dimension of the translation: the sensor only needs to detect movement so the aircraft can be brought back into position.
  2. Multirotor aircraft all have built-in gyroscopes to stabilize themselves, so they already detect rotation about their Z axis. Sawppy has no such sensor and would not be able to calculate its position in global space if it doesn’t know how much it has turned in place.
  3. Multirotor aircraft fly in the air, so the designed working range of 80mm to infinity is perfectly fine. However, Sawppy has only 160mm between the bottom of the equipment bay and the nominal floor. When traversing obstacles more than 80mm tall, or rough terrain that brings the surface within 80mm of the sensor, this sensor would become disoriented.

This is a very cool sensor module that has a lot of possibilities, and despite its potential problems it has been added to the list of things to try for Sawppy in the future.

Intel RealSense T265 Tracking Camera

In the middle of these experiments with an Xbox 360 Kinect as a robot depth sensor, Intel announced a new product that’s along similar lines and a tempting avenue for robotic exploration: the Intel RealSense T265 Tracking Camera. Here’s a picture from Intel’s website announcing the product:

Intel RealSense Tracking Camera T265. Image from Intel.

The T265 is not a direct replacement for the Kinect, at least not as a depth sensing camera. For that, we need to look at Intel’s D415 and D435. They would be fun to play with, too, but I already have the Kinect so I’m learning with what I have before I spend more money.

So if the T265 is not a Kinect replacement, how is it interesting? It can act as a complement to a depth sensing camera. The point of the device is not to capture the environment – it is to track its own motion and position within that environment. Yes, there is an option for an image output stream, but the primary data output of this device is position and orientation.

This type of camera-based “inside-out” tracking is used by Windows Mixed Reality headsets to determine their user’s head position and orientation. Those sensors require low latency and high accuracy to avoid VR motion sickness, and the same capability has obvious applications in robotics. Now Intel’s T265 offers it in a standalone device.

According to Intel, the implementation is based on a pair of video cameras and an inertial measurement unit (IMU). Data feeds into internal electronics running a V-SLAM (visual simultaneous localization and mapping) algorithm aided by a Movidius neural network chip. This process generates the position and orientation output. It seems pretty impressive to me that this is done in such a small form factor, at high speed (or at least low latency), with 1.5 watts of power.
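
For a sense of what “position and orientation as primary output” looks like on the software side, Intel’s librealsense examples suggest reading the T265 pose stream goes roughly like the sketch below. I haven’t run this against real hardware, so treat the pyrealsense2 calls as my best reading of their published examples rather than verified code.

```python
# Rough sketch of reading pose data from a T265, based on my reading of
# Intel's librealsense (pyrealsense2) examples; unverified against hardware.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.pose)    # T265 reports pose, not depth
pipeline.start(config)

try:
    for _ in range(100):
        frames = pipeline.wait_for_frames()
        pose_frame = frames.get_pose_frame()
        if pose_frame:
            pose = pose_frame.get_pose_data()
            # Translation is in meters; rotation is a quaternion (x, y, z, w).
            print(f"position: ({pose.translation.x:.3f}, "
                  f"{pose.translation.y:.3f}, {pose.translation.z:.3f})")
finally:
    pipeline.stop()
```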

At $200, it is a tempting toy for experimentation. Before I spend that money, though, I’ll want to read more about how to interface with this device. The USB 2 connection is not surprising, but there’s a phrase I don’t yet understand: “non volatile memory to boot the device.” It makes it sound like the host is responsible for some portion of the device’s boot process, which isn’t like any other sensor I’ve worked with before.

Dell Alienware Area-51m vs. Luggable PC

On the Hackaday.io project page of my Luggable PC, I wrote the following as part of my reason for undertaking the project:

The laptop market has seen a great deal of innovation in form factors. From super thin-and-light convertible tablets to heavyweight expensive “Gamer Laptops.” The latter pushes the limits of laptop form factor towards the desktop segment.

In contrast, the PC desktop market has not seen a similar level of innovation.

It was true when I wrote it, and to the best of my knowledge it has continued to be the case. CES (Consumer Electronics Show) 2019 is underway, and while there are some pretty crazy gamer laptops getting launched, I have heard of nothing similar to my Luggable PC from a major computer maker.

So what’s new in 2019? A representative of the current leading edge of gamer laptops is the newly announced Dell Alienware Area-51m. It is a beast of a machine pushing ten pounds, almost half the weight of my luggable. Granted, that weight includes a battery for some duration of operation away from a plug, something my luggable lacks. It’s not clear if that weight includes the AC power adapter, or possibly adapters plural, since I see two power sockets in pictures. As the machine has not yet officially launched, there isn’t yet an online manual for me to go read what that’s about.

It offers impressive upgrade options for a laptop. Unlike most laptops, it uses a desktop CPU complete with a desktop motherboard processor socket. The memory and M.2 SSD are not huge surprises – they’re fairly par for the course even in mid-tier laptops. What is a surprise is the separate detachable video card that can be upgraded, at least in theory. Unlike my luggable, which takes standard desktop video cards, this machine takes a format I do not recognize. Ars Technica said it is the “Dell Graphics Form Factor,” which I had never heard of and found no documentation for. I share Ars Technica’s skepticism about the upgrade claims. Almost ten years ago I bought a 17-inch Dell laptop with a separate video card, and I never saw an upgrade option for it.

There are many different upgrade options for the 17″ screen, but they are all 1080p displays. I found this curious – I would have expected a 4K option in this day and age. Or at least something like the 2560×1440 resolution of the monitor I used in Mark II.

And finally – that price tag! It’s an impressive piece of engineering, and obviously a low volume niche, but the starting price of over $2,500 still came as a shock. While current market prices may make buying rather than building sensible for a mid-tier computer, I could definitely build a high-end luggable with specifications similar to the Alienware Area-51m for less.

I am clearly not the target demographic for this product, but it was still fun to look at.