Raspberry Pi Drives Death Clock

Since this was the first time Emily and I built something to light up a VFD (vacuum fluorescent display), we expected things to go wrong. Given this expectation, I wanted to be able to easily and rapidly iterate through different VFD patterns to pin down problems. I didn’t want to reflash the PIC every time I wanted to change a pattern, so the PIC driver code was written to accept new patterns over I2C. Almost anything can send the byte sequences necessary — Arduino, ESP32, Pi, etc. — but what was handy that day was a Raspberry Pi 3 previously configured as a backup Sawppy brain.
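For reference, a pattern-sending test script on the Pi is only a few lines using the python smbus library. The I2C address and byte values below are hypothetical placeholders rather than the actual values our PIC firmware expects.

```python
import smbus

I2C_BUS = 1          # Raspberry Pi 3 exposes I2C bus 1 on its GPIO header
PIC_ADDRESS = 0x42   # hypothetical address - match whatever the PIC firmware uses

# Hypothetical pattern: one byte per VFD grid, each bit lighting one segment
pattern = [0b10101010, 0b01010101, 0b11110000]

bus = smbus.SMBus(I2C_BUS)
# smbus sends the first byte as the "command" byte and the rest as a data block
bus.write_i2c_block_data(PIC_ADDRESS, pattern[0], pattern[1:])
bus.close()
```

Editing a file like this over SSH and rerunning it takes seconds, which is what made the iteration loop so quick.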

The ability to write short Python scripts to send different bit patterns turned out to be very helpful when tracking down an errant pin shorted to ground. It was much faster to edit a Python file over SSH and rerun it than it was to reflash the PIC every time. And since we’ve got it working this far, we’ll continue with this system for the following reasons:

  • The established project priority is to stay with what we’ve already got working, not get sidetracked by potential improvements.
  • Emily already had a Raspberry Pi Zero that could be deployed for the task. Underpowered for many tasks, a Pi Zero would have no problem with something this simple.
  • A Raspberry Pi Zero is a very limited platform and a bit of a pain to develop on, but fortunately the common architecture across all Raspberry Pi boards means we can do all our work on a Raspberry Pi 3 like we’ve been doing. Once done, we can transfer the microSD into a Raspberry Pi Zero and everything should work. Does that theory translate to practice? We’ll find out!
  • We’ve all read about Raspberry Pis corrupting their microSD storage in fixed installations like this, where it’s impossible to guarantee the Pi will be gracefully shut down before power is disconnected. But how bad is this problem, really? At Maker Faire we talked to a few people who claimed the risk is overblown. What better way to find out than to test it ourselves?

On paper it seems like a Death Clock could be completely implemented in a PIC, but that would require extensive modification of our PIC code for dubious gain. Yeah, a Raspberry Pi is overkill, but it’s what we already have working, and there are some interesting things to learn along the way. Stay the course and full steam ahead!

Adventures Installing GPU Accelerated TensorFlow On Ubuntu 18.04

Once the decision was made to move to ROS 2, the next step was to upgrade my Ubuntu installation to Ubuntu Bionic Beaver 18.04 LTS. I could have upgraded in place, but given that I was basically rebuilding my system for new infrastructure, I decided to take this opportunity to upgrade to a larger SSD and restart from scratch.

As soon as my Ubuntu installation was up and running, I immediately went to revisit the most problematic portion of my previous installation: the version-matching adventure of installing GPU-accelerated TensorFlow. If anything went horribly wrong, this was the best time to flatten the disk and try again until I got it right. As it turned out, that was a good call.

Taking to heart the feedback given by people like myself, Google has streamlined TensorFlow installation and even includes an option to run within a Docker container. This packages all of the various software libraries (from Nvidia’s CUDA Toolkit to Google’s TensorFlow itself) into a single integrated package. This is in fact their recommended procedure today for GPU support, with the words:

This setup only requires the NVIDIA GPU drivers.

When it comes to Linux and specialty device drivers, “only” rarely actually turns out to be so. I went online for further resources and found this page offering three options for installing Nvidia drivers on Ubuntu 18.04. Since I like living on the bleeding edge and have little to lose on a freshly installed disk, I tried the manual installation of Nvidia driver files first.

It was not user friendly: the script raised errors that pointed me to a log file, but the log file did not contain any information I found relevant for diagnosis. On a lark (again, with very little to lose) I selected the “continue anyway” options to let the process complete. This probably meant the installation had gone off the rails, but I wanted to see what I would end up with. After reboot I could tell my video driver had been changed, because it only ran on a single monitor and had flickering visual artifacts.

Well, that didn’t work.

I then tried to install drivers from ppa:graphics-drivers/ppa but that process encountered problems it didn’t know how to solve. Not being familiar with Ubuntu mechanics, I only had an approximate understanding of the error messages. What they really told me was “you should probably reformat and restart now,” which I did.

Once Ubuntu 18.04 was reinstalled, I tried the ppa:graphics-drivers/ppa option again and this time it successfully installed the latest driver with zero errors and zero drama. I even maintained the use of both monitors without any flickering visual artifacts.

With that success, I installed Docker Community Edition for Ubuntu followed by Nvidia container runtime for Docker, both of which installed smoothly.

Nvidia docker

Once the infrastructure was in place, I was able to run a GPU-enabled TensorFlow Docker container on my machine and execute a simple program.
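The “simple program” was along these lines: confirm TensorFlow sees the GPU, then run a small computation. This is a sketch against the TensorFlow 1.x API of the time; exact function names vary between versions.

```python
import tensorflow as tf

print('TensorFlow version:', tf.__version__)
# True when the CUDA-enabled build finds a usable GPU
print('GPU available:', tf.test.is_gpu_available())

# Tiny computation to confirm operations actually execute
with tf.Session() as sess:
    result = sess.run(tf.reduce_sum(tf.random_normal([1000, 1000])))
    print('Sum of random matrix:', result)
```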

TensorFlow test

This process is still not great, but at least it is getting smoother. Maybe I’ll revisit this procedure in another year and find it even easier. In the meantime, I’m back up and running with the latest TensorFlow on my Ubuntu 18.04 machine.

An Unsuccessful First Attempt Applying Q-Learning to CartPole Environment

One of the objectives of OpenAI Gym is to have a common programming interface across all of its different environments. And it certainly looks pretty good on the surface: we reset() the environment, take actions to step() through it, and at some point we get True as a return value for the done flag. Having a common interface allows us to use the same algorithm across multiple environments with minimal modification.
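In code, that common interface boils down to a loop like the following, here using CartPole with random actions standing in for an actual agent:

```python
import gym

env = gym.make('CartPole-v0')
observation = env.reset()
done = False
total_reward = 0

while not done:
    action = env.action_space.sample()               # placeholder for a real policy
    observation, reward, done, info = env.step(action)
    total_reward += reward

print('Episode finished with total reward', total_reward)
env.close()
```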

But “minimal” modification is not “zero” modification. Some environments are close enough that no modifications are required, but not all of them. Sometimes an environment is just not the right fit for an algorithm, and sometimes there are important details which differ from one environment to another.

One way environments differ is in their types of spaces. An environment has two: an observation_space that describes the observed state of the environment, and an action_space that outlines the valid actions an agent may choose to take. They change from one environment to another because environments tend to have different observable properties and different actions an agent can take within them.

As an exercise I thought I’d take the simple Q-Learning algorithm demonstrated to solve the Taxi environment and slam it on top of CartPole just to see what happens. To do that, I had to take CartPole’s state, which is an array of four floating point numbers, and convert it into an integer suitable for an array index.

As a naive approach, I’ll slice up the space into discrete bins. Each of the four numbers will be divided into ten bins, and each bin will correspond to a single digit from zero to nine, so the four numbers can be composed into a four-digit integer value.

To determine the size of these bins, I executed 1000 episodes of the CartPole simulation while taking random actions via action_space.sample(). The ten bins are evenly divided between the minimum and maximum values observed in this sample run, and with that, Q-learning is off and running… doing nothing useful.
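Here is a sketch of that discretization, written to illustrate the idea rather than copied from the code linked at the end of this post:

```python
import numpy as np

def sample_observation_ranges(env, episodes=1000):
    """Run random episodes to find the min/max observed in each dimension."""
    samples = []
    for _ in range(episodes):
        observation = env.reset()
        done = False
        while not done:
            samples.append(observation)
            observation, _, done, _ = env.step(env.action_space.sample())
    samples = np.array(samples)
    return samples.min(axis=0), samples.max(axis=0)

def discretize(observation, lows, highs, bins=10):
    """Map four floats into a single integer 0..9999, one decimal digit per dimension."""
    index = 0
    for value, low, high in zip(observation, lows, highs):
        digit = int((value - low) / (high - low) * bins)
        digit = min(max(digit, 0), bins - 1)   # clamp values outside the sampled range
        index = index * 10 + digit
    return index
```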

As shown in the plot above, the reward is always 8, 9, 10, or 11. We never got above or below this range. Also, out of 10000 possible states, only about 50 were ever traversed.

So this first naive attempt didn’t work, but it was a fun experiment. Now the more challenging part: figuring out where it went wrong, and how to fix it.

Code written in this exercise is available here.

 

Taking First Step Into Reinforcement Learning with OpenAI Gym

The best part about learning a new technology today is the fact that, once armed with a few key terms, a web search can unlock endless resources online. Some of them are even free! Such was the case after I looked over OpenAI Gym on its own: I searched for an introductory reinforcement learning project online and found several to choose from. I started with this page which uses the “Taxi” environment of OpenAI Gym and, within a few lines of Python code, implements a basic Q-Learning agent that can complete the task within 1000 episodes.

I had previously read the Wikipedia page on Q-Learning, but a description suitable for an encyclopedia entry is not always straightforward to put into code. For example, Wikipedia describes the learning rate as a value from 0 to 1 and explains what it means at the extremes of 0 or 1, but it doesn’t give any guidance on what kind of values are useful in real-world examples. The tutorial used 0.618, and while there isn’t enough information on why that value was chosen, it served as a good enough starting point. For this and related reasons, it was good to have a simple implementation to start from.
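For context, the whole agent boils down to a table plus one update line. Below is a sketch of the tabular Q-learning loop using the tutorial’s 0.618 learning rate; the 0.9 discount factor and the purely greedy action choice are my own assumptions rather than anything prescribed by the tutorial.

```python
import gym
import numpy as np

env = gym.make('Taxi-v2')   # 'Taxi-v3' in newer Gym releases
q_table = np.zeros((env.observation_space.n, env.action_space.n))

alpha = 0.618    # learning rate from the tutorial
gamma = 0.9      # discount factor - assumed value
episode_rewards = []

for episode in range(1000):
    state = env.reset()
    done = False
    total_reward = 0
    while not done:
        action = np.argmax(q_table[state])           # greedy choice over current estimates
        next_state, reward, done, _ = env.step(action)
        # Core Q-learning update: nudge the estimate toward reward + discounted best future value
        q_table[state, action] += alpha * (
            reward + gamma * np.max(q_table[next_state]) - q_table[state, action])
        state = next_state
        total_reward += reward
    episode_rewards.append(total_reward)
```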

After I got it running, it was time to start poking around to learn more. The first question was how fast the algorithm learned to solve the problem, and for that I wanted to plot the cumulative evaluation function reward against training episodes. This was trivial with the help of PyPlot, and I obtained the graph at the top of this post. We can see a lot of learning progress within the first 100 episodes. There’s a mysterious degradation in capability around the 175th episode, but the system mostly recovered by 200. After that, there were diminishing returns until about 400, and the agent made no significant improvements after that point.
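The plot itself really is just a few lines, assuming per-episode rewards were collected into a list like the episode_rewards in the sketch above:

```python
import matplotlib.pyplot as plt

plt.plot(episode_rewards)            # total reward earned in each episode
plt.xlabel('Episode')
plt.ylabel('Reward per episode')
plt.title('Q-learning progress on Taxi')
plt.show()
```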

This simple algorithm used an array that could represent all 500 states of the environment. With six possible actions, that works out to an array of 3000 entries, initially filled with zeros. I was curious how long it took for the entire problem space to be explored, and the answer seems to be roughly 50 episodes: by then there were about 2400 nonzero entries, and the count never exceeded 2400. That was far faster than I had expected, and it was also a surprise that 600 entries in the array were never used.

What did those 600 entries represent? With six possible actions, it implies there are 100 unreachable states of the environment. I thought I’d throw that array into PyPlot and see if anything jumped out at me:

Taxi Q plotted raw

My mind is at a loss as to how to interpret this data. But I don’t know how important it is to understand right now – this is an environment whose entire problem space can be represented in memory using discrete values, and those are luxuries that quickly disappear as problems get more complex. The real world is not so easily classified into discrete states, and we haven’t even involved neural networks yet. The latter approach is referred to as DQN (Deep Q-learning Network?) and is still yet to come.

The code I wrote for this exercise is available here.

Quick Overview: OpenAI Gym

Given what I’ve found so far, it looks like Unity would be a good way to train reinforcement learning agents, and Gazebo would be used afterwards to see how they work before deploying on actual physical robots. I might end up doing something different, but they are good targets to work towards. But where would I start? That’s where OpenAI Gym comes in.

It is a collection of prebuilt environments that are free and open for hobbyists, students, and researchers alike. The list of available environments ranges across a wide variety of problem domains – from text-based activities that should in theory be easy for computers, to full-on 3D simulations like what I’d expect to find in Unity and Gazebo. Putting them all under the same umbrella, easily accessed from Python in a consistent manner, makes it simple to gradually increase the complexity of problems being solved.

Following the Getting Started guide, I was able to install the Python package and run the CartPole-v0 example. I was also able to bring up its Atari subsystem in the form of MsPacman-v4. The 3D simulations use MuJoCo as their physics engine, which has a 30-day trial; after that it costs $500/yr for personal non-commercial use. At the moment I don’t see enough benefit to justify the cost, so the tentative plan is to learn the basics of reinforcement learning on simple 2D environments. By the time I’m ready to move into 3D, I’ll use Unity instead of paying for MuJoCo, bypassing the 3D simulation portion of OpenAI Gym.

I’m happy OpenAI Gym provides a beginner-friendly set of standard reinforcement learning textbook environments. Now I’ll need to walk through some corresponding textbook examples on how to create an agent that learns to work in those environments.

Quick Overview: Autoware Foundation

ROS is a big world of open source robotics software development, and it’s hard to know everything that’s going on. One thing I’ve been doing to try to keep up is to read announcements made on ROS Discourse. I’ve seen various mentions of Autoware but it’s been confusing trying to figure out what it is from context so today I spent a bit of time to get myself oriented.

That’s when I finally figured out I was confused because the term could mean different things in different contexts. At the root of it all is Autoware Foundation, the non-profit organization supporting open source research and development towards autonomous vehicles. Members hail from universities to hardware vendors to commercial entities.

Autoware Foundation Banner

Under the umbrella of the Autoware Foundation is a body of research into self-driving cars using ROS 1.0 as a foundation. This package of ROS nodes (and how they weave together for self-driving applications) is collectively Autoware.AI. Much of this work is directly visible in their main GitHub repository. However, this body of work has a limited future, as ROS 1.0 was built with experimental research in mind, and it has some pretty severe and fundamental limitations for applications where human lives are on the line, such as self-driving cars.

ROS 2.0 is a big change motivated by the desire to address those limitations and allow people to build robotics systems with much more stringent performance and safety requirements. Autoware is fully on board with this plan, and their ROS 2.0-based project is collectively Autoware.Auto. It is less exploratory and experimental, and more focused on working toward a specific set of milestones running on a specific hardware platform.

There are a few other ancillary projects under the same umbrella working towards the overall goal, some with their own catchy names like Autoware.IO (which is “coming soon,” though it looks like a squatter has already claimed that domain) and some without. All of this explains why I was confused trying to figure out what Autoware was from context – it is a lot of things. And definitely well worth its own section of ROS Discourse.

 

Window Shopping AWS DeepRacer

aws deepracer

At AWS re:Invent 2018 a few weeks ago, Amazon announced their DeepRacer project. At first glance it appears to be a more formalized version of DonkeyCar, complete with an Amazon-sponsored racing league to take place both online digitally and physically at future Amazon events. Since the time I wrote up a quick snapshot for Hackaday, I went through and tried to learn more about the project.

While it would have been nice to get hands-on time, it is still in pre-release, and my application to join the program received an acknowledgement that boils down to “don’t call us, we’ll call you.” There have been no updates since, but I can still learn a lot by reading their pre-release documentation.

Based on the (still subject to change) Developer Guide, I’ve found interesting differences between DeepRacer and DonkeyCar. While they are both built on a 1/18th scale toy truck chassis, there are differences almost everywhere above that. Starting with the on board computer: a standard DonkeyCar uses a Raspberry Pi, but the DeepRacer has a more capable onboard computer built around an Intel Atom processor.

The software behind DonkeyCar is focused just on driving a DonkeyCar. In contrast DeepRacer’s software infrastructure is built on ROS which is a more generalized system that just happens to have preset resources to help people get up and running on a DeepRacer. The theme continues to the simulator: DonkeyCar has a task specific simulator, DeepRacer uses Gazebo that can simulate an environment for anything from a DeepRacer to a humanoid robot on Mars. Amazon provides a preset Gazebo environment to make it easy to start DeepRacer simulations.

And of course, for training the neural networks, DonkeyCar uses your desktop machine while DeepRacer wants you to train on AWS hardware. And again there are presets available for DeepRacer. It’s no surprise that Amazon wants people to build skills that are easily transferable to robots other than DeepRacer while staying in their ecosystem, but it’s interesting to see them build a gentle on-ramp with DeepRacer.

Both cars boil down to a line-following robot controlled by a neural network. In the case of DonkeyCar, the user trains the network to drive like a human driver. In DeepRacer, the network is trained via reinforcement learning. This is a subset of deep learning where the developer provides a way to score robot behavior, the higher the better, in the form of a reward function. Reinforcement learning trains a neural network to explore different behaviors and remember the ones that help it get a higher score on the developer-provided evaluation function. The AWS developer guide starts people off with a “stay on the track” function which won’t work very well, but it is a simple starting point for further enhancements.
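A reward function of that sort is just a small piece of Python. The sketch below follows the general shape of Amazon’s examples, but since the documentation is pre-release and subject to change, treat the parameter names here as assumptions on my part.

```python
def reward_function(params):
    """Minimal 'stay on the track' style reward in the DeepRacer mold.

    The 'all_wheels_on_track' key is assumed from AWS sample material;
    check the current DeepRacer documentation for the real parameter names.
    """
    if params.get('all_wheels_on_track', False):
        reward = 1.0     # full reward while the car stays on the track
    else:
        reward = 1e-3    # near-zero reward once it wanders off
    return float(reward)
```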

Based on reading through documentation, but before any hands-on time, it looks like DonkeyCar and DeepRacer serve different audiences with different priorities:

  • Using AWS machine learning requires minimal up-front investment but can add up over time. Training a DonkeyCar requires higher up-front investment in computer hardware for machine learning with TensorFlow.
  • DonkeyCar is trained to emulate behavior of a human, which is less likely to make silly mistakes but will never be better than the trainer. DeepRacer is trained to optimize reward scoring, which will start by making lots of mistakes but has the potential to drive in a way no human would think of… both for better and worse!
  • DonkeyCar has simpler software which looks easier to get started. DeepRacer uses generalized robot software like ROS and Gazebo that, while presets are available to simplify use, still adds more complexity than strictly necessary. On the flipside, what’s learned by using ROS and Gazebo can be transferred to other robot projects.
  • The physical AWS DeepRacer car is a single pre-built and tested unit. DonkeyCar is a DIY project. Which is better depends on whether a person views building their own car as a fun project or a chore.

I’m sure there are other differences that will surface with some hands-on time. I plan to return and look at AWS DeepRacer in more detail after they open it up to the public.

Sawppy Odometry Candidate: Flow Breakout Board

When I presented Sawppy the Rover at the Robotics Society of Southern California, one of the things I brought up as an open problem was how to determine Sawppy’s movement through its environment. Wheel odometry is sufficient for a robot traveling on flat ground like Phoebe, but when Sawppy travels on rough terrain things can get messy in more ways than one.

In the question-and-answer session, some people brought up the idea of calculating odometry by visual means, much in the way a modern optical computer mouse determines its movement on a desk. This is something I could whip up with a downward-pointing webcam and open source software, but there are also pieces of hardware designed specifically to perform this task. One example is the PMW3901 chip, which I could experiment with using breakout boards like this item on Tindie.

However, that visual calculation is only part of the challenge, because translating what that camera sees into a physical dimension requires one more piece of data: the distance from the camera to the surface it is looking at. Depending on application, this distance might be a known quantity. But for robotic applications where the distance may vary, a distance sensor would be required.

As a follow-up to my presentation, RSSC’s online discussion forum brought up the Flow Breakout Board. This is an interesting candidate for helping Sawppy gain awareness of how it is moving through its environment (or failing to do so, as the case may be): a small lightweight module that puts the aforementioned PMW3901 chip alongside a VL53L0x distance sensor.

flow_breakout_585px-1

The breakout board only handles the electrical connections – an external computer or microcontroller will be necessary to make the whole system sing. That external module will need to communicate with the PMW3901 via SPI and, separately, with the VL53L0x via I2C. Then it will need to perform the math to calculate the actual X-Y distance traveled. This in itself isn’t a problem.
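The math is a small-angle approximation: the flow sensor reports how far the image shifted, and multiplying that angular shift by the measured height gives lateral travel. A sketch, with a made-up calibration constant standing in for whatever the PMW3901 datasheet actually specifies:

```python
# Hypothetical constant converting raw flow counts to radians of image shift.
# The real value comes from the PMW3901 datasheet or from calibration.
COUNTS_TO_RADIANS = 0.002

def flow_to_displacement(delta_x_counts, delta_y_counts, height_mm):
    """Estimate X-Y travel over the surface from flow counts and measured height.

    Small-angle approximation: lateral travel ~= angular shift * height above surface.
    """
    dx_mm = delta_x_counts * COUNTS_TO_RADIANS * height_mm
    dy_mm = delta_y_counts * COUNTS_TO_RADIANS * height_mm
    return dx_mm, dy_mm

# Example: 120 counts of X flow while the VL53L0x reads 160 mm above the floor
print(flow_to_displacement(120, 0, 160.0))
```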

The problem comes from the fact that the PMW3901 was designed to be used on small multirotor aircraft to aid them in holding position. Several design decisions that make sense for its intended purpose turn out to be problems for Sawppy.

  1. This chip is designed to help hold position, which is why it was not concerned with knowing the height above surface or physical dimension of that translation: the sensor was only concerned with detecting movement so the aircraft can be brought back to position.
  2. Multirotor aircraft all have built-in gyroscopes to stabilize themselves, so they already detect rotation about their Z axis. Sawppy has no such sensor and would not be able to calculate its position in global space if it doesn’t know how much it has turned in place.
  3. Multirotor aircraft are flying in the air, so the designed working range of 80mm to infinity is perfectly fine. However, Sawppy has only 160mm between the bottom of the equipment bay and nominal floor distance. If traversing over obstacles more than 80mm tall, or rough terrain bringing surface within 80mm of the sensor, this sensor would become disoriented.

This is a very cool sensor module that has a lot of possibilities, and despite its potential problems it has been added to the list of things to try for Sawppy in the future.

Intel RealSense T265 Tracking Camera

In the middle of these experiments with an Xbox 360 Kinect as robot depth sensor, Intel announced a new product along similar lines and a tempting avenue for robotic exploration: the Intel RealSense T265 Tracking Camera. Here’s a picture from Intel’s website announcing the product:

intel_realsense_tracking_camera_t265_hero

T265 is not a direct replacement for the Kinect, at least not as a depth sensing camera. For that, we need to look at Intel’s D415 and D435. They would be fun to play with, too, but I already had the Kinect so I’m learning on what I have before I spend money.

So if the T265 is not a Kinect replacement, how is it interesting? It can act as a complement to a depth sensing camera. The point of the thing is not to capture the environment – it is to track the motion and position within that environment. Yes, there is the option for an image output stream, but the primary data output of this device is a position and orientation.

This type of camera-based “inside-out” tracking is used by Windows Mixed Reality headsets to determine their user’s head position and orientation. These sensors require low latency and high accuracy to avoid VR motion sickness, and they have obvious applications in robotics. Now Intel’s T265 offers that capability in a standalone device.

According to Intel, the implementation is based on a pair of video cameras and an inertial measurement unit (IMU). Data feeds into internal electronics running a V-SLAM (visual simultaneous location and mapping) algorithm aided by a Movidius neural network chip. This process generates the position and orientation output. It seems pretty impressive to me that this is done in such a small form factor, at high speed (or at least low latency), on 1.5 watts of power.

At $200, it is a tempting toy for experimentation. Before I spend that money, though, I’ll want to read more about how to interface with this device. The USB 2 connection is not surprising, but there’s a phrase that I don’t yet understand: “non volatile memory to boot the device” makes it sound like the host is responsible for some portion of the device’s boot process, which isn’t like any other sensor I’ve worked with before.

That UI in Jurassic Park Was A Real Thing

I don’t remember exactly when and where I saw the movie Jurassic Park, but I do remember it was during its theatrical release and I was with a group of people who were aware of the state of the art in computer graphics. We were there to admire the digital dinosaurs in that movie, which represented a tremendous leap ahead. Over twenty-five years later, the film is still recognized as a landmark in visual effects history. Yes, more than half of the dinosaurs in the film are physical effects, but the digital dinosaurs stood out.

Given the enthusiasm for computer generated effects, it naturally followed that this crowd was also computer literate. I believe we all knew our way around a UNIX prompt, so this was a crowd that erupted into laughter when the infamous “It’s a UNIX system! I know this!” line was spoken.

jurassic park unix

At the time, I was running jobs on a centralized machine via telnet. Graphical user interfaces on UNIX systems were rare on the machines I had worked with, never mind animated three-dimensional graphics! I had assumed it was a bit of Hollywood window dressing, because admittedly text characters on a command line don’t work as well on the silver screen.

Well, that was a bad assumption. Maybe not all UNIX systems have interfaces that show the user flying over a three-dimensional landscape of objects, but one did! I have just learned it was an actual user interface available for Silicon Graphics machines. The SGI logo is visible on the computer monitor, and SGI machines were a big part of rendering those digital dinosaurs we were drooling over. It makes sense the production crew would be aware of the visually attractive (optional) component for SGI’s UNIX variant, IRIX.

KISS Tindies: On Stage

Now it’s time to wrap up the KISS Tindies wireform practice project with a few images for posterity. Here’s the band together on stage:

kiss tindie band on stage

An earlier version of this picture had Catman’s drum set in its bare copper wire form. It worked fine, but I recalled that most drum sets I’ve seen in stage performances had a band logo or something on the bass drum surface facing the audience. I hastily soldered another self-blinking LED to my drum set, arranged so it can draw power from a coin cell battery sitting in the drum. Calling this a “battery holder” would be generous; it’s a far simpler hack than that.

kiss tindie drum set blinky led

I then printed a Tindie logo, scaled to fit on my little drum. Nothing fancy, just a standard laser printer on normal office paper. I then traced out the drum diameter and cut out a little circle with a knife. Old-fashioned white glue worked well enough to attach it to the copper wire, and that was enough to make the drum set far more interesting than just bare wire.

A black cardboard box lid served as a stage, with a 4xAA battery tray serving as an elevated platform for the drummer. I watched a few YouTube videos to see roughly where Demon, Spaceman, and Starchild stand relative to each other and Catman as drummer. It may not be a representative sample, but hopefully it eliminated the risk of: “They never stand that way.”

With batteries installed in everyone, it was time for lights, camera, action! It was just a simple short video shot on my cell phone camera: one continuous pull-back motion, as smooth as I could execute with my bare hands, helped by the phone’s built-in stabilization. I had one of the aforementioned YouTube videos running for background sound.

I wanted to start the video focused on the drum logo, but doing so required the phone to be right where I wanted Demon to stand. After a few unsatisfactory tests, I decided to just add Demon mid-scene after the phone had moved out of the way. It solved my positioning problem and added a nice bit of dynamism to the shot.

Adafruit Feather System

I received an Adafruit Hallowing in the Supercon sponsor gift bag given to attendees. While reading up on it, I came across this line that made no sense to me at the time:

OK so technically it’s more like a really tricked-out Feather than a Wing but we simply could not resist the Hallowing pun.

feather-logo

I could tell the words “Feather” and “Wing” have some meaning in this context that is different from their plain English meanings, but I didn’t know what they were talking about.

But since this is Adafruit, I knew somewhere on their site there would be an explanation that breaks down what’s going on in detail. I just had to follow the right links to get there. My expectations were fully met – and then some – when I found this link.

So now I understand: this is Adafruit’s counterpart to other electronics hobbyist boards and their standardized expansion form factors. The Raspberry Pi Foundation defines its HAT, Arduino defines its Shield, and now Adafruit gets into the game with feathers (a board with brains) and wings (accessories to add onto a feather).

Except unlike Raspberry Pi or Arduino, a feather isn’t fixed to a particular architecture or a particular chip. As long as it operates on 3.3 volts and can communicate with the usual protocols (I2C, SPI), it can be made into a feather. Adafruit makes feathers out of all the popular microcontrollers: not just the SAM D21 at the heart of the Hallowing, but also chips of the ATmega line as well as the recent darling ESP32.

Similarly, anyone is welcome to create a wing that can be attached to a feather. As long as it follows the guidelines on footprint and pin assignment, it can fit right into the wing ecosystem. Something for me to keep in mind if I ever get into another KiCad project in the future – I could build it as a wing!

 

Hackaday Badge LCD Screen 2: Documented Limitations

Now that I’ve completed my overview of the Hackaday Belgrade 2018 badge (which the upcoming Hackaday Superconference 2018 badge is very close to) it’s time to dig deeper. First topic for a deep dive: that LCD screen. In my earlier brief look, I only established that the screen is fully graphics capable and not restricted to text only. What kind of graphics can we do with this?

18-bit panel, accepts 24-bit color

First topic: color depth. The default badge firmware’s BASIC programming interface seems to allow only 16 colors, which the documentation calls “EGA colors” – see color_table[16] in disp.c. I was confident a modern LCD module has more than 4-bit color, and page 2 of the LCD module datasheet calls it a “262K color display.” That works out to 18 bits, so the panel’s native color depth is likely 6 bits red, 6 bits green, 6 bits blue. However, it is not limited to taking information in 18-bit color: the display can be configured to communicate at a wide range of bit depths. Looking in tft_fill_area() in disp.c, we can see the existing firmware is communicating in 24-bit color: 8 bits each for red, green, and blue.

Not enough memory for full 24-bit off-screen buffer

If we don’t want to change modes around on the fly, then, we’ll need to work with the panel in 24-bit color. Standard operating procedure is to draw using an off-screen buffer and, when the result is ready for display, send the buffer to the screen in a single fast operation. A 24-bit off-screen buffer is usually done with a 32-bit value representing each pixel as ARGB, even if we’re not using the A(lpha) channel. 320 pixels wide * 240 pixels high * 4 bytes per pixel = 300 kilobytes. Unfortunately, the datasheet for our PIC32 processor shows it only has a total of 128 kilobytes of memory, so the easy, straightforward full-screen buffer is out of the question. We’ll have to be more creative about this.
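Running the numbers for a few common pixel formats (Python used purely as a calculator here) shows why: even a 16-bit RGB565 buffer needs 150 kilobytes, more than the chip’s entire RAM.

```python
WIDTH, HEIGHT = 320, 240
RAM_BYTES = 128 * 1024   # total PIC32 RAM per the datasheet

for name, bytes_per_pixel in [('32-bit ARGB', 4), ('24-bit packed RGB', 3), ('16-bit RGB565', 2)]:
    size = WIDTH * HEIGHT * bytes_per_pixel
    verdict = 'fits' if size < RAM_BYTES else 'does not fit'
    print(f'{name}: {size // 1024} KB - {verdict} in 128 KB of RAM')
```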

NHD-2.4-240320CF-CTXI LCD module wiring diagram

No VSYNC

But we wouldn’t have been able to make full use of an off screen buffer anyway. We need to send buffer data to screen at the right time. If we send while the screen is in the midst of drawing, there will be a visible tear as the old content is partially mixed with the new. The typical way to avoid this is to listen to a vertical synchronization signal (VSYNC) to know when to perform the update. And while the ST7789 controller on board the LCD module has provision to signal VSYNC, the LCD module does not expose VSYNC information. There may be some other way to avoid visual tearing, but it ain’t VSYNC.

These limitations, which are relatively typical of embedded electronics projects, are part of the fun of writing software for this class of hardware. Sure, it is a limitation, but it is also a challenge to be overcome, a puzzle to solve, and a change of pace from the luxury of desktop computers where such limitations are absent.

 

 

Animated GIF For When A Screenshot Is Not Enough

Trying to write up yesterday’s blog post posed a challenge. I typically write up a blog post with one or more images to illustrate the topic of the day. Two days ago I talked about lining up Phoebe’s orientation before and after a turn. For that topic, a screenshot was appropriate illustration. But yesterday’s topic is about Phoebe’s RViz plot of LIDAR data moving strangely. How do I convey “moving strangely” in a picture?

After thinking over the problem for a while, I decided that I can’t do it with a static image. What I need is a short video. This has happened before on this blog, but doing a full YouTube video clip seems excessive for this particular application. I only need a few frames. An animated GIF seems like just the thing.

I went online looking for ways to do this, and there were an overwhelming number of answers. Since my project is relatively simple, I didn’t want to spend a lot of time learning a new powerful tool that can do far more than what I need. When a tool is described as simple and straightforward, that gets my attention.

So for this project I went with Kazam, which was described on this page as a “lightweight screen recorder for Ubuntu.” Good enough for me. The only hiccup in my learning curve was the start/stop sequence. Recording can be started by clicking the “Capture” button in the application dialog box, but there was no counterpart for stopping. Recording had to be stopped from the icon in the upper right corner of the screen.

Everything else was straightforward, and soon I had an MP4 video file of RViz displaying LIDAR movement. Then it was off to this particular Ubuntu answer page to turn the MP4 into an animated GIF using the command line tools ffmpeg and convert. However, that resulted in a rather large multi-megabyte GIF file, far larger than the MP4 source at a little over 100 kilobytes! Fortunately those instructions also pointed to the convert option -layers optimize, which reduced the size drastically. At over 200 kilobytes, it was still double the size of the captured MP4, but at least it’s well under a megabyte. And more importantly, it is allowed for embedding at my blog hosting subscription tier.

This RViz plot has only simple colors and lines, ideally suited for an animated GIF, so it could have been even smaller. I suspect a tool dedicated to representing simple geometries on screen could produce a more compact animated GIF. If I ever need to do this on a regular basis, I’ll spend more time finding a better solution. But for today, Kazam + ffmpeg + convert was good enough.

Electric Car Chargers Need To Keep Their Cool

A constant criticism of electric cars is their charging time. Despite all of their other advantages, charging takes noticeably longer than refueling a gasoline car, and this difference makes some people dismissive. When I leased a Volt for 3 years, charging time was a nonissue because it was parked in a garage with a charging station. My car recharged overnight while I slept, just like my phone and laptop. I rarely ever charged the car away from home, and when I did it was usually more out of novelty than necessity.

Bob and Di Thompson Volt J1772 Charging

But real issue or not, charge time is something that needs to be addressed for consumer acceptance. So technology has been developing to make electric car charging ever faster. The rapid pace of development also means a lot of competition, each claiming to be faster than the last. The latest round of news has General Motors proclaiming that they’re working on a 400 kilowatt charging system.

The big headline number of 400 kilowatts is impressive, but engineers would dig deeper for an arguably more impressive number: 96.5% efficiency. The publicity material focuses on the economic and ecological advantages of wasting less energy, but that efficiency also makes adoption far more realistic. Wasting less power isn’t just good for the pocketbook and the environment; it also means less power being turned into heat.

How much heat are we talking about? 96.5% efficiency implies 3.5% waste. So that 400 kilowatt charger is turning about 3.5%, or about 14 kilowatts, into heat. For comparison, cheap home electric space heaters usually top out at about 1500 watts, or 1.5 kilowatts. These car chargers need to deal with byproduct waste heat roughly ten times that generated by a home heater whose entire purpose is to generate heat. That’s a lot of heat to deal with! Heat management is a concern for all the high speed charging stations, from Tesla to Volkswagen. It’s good to see progress in efficiency so our super high power charging stations don’t cook themselves or anyone nearby.

Duckietown Is Full Of Autonomous Duckiebots

Duckietown duckiebot

The previous blog post outlined some points of concern about using a Raspberry Pi 3 as the brains of an autonomous robot. But it’s only an inventory of concerns, not a condemnation of the platform for robotics use. A Raspberry Pi is quite a capable little computer in its own right, and that’s even before considering its performance in light of its low cost. There are certainly many autonomous robot projects where a Raspberry Pi provides sufficient computing power for their respective needs. As one example, we can look at the robots ferrying little rubber duckies around the city of Duckietown.

According to its website, Duckietown started as a platform to teach a 2016 MIT class on autonomous vehicles. Browsing through their public GitHub repository, it appears all logic is expressed as a ROS stack and executed on board its Raspberry Pi – no sending work to a desktop computer over the network like the TurtleBot 3 does. A basic Duckiebot has minimal input and output to contend with – just a camera for input and two motors for output. No wheel encoders, no distance scanners, no fancy odometry calculations. And while machine vision can be computationally intensive, it’s the type of task that can be dialed back and shoehorned into a small computer like the Pi.

The task is made easier by Duckietown itself, an environment designed to help Duckiebots function by leveraging their strengths and mitigating their weaknesses. Roads have clear contrast to make vision processing easier. Objects have machine-friendly markers to aid object identification. And while such measures imply a Duckiebot won’t function very well away from a Duckietown, it’s still a capable little robotics platform for exploring basic concepts.

At first glance the “Duckiebooks” documentation area has a lot of information, but I was quickly disappointed to find many pages filled with TODOs and links to “404 Not Found”. I suppose it’ll be filled out in the coming months, but for today it appears I must look elsewhere for guidelines on building complete robots running ROS on a Raspberry Pi.

Duckietown TODO

Embedding an Instagram Post with BBCode Without Plugin

Embedding an Instagram post is trivial on a WordPress blog like this one: copy the full Instagram URL (like https://www.instagram.com/p/BfryG0VnUmF/) and paste it into the visual editor window. Behind the scenes, that URL is parsed to create an embed as shown here.

There are similar plugins to add an Instagram tag to a BBCode-based web forum. But what if a forum does not have such direct support installed? This was the case for the web forum set up as community-driven support for JPL’s Open Source Rover.

On every Instagram post, there’s an “Embed” option that will bring up a chunk of HTML (which links to some JavaScript) to create an embed. However, a BBCode based web forum does not allow embedding arbitrary HTML like that.

Time to read the manual which in this case is Instagram’s developer resources page about embedding. They prefer that people use the fancy methods like that chunk of HTML we can’t use. But way down towards the bottom, they do describe how to use the /media/ endpoint to pull down just an image file with no active components.

Instagram Rover L

This is simple enough to use within the BBCode [IMG] tag. Then we can surround that image tag with a [URL] tag to turn it into a link to the Instagram post.

[URL=https://www.instagram.com/p/BfryG0VnUmF/][IMG]https://instagram.com/p/BfryG0VnUmF/media/?size=m[/IMG][/URL]

It’s not as fancy as the full embed code, but it does get the basic point across and provides an easy way to access the original Instagram post. Good enough for a SGVHAK Rover post on the JPL OSR web forum.

Diagnosing Periodic Artifact in 3D Print Due To Inconsistent Extrusion

A common error when setting up a 3D printer is entering motor control parameters that don’t actually match the installed physical hardware. Sometimes this is glaringly obvious: maybe the X-axis moves 5mm when it should move 10mm. Big errors are easy to find and fix, but the little “off by 0.5%” errors are tough to track down.

Within this category, one class of errors is specific to the Z-axis. When the X- and Y-axes are moving around printing a layer, the Z-axis needs to hold still for a consistent print. And when it’s time to print another layer, the Z-axis needs to move a precise and consistent amount for every layer. This is usually not a problem for the stepper motors typical of hobby-level 3D printer Z-axis control, as long as each layer corresponds to a whole number of steps.

When the layers don’t map cleanly to a whole number of steps, the Z-axis motor might attempt to hold position in between steps. This is fundamentally a difficult task for a stepper motor and its controller, rarely successful, so most control boards round off to the nearest step instead. This rounding tends to cause periodic errors in the print as the Z-axis lands a tiny bit higher or lower than the desired position, failing to meet the “precise and consistent” requirement for a proper print.
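A quick way to check a layer height is to multiply it by the Z-axis steps-per-millimeter setting and see whether the result is a whole number. The steps-per-mm value below is a hypothetical example; the real figure depends on leadscrew pitch and microstepping.

```python
Z_STEPS_PER_MM = 2560.0   # hypothetical: 200-step motor, 16x microstepping, 1.25 mm pitch leadscrew

for layer_height in (0.10, 0.12, 0.13, 0.20):
    steps = layer_height * Z_STEPS_PER_MM
    verdict = 'whole number of steps' if abs(steps - round(steps)) < 1e-6 else 'gets rounded'
    print(f'{layer_height:.2f} mm layer -> {steps:.1f} steps ({verdict})')
```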

With a freshly configured Azteeg X5 Mini WiFi control board in my open-box Monoprice Maker Select printer, seeing a periodic error along the Z-axis when printing Sawppy’s wheels immediately placed suspicion on Z-axis motor configuration.

Debug Periodic Print Layer Artifact

Back to hardware measurements I went, reviewing motor control parameters. After over an hour of looking for problems in the Z-axis configuration, I came up empty-handed.

Then came a key observation while looking at details under magnification: the error occurs every 6 layers, and not at a consistent location around the print. This little bump actually forms a spiral around the wheel, which would not be the case if Z-axis steps were being rounded off.

Following this insight, I went to review the 3D printer G-code file and saw the print path runs on a regular cycle printing the six spokes of the wheel. It printed the same way between 5 of those spokes, but the sixth is slightly different, and that slightly different behavior cycles through the six spokes as the print rises through each layer.

It turns out this print artifact is not a Z-axis configuration issue at all, but the result of inconsistent extrusion. When moving in one pattern (5 of the spokes) it extrudes a certain amount, when moving in another (the final spoke) it ends up putting a tiny bit of extra plastic on the print, causing the artifact.

For Cheap Commodity Bearings, Search For 608

My thoughts went to bearings while contemplating a mechanical project. I have the luxury of adjusting the design to fit a bearing thanks to the wonders of 3D printing. Given this flexibility, the obvious first path to investigate is to learn where to get bearings – cheap!

I learned not to kill myself on roller blades some years back, so I started looking for roller blade bearings based on the logic that there’s enough roller blade production volume – and each pair of blades uses 16 bearings – to drop the price of bearings. I quickly found that skateboard wheels use the same size bearing, then I learned that fidget spinners also use the same size bearing.

608-bearing

Eventually I realized I had the logic backwards – these bearings are not cheap because they’re used in skates, they are used in skates because they are cheap. These bearings have been around far longer than any of those consumer products.

The industrial name for these mass-volume commodity bearings seems to be “608“. The 60 designates a series (Google doesn’t seem to know the origin of this designation) and the 8 designates the interior diameter of the bearing in millimeters. Letter suffixes after the 608 describe the type of seal around the bearings but do not change the physical dimensions.

Another misconception I had from roller blade advertising was the ABEC rating. It has come to imply smoother and faster bearings but technically it only describes the manufacturing tolerances involved. While higher ABEC rated bearings do reduce the tolerance range, that by itself does not necessarily mean faster bearings. There are more variables involved (the lubricant used inside, etc) but somebody decided the mechanical engineering details were too much for the average consumer to wade through, so its meaning was distorted for marketing. Oh well, it’s not the first time that has happened.

Such details may or may not be important, it depends on the project requirements. Strict project demands (temperature, speed, load, etc) will require digging deeper for those details. For projects where pretty much any bearing would do, the 608 designation is enough to guarantee physical dimensions for CAD design and we’re free to order something cheap. Either off Amazon (~$25 for 100 of them) or for even larger quantities, go straight to the factories on Alibaba.

WebAssembly: A High Level Take On Low Level Concepts

webassembly

WebAssembly takes the concepts of low-level assembly language and brings them to web browsers. A likely first question is “Why would anybody want to do that?” And the short answer is: “Because not everybody loves JavaScript.”

People writing server-side “back-end” code have many options in technologies to use. There are multiple web application platforms built around different languages: Ruby on Rails and Sinatra, Django and Flask, PHP, Node.JS, the list goes on.

In contrast, client-side “front end” code running on the user’s web browser has a far more limited choice in tools and only a single choice for language: JavaScript. The language we know today was not designed with all of its features and capabilities up front. It was a more organic growth that evolved alongside the web.

There have been multiple efforts to tame the tangle that is modern JavaScript and impose some structure. The Ruby school of thought led to CoffeeScript. Microsoft came up with TypeScript. Google invented Dart. What they all have in common is that none has direct browser support like JavaScript. As a result, they all trans-compile into JavaScript for actual execution on the browser.

Such an approach does address problems with JavaScript syntax by staying within well-defined boundaries. Modern web browsers’ JavaScript engines have learned to look for and take advantage of such structure, enabling the resulting code to run faster. A project focused entirely on this aspect – making JavaScript easy for browsers to run fast – is asm.js. Limiting JavaScript to a very specific subset, sometimes with hints to tell the browser so, allows the code to be parsed down into something very small and efficient, even if it ends up being very difficult for a human to read.

Projects like asm.js make the resulting code run faster than general JavaScript, but that’s only once code starts running. Before it runs, it is still JavaScript transmitted over the network, and JavaScript that needs to be parsed and processed. The only way to reduce this overhead is to describe computation at a very low-level in a manner more compact and concise than JavaScript. This is WebAssembly.

No web developer is expected to hand-write WebAssembly on a regular basis. But once WebAssembly adoption takes hold across the major browsers (and it seems to be making good progress) it opens up the field of front-end code. Google is unlikely to build TypeScript into Chrome. Microsoft is unlikely to build Dart into Edge. Mozilla is not going to bother with CoffeeScript. But if they all agree on supporting WebAssembly, all of those languages – and more – can be built on top of WebAssembly.

The concept can be taken beyond individual programming languages to entire application frameworks. One demonstration of WebAssembly’s potential runs the Unity 3D game engine, purportedly with smaller download size and faster startup than the previous asm.js implementation.

An individual front end web developer has little direct need for WebAssembly today. But if it takes off, it has the potential to enable serious changes in the front end space. Possibly more interesting than anything since… well, JavaScript itself.