First ROS 2 LTS Has Arrived, Let’s Switch

Deciding to explore the less popular path of smarter software for imperfect robot hardware has a secondary effect: it also frees me to switch to ROS 2 going forward. One downside of moving to ROS 2 now is losing access to the vast library of open ROS nodes freely available online. But if I’ve decided I’m not going to use most of them anyway, there’s less of a draw to stay in the ROS 1 ecosystem.

ROS 2 offers a lot of infrastructure upgrades that should be, on paper, very helpful for work going forward. First and foremost on my list is the fact that I can now use Python 3 to write code for ROS 2. ROS 1 is coupled to Python 2, whose support ends in January 2020, and there’s been a great deal of debate in ROS land about what to do about it. Open Robotics has declared that its future work along this line is all Python 3 on ROS 2, so the community has been devising various workarounds to make Python 3 run on ROS 1. Switching to ROS 2 now lets me use Python 3 in a fully supported manner, no workarounds necessary.
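
For a taste of what that looks like, here is a minimal ROS 2 node written in Python 3 with rclpy. This is a generic hello-world sketch (node and topic names are mine, and package manifest and build setup are not shown), not code from this project:

```python
# Minimal ROS 2 publisher node in Python 3, using rclpy.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String


class Talker(Node):
    def __init__(self):
        super().__init__('talker')
        self.pub = self.create_publisher(String, 'chatter', 10)
        self.timer = self.create_timer(1.0, self.tick)  # publish once per second

    def tick(self):
        msg = String()
        msg.data = 'Hello from Python 3 on ROS 2'
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(Talker())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```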

And finally, investing in learning ROS 2 now carries a much lower risk of that time being thrown away by a future update. ROS 2 Dashing Diademata has just been released, and it is the first long term support (LTS) release for ROS 2. I read this as a sign that Open Robotics is confident the period of major code churn for ROS 2 is coming to an end. No guarantees, naturally, especially if they learn of something that affects the long term viability of ROS 2, but the odds of major churn have dropped significantly given the evolution over the past few releases.

The only drawback for my personal exploration is that ROS 2 has not yet released binaries for the Raspberry Pi. I could build my own Raspberry Pi 3 version of ROS 2 from open source code, but I’m more likely to use the little Dell Inspiron 11 (3180) I bought as a candidate robot brain. It is already running Ubuntu 18.04 LTS on an amd64 processor, making it a directly supported Tier 1 platform for ROS 2.

Let’s Learn To Love Imperfect Robots Just The Way They Are

A few months ago, as part of preparing to present Sawppy to the Robotics Society of Southern California, I described a few of the challenges involved in putting ROS on my Sawppy rover. That was just the tip of the iceberg and I’ve been thinking and researching in this problem area on-and-off over the past few months.

Today I see two divergent paths ahead for a ROS-powered rover.

I can take the traditional route, where I work to upgrade Sawppy components to meet expectations from existing ROS libraries. It means spending a lot of money on hardware upgrades:

  • Wheel motors that can deliver good odometry data.
  • Laser distance scanners faster and more capable than one salvaged from a Neato vacuum.
  • Depth camera with better capabilities than a first generation Kinect.
  • etc…

This conforms to a lot of what I see in robotics hardware evolution: more accuracy, more precision, an endless pursuit of perfection. I can’t deny the appeal of having better hardware, but it comes at a steeply rising cost. As anyone dealing with precision machinery or machining knows, physical accuracy costs money: how far can you afford to go? My budget is quite limited.

I find more appeal in pursuing the nonconformist route: instead of spending ever more money on precision hardware, make the software smarter to deal with imperfect mechanicals. Computing power today is astonishingly cheap compared to what it cost only a few years ago. We can add more software smarts for far less money than buying better hardware, making upgrades far more affordable. It is also less wasteful: retired software is just bits, while retired hardware gathers dust, sitting there reminding us of past spending.

And we know there’s nothing fundamentally wrong with looking for a smarter approach, because we have real world examples in our everyday lives. Autonomous vehicle researchers brag about sub-centimeter accuracy in their 3D LIDAR… but I can drive around my neighborhood without knowing the number of centimeters from one curb to another. A lot of ROS navigation is built on an occupancy grid data structure, but again, I don’t need a centimeter-aligned grid of my home in order to make my way to a snack in the kitchen. We might not yet understand how a robot could do these tasks, but we know they are possible without the precision and accuracy demanded by certain factions of robotics research.

This is the path less traveled by, and trying to make less capable hardware function using smarter software will definitely have its moments of frustration. However, the less beaten path is always a good place to go looking for something interesting and different. I’m optimistic there will be rewarding moments to balance out those moments of frustration. Let’s learn to love imperfect robots just the way they are, and give them the intelligence to work with what they have.

An Unsuccessful First Attempt Applying Q-Learning to CartPole Environment

One of the objectives of OpenAI Gym is to have a common programming interface across all of its different environments. And it certainly looks pretty good on the surface: we reset() the environment, take actions to step() through it, and at some point we get True as a return value for the done flag. Having a common interface allows us to use the same algorithm across multiple environments with minimal modification.
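
That shared pattern looks roughly like this; a minimal sketch with a random policy standing in for a real agent (environment names vary between gym releases):

```python
import gym

env = gym.make("Taxi-v2")   # any environment name drops into this same loop
state = env.reset()
done = False
total_reward = 0
while not done:
    action = env.action_space.sample()            # random action as a placeholder policy
    state, reward, done, info = env.step(action)  # advance the environment one step
    total_reward += reward
print("Episode done, total reward:", total_reward)
```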

But “minimal” modification is not “zero” modification. Some environments are close enough that no modifications are required, but not all of them. Sometimes an environment is just not the right fit for an algorithm, and sometimes there are important details which differ from one environment to another.

One way environments differ is in their types of spaces. An environment has two: an observation_space that describes the observed state of the environment, and an action_space that outlines the valid actions an agent may choose to take. They change from one environment to another because environments tend to have different observable properties and different actions an agent can take within them.
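
For example, printing the two spaces side by side shows how Taxi and CartPole differ (output paraphrased in the comments):

```python
import gym

for name in ("Taxi-v2", "CartPole-v0"):
    env = gym.make(name)
    print(name, env.observation_space, env.action_space)

# Taxi-v2      Discrete(500)  Discrete(6)   <- one integer state, six actions
# CartPole-v0  Box(4,)        Discrete(2)   <- four floats, two actions
```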

As an exercise, I thought I’d take the simple Q-learning algorithm demonstrated to solve the Taxi environment and slam it on top of CartPole just to see what happens. To do that, I had to take CartPole’s state, which is an array of four floating point numbers, and convert it into an integer suitable for an array index.

As a naive approach, I sliced the space into discrete bins. Each of the four numbers is divided into ten bins. Each bin corresponds to a single digit from zero to nine, so the four numbers can be composed into a four-digit integer value.

To determine the size of these bins, I executed 1000 episodes of the CartPole simulation while taking random actions via action_space.sample(). The ten bins are evenly divided between the maximum and minimum values observed in this sample run, and Q-learning is off and running… doing nothing useful.
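
Here is a minimal sketch of the approach just described; the bin count and digit composition follow the text above, while the variable names and details are mine (the exact code is in the repository linked below):

```python
import gym
import numpy as np

env = gym.make("CartPole-v0")

# Run 1000 random episodes to estimate the observed range of each state variable.
observations = []
for _ in range(1000):
    obs = env.reset()
    done = False
    while not done:
        observations.append(obs)
        obs, _, done, _ = env.step(env.action_space.sample())
observations = np.array(observations)
low, high = observations.min(axis=0), observations.max(axis=0)

def to_index(obs, bins=10):
    """Compose four floats into one four-digit integer, one digit per variable."""
    digits = np.clip(((obs - low) / (high - low) * bins).astype(int), 0, bins - 1)
    return digits[0] * 1000 + digits[1] * 100 + digits[2] * 10 + digits[3]
```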

As shown in the plot above, the episode reward is always 8, 9, 10, or 11; we never got above or below this range. Also, out of 10,000 possible states, only about 50 were ever visited.

So this first naive attempt didn’t work, but it was a fun experiment. Now the more challenging part: figuring out where it went wrong, and how to fix it.

Code written in this exercise is available here.


Taking First Step Into Reinforcement Learning with OpenAI Gym

The best part about learning a new technology today is the fact that, once armed with a few key terms, a web search can unlock endless resources online. Some of them are even free! Such was the case after I looked over OpenAI Gym on its own: I searched for an introductory reinforcement learning project online and found several to choose from. I started with this page, which uses the “Taxi” environment of OpenAI Gym and, within a few lines of Python code, implements a basic Q-learning agent that can complete the task within 1000 episodes.

I had previously read the Wikipedia page on Q-learning, but a description suitable for an encyclopedia entry is not always straightforward to put into code. For example, Wikipedia describes the learning rate as a value from 0 to 1 and explains what it means at the extremes of 0 or 1, but it doesn’t give any guidance on what kinds of values are useful in real world examples. The tutorial used 0.618, and while there isn’t enough information on why that value was chosen, it served as a good enough starting point. For this and related reasons, it was good to have a simple implementation to learn from.
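
For reference, a stripped-down sketch of that kind of Q-learning loop on Taxi. The 0.618 learning rate is the tutorial’s; the discount factor and the purely greedy action selection are my simplifications, not necessarily what the tutorial did:

```python
import gym
import numpy as np

env = gym.make("Taxi-v2")
alpha = 0.618    # learning rate from the tutorial
gamma = 0.9      # discount factor: an assumed value, not from the tutorial

# One row per state, one column per action: 500 x 6 for Taxi.
Q = np.zeros((env.observation_space.n, env.action_space.n))
episode_rewards = []

for episode in range(1000):
    state = env.reset()
    done = False
    total = 0
    while not done:
        action = np.argmax(Q[state])              # greedy; tutorials often add exploration
        next_state, reward, done, _ = env.step(action)
        # Q-learning update: move the estimate toward reward plus discounted best next value.
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state
        total += reward
    episode_rewards.append(total)
```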

After I got it running, it was time to start poking around to learn more. The first question was how fast the algorithm learned to solve the problem, and for that I wanted to plot the cumulative evaluation function reward against training episodes. This was trivial with the help of PyPlot, and I obtained the graph at the top of this post. We can see a lot of learning progress within the first 100 episodes. There’s a mysterious degradation in capability around the 175th episode, but the system mostly recovered by 200. After that, there were diminishing returns until about 400, and the agent made no significant improvements after that point.
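
The plot itself takes only a few lines of PyPlot, continuing from the sketch above:

```python
import matplotlib.pyplot as plt

plt.plot(episode_rewards)          # episode_rewards from the training loop above
plt.xlabel("Episode")
plt.ylabel("Total reward")
plt.title("Q-learning on Taxi")
plt.show()
```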

This simple algorithm used an array that could represent all 500 states of the environment. With six possible actions, it was an array with 3000 entries, initially filled with zeros. I was curious how long it took for the entire problem space to be explored, and the answer seems to be roughly 50 episodes: by then there were 2400 nonzero entries, and the count never exceeded 2400. This was far faster than I had expected for exploring 2400 states, and it was also a surprise that 600 entries in the array were never used.
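
Counting the touched entries is a one-liner with NumPy, again using the Q table from the sketch above:

```python
import numpy as np

nonzero = np.count_nonzero(Q)      # Q from the training loop above
print(nonzero, "of", Q.size, "Q-table entries are nonzero")
```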

What did those 600 entries represent? With six possible actions, it implies there are 100 unreachable states in the environment. I thought I’d throw the array into PyPlot and see if anything jumped out at me:

Taxi Q plotted raw

I’m at a loss as to how to interpret this data. But I don’t know how important it is to understand right now – this is an environment whose entire problem space can be represented in memory using discrete values, and these are luxuries that quickly disappear as problems get more complex. The real world is not so easily classified into discrete states, and we haven’t even involved neural networks yet. The latter approach is referred to as DQN (Deep Q-Network), and it is still to come.

The code I wrote for this exercise is available here.

Quick Overview: OpenAI Gym

Given what I’ve found so far, it looks like Unity would be a good way to train reinforcement learning agents, and Gazebo would be used afterwards to see how they work before deploying on actual physical robots. I might end up doing something different, but they are good targets to work towards. But where would I start? That’s where OpenAI Gym comes in.

It is a collection of prebuilt environments that are free and open for hobbyists, students, and researchers alike. The list of available environments ranges across a wide variety of problem domains – from text-based activities that should in theory be easy for computers, to full-on 3D simulations like what I’d expect to find in Unity and Gazebo. Putting them all under the same umbrella, easily accessed from Python in a consistent manner, makes it simple to gradually increase the complexity of the problems being solved.

Following the Getting Started guide, I was able to install the Python package and run the CartPole-v0 example. I was also able to bring up its Atari subsystem in the form of MsPacman-v4. The 3D simulations use MuJoCo as their physics engine, which has a 30-day trial; after that it costs $500/yr for personal non-commercial use. At the moment I don’t see enough benefit to justify the cost, so the tentative plan is to learn the basics of reinforcement learning on simple 2D environments. By the time I’m ready to move into 3D, I’ll use Unity instead of paying for MuJoCo, bypassing the 3D simulation portion of OpenAI Gym.
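
The Getting Started example is only a few lines. Something close to this brings up a window showing a random CartPole run (details vary between gym versions):

```python
import gym

env = gym.make("CartPole-v0")
env.reset()
for _ in range(200):
    env.render()                          # opens a window showing the cart and pole
    _, _, done, _ = env.step(env.action_space.sample())
    if done:
        env.reset()
env.close()
```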

I’m happy OpenAI Gym provides a beginner-friendly set of standard reinforcement learning textbook environments. Now I’ll need to walk through some corresponding textbook examples on how to create an agent that learns to work in those environments.

Researching Simulation Speed in Gazebo vs. Unity

In order to train reinforcement learning agents quickly, we want our training environment to provide high throughput. There are many variables involved, but I started looking at two of them: how fast a single simulation can run, and how easy it is to run multiple simulations in parallel.

The Gazebo simulator commonly associated with ROS research projects has never been known for its speed. The Gazebo environment for the NASA Space Robotics Challenge was infamous for running far below real time, taking over 6 hours to simulate a 30 minute event. There are ways to speed up Gazebo simulation, but this forum thread implies it’s unrealistic to expect more than 2-3 times real time speed.

In contrast, Unity simulation can be cranked all the way up to 100 times real time speed. It’s not clear where the maximum limit of 100 comes from, but it is documented under limitations.md. Furthermore, it doesn’t seem to be a theoretical limit no one can realistically reach – at least one discussion on Unity ML-Agents indicates people do indeed crank the time multiplier up to 100 for training agents.

On the topic of running simulations in parallel: Gazebo is such a resource hog that it is difficult to get multiple instances running. This forum thread explains that it is possible and how it could be done, but at best it still feels like shoving a square peg into a round hole, and it would be a tough act to keep multiple Gazebos running. And we haven’t even considered the effort to coordinate learning activity across these multiple instances.

Things weren’t much better in Unity until recently. This announcement blog post describes how Unity has just picked up the ability to run multiple simulations on a single machine and, just as importantly, coordinate learning knowledge across all instances.

These bits of information further cement Unity as something I should strongly consider as my test environment for playing with reinforcement learning. Faster than real time simulation and the option of multiple parallel instances are quite compelling reasons.


Quick Overview: Unity ML Agents

Out of all the general categories of machine learning, I find myself most interested in reinforcement learning. These problems (and associated solutions) are most applicable to robotics, forming the foundation of projects like Amazon’s DeepRacer. And the fundamental requirement of reinforcement learning is a training environment where our machine can learn by experimentation.

While it is technically possible to train a reinforcement learning algorithm in the real world with real robots, it is not really very practical. First, because a physical environment will be subject to wear and tear, and second, because doing things in the real world at real time takes too long.

For that reason there are many digital simulation environments in which to train reinforcement learning algorithms. I thought this would be an obvious application of robot simulation software like Gazebo for ROS, but that turned out to be only partially true. Gazebo addresses half of the requirements: a virtual environment that can be easily rearranged and rebuilt, and is not subject to wear and tear. However, Gazebo is designed to run as a single instance, and its simulation engine is complex enough that it can fall behind real time, meaning it takes longer to simulate something than it would take to happen in the real world.

For faster training of reinforcement learning algorithms, what we want is a simulation environment that can scale up to run multiple instances in parallel and can run faster than real time. This is why people started looking at 3D game engines. They were designed from the start to represent a virtual environment for entertainment, and they were built with performance in mind for high frame rates.

The physics simulation inside Unity would be less accurate than Gazebo’s, but it might be good enough for exploring different concepts. Certainly the results would be good enough if the whole goal is to build something for a game, with no aspirations of adapting it to the real world.

Hence the Unity ML-Agents toolkit for training reinforcement learning agents inside the Unity game engine. The toolkit is nominally focused on building smart agents for game non-player characters (NPCs), but that is a big enough toolbox to offer possibilities for much more. It has definitely earned a spot on my to-do list for closer examination in the future.

Quick Overview: Autoware Foundation

ROS is a big world of open source robotics software development, and it’s hard to know everything that’s going on. One thing I’ve been doing to try to keep up is reading announcements made on ROS Discourse. I’ve seen various mentions of Autoware, but it’s been confusing trying to figure out what it is from context, so today I spent a bit of time getting myself oriented.

That’s when I finally figured out I was confused because the term can mean different things in different contexts. At the root of it all is the Autoware Foundation, the non-profit organization supporting open source research and development toward autonomous vehicles. Members range from universities to hardware vendors to commercial entities.

Autoware Foundation Banner

Under the umbrella of the Autoware Foundation is a body of research into self-driving cars using ROS 1.0 as its foundation. This package of ROS nodes (and how they weave together for self-driving applications) is collectively Autoware.AI. Much of this work is directly visible in their main Github repository. However, this body of work has a limited future, because ROS 1.0 was built with experimental research in mind, and it has some pretty severe and fundamental limitations for applications where human lives are on the line, such as self-driving cars.

ROS 2.0 is a big change motivated by the desire to address those limitations, allowing people to build robotics systems with much more stringent performance and safety requirements on top of ROS 2.0. Autoware is totally on board with this plan, and their ROS 2.0-based project is collectively Autoware.Auto. It is less exploratory/experimental and more focused on working toward a specific set of milestones running on a specific hardware platform.

There are a few other ancillary projects under the same umbrella working toward the overall goal. Some have their own catchy names, like Autoware.IO (which is “coming soon”, though it looks like a squatter has already claimed that domain), and some do not. All of this explains why I was confused trying to figure out what Autoware was from context – it is a lot of things. And definitely well worth their own section of ROS Discourse.


First CTF At LayerOne 2019

The term “Capture the Flag” can mean a lot of very different things depending on context. In the context of a competition held at a computer security conference like LayerOne 2019 this past weekend, I found a technically oriented online digital scavenger hunt. There is a list of challenges, each of which starts with a clue that will lead the intrepid hunter towards an answer (“flag”) that can be submitted to increase their score.

What does it take to solve a challenge? Well, that’s entirely up to the organizers, who can devise problems as simple or as difficult as they wish. I attended LayerOne last year, though I did not participate in last year’s CTF. What I found everywhere else at LayerOne was a fun mix of activities that start with very beginner-friendly introductions, then climb steeply to still offer a challenge to longtime veterans.

It turns out their CTF is no different. There was one very beginner-friendly challenge – it was literally a reward for reading the hint and following instructions, no technical knowledge required. [Emily] was initially intimidated but quickly contributed by employing investigative skills from her journalism background. Thanks to her skills, our CTF team did not finish dead last.

To keep the competition friendly, the targets of investigation are explicitly listed. A security challenge of “there’s a vulnerable computer somewhere nearby, find it” might be interesting, but it’s a bad idea to encourage probing every computer online: it would harm other conference attendees not participating in the CTF, and it would be bad for hotel infrastructure and even other guests at the hotel.

While it is possible to just have a list of computer skill challenges in a CTF, organizers usually put in a little more effort to build around a theme. This year’s LayerOne CTF was Star Trek themed, from the narratives presented as clues in many challenges down to the LCARS style user interface of the main site. While we didn’t get very far in our CTF attempt, I appreciate the organizers’ effort to engage beginners. Perhaps we’ll be better equipped the next time we come across one.

Mars 2020 Rover Will Carry Sawppy’s Name

Modern advances in nonvolatile memory can now pack a huge amount of data into very little space. Everyday consumers can buy a microSD card representing this advance. One way NASA has taken advantage of it is a program where people can submit their names to be carried onboard spacecraft, in the form of digital data stored on a tiny flash memory chip.

Spaceflight is still very expensive, with every gram of mass and cubic centimeter of volume carefully planned and allocated. But with flash memory chips so small and light, NASA has decided it offers enough returns on publicity to be worth carrying onboard. Such programs award social media exposure and free coverage like this very blog post!

NASA JPL’s Mars 2020 program, the most visible component of which is a not-yet-named rover successor to Curiosity, will be a participant. There will be a small flash memory chip on board with the names of people who care to submit them via the NASA web site set up for the purpose.

I don’t care very much about having my own name on board Mars 2020, but I loved the thought of having “Sawppy Rover” as one of the names aboard that actual rover heading to Mars. I’ve submitted Sawppy’s name, so hopefully a few bits of digital data representing Sawppy will accompany Mars 2020 to Mars and travel across Martian terrain.

Slowing Sawppy Response For Smoother Driving

When I first connected my cheap joystick breakout board from Amazon, its high sensitivity was immediately apparent. The full range of electrical deflection mapped to a very small range of physical motion, making it very hard to hold a position between center and full deflection. I was concerned this would become a hindrance, but it wasn’t worth worrying about until I actually got everything else up and running. Once Sawppy was driving around on joystick control, I got my own first impressions. Then, in the interest of gathering additional data points, I took my rover to an SGVHAK meet to watch other people drive Sawppy with these super twitchy controls.

These data points agree: Sawppy’s twitchy controls make it problematic to drive smoothly, and the rover actually runs between points fast enough for me to worry about causing physical damage.

There were two proposed tracks to address this:

The first thought was to replace the cheap Amazon joystick module with something that has a larger range of motion allowing finer control. [Emily] provided a joystick module salvaged from a remote control aircraft radio transmitter. Unlike arcade console joysticks, which demand fast twitch response, radio control aircraft demand smoothness, which is what Sawppy would appreciate as well. The downside of using a new joystick module is that I would have to design and build a new enclosure for it, and there wasn’t quite enough time.

So we fell back to what hardware projects are always tempted to do: fix the problem in software. I modified the Arduino control code to cap the amount of change allowed between each read of joystick values. By damping the delta between reads, Sawppy became sluggish and less responsive. But this sluggishness also allowed smoother driving, which is more important at the moment, so that’s the software workaround in place for Maker Faire.
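
The actual change lives in the Arduino sketch linked below; here is the rate-limiting idea sketched in Python for clarity, with an assumed limit value:

```python
MAX_DELTA = 5   # maximum change allowed per joystick read (an assumed value)

def rate_limited(previous, target, max_delta=MAX_DELTA):
    """Move the commanded value toward the raw joystick reading, but no
    faster than max_delta per read, trading responsiveness for smoothness."""
    delta = max(-max_delta, min(max_delta, target - previous))
    return previous + delta

# Example: a full-deflection command (100) from rest (0) ramps up gradually.
command = 0
for _ in range(5):
    command = rate_limited(command, 100)
    print(command)   # 5, 10, 15, 20, 25
```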

This code is currently in Sawppy’s Github repository starting with this code change and a few fixes that followed.

Sawppy and Makey

The mascot for Maker Faire is Makey the Robot. (Or possibly Mr. Makey to me…) As part of Sawppy’s Maker Faire experience, I wanted to make sure I got a good picture of Sawppy with Makey. I thought the mascot would surely be everywhere and it wouldn’t be hard to get a picture. The thought was not wrong… but finding a Makey of the appropriate size, sitting at the right angle for sunlight, and not otherwise swarmed with people proved to be a challenge.

The biggest and most promising Makey was a standing statue that [Emily] found and pointed out to me. Unfortunately the sunlight angle was not the best but we had fun with it. I started with an easy standard pose.

Sawppy and Makey 1

I went low to the ground to achieve a dramatic upwards camera angle.

Sawppy and Makey 2

Then [Emily] had a brilliant idea to pose Sawppy with Makey. She put one of Sawppy’s front wheels up on Makey’s pedestal, and turned the Kinect sensor bar with googly eyes to face the camera. This is a great picture.

Sawppy and Makey 3

After this picture, I looked for a smaller Makey closer to Sawppy’s size, and the best I found was on a sign directing people to something or other. The two robots are closer in proportion, but the picture doesn’t have the energy of [Emily]’s pose.

Sawppy and Makey 4

And finally, when I passed the workshop area I saw a partially disassembled Makey on display. It felt like a stage set up for something, but it was empty at the time. There was no time for questions! I looked around, caught a brief gap in the passing crowd, and snapped a picture of Sawppy here.

Sawppy and Makey 5

Meeting of Rovers at Maker Faire Bay Area 2019

The primary goal of taking Sawppy to Maker Faire Bay Area 2019 was to spread word of DIY Mars rover models to the greater maker community. But that was certainly not the only goal! There were many secondary goals, one of which was to meet [Marco], who had already received the word and built a Sawppy of his own.

Through Sawppy’s project page on Hackaday.io I learned of a few other rover builders who have built their own creations on top of my design. They are spread all around the world, and I had never met one in person until Maker Faire. Even though we each knew the other would be present, it brought a great big smile to my face when I saw [Marco]’s bright yellow Sawppy roll up to greet mine. We had hoped we might see more rovers by builders who never communicated with us, but if any were present they escaped my notice.

When walking through the area dedicated to educational maker groups, I expected to see some sign of the JPL Open Source Rover but came up empty-handed. If any completed rovers were rolling around I didn’t see them, and if any partial rovers were on table display I missed them. Though to be fair, I visited that building during one of the harder downpours, when almost every attendee was packed into the indoor spaces, making it hard to see everything with a rover underfoot.

But I did find members of the NorCal Mars Society rover project, which has a different focus than mine. They did not prioritize building a chassis that looks like the Mars rovers, instead focusing on the control systems: a camera feed for a remote operator and a control system for running simulated Mars missions. Still, we are all part of the greater family of Mars rover enthusiasts, and it was fun to have all the rovers meet up.

A Raincoat for Sawppy

Maker Faire Bay Area takes place at the San Mateo Event Center with both indoor and outdoor exhibits. As the dates got closer this year, the weather forecast called for rain. This is probably not a good thing for event attendance and the corresponding finances, but it’s also a concern for exhibitors. I, for one, did not design my roaming exhibit Sawppy for rainy weather.

The first and most obvious idea was to design and 3D print an umbrella mounting bracket for Sawppy. But I was worried about the umbrella catching wind and toppling the little rover. I was also worried about wind-driven rain flying sideways and landing on components. And lastly, carrying an extra umbrella is bulk I would rather do without.

Thus I moved on to the second idea: craft a raincoat out of plastic (garbage bags, basically) that I can secure more tightly against Sawppy’s equipment bay via magnets. Aluminum extrusion beams are not magnetic, but the M3 bolts are! This should offer marginally superior protection from the elements, and less bulk to carry around.

The project started with a large black garbage bag that I cut open to create a single sheet. [Jasmine] (who had generously hosted [Emily] and myself Thursday night) thought the opaque cover was a shame and brought out a large clear plastic bag, so people could still see inside Sawppy even when it was wearing the raincoat. I continued using the black bag as a trial run, then used it as a template to cut [Jasmine]’s gift into the final raincoat.

Sawppy raincoat template creation test

This custom-fitted raincoat only covered Sawppy’s equipment bay. To protect the rest of Sawppy, sandwich bags were placed over the four corner steering motors, and a hotel shower cap was put over Sawppy’s head. Everything wrapped up nicely around Sawppy’s neck with a strip of velcro, again from [Jasmine]’s workshop. This arrangement was lighter and more compact than an umbrella when stowed, and when deployed, Sawppy could go outdoors and romp in the rain.

Sawppy raincoat stowed

UPDATE: There’s now a video of Sawppy putting on this raincoat.

Sawppy Takes A Road Trip To Bay Area

Over the past few months Sawppy and I have been attending events in the greater Los Angeles area, spreading our love of DIY Mars rovers. But now it’s time to go to the flagship event: Maker Faire Bay Area 2019. This will be a multi-day event away from home, and that brings a new set of concerns.

The first and most obvious concern is the multi-day length. Every previous public appearance was no more than an hour’s drive from home, where I had all of my tools and parts to repair any problems at the end of the day. I can pack some tools and replacement parts, but I can’t pack everything. It would be sad if Sawppy broke partway through the event in a way I couldn’t repair. On the upside, if there’s any event where I could borrow or buy tools and components, Maker Faire Bay Area is it. Or even borrow time on someone’s 3D printer to print replacement parts!

The second concern is the trip itself. I typically place Sawppy on a car seat however it fits, and don’t worry much about placement because the rides are short. Now Sawppy is looking at over six hours of driving, with the associated six hours of bumps from the road. Would all the repetitive stress break something on Sawppy? To help alleviate this problem, I used my suitcase to fill in the rear seat footwell, creating a relatively flat surface for all six wheels to sit at even heights. To constrain Sawppy’s front-back movement, the battery tray and router were removed so everything fit snugly between the front and rear seat backs. To constrain side-to-side movement, the rear seat center armrest was lowered so Sawppy fit snugly between it and the door.

Over seven hours later, Sawppy arrived in the Bay Area and I was eager to see if everything still worked. My quick test was to reinstall the battery tray and router, then power up Sawppy to verify all components functioned. I was very relieved to see Sawppy start driving around as designed, seemingly unaffected by the long trip and ready to meet the crowds of Maker Faire.

Mounting Bracket For Sawppy Wireless Router

A natural part of a project like Sawppy the Rover is a long “to do” list. While its ever-growing nature might imply otherwise, things actually do get done when faced with motivation. For Maker Faire Bay Area 2019 my primary motivation was to get a wired controller up and running as backup in case of overcrowded radio frequencies. And now that I have a working (if imperfect) wired controller, I wanted to come back and tidy up the wireless side of the house.

After that initial episode of fighting the crowded 2.4 GHz band, Sawppy received a wireless router upgrade in the form of a dual-band Asus RT-AC1200. (Selected via the rigorous criteria of “it was on sale at Fry’s that day.”) Not only did this give Sawppy greater range when operating on 2.4 GHz, it also meant Sawppy could operate on the 5 GHz band, where there are far more channels to share in crowded environments.

So that was good, and a wired controller backup is even better, but there’s a neglected part I wanted to address before taking Sawppy in front of a big crowd: when I initially hooked up that Asus router, I connected all the wires and placed it in the equipment bay. No mount, just gravity. I intended to integrate the router properly some day, and today is that day.

Sawppy wireless router

I want to mount the router at the rear of Sawppy, above most of the equipment bay, because that’s where the real Mars rover Curiosity houses its communications equipment. Ever since I’ve had WiFi routers at home, they seem to have stayed roughly the same shape and size even though electronics have generally gotten smaller and more power efficient. So the first question I checked: is the box mostly empty space, such that I could transplant compact internals onto the rover?

Opening the lid did reveal some empty space, but not as much as I had thought there might be.

Sawppy wireless router opened

Furthermore, it doesn’t look like the antennae are designed to be removable. They’re firmly fixed to the enclosure, and their wires are soldered to the board.

Sawppy wireless router PCB

Seeing how unfriendly this design is to a transplant operation, I abandoned the idea of extracting the internals. We’ll use the case as-is, starting with designing and printing a base for the router. I originally intended to fasten the base using the original router enclosure screws, but changed plans to M3 screws, like the rest of Sawppy, after I dropped one.

Sawppy wireless router new base

This base has two dovetails which can then fit in brackets that clip onto Sawppy’s extrusion beams.

Sawppy wireless router mounted

And voila! My wireless router is now rigidly mounted to Sawppy’s chassis instead of bouncing around in a tangle of wires like it has for the past few months. This is much more respectable to present to the other attendees of Maker Faire Bay Area.

Sawppy Roving With Wired Handheld Controller

I now have a basic Arduino sketch for driving Sawppy with a joystick, a handheld controller built around an Arduino Nano and that joystick, and an input jack for interfacing with Sawppy. It’s time to put it all together for a system integration test.

Good news: Sawppy is rolling with the new wired controller! Now, if there’s too much electromagnetic interference with Sawppy’s standard WiFi control system, we have a backup wired control option. This was the most important upgrade to get in place before attending Maker Faire Bay Area. As this is the flagship event, I expect plenty of wireless technology in close proximity at San Mateo and wanted this wired backup available as an option.

This successful test says we’re in good shape electrically and mechanically, at least in terms of working as expected. However, “working as expected” also includes very twitchy movement due to the super sensitive joystick module used. A very small range of physical joystick movement maps to the entire range of electrical values. In practice this means it is very difficult to drive Sawppy gently when using this controller.

At the very minimum, it doesn’t look very good for Sawppy to be seen as jittery and twitchy. Sharp motions also place stress on Sawppy’s mechanical structure. I’m not worried about suspension parts breaking, but I am worried about the servos. The steering servos are under a lot of stress, and their couplers may break. And it’s all too easy to command a max-throttle drag racing start, whose sudden surge of current may blow the fuse.

I had wanted to keep the Arduino sketch simple, which meant directly mapping joystick movement to Sawppy motion. But translating the sensitive joystick’s motion directly into overeager rover motion is not a good way to go. I need to add code to smooth out movement for the sake of Sawppy’s health.

Input Jack For Sawppy Wired Controller

I’ve got my handheld wired controller built and assembled; now it’s time to work on the other end and add a control input jack to Sawppy the Rover.

Again I wanted something relatively robust: I don’t want a tug on the wire to tear apart Sawppy’s internals. Fortunately the whole “tugging on wires” issue is well known, and I could repurpose someone else’s solution for my project. In this particular case, I’m deploying the wire and jack salvaged from an Xbox 360 steering wheel. This cable formerly connected the foot pedal unit to the main steering unit. The foot pedal unit is subject to a lot of stomping abuse, which can shift the position of the pedals and result in tugs on this wire. It appeared ideal for handling the stresses I want it to endure in my application.

The jack bears a strong resemblance to a standard landline telephone jack and may actually be the same thing, but since I salvaged both the jack and a compatible wire for my project, it didn’t matter whether it was actually the same as phone jacks and lines. Using calipers, I measured the jack’s dimensions and created a digital representation in Onshape CAD. I then modeled the rest of the bracket around the jack and printed its two parts.

Sawppy joystick jack printing

Here’s the mounting bracket front and back pieces along with salvaged jack, whose wires now have freshly crimped connectors for interfacing with LewanSoul BusLinker debug board.

Sawppy joystick jack unassembled

When assembled, the bracket grabs onto one of Sawppy’s aluminum extrusion beams. Tugs on the wire should transfer that force to the aluminum beam instead of pulling wires out of the board.

Sawppy joystick jack assembled

I installed this jack between two major pieces in Sawppy’s structure. This ensures that it will not slide when tugged upon, which should help with strength.

Sawppy joystick jack installed

Sawppy Wired Controller Enclosure

I now have an assembly of circuit boards with all the electronics I need to create a wired controller for Sawppy the Rover. Now I need an enclosure to make it easy to hold, protecting my skin against punctures by header pins and protecting the soldered wires from damage.

The first task was to measure dimensions and iterate through designs for holding the assembly in 3D printed plastic. It evolved into two separate pieces that mate with the left and right sides of my prototype circuit board.

The next step was to design and print two small parts to hold on to the wire. The idea is to have them take some stress so tugs on the wire do not rip the 4-pin JST-XH connector from my circuit board. And finally, an exterior shell wraps around all of the components.

Sawppy handheld controller unassembled

The exterior shell was an opportunity to play with creating smooth, comfortable, hand-conforming organic shapes. Designing this in Onshape was a bit of a square-peg-in-round-hole situation: standard engineering CAD is tailored for precision and accuracy, not organic shapes. That’s the domain of 3D sculpting tools, but I made do with what I had available in Onshape.

Given a bit more time, I could probably incorporate all the design lessons into a single 3D printed piece instead of five separate pieces, but time is short and this will suffice for Maker Faire Bay Area 2019.

Now that I have one end of my wired serial communication cable, it’s time to look at the other end.

Sawppy handheld controller assembled

Arduino Nano Forms Core Of Sawppy Wired Controller

At this point in the project, I have an Arduino sketch that reads an analog joystick’s position and calculates speed and position commands for Sawppy’s ten serial bus servos. Now I turn my attention back to the hardware, which up until this point has been a collection of parts connected by jumper wires. Good for experimental prototyping, not good for actual use in the field.

The biggest switch is from an Arduino Uno clone to an Arduino Nano clone. The latter is far smaller and allows me to package everything inside a single handheld assembly. Both Arduinos are based on the same ATmega328 chip and offer all the input and output I need for this project. Typically, beginners start with an Uno because of its selection of compatible Arduino shields, but that is not a concern here.

This specific Arduino Nano is mounted on a perforated and plated prototype circuit board, placed at one end of the board to keep its USB port accessible. Two other components are soldered to the same prototype board: a 4-pin JST-XH connector for power and serial communications, and an analog joystick module.

My mess of jumper wires was then replaced by short segments of wire soldered in place for greater reliability. This is a relatively simple project, so there aren’t very many wire connections, and they all easily fit on the back.

Arduino nano with joystick on PCB back

In theory, the Arduino sketch could be seamlessly switched over to this board. In practice, I saw bootloader errors when I plugged the board in. It turns out that for this particular clone, I needed to select the “Tools” / “Processor” / “ATmega328P (Old Bootloader)” option in the Arduino IDE. As a beginner I’m not completely sure what that means, but I noticed sketch uploads are significantly slower than on the Uno. My source code was unchanged and it all still worked. A few test drive sessions verified this little handheld assembly could drive Sawppy as designed.

Next step: an enclosure.