Robot Brain Candidate: Up Board

When I did my brief survey of potential robotic brains earlier, I was dismissive of single-board computers competing with the Raspberry Pi. Every one I knew about had sales pitches about superior performance relative to the Pi, but none could match the broad adoption and hence software library support of a Raspberry Pi. At the time, the only SBCs I thought might be worthwhile were the Nvidia Jetson boards with their specialized hardware. Other than that, I believed the growth path for robot brains that can run ROS was pretty much restricted to x64-based platforms like Chromebooks, Intel NUCs, and full-fledged laptop computers.

What I didn’t know at the time was that someone had actually put an Intel CPU on a Raspberry Pi sized circuit board computer: the Up board.

Image from http://www.up-board.org

Well, now. This is interesting!

At first glance they even worked to keep the footprint of a Raspberry Pi, including the 40-pin GPIO headers and USB, Ethernet, and HDMI ports. However, the power and audio jacks are different, and the camera and display headers are gone.

It claims to run Windows 10, though it’s not clear whether they mean the restricted IoT edition or the full desktop OS. Either way it shouldn’t be too much of a hurdle to get Ubuntu running ROS on one of these things. While the 40-pin GPIO claims to match a Raspberry Pi’s, it’s not clear how it would be accessed from an operating system not designed for a Raspberry Pi.

And even more encouragingly: the makers of this board are not content to be a one-hit wonder. They’ve branched out to other tiny form factors that give us the ability to run x86 software.

The only downside is that the advantage is size, not computational power. None of the CPUs I’ve seen mentioned are very fast. At best, they are roughly equivalent to the one in my Dell Inspiron 11 3180, just tinier. Still, these boards offer some promising approaches to robot hardware. It’s worth revisiting if I get stuff running on my cheap Dell but need a smaller board.

ROS Notes: Hector SLAM Creates 2D Map From 3D Motion

While I was trying to figure out how best to declare ROS coordinate transform frames of reference for Phoebe, I came across a chart on the hector_slam page detailing multiple frames of reference for a robot base. It turned out to be unnecessary for the GMapping algorithm I’m currently using for Phoebe, but it made me curious enough to take a closer look.

I have yet to try using hector_slam on Phoebe. I’m only aware of it as an alternative SLAM algorithm to gmapping. The two systems make different trade-offs, and one might work better than the other under different circumstances. I knew hector_slam was the usual answer when people asked if it’s possible to perform SLAM without wheel odometry data, but I suspected there had to be more to it. The requirement for multiple frames was my entry point to understanding what’s going on.

My key takeaway after reading the paper: gmapping is designed for a robot working strictly in flat 2D. In contrast, hector_slam is designed for platforms that may move about in three dimensions: not just flat 2D, but also ground vehicles traversing rough terrain (accounting for pitch and roll motions) and even robotic aircraft. Not requiring wheel odometry data is obviously a big part of supporting air vehicles!

But if hector_slam is so much more capable, why isn’t everyone using it? That’s when we get to the flip side: for good results it requires a LIDAR with a high scan rate and high accuracy. Locating the robot without wheel odometry is made possible by frequent distance data refreshes (40 Hz was mentioned in the paper). Unfortunately, this also dims the prospects for use aboard Phoebe, as the Neato LIDAR is neither fast (4 Hz) nor accurate.

And despite its ability to process data from robotic platforms with 3D motion, hector_slam generates a 2D map. This implies navigation algorithms using that map will not have access to the captured 3D data. When plotting a path on such a map, a short but rough route would look better to the planner than a longer, smooth, flat route. This makes me skeptical hector_slam will be interesting for Sawppy, but that’s to be determined later.

Phoebe Is Navigating Autonomously

I’ve been making progress (slowly but surely) through the ROS navigation stack tutorial to get it running on Phoebe, and I’ve finally reached the finish line.

After all the configuration YAML files were created, they were tied together into a launch file as parameters for the ROS node move_base. For now I’m keeping the pieces in independent launch files, so move_base is run independently of Phoebe’s chassis functionality launch file and AMCL (launched using its default amcl_diff.launch).

Once they were all running, a new RViz configuration was created to visualize the local costmap and the AMCL particle cloud. And it was a huge mess! I was disheartened for a few seconds before I remembered seeing a similar mess when I first looked at navigation in a Gazebo simulation of TurtleBot 3 Burger. Before anything would work, I had to set the initial “2D Pose Estimate” to locate Phoebe on the map.

Once that was done, I set a “2D Nav Goal” via RViz, and Phoebe started moving! Looking at RViz I could see the map along with LIDAR scan plots and Phoebe’s digital representation from URDF. Those are all familiar from earlier. New to the navigation map is a planned path plotted in green, taking into account the local costmap in gray. AMCL contributed the rest of the information on screen, with individual estimates drawn as little yellow arrows and the estimated position in red.

Phoebe Nav2D 2

It’s pretty exciting to have a robot with basic intelligence for path planning, and not just a fancy remote control car.
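As an aside, the same goal can also be sent programmatically instead of clicking in RViz. Below is a minimal sketch using the standard move_base action interface; the coordinates are made-up numbers for illustration, not a real spot on my map.

    #!/usr/bin/env python
    # Minimal sketch: send one navigation goal to move_base, the programmatic
    # equivalent of clicking "2D Nav Goal" in RViz. Coordinates are made up.
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    rospy.init_node('send_nav_goal')
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 1.0     # meters in the map frame
    goal.target_pose.pose.position.y = 0.5
    goal.target_pose.pose.orientation.w = 1.0  # face along +X

    client.send_goal(goal)
    client.wait_for_result()
    rospy.loginfo("move_base finished with state %d", client.get_state())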

Of course, there’s a lot of tuning to be done before things actually work well. Phoebe is super cautious and conservative about navigating obstacles, exhibiting a lot of halting and retrying behavior in narrower passageways, even when there are still 10-15cm of clearance on each side. I’m confident there are parameters I could tune to improve this.

Less obvious is what I need to adjust to increase Phoebe’s confidence in relatively wide-open areas. Phoebe would occasionally brake to a halt and hunt around a bit before resuming travel, even when there’s plenty of space. I didn’t see an obstacle pop up on the local costmap, so it’s not clear what triggered this behavior.

(Cross-posted to Hackaday.io)

Navigation Stack Setup for Phoebe

Section 1 “Robot Setup” of this ROS Navigation tutorial page confirmed Phoebe met all the basic requirements for the standard ROS navigation stack. Section 2 “Navigation Stack Setup” is where I need to tell that navigation stack how to run on Phoebe.

I had already created a ROS package for Phoebe earlier to track all of my necessary support files, so getting navigation up and running is a matter of creating a new launch file in my existing directory for launch files. To date all of my ROS node configuration has been done in the launch file, but ROS navigation requires additional configuration files in YAML format.

First up in the tutorial were the configuration values common to both the local and global costmaps. This is where I saw the robot footprint definition; I’m a little sad it’s not pulled from the URDF I just put together. Since Phoebe’s footprint is fairly close to a circle, I went with the robot_radius option instead of declaring a footprint with an array of [x,y] coordinates. The inflation_radius parameter sounds like an interesting one to experiment with later, pending Phoebe’s performance. The observation_sources parameter is interesting – it implies the navigation stack can utilize multiple sources simultaneously. I want to come back later and see if it can use a Kinect sensor for navigation. For now, Phoebe has just a LIDAR, so that’s how I configured it.

For global costmap parameters, the tutorial values look equally applicable to Phoebe, so I copied them as-is. For the local costmap, I reduced the width and height of the costmap window, because Phoebe doesn’t travel fast enough to need to look at 6 meters of surroundings, and I hoped reducing it to 2 meters would reduce the computation workload.
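As a sanity check that these YAML values actually land on the parameter server, a few lines of Python can read them back after move_base launches. The parameter paths below assume the namespace layout used in the tutorial’s launch file, so adjust them if your layout differs.

    # Read back a few navigation parameters to confirm the YAML files loaded.
    # Parameter paths assume the tutorial's launch file namespace layout.
    import rospy

    rospy.init_node('check_nav_params')
    for name in ['/move_base/local_costmap/width',
                 '/move_base/local_costmap/height',
                 '/move_base/local_costmap/robot_radius',
                 '/move_base/global_costmap/robot_radius']:
        print("%s = %s" % (name, rospy.get_param(name, '<not set>')))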

For base local planner parameters, I reduced maximum velocity until I have confidence Phoebe isn’t going to get into trouble speeding. The key modification here from tutorial values is changing holonomic_robot from true to false. Phoebe is a differential drive robot and can’t strafe sideways as a true holonomic robot can.

The final piece of section 2 is AMCL configuration. Earlier I had tried running AMCL on Phoebe without specifying any parameters (using defaults for everything) and it seemed to run without error messages, but I don’t yet have the experience to tell good AMCL behavior from bad. Reading this tutorial, I see the AMCL package has pre-configured launch files. The tutorial called for amcl_omni.launch. Since Phoebe is a differential drive robot, I should use amcl_diff.launch instead. The RViz plot looks different than when I ran AMCL with all default parameters, but again, I don’t yet have the experience to tell whether it’s an improvement. Let’s see how this runs before modifying parameters.

(Cross-posted to Hackaday.io.)

Checking If Phoebe Meets ROS Navigation Requirements

Now that basic coordinate transform frames have been configured with the help of URDF and robot state publisher, I moved on to the next document: the robot setup page. This one is actually listed slightly out of order on the ROS navigation page, placed third behind the Basic Navigation Tuning Guide. I had started reading the tuning guide and saw in its introduction that it assumes people have already read the robot setup page. It’s not clear why they are listed out of order, but clearly robot setup needs to come first.

Right up front in Section 1 “Robot Setup” was a very helpful diagram labelled “Navigation Stack Setup” showing the major building blocks for an autonomously navigating ROS robot. Even better, these blocks are color-coded by source: white blocks are part of the ROS navigation stack, gray blocks are optional components outside of that stack, and blue indicates robot-specific code to interface with the navigation stack.

Navigation Stack Setup diagram from ROS documentation

This gives me a convenient checklist to make sure Phoebe has everything necessary for ROS navigation; a quick verification sketch follows the list. Clockwise from the right, they are:

  • Sensor source – check! Phoebe has a Neato LIDAR publishing laser scan sensor messages.
  • Base controller – check! Phoebe has a Roboclaw ROS node executing movement commands.
  • Odometry source – check! This is also provided by the Roboclaw ROS node reading from encoders.
  • Sensor transforms – check! This is what we just updated, from a hard-coded published transform to one published by robot state publisher based on information in Phoebe’s URDF.
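Here is the quick verification sketch mentioned above: list the currently published topics and confirm the expected ones are present. (The transform tree itself is easier to inspect visually with rosrun tf view_frames.)

    # Rough readiness check: confirm the topics the navigation stack expects
    # are being published. Assumes Phoebe's chassis launch file is running.
    import rospy

    rospy.init_node('nav_readiness_check')
    published = {name for name, _type in rospy.get_published_topics()}
    for topic in ['/scan', '/odom', '/tf']:
        print("%-6s %s" % (topic, 'OK' if topic in published else 'MISSING'))
    # /cmd_vel is subscribed to (not published) by the base controller, so
    # check it separately with: rostopic info /cmd_vel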

That was the easy part. Section 2 was more opaque to this ROS beginner. It gave an overview of the configuration necessary for a robot to run navigation, but the overview assumes a level of ROS knowledge that’s at the limit of what I actually have in my head right now. It’ll probably take a few rounds of trial and error before I get everything up and running.

(Cross-posted to Hackaday.io)

Phoebe Digital Avatar in RViz

Now that Phoebe’s URDF has been figured out, it has been added to the RViz visualization of Phoebe during GMapping runs. Before this point, Phoebe’s position and orientation (called a ‘pose‘ in ROS) were represented by a red arrow on the map. That was sufficient to get us this far, but a generic arrow is not enough for proper navigation because it doesn’t represent the space occupied by Phoebe. Now, with the URDF, the volume of space occupied by Phoebe is also visually represented on the map.

This is important for a human operator to gauge whether Phoebe can fit in certain spaces. While I was driving Phoebe around manually, it was a guessing game whether the red arrow would fit through a gap. Now, with Phoebe’s digital avatar on the map, it’s a lot easier to gauge clearance.

I’m not sure if the ROS navigation stack will use Phoebe’s URDF in the same way. The primary reason the navigation tutorial pointed me to URDF is to get Phoebe’s transforms published properly in the tf tree using the robot state publisher tool. It’s pretty clear robot footprint information will be important for navigation for the same reason it was useful for human operation; I just don’t know if it’s the URDF doing that work or if I’ll end up defining the robot footprint some other way. (UPDATE: I’ve since learned that, for the purposes of ROS navigation, the robot footprint is defined some other way.)

In the meantime, here’s Phoebe by my favorite door to use for distance reference and calibration.

Phoebe By Door Posing Like URDF

And here’s the RViz plot with a digital representation of Phoebe by the door, showing the following:

  • LIDAR data in the form of a line of rainbow colored dots, drawn at the height of the Neato LIDAR unit. Each dot represents a LIDAR reading, with color representing the intensity of each return signal.
  • Black blocks on the occupancy map, representing space occupied by the door. Drawn at Z height of zero representing ground.
  • Light gray on the occupancy map representing unoccupied space.
  • Dark gray on the occupancy map representing unexplored space.

Phoebe By Door

(Cross-posted to Hackaday.io)

Next Phoebe Project Goal: ROS Navigation

When I started working on my own TurtleBot variant (before I even decided to call it Phoebe) my intention was to build a hardware platform to get first-hand experience with ROS fundamentals. Phoebe’s Hackaday.io project page subtitle declared it a ROS robot for <$250 capable of SLAM. Now that Phoebe can map its surroundings using the standard ROS SLAM library ‘gmapping‘, that goal has been satisfied. What’s next?

One disappointment I found with existing ROS SLAM libraries is that the tutorials I’ve seen (such as this and this) expect a human to drive the robot during mapping. I had incorrectly assumed the robot would autonomously explore its space, but “simultaneous localization and mapping” only promises localization and mapping – nothing about deciding which areas to map, or how to go about it. That is left to the human operator.

When I played with SLAM code earlier, I decided against driving the robot manually and instead invoked an existing module that takes a random walk through available space. A search on the ROS Answers web site for something more sophisticated than a random walk resulted in multiple pointers to the explore module, but that code hasn’t been maintained since ROS “groovy”, four versions ago. So one path forward is to take up the challenge of either updating explore or writing my own explorer.

That might be interesting, but once a map is built, what do we do with it? The standard ROS answer is the robot navigation stack. This collection of modules is what gives a ROS robot the ability to plan a path through a map, watch its progress through that plan, and update the plan in reaction to unexpected elements in the environment.

At the moment I believe it would be best to learn about the standard navigation stack and get that up and running on Phoebe. I might return to the map exploration problem later, and if so, seeing how map data is used for navigation will give me better insights into what would make a better map explorer.

(Cross-posted to Hackaday.io)

Robot Disorientation Traced To Timing Mismatch

Once the Roboclaw ROS Node‘s wheel parameters were updated to match the new faster motors, Phoebe went on a mapping run. And the results were terrible! This is not a complete surprise. Based on previous experimentation with this LIDAR module, I’ve seen its output data distorted by motion. It doesn’t take a lot of motion – even a normal human walking pace is enough to cause visible distortion. So now that Phoebe’s motors are ten times faster than before, that extra speed also adds extra distortion.

While this is a problem, it might not be the only problem behind the poor map. I decided to dig into an anomaly I noticed while using RViz to calibrate wheel data against LIDAR data: there’s some movement that can’t be entirely explained by a LIDAR spinning at 240 RPM.

The idea behind this odometry vs. LIDAR plot on RViz is to see if wheel odometry data agrees with LIDAR data. My favorite calibration surface is a door – it is a nice flat surface for LIDAR signal return, and I could swing the door at various angles for testing. In theory, when everything lines up, movement in the calculated odometry would match LIDAR observed behavior, and everything that is static in the real world (like the door) would be static in the plot as well.

In order to tune the base_width parameter, I looked at the position of the door before turning and its position after turning, adjusting base_width until they lined up, indicating odometry matched LIDAR. But during the turn, the door moved relative to the robot before settling at the expected position.

When Phoebe started turning (represented by the red arrow), the door jumped out of position. After Phoebe stopped turning, the door snapped back into position. This means non-moving objects appear to the robot as moving objects, which would confuse any mapping algorithm.

Odom LIDAR Mismatch No Hack

I chased down a few dead ends before realizing the problem is a timing issue: the timestamps on the LIDAR data messages don’t line up with the timestamps on the odometry messages. At the moment I don’t have enough data to tell who is at fault, the LIDAR node or the Roboclaw node, just that there is a time difference. Experimentation indicated the timing error is roughly on the order of 100 milliseconds.

Continuing the experiment, I hard-coded a 100ms timestamp delta. This is only a hack which does not address the underlying problem. With this modification, the door still moves around but at least it doesn’t jump out of place as much.
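To illustrate the shape of the hack (this is not the actual patch, and the topic names here are placeholders): republish the LIDAR scans with their timestamps shifted by the experimentally found delta before they reach the mapping code.

    # Illustration of the 100ms band-aid: republish LaserScan messages with
    # their timestamps shifted by a fixed delta. Topic names are placeholders.
    import rospy
    from sensor_msgs.msg import LaserScan

    OFFSET = rospy.Duration(0.1)   # roughly 100ms, found by experimentation

    def shift_timestamp(scan):
        scan.header.stamp += OFFSET
        pub.publish(scan)

    rospy.init_node('scan_time_shift')
    pub = rospy.Publisher('/scan_shifted', LaserScan, queue_size=10)
    rospy.Subscriber('/scan_raw', LaserScan, shift_timestamp)
    rospy.spin()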

Odom LIDAR Mismatch 100ms hack

This timing error went unnoticed when Phoebe was crawling extremely slowly, but at Phoebe’s higher speed the error could no longer be ignored. Ideally all of the objects on the RViz plot would stay still, but we clearly see nonuniform distortion during motion. There may be room for further improvement, but I don’t expect I’ll ever get ideal performance out of such an inexpensive LIDAR unit. Phoebe may end up just having to go slowly while mapping.

(Cross-posted to Hackaday.io)

Using RViz to Validate Motor Movement Against LIDAR Data

The Roboclaw ROS node is responsible for calculating odometry information based on encoder values read from each wheel. In order to translate them into standard ROS units, it needs two parameters:

  • ticks_per_meter : To calculate physical distance traversed by each wheel, the code needs to know how many encoder counts it takes for the wheels to travel one meter.
  • base_width : To calculate how much the robot has turned, the code needs to know how far apart the two drive wheels are placed.

Both of these values needed updating after upgrading Phoebe to the second chassis. I got impatient with the slow speed of the first draft, so the motor gearboxes were swapped for ones that deliver less torque and precision but a much faster top speed. This change of gearing requires a new ticks_per_meter value. And the second chassis is slightly wider than the first, which obviously changes the base_width value. Both of these could be calculated on paper, but that is only a starting point. The real world is always a little different from theory and needs a little adjustment.
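For reference, the paper calculation for a starting point goes something like the sketch below. Every number in it is a placeholder for illustration, not Phoebe’s actual gearbox or wheel spec.

    # Back-of-envelope starting values. Every number here is a placeholder
    # for illustration, not Phoebe's actual measurement.
    import math

    counts_per_motor_rev = 48.0    # quadrature encoder counts per motor rev
    gearbox_ratio = 34.0           # motor revolutions per wheel revolution
    wheel_diameter = 0.065         # meters

    counts_per_wheel_rev = counts_per_motor_rev * gearbox_ratio
    ticks_per_meter = counts_per_wheel_rev / (math.pi * wheel_diameter)
    print("ticks_per_meter starting guess: %.0f" % ticks_per_meter)

    # base_width starts as the measured center-to-center distance between
    # the two drive wheels, in meters.
    base_width = 0.20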

The easiest place to start is ticks_per_meter. Phoebe is placed on a flat surface, next to a ruler, and commanded to drive straight forward for a short distance. During this activity, the odometry data is monitored with rostopic echo /odom to see how far Phoebe thinks it has actually gone. If the ruler said Phoebe didn’t go as far as it thought it did, increase ticks_per_meter. If Phoebe overshot, reduce that value.

Once wheel travel was verified with the ruler, LIDAR was added to the picture. RViz was set up to plot odometry data and LIDAR data together, and Phoebe was placed facing a door serving as a large flat reference surface. The red arrow represents where Phoebe thinks it is, facing the horizontal line representing its LIDAR’s view of the door.

Phoebe Door Test 1 Start

Then Phoebe was commanded to move backwards 0.5m. If odometry data agrees with LIDAR data, the movement away from door and the door’s distance from robot would match, canceling each other out so the line representing the door would not move on the RViz plot. It looks like the ruler calibration worked out well, as we’re only a tiny bit off.

Phoebe Door Test 2 Back 0.5m

Once distance was verified, we move on to rotation. Phoebe was commanded to make a 90-degree clockwise turn to see if the LIDAR plot agreed. Again, ideally the turn calculated from odometry would agree with the LIDAR plot, leaving the door in roughly the same place on the RViz plot. Ideally.

Phoebe Door Test 3 90 Deg Right Bad

In this case, however, the door shows a minor clockwise rotation. This change of position in LIDAR data indicates Phoebe didn’t turn as far as it thought it did. To adjust parameters so Phoebe’s calculations better align with actual motion, we can increase the base_width parameter. And if the door had rotated the other way (Phoebe turned further than it thought it did) the parameter should be decreased.

(Cross-posted to Hackaday.io)

Phoebe vs. Office Chair Round 2

Phoebe was built to roam my house, but the first draft chassis was unable to do so effectively due to a few problems that the second chassis aimed to solve. The first one was ground clearance, which was solved by raising the main chassis and sloping the bottom of the electronics tray. Sloping that leading edge gives Phoebe a better approach angle for smoothly transitioning between floor surfaces.

The second major problem was the LIDAR scanner’s height: it was too high to see the legs of an office chair. Hence the other major goal of the second chassis was to lower the LIDAR mounting point and hopefully bring an office chair’s legs into its plane of view.

Placing the newly rebuilt Phoebe next to the chair looks promising at first glance. Unlike the taller first chassis, the LIDAR’s horizontal plane of sight is now low enough it should be able to see the legs.

Phoebe vs Chair Round 2

The proof is in the occupancy grid, and the RViz plot shows that Phoebe can now see the legs of the chair blocking its way.

Phoebe Sees Chair Legs

It’s not a very solid detection, though. Something about the surface texture and/or angle of the plastic results in a weak laser return. And there’s the risk of a leg going undetected if approached end-on, as the dark, sloped, rounded end of a chair leg is nearly invisible to the LIDAR.

But it’s a huge improvement from before, where the LIDAR was too high to see any part of the starfish pattern. It’s good enough for us to proceed with the next task: integrate Phoebe’s new faster wheel drive motors into the system.

(Cross-posted to Hackaday.io)

ROS Notes: Map Resolution

Now that I’m driving my TurtleBot-derived robot Phoebe around and mapping my house, I’m starting to get some first-hand experience with robotic mapping. One of the most fascinating bits of revelation concerns map resolution.

When a robot launches the Gmapping module, one of the parameters (delta) dictates the granularity of the occupancy grid in meters. For example, setting it to 0.05 (the value used in TurtleBot 3 mapping demo) means each square in the grid is 0.05 meters or 5 cm on each side.

This feels reasonable for a robot that roams around a household. Large objects like walls would be fine, and the smallest common obstacles in a house, like table legs, can reasonably fill a 5cm x 5cm cell on the occupancy grid. If the grid cells were any larger, the map would have trouble properly accounting for chair and table legs.

Low Resolution Sharp Map 5cm

So if we make the grid cells smaller, we would get better maps, right?

It’s actually not that simple.

The first issue stems from computation load. Increasing resolution drastically increases the amount of memory consumed to track the occupancy grid, and increases the computation required to keep grid cells updated. The increase in memory consumption is easy to calculate: if we halve the grid granularity from 5cm to 2.5cm, each 5cm square becomes four 2.5cm squares, quadrupling the memory requirement for our occupancy grid. Tracking and maintaining this map is a lot more work. In my experience the mapping module has a much harder time matching LIDAR scan data to the map, causing occasional skips in data processing that end up reducing map quality.
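A quick bit of arithmetic makes that growth concrete, here for a hypothetical 10 meter by 10 meter map:

    # Occupancy grid cell count for a hypothetical 10m x 10m map, showing how
    # halving the 'delta' granularity quadruples the number of cells.
    map_width = 10.0    # meters
    map_height = 10.0   # meters

    for delta in (0.05, 0.025):
        cells_x = int(round(map_width / delta))
        cells_y = int(round(map_height / delta))
        print("delta = %.3f m -> %d cells" % (delta, cells_x * cells_y))
    # delta = 0.050 m -> 40000 cells
    # delta = 0.025 m -> 160000 cells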

The second issue stems from sensor precision. An inexpensive LIDAR like my unit salvaged from a Neato robot vacuum isn’t terribly precise, returning noisy distance readings that vary over time even when the robot and the obstacle are both standing still. When the noise exceeds map granularity, the occupancy grid starts getting “fuzzy”. For example, a solid wall might no longer be a single surface, but several nearby surfaces.

High Resolution Fuzzy Map 1cm

As a result of those two factors, arbitrarily increasing the occupancy map resolution can drastically increase the cost without a worthwhile improvement in return. This downside of going “too small” was less obvious than the downside of going “too large.” There’s a “just right” point in between that makes the best trade-offs. Finding the right map granularity to match robots and their tasks is going to be an interesting challenge.

(Cross-posted to Hackaday.io)

Phoebe The Cartographer

Once odometry calculation math in the Roboclaw ROS driver was fixed, I could drive Phoebe (my TurtleBot variant) around the house and watch laser and odometry data plotted in RViz. It is exciting to see the data stream starting to resemble that of a real functioning autonomous robot! And just like all real robots… the real world does not match the ideal world. Our specific problem of the day is odometry drift: Phoebe’s wheel encoders are not perfectly accurate. Whether from wheel slippage, small debris on the ground, or whatever else, they cause the reported distance to be slightly different from actual distance traveled. These small errors accumulate over time, so the position calculated from odometry becomes less and less accurate as Phoebe drives.

The solution to odometry drift is to supplement encoder data with other sensors, using the additional information to correct for position drift. In the case of Phoebe and her TurtleBot 3 inspiration, that comes courtesy of the scanning LIDAR. If Phoebe can track LIDAR readings over time and build up a map, that information can also be used to locate Phoebe on the map. This class of algorithms is called SLAM, for Simultaneous Localization and Mapping. And because they’re fundamentally similar robots, it was straightforward to translate TurtleBot 3’s SLAM demo to my Phoebe.

There are several different SLAM implementations available as ROS modules. I’ll start with Gmapping because that’s what the TurtleBot 3 demo used. As input, this module needs LIDAR data in the form of the ROS topic /scan and the transform tree published via /tf, where it finds the geometric relationship between odometry (which I just fixed), base, and laser. As output, it generates an “occupancy grid”, a big table representing the robot’s environment in terms of open space, obstacles, and unknown areas. Most importantly for our purposes, it also generates a transform mapping the map coordinate frame to the odom coordinate frame. This coordinate transform is the correction factor to be applied on top of the odometry-calculated position, generated by comparing LIDAR data to the map.
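That map -> odom correction can be watched directly while gmapping runs, which is a nice way to build intuition for what the algorithm is doing. Here’s a small observation-only sketch using the tf listener; nothing in the mapping pipeline needs it.

    # Observe the map -> odom correction transform published by gmapping.
    # Purely for curiosity; the navigation stack consumes this via tf itself.
    import rospy
    import tf

    rospy.init_node('watch_map_correction')
    listener = tf.TransformListener()
    rate = rospy.Rate(1.0)
    while not rospy.is_shutdown():
        try:
            trans, rot = listener.lookupTransform('map', 'odom', rospy.Time(0))
            rospy.loginfo("map->odom correction: x=%.3f y=%.3f", trans[0], trans[1])
        except (tf.LookupException, tf.ConnectivityException,
                tf.ExtrapolationException):
            pass
        rate.sleep()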

Once all the pieces are in place, Phoebe can start mapping out its environment and also correct for small errors in odometry position as it drifts.

SLAM achievement: Unlocked!

Phoebe GMapping

(Cross-posted to Hackaday.io)

Roboclaw ROS Driver: Odometry Calculation Reversal

I’m making good progress on integrating ROS into my TurtleBot variant called Phoebe. I’ve just configured RViz to plot Phoebe’s location as reported by the Roboclaw ROS driver‘s odometry topic /odom. On the same plot, I’ve also placed data reported by Phoebe’s scanning LIDAR. Everything looked good when I commanded Phoebe to drive forward, but the geometry went askew as soon as Phoebe started turning. The LIDAR dots rotated one way, and the odometry arrow rotated the other. Something is wrong in the geometry calculation code.

For this particular problem I knew exactly where to start looking. When I assembled Phoebe, I knew I needed to configure the motors in the way expected by the Roboclaw ROS driver. The README didn’t explicitly say which motor went on which side, so I went into the source code and got conflicting answers. In the motor driving routine (cmd_vel_callback) it performed the calculation for right wheel motion and sent it to motor 1, while left wheel motion went to motor 2.

However, this doesn’t match the encoder calculation code. The method EncodeOdom::update_publish expects its parameters to be (enc_left, enc_right). In the method Node::run, the code retrieves encoder values from motor 1 and motor 2, saving them in the variables enc1 and enc2 respectively, then calls update_publish(enc1, enc2). This treats the encoder value of motor 1 as enc_left instead of enc_right – the opposite of what we want.

So the fact that data on /odom was going the wrong way was not a surprise, but why didn’t the LIDAR transform suffer the same problem? This was traced to an earlier change submitted as a fix for the reversed transform problem – that commit added a negative sign in front of the calculated angle when broadcasting the odom -> base_link transform. But that only masked the underlying problem rather than fixing it. The transform might look right, but the /odom data is still all wrong, and variables like self.last_enc_right actually hold the left-side value.

The correct fix is to reverse the parameter order when calling EncodeOdom::update_publish, which correctly assigns the encoder 1 count to the right motor and encoder 2 to the left. And with the underlying problem fixed, we no longer need the negative sign, so that can be deleted.

With this fix, Phoebe’s laser plot and odometry plot in RViz appear to agree with each other and both correspond correctly to the direction Phoebe is turning. I’ve submitted this fix as a pull request on the Roboclaw ROS driver.

Driving Miss Phoebe (Not Self-Driving… Yet)

Now that the Roboclaw ROS node is configured for Phoebe TurtleBot’s physical layout, and the code is running reliably instead of crashing, I can drive it around as a remote control car. This isn’t the end goal, of course. If all I wanted was a remote control car there were far cheaper and easier ways to get here. Right now the goal is just to verify functionality of the drive system and that it is properly integrated into the ROS infrastructure.

In order to send driving commands to Phoebe, I need something that publishes to the velocity command topic /cmd_vel. On an autonomous robot, the navigation code would be in charge of sending these commands. But since it’s a pretty common task to move a robot around manually, there are standard ROS packages for driving a robot via human commands. The most basic is teleop_twist_keyboard, which allows driving by pressing keys on a keyboard. Alternatively there is teleop_twist_joy for driving via a joystick.

Those remote control (or ‘teleoperation’ in ROS parlance) nodes worked right off the bat, which is great. But I quickly got bored with driving Phoebe around on the carpet in front of me. First I launched RViz to visualize scanning LIDAR data as I did before. This was enough for me to drive Phoebe beyond my line of sight, watching her surroundings in the form of laser scan dots. After I verified that still worked, I stepped up the difficulty: I wanted RViz to plot laser data on top of odometry data in order to verify the Roboclaw ROS node was generating odometry data correctly.

To do this, I needed to participate in the ROS coordinate transform stack, the infrastructure that tracks all the frames of reference relating robot components to each other in physical space. The Roboclaw ROS node publishes a transform relating the robot’s position reference frame base_link to the odometry origin reference frame odom. The LIDAR ROS node publishes its data relative to its own neato_laser reference frame. As the robot builder, it was my job to write a transform relating the neato_laser frame to Phoebe’s base_link frame. Fortunately, the ROS transform tutorials covered this exact task and I quickly got my desired RViz plot.
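The transform itself is simple enough to sketch as a little Python broadcaster. The offsets below are placeholders rather than Phoebe’s measured geometry, and the same job can also be done with tf’s static_transform_publisher in a launch file.

    # Minimal sketch: broadcast a fixed neato_laser -> base_link transform.
    # The offsets are placeholder values, not Phoebe's measured geometry.
    import rospy
    import tf

    rospy.init_node('phoebe_laser_tf')
    broadcaster = tf.TransformBroadcaster()
    rate = rospy.Rate(10.0)
    while not rospy.is_shutdown():
        broadcaster.sendTransform(
            (0.0, 0.0, 0.12),                            # x, y, z in meters
            tf.transformations.quaternion_from_euler(0, 0, 0),
            rospy.Time.now(),
            'neato_laser',                               # child frame
            'base_link')                                 # parent frame
        rate.sleep()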

RViz with LaserScan and Odometry

It looks like the LIDAR scan plot from earlier, but now there’s an arrow indicating Phoebe’s position and direction. The bigger change, not visible in this screenshot, is that RViz is now plotting in the odometry frame. This means we no longer watch strictly from Phoebe’s viewpoint, where the robot stays in the center of the screen; instead, Phoebe moves relative to the map.

I drove Phoebe forward, and was happy to see the laser scan stayed still relative to the map and the red arrow representing Phoebe moved forward. But when I started turning Phoebe, the red arrow turned one way and the LIDAR plot moved the opposite way.

Something is wrong in the coordinate transform data, time for some debugging…

Roboclaw ROS Driver: Add Thread Synchronization

I now turn my attention to finding the root cause of the random sporadic failures when the Roboclaw ROS driver makes calls into the Roboclaw API. The most common symptom of these failures is the TypeError I addressed earlier, but avoiding a crash from TypeError is only a band-aid. There were still other issues. Sometimes Phoebe’s movement stutters. And even more mysteriously, sometimes when Phoebe was supposed to be standing still, it would twitch for a fraction of a second.

Looking into the ROS logs, the node never intended to send a movement command. But it does send a continuous stream of “run at speed X” commands at a rate of ten per second, whether it is trying to move the robot or not. When staying still, that stream continues at the same rate, constantly sending “run at speed zero”. The fact that my robot twitches when it’s supposed to be standing still tells me that a “run at speed zero” command is occasionally corrupted into a “run at speed X” command, which starts the motor moving for a tenth of a second until it is stopped by the next uncorrupted “run at speed zero” command.

Any time there is a random sporadic failure, my instinct is to look for places where threading collisions may be taking place. Programming for multi-threaded environments can get tricky. When I wrote code for SGVHAK Rover, the intent was to make that code easy to understand and pick up. In that spirit, I explicitly kept everything on a single thread to avoid multi-threading issues.

But now we’re playing in the major leagues and there’s no avoiding multiple threads. ROS itself runs across multiple processes and threads, so it can scale to robots that have multiple onboard computers, each with multiple processing cores. Fortunately, the ROS implementation takes care of almost everything; each ROS node just has to make sure it’s doing the right thing with its own private data.

There are at least three different ROS topics involved in this Roboclaw ROS node:

  1. Driving commands received via /cmd_vel
  2. Odometry computation broadcast to /odom
  3. Diagnostics information

Each ROS topic is processed in its own thread, so given three topics, we should expect at least three different threads that might all call into the Roboclaw API at the same time to perform their tasks. Armed with this knowledge, I looked for code managing cross-thread access. My plan was to review that code to see if I could find any obvious problems with it.

That plan was changed when I found no code managing cross-thread access. I guess its absence qualifies as an obvious problem and it would certainly explain the kind of behavior I saw.

Not being a Python expert, I cruised StackOverflow for a pattern I could use to implement Python thread synchronization. I decided it was most straightforward to use the with keyword described on one of the later replies on this “Semaphores on Python” thread. Using this pattern makes the code change delta very straightforward to read.

There were a few initial calls into the Roboclaw API to set things up, and I left those alone. But from the moment the code starts kicking off work on other threads (specifically, when it starts the diagnostics thread), every following call into the Roboclaw API is synchronized through a threading.Lock() object. With this modification, we can guarantee that only one thread performs serial communication with the Roboclaw motor controller at any given time, avoiding data corruption from multiple threads trying to talk to the serial port simultaneously.
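The pattern itself is compact. Here’s a minimal sketch of the idea; the function and variable names are illustrative rather than the driver’s actual ones.

    # Minimal sketch of the pattern: every Roboclaw API call goes through the
    # same lock, so only one thread touches the serial port at a time. The
    # function and variable names are illustrative, not the driver's own.
    import threading

    roboclaw_lock = threading.Lock()

    def send_speed_command(roboclaw, address, left, right):
        with roboclaw_lock:
            roboclaw.SpeedM1M2(address, right, left)   # motor 1 = right side

    def read_encoders(roboclaw, address):
        with roboclaw_lock:
            enc1 = roboclaw.ReadEncM1(address)
            enc2 = roboclaw.ReadEncM2(address)
        return enc1, enc2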

Phoebe ran smoothly and reliably after this work. No more stutter in motion, and no more twitching when standing still. I’ve submitted the fix as a pull request.

Roboclaw ROS Driver: Encoder Count Logging Error

Once I fixed the ticks_per_meter error, I could drive Phoebe around by sending movement command messages to the /cmd_vel topic. However, I could not do this for very long (sometimes minutes, sometimes mere seconds) before the Roboclaw ROS driver would crash with the following error:

File "roboclaw_node.py", line 232, in run
    rospy.logdebug(" Encoders %d %d" % (enc1, enc2))
TypeError: %d format: a number is required, not NoneType

Once crashed, Phoebe no longer responds to movement commands, and that is bad. What’s going on here? The instance of TypeError that brought the whole thing crashing down gives us a place to start: one of its %d parameters (which could have been either enc1 or enc2) has a value of None instead of a number representing encoder count.

Looking through the source code file, I could see that the encoder values were initialized to None then replaced with an encoder count by a call to Roboclaw API ReadEncM1 / ReadEncM2. If this call should fail, there’s a try/except block to catch the error, but that leaves the encoder value at None, causing the crash I’ve experienced.

There exists an if statement whose intention might be to avoid this crash by checking for existence of the variable, but I don’t understand how it would help. Encoder count variables would always exist since they were declared and initialized to None.

There are two parts to this problem:

  1. Why is the Roboclaw API call failing?
  2. If it fails, how can we avoid crashing on TypeError?

Part 2 is easier, so let’s address that first. TypeError complains when the variables are not numbers, so let’s explicitly check for numbers. A search through trusty StackOverflow turned up a method of checking for numeric values in Python, so let’s use that to guard against TypeError.
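The guard amounts to something like the sketch below. This shows the concept; it is not necessarily the exact snippet from StackOverflow.

    # The idea behind the guard, as a standalone helper: only treat encoder
    # readings as usable if both are actually numbers. Not necessarily the
    # exact StackOverflow snippet.
    from numbers import Number

    def valid_encoder_counts(enc1, enc2):
        return isinstance(enc1, Number) and isinstance(enc2, Number)

    print(valid_encoder_counts(1200, 1187))   # True
    print(valid_encoder_counts(None, 1187))   # False, skip logging this cycle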

This change does not address the root cause of the API call failing; it’s just a band-aid to avoid fatally crashing and burning. When we fail to retrieve the encoder count using the Roboclaw API, our internal information becomes stale, which causes more problems down the line: bad odometry calculations, which cascade into yet more problems.

What we really need to do is to address the failing API call, which I’ll work on next. In the meantime this band-aid has been submitted via a pull request.

Roboclaw ROS Driver: Encoder Ticks Per Meter Of Travel

Now that Phoebe has taken physical form (imperfect as a first draft tends to be), it’s time to put together the software necessary to run it. We’ve already established how to use the Neato LIDAR in ROS, so now focus turns to propulsion control via the Roboclaw motor controller. On their product information page, the Roboclaw manufacturer linked to this Github repository with the label “Robot Operating System Driver”, so obviously that’s where I should start.

I had to look through this code earlier to learn that “Motor 1” of the Roboclaw should be the right-hand motor and “Motor 2” the left, but up to this point I had only run this code enough to verify the correct motors turn in the correct directions. Now I have to configure the driver parameters to match my physical robot.

A critical parameter is ticks_per_meter, the number of encoder counts that transpire when the robot travels a meter. This encapsulates multiple pieces of information about a particular mechanical implementation: things like motor gear ratio and wheel diameter. This count is required for accurate velocity control, since velocity is specified in meters per second. It is also obviously important to calculate robot odometry, because Roboclaw tracks distance in encoder counts and ROS expects position information in meters.

To test this, I sent the command “travel forward at 0.1 meters per second” for one second. Configuration is successful when this command makes Phoebe travel 10 centimeters, with corresponding confirmation in the output odometry calculations. First I ran the test with default parameters, which resulted in a little over 1 centimeter of travel, implying my robot’s ticks_per_meter is almost ten times the default.
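The test itself can be driven with a rostopic pub command or with a few lines of Python like the sketch below, which publishes on the standard /cmd_vel topic the driver subscribes to.

    # Sketch of the calibration test: command 0.1 m/s forward for one second,
    # then stop, while watching the ruler and 'rostopic echo /odom'.
    import rospy
    from geometry_msgs.msg import Twist

    rospy.init_node('ticks_per_meter_test')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
    rospy.sleep(1.0)               # give the publisher time to connect

    forward = Twist()
    forward.linear.x = 0.1         # meters per second

    rate = rospy.Rate(10)          # re-send at 10 Hz for one second
    for _ in range(10):
        pub.publish(forward)
        rate.sleep()

    pub.publish(Twist())           # all zeros: stop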

I then re-ran the test with a much higher value for ticks_per_meter parameter and the travel distance didn’t change. Then I tried even larger values, and smaller values, and no matter what the robot kept going forward the same amount. After I double checked that I was spelling my parameter correctly, I went looking into the code.

Thankfully the author had included a line to output parameter to ROS debug log system. (rospy.logdebug("ticks_per_meter %f", self.TICKS_PER_METER)) Once I found this debug message in the log, I was able to confirm that the default value was always used no matter what value I passed in.

Backtracking through the code, I found the cause: the call to retrieve the parameter value has a typo, so it looks for the parameter tick_per_meter (singular tick) instead of the intended ticks_per_meter (plural ticks). Once I added the missing ‘s‘, Phoebe started moving the correct distance. Once verified, I submitted this fix via a pull request.

Establish Motor Directions For Phoebe TurtleBot

The first revision of Phoebe’s body frame has mounting points for the two drive wheels and the caster wheel. There are two larger holes to accommodate the drive motor wiring bundles, and four smaller holes to mount a battery tray beneath the frame. Since this is the first rough draft, I didn’t spend too much time over-thinking further details. We’ll wing it and take notes along the way for the next revision.

Phoebe Frame First Draft

After the wheels were installed, there was much happiness because the top surface of the frame sat level with the ground, indicating the height compensation (for height difference between motorized wheels and caster in the back) was correct or at least close enough.

Next, two holes were drilled to mechanically mount the Roboclaw motor control module. Once secured, a small battery was connected plus both motor drive power wires. Encoder data wires were not connected, just taped out of the way, as they were not yet needed for the first test: direction of motor rotation.

Phoebe TurtleBot Stage 1 PWM

The Roboclaw ROS node expects the robot’s right side motor to be connected as Motor #1, and the left as Motor #2. It also expects positive direction on both motors to correspond to forward motion.

I verified robot wiring using Ion Studio, the Windows-based utility published by the makers of Roboclaw. I used Ion Studio to command the motors via USB cable to verify the right motor rotates clockwise for positive motion, and the left motor counter-clockwise for positive motion. I got it right on the first try purely by accident, but it wouldn’t have been a big deal if one or both motors spun the wrong way. All I would have had to do is to swap the motor drive power wires to reverse their polarity.

(Cross-posted to Hackaday.io)

New Project: Phoebe TurtleBot

While my understanding of the open source Robot Operating System is still far from complete, I feel like I’m far enough along to build a robot to run ROS. Doing so would help cement the concepts covered so far and serve as an anchor to help explore new areas in the future.

Sawppy Rover is standing by, and my long-term plan has always been to make Sawppy smarter with ROS. Sawppy’s six-wheeled rocker-bogie suspension system is great for driving over rough terrain but fully exploiting that capability autonomously requires ROS software complexity far beyond what I could handle right now. Sawppy is still the goal for my ROS journey, but I need something simpler as an intermediate step.

A TurtleBot is the official ROS entry-level robot. It is far simpler than Sawppy with just two driven wheels and restricted to traveling over flat floors. I’ve been playing with a simulated TurtleBot 3 and it has been extremely helpful for learning ROS. Robotis will happily sell a complete TurtleBot 3 Burger for $550. This represents a discounted bundle price less than the sum of MSRP for all of its individual components, but $550 is still a nontrivial chunk of change. Looking over its capabilities, though, I’m optimistic I could implement critical TurtleBot functionality using parts already on hand.

Components for parts bin turtlebot phoebe

  • Onboard computer: This one is easy. I have several Raspberry Pi 3 sitting around I could draft into the task, with all necessary accessories like a microSD card, a case, and power supply.
  • Laser scanner: Instead of the Robotis LDS-01, I’ll use a scanning LIDAR salvaged from a Neato robot vacuum that I bought off eBay for experimentation. Somebody has already written a driver package to report its data as a ROS /scan topic and I’ve already verified it sends data that the ROS visualization tool RViz can understand.
  • Motor controller: Robotis OpenCR is a very capable board with lots of expansion capabilities, but I have a RoboClaw left from SGVHAK Rover (JPL Open Source Rover) project so I’ll use that instead. Somebody has already written a driver package to accept commands via ROS topic /cmd_vel and report motion via /odom, though I haven’t tried it yet.
  • Gearmotor + Encoder: TurtleBot 3 Burger’s wheels are driven by Robotis Dynamixel XL430-W250-T serial bus servos via their OpenCR board. Encoders are critical for executing motor commands sent to /cmd_vel correctly and for reporting odometry information via /odom. Fortunately some gearmotor+encoder from Pololu are available. These were formerly SGVHAK Rover’s steering motors, but after one of them broke they were all replaced with beefier motors. I’m going to borrow two of the remaining functioning units for this project.
  • Wheel: Robotis has a wheel designed to bolt onto their servos, and they have mounting hardware for their servos. I’m going to borrow the wheel and mounting hardware from a self-balancing robot toy gathering dust on a shelf. (Similar but not identical to this one.) That toy didn’t have encoders on its motors, but they have the same mounting points and output shaft as the Pololu gearmotor so it was an easy swap. The third wheel, a free wheeling caster, was salvaged from a retired office chair.
  • Chassis hardware: Robotis has designed a modular system (like an Erector Set) for building robot chassis like their TurtleBot variants. As for me… I have a 3D printer and I’m not afraid to use it.
  • Battery: I’ll be borrowing Sawppy’s 5200 mAh battery pack.

This forms the roster for my TurtleBot variant, with an incremental component cost of $0 thanks to the parts bin. “Parts Bin TurtleBot” is a mouthful to say and not a very friendly name. I looked at its acronym “PB-TB” and remembered R2-D2’s nickname “Artoo”. So I’m going to turn “PB” into Phoebe.

I hereby announce the birth of my TurtleBot variant, Phoebe!

(Cross-posted to Hackaday.io)

Examining Basic Requirements For Mapping in ROS

When I first looked at the Gazebo robot simulation environment, I didn’t understand where the interface layer sat between a simulated robot and a physical robot. Now I’ve learned enough ROS to know: look at the ROS node graph for a node named /gazebo. Anything provided by or consumed by that node is a virtual substitute provided by Gazebo. When the same ROS code stack runs on a physical robot, nodes that interface with physical hardware replace everything provided in simulation by /gazebo.

When I tested a cheap little laptop’s performance in ROS, I put this knowledge to use. Gazebo ran on a high-end computer while every other node ran on the laptop. This simulated the workload of running the Gmapping algorithm as if the low-end laptop were mounted on a physical robot. But what, specifically, is required? Let’s look at the ROS node graph once again with rqt_graph. Here’s a graph generated while TurtleBot 3’s mapping demo is running in Gazebo:

TurtleBot 3 GMapping With UI

Here’s a slightly different graph, generated by running the same mapping task but with Gazebo’s GUI and ROS RViz visualization tool turned off. They are useful for the human developer but are not strictly necessary for the robot to run. We see the /gazebo_gui node has dropped out as expected, and the /map topic was also dropped because it was no longer being consumed by RViz for presentation.

TurtleBot 3 GMapping No UI

We can see the Gazebo-specific parts are quite modest in this particular exercise. A physical robot running Gmapping in ROS will need to subscribe to /cmd_vel so it can be told where to go, and provide laser distance scanning data via /scan so Gmapping can tell where it is. Gazebo also publishes the simulated robot’s state via /tf and /joint_states.
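In other words, the robot-side contract boils down to a couple of ROS interfaces. Here’s a skeleton illustrating that contract; all actual hardware access is omitted, so this is a placeholder outline rather than real driver code.

    # Skeleton of the robot-side contract: consume velocity commands from
    # /cmd_vel and produce laser scans on /scan. All hardware access is
    # omitted; this is a placeholder outline, not real driver code.
    import rospy
    from geometry_msgs.msg import Twist
    from sensor_msgs.msg import LaserScan

    def on_cmd_vel(cmd):
        # A real base driver translates cmd.linear.x and cmd.angular.z into
        # wheel motor commands here.
        rospy.logdebug("linear %.2f angular %.2f", cmd.linear.x, cmd.angular.z)

    rospy.init_node('robot_hardware_skeleton')
    rospy.Subscriber('/cmd_vel', Twist, on_cmd_vel)
    scan_pub = rospy.Publisher('/scan', LaserScan, queue_size=10)
    # A real LIDAR driver fills in and publishes LaserScan messages here.
    rospy.spin()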

And now that I have a spinning LIDAR working in ROS to provide /scan, the next project is to build a robot chassis that can be driven via /cmd_vel. After that is complete, we can use it to learn about /tf and /joint_states.