SGVHAK Rover, Sawppy, and Phoebe at SGVLUG February 2019 Meeting

At the February 2019 meeting of the San Gabriel Valley Linux User’s Group (SGVLUG), Lan and I presented the story of rover building in our hardware hackers spinoff group, a.k.a. SGVHAK. This was a practice run for our presentation at the Southern California Linux Expo (SCaLE) in March. Naturally, the rovers themselves had to be present as visual aids.

20190214 Rovers at SGVLUG

We started the story in January 2018, when Lan gathered the SGVHAK group to serve as beta testers for Jet Propulsion Laboratory’s Open Source Rover project. Then we went through our construction process, which was greatly motivated by our desire to have SGVHAK rover up and running at last year’s SCaLE. Having a rover at SCaLE was not the end, it was only the beginning. I started building my own rover, Sawppy, and SGVHAK rover continued to pick up hardware upgrades along the way.

On the software side, we have ambitions to increase sophistication by adopting the open source Robot Operating System (ROS), which led to a small digression to Phoebe, my tool for learning ROS. Getting a rover to work effectively under ROS poses some significant challenges that we have yet to address, but if it was easy it wouldn’t be fun!

Since this was a practice talk, the Q&A session at the end was also a forum for feedback on how we could improve the talk for SCaLE. We had some good suggestions on how we might build a smoother narrative through the story, and we’ll see what we can figure out by March.

Phoebe 1.0 Complete

Phoebe Chassis 2

I started the Phoebe project with the goal of building something to apply what I’ve learned about ROS: get some hands-on experience and learn the ropes. Now that Phoebe can map and autonomously navigate its environment, it is a good place to pause and evaluate potential paths forward. (Also: I have other demands on my time so I need to pause my Phoebe work anyway… and now is a great time.)

Option #1: Better Refinement

Phoebe can map its surroundings and then use that map to navigate the environment. This level of functionality is on par with the baseline functionality of TurtleBot 3, though neither the mapping nor the navigation is as polished as that of a TurtleBot built by people who know what they are doing. To get there, Phoebe’s ROS modules need their parameters tuned to improve performance. There are also small bugs hiding in the system that need to be rooted out. I’m sure the ~100ms timing difference mystery is only the tip of the iceberg.

Risk: This is “the hard part” of not just building a robot, but building a good robot. And I know myself. Without a clear goal and visible progress towards that goal, I’m liable to get distracted or discouraged, trailing off without ever really accomplishing anything.

Option #2: More ROS Functionality

I had been disappointed that the SLAM and navigation tutorials I’ve seen to date require a human to direct robot exploration. I had thought automated exploration would be part of SLAM but I was wrong. Thanks to helpful comments by Hackaday.io user Humpelstilzchen (who is building a pretty cool ROS robot too) I’ve now learned autonomous exploration is built on top of SLAM and Navigation.

So now that Phoebe can do SLAM and can navigate, adding one of the autonomous exploration modules would be the obvious next step.

Risk: It’s another ROS learning curve to climb.

Option #3: More Phoebe Functionality

Phoebe has wheel encoders and a LIDAR as input, and it might be interesting to add more. Ideas have included:

  • Obstacle detection to augment LIDAR, such as
    • Ultrasound distance sensor.
    • Infrared distance sensor (must avoid interference with LIDAR).
    • Bumpers with microswitches to detect collision.
  • IMU (inertial measurement unit).
  • Raspberry Pi camera or other video feed.

Risk: Over-complicating Phoebe, which was always intended to be a minimal-cost baseline entry into the world of ROS following in the footsteps of the ROS TurtleBot.


Options 1 and 2 take place strictly in software, which means the mechanical chassis will remain untouched.

Option 3 changes Phoebe hardware, and that would start deviating from TurtleBot. There’s value in being TurtleBot-compatible and hence value in taking a snapshot at this point in time.

Given the above review, I declare the mechanical construction project of Phoebe the TurtleBot complete for version 1.0. As part of this, I’ve also updated the README file on Phoebe’s GitHub repository to describe its content. Because I know I’ll start forgetting!

Phoebe Is Navigating Autonomously

I’ve been making progress (slowly but surely) through the ROS navigation stack tutorial to get it running on Phoebe, and I’ve finally reached the finish line.

After all the configuration YAML files were created, they were tied together into a launch file that loads them as parameters for the ROS node move_base. For now I’m keeping the pieces in independent launch files, so move_base is run separately from Phoebe’s chassis functionality launch file and from AMCL (launched using its default amcl_diff.launch).
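As a rough sketch of what such a launch file can look like (the package name, file names, and paths below are placeholders, not necessarily what Phoebe actually uses):

    <launch>
      <node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen">
        <!-- common costmap values are loaded twice, once into each costmap namespace -->
        <rosparam file="$(find phoebe)/param/costmap_common_params.yaml" command="load" ns="global_costmap" />
        <rosparam file="$(find phoebe)/param/costmap_common_params.yaml" command="load" ns="local_costmap" />
        <rosparam file="$(find phoebe)/param/local_costmap_params.yaml" command="load" />
        <rosparam file="$(find phoebe)/param/global_costmap_params.yaml" command="load" />
        <rosparam file="$(find phoebe)/param/base_local_planner_params.yaml" command="load" />
      </node>
    </launch>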

After they were all running, a new RViz configuration was created to visualize local costmap and amcl particle cloud. And it was a huge mess! I was disheartened for a few seconds before I remembered seeing a similar mess when I first looked at navigation on a Gazebo simulation of TurtleBot 3 Burger. Before anything would work, I had to set the initial “2D Pose Estimate” to locate Phoebe on the map.

Once that was done, I set a “2D Nav Goal” via RViz, and Phoebe started moving! In RViz I could see the map along with LIDAR scan plots and Phoebe’s digital representation from URDF. Those are all familiar from earlier. New to the navigation view is a planned path plotted in green, taking into account the local costmap in gray. AMCL contributed the rest of the information on screen, with individual pose estimates drawn as little yellow arrows and the estimated position in red.

Phoebe Nav2D 2

It’s pretty exciting to have a robot with basic intelligence for path planning, and not just a fancy remote control car.

Of course, there’s a lot of tuning to be done before things actually work well. Phoebe is super cautious and conservative about navigating obstacles, exhibiting a lot of halting and retrying behavior in narrower passageways even when there are still 10-15cm of clearance on each side. I’m confident there are parameters I could tune to improve this.

Less obvious is what I need to adjust to increase Phoebe’s confidence in relatively wide open areas. Phoebe would occasionally brake to a halt and hunt around a bit before resuming travel even when there’s plenty of space. I didn’t see an obstacle pop up on the local costmap, so it’s not clear what triggered this behavior.

(Cross-posted to Hackaday.io)

Navigation Stack Setup for Phoebe

rosorg-logo1

Section 1 “Robot Setup” of this ROS Navigation tutorial page confirmed Phoebe met all the basic requirements for the standard ROS navigation stack. Section 2 “Navigation Stack Setup” is where I need to tell that navigation stack how to run on Phoebe.

I had already created a ROS package for Phoebe earlier to track all of my necessary support files, so getting navigation up and running is a matter of creating a new launch file in my existing directory for launch files. To date all of my ROS node configuration has been done in the launch file, but ROS navigation requires additional configuration files in YAML format.

First up in the tutorial were the configuration values common to both the local and global costmaps. This is where I saw the robot footprint definition; it’s a little sad it’s not pulled from the URDF I just put together. Since Phoebe’s footprint is reasonably close to a circle, I went with the robot_radius option instead of declaring a footprint with an array of [x,y] coordinates. The inflation_radius parameter sounds like an interesting one to experiment with later depending on Phoebe’s performance. The observation_sources parameter is interesting – it implies the navigation stack can utilize multiple sources simultaneously. I want to come back later and see if it can use a Kinect sensor for navigation. For now, Phoebe has just a LIDAR so that’s how I configured it.
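For reference, a common costmap file along those lines might look like the following sketch. The frame name, topic name, and numeric values are illustrative guesses rather than Phoebe’s actual tuning:

    # costmap_common_params.yaml (sketch)
    obstacle_range: 2.5
    raytrace_range: 3.0
    robot_radius: 0.12          # Phoebe is roughly circular, so a radius instead of a footprint polygon
    inflation_radius: 0.25      # a knob to experiment with later
    observation_sources: laser_scan_sensor
    laser_scan_sensor: {sensor_frame: lidar, data_type: LaserScan, topic: scan, marking: true, clearing: true}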

For global costmap parameters, the tutorial values look equally applicable to Phoebe so I copied them as-is. For the local costmap, I reduced the width and height of the costmap window, because Phoebe doesn’t travel fast enough to need to look at 6 meters of surroundings, and I hoped reducing to 2 meters would reduce computation workload.
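A sketch of those two files, again with placeholder values standing in for the real ones:

    # global_costmap_params.yaml (sketch, essentially the tutorial values)
    global_costmap:
      global_frame: map
      robot_base_frame: base_link
      update_frequency: 5.0
      static_map: true

    # local_costmap_params.yaml (sketch)
    local_costmap:
      global_frame: odom
      robot_base_frame: base_link
      update_frequency: 5.0
      publish_frequency: 2.0
      static_map: false
      rolling_window: true
      width: 2.0        # reduced from the tutorial's 6.0 meters
      height: 2.0       # reduced from the tutorial's 6.0 meters
      resolution: 0.05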

For base local planner parameters, I reduced maximum velocity until I have confidence Phoebe isn’t going to get into trouble speeding. The key modification here from tutorial values is changing holonomic_robot from true to false. Phoebe is a differential drive robot and can’t strafe sideways as a true holonomic robot can.
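The corresponding planner file might look like this sketch. The velocity and acceleration numbers are placeholders; the important change from the tutorial is the final line:

    # base_local_planner_params.yaml (sketch)
    TrajectoryPlannerROS:
      max_vel_x: 0.2            # kept low until Phoebe earns trust at higher speeds
      min_vel_x: 0.05
      max_vel_theta: 1.0
      min_in_place_vel_theta: 0.4
      acc_lim_x: 2.5
      acc_lim_theta: 3.2
      holonomic_robot: false    # differential drive robots can't strafe sideways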

The final piece of section 2 is AMCL configuration. Earlier I had tried running AMCL on Phoebe without specifying any parameters (using defaults for everything) and it seemed to run without error messages, but I don’t yet have the experience to tell good AMCL behavior from bad. Reading this tutorial, I see the AMCL package has pre-configured launch files. The tutorial calls for amcl_omni.launch; since Phoebe is a differential drive robot, I should use amcl_diff.launch instead. The RViz plot looks different than when I ran AMCL with all default parameters, but again, I don’t yet have the experience to tell if it’s an improvement or not. Let’s see how this runs before modifying parameters.
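Pulling in that pre-configured launch file should be a one-line include, assuming it lives in the amcl package’s examples directory as in current ROS releases:

    <include file="$(find amcl)/examples/amcl_diff.launch" />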

(Cross-posted to Hackaday.io.)

Checking If Phoebe Meets ROS Navigation Requirements

Now that basic coordinate transform frames have been configured with the help of URDF and robot state publisher, I moved on to the next document: the robot setup page. This one is actually listed slightly out of order on the ROS navigation page, appearing third, behind the Basic Navigation Tuning Guide. I had started reading the tuning guide and saw that its introduction assumes people have already read the robot setup page. It’s not clear why they are listed out of order, but clearly robot setup needs to come first.

Right up front in Section 1 “Robot Setup” was a very helpful diagram labelled “Navigation Stack Setup” showing the major building blocks of an autonomously navigating ROS robot. Even better, these blocks are color-coded by their source: white blocks are part of the ROS navigation stack, gray blocks are optional components outside of that stack, and blue indicates robot-specific code to interface with the navigation stack.

overview_tf
Navigation Stack Setup diagram from ROS documentation

This gives me a convenient checklist to make sure Phoebe has everything necessary for ROS navigation. Clockwise from the right, they are:

  • Sensor source – check! Phoebe has a Neato LIDAR publishing laser scan sensor messages.
  • Base controller – check! Phoebe has a Roboclaw ROS node executing movement commands.
  • Odometry source – check! This is also provided by the Roboclaw ROS node reading from encoders.
  • Sensor transforms – check! This is what we just updated, from a hard-coded published transform to one published by robot state publisher based on information in Phoebe’s URDF.

That was the easy part. Section 2 was more opaque to this ROS beginner. It gave an overview of the configuration necessary for a robot to run navigation, but the overview assumes a level of ROS knowledge that’s at the limit of what I actually have in my head right now. It’ll probably take a few rounds of trial and error before I get everything up and running.

(Cross-posted to Hackaday.io)

Phoebe Digital Avatar in RViz

Now that Phoebe’s URDF has been figured out, it has been added to the RViz visualization of Phoebe during GMapping runs. Before this point, Phoebe’s position and orientation (called a ‘pose‘ in ROS) were represented by a red arrow on the map. It’s been sufficient to get us this far, but a generic arrow is not enough for proper navigation because it doesn’t represent the space occupied by Phoebe. Now, with the URDF, the volume of space occupied by Phoebe is also visually represented on the map.

This is important for a human operator to gauge whether Phoebe can fit in certain spaces. While I was driving Phoebe around manually, it was a guessing game whether the red arrow would fit through a gap. Now with Phoebe’s digital avatar in the map, it’s a lot easier to gauge clearance.

I’m not sure if the ROS navigation stack will use Phoebe’s URDF in the same way. The primary reason the navigation tutorial pointed me to URDF is to get Phoebe’s transforms published properly in the tf tree using the robot state publisher tool. It’s pretty clear robot footprint information will be important for robot navigation for the same reason it was useful to human operation, I just don’t know if it’s the URDF doing that work or if I’ll end up defining robot footprint some other way. (UPDATE: I’ve since learned that, for the purposes of ROS navigation, robot footprint is defined some other way.)

In the meantime, here’s Phoebe by my favorite door to use for distance reference and calibration.

Phoebe By Door Posing Like URDF

And here’s the RViz plot, with a digital representation of Phoebe by the door, showing the following:

  • LIDAR data in the form of a line of rainbow colored dots, drawn at the height of the Neato LIDAR unit. Each dot represents a LIDAR reading, with color representing the intensity of each return signal.
  • Black blocks on the occupancy map, representing space occupied by the door. Drawn at Z height of zero representing ground.
  • Light gray on the occupancy map representing unoccupied space.
  • Dark gray on the occupancy map representing unexplored space.

Phoebe By Door

(Cross-posted to Hackaday.io)

Phoebe URDF: Fixing Functional Problems

Once I had a decent looking URDF for Phoebe up and running, I added it into the Phoebe launch files and started working on the problems exposed by putting it to work.

The first problems were the drive wheels. Visually, they were stuck at the origin and didn’t move with the rest of the robot. Looking through error messages, I realized ROS expected me to read wheel encoder values and publish them as joint states. Since I hadn’t done so, the wheels (attached with “continuous” joints) didn’t know their location. Until I get around to processing wheel encoder values, the joint type was changed to “fixed” to attach them to the chassis.

Looking at the model from multiple angles, I realized I forgot the caster wheel. Since it’s not driven, it is represented as a simple sphere and also attached via a fixed joint.

That was enough to start driving around as a single unit, but the robot’s movement in RViz was reversed front/back relative to the LIDAR data plot. This was caused by the fact that I forgot to tell ROS the LIDAR is pointed backwards on the robot. Once I had done so, the 180 degree yaw is visible in the object axis visualization: the LIDAR’s X-axis (red cylinder) points backwards instead of forwards like all the other axes.

Phoebe RViz Axes Arrows No Name
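These fixes boil down to a couple of joint definitions in the URDF. A sketch of the idea, with link names, joint names, and dimensions invented for illustration rather than taken from Phoebe’s actual file:

    <!-- drive wheel: "fixed" for now, until encoder values are published as joint states -->
    <joint name="left_wheel_joint" type="fixed">
      <parent link="base_link"/>
      <child link="left_wheel"/>
      <origin xyz="0 0.1 0.03" rpy="0 0 0"/>
    </joint>

    <!-- LIDAR mounted facing backwards: a yaw of pi radians in the joint origin -->
    <joint name="lidar_joint" type="fixed">
      <parent link="base_link"/>
      <child link="lidar"/>
      <origin xyz="-0.05 0 0.1" rpy="0 0 3.14159"/>
    </joint>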

The final set of changes might be more cosmetic than functional. When reading about differential drive robots in ROS, it was brought up several times that the robot’s X/Y origin (base_link) needs to be lined up with the robot’s pivot axis. However, it wasn’t clear where the Z axis is supposed to be. Perhaps this is different for each ROS mapping module? The hector_slam algorithm defines several frames, but they don’t appear to be supported by gmapping.

I first defined Phoebe origin as the center point between its two drive wheel axles. When rendered in RViz, this means the Z plane intersects the middle of the robot. It seems to work well, but the visualization looks a bit odd. Intuitively I want the Z plane to represent the ground, so I decided to drop the robot origin to ground level. In the object visualization, this is visible as the purple arrow heads all pointing at a center point below the robot. If I learn this was a bad move later, I’ll have to change it back.

All these changes combined gave me a minimal Phoebe URDF that works in the RViz visualization of Phoebe’s behavior.

(Cross-posted to Hackaday.io)

Describe Phoebe For ROS Using URDF

Now that I’ve decided to bring up the ROS navigation stack for Phoebe, where do I start? Well, the ROS Wiki page for the subject is always a good place to start, as they tend to have a tutorial for the subject. ROS navigation is no exception.

The first recommended page is actually a familiar sight – the brief overview of tf was required reading back when I first assembled the chassis. At the time, I could get away with a very simple static publisher, because I just had to tell ROS how and where my Neato LIDAR is mounted on my robot chassis. But now I need to advance to the next step and publish the robot state. And this means describing Phoebe in more detail for ROS using an XML syntax called URDF (Unified Robot Description Format).

So in order to bring up ROS navigation on Phoebe, the navigation wiki page has pointed me to robot state publisher and also the ROS URDF Tutorial. To learn one thing I had to learn another, the typical bootstrap process when learning something new.
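Wiring the resulting URDF into ROS eventually amounts to just a couple of launch file lines, roughly like the sketch below, where the package and file names are placeholders:

    <param name="robot_description" textfile="$(find phoebe)/urdf/phoebe.urdf" />
    <node pkg="robot_state_publisher" type="robot_state_publisher" name="robot_state_publisher" />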

For the purposes of robot physics simulation, the robot should be described using very basic geometry: a combination of rectangular solids, cylinders, and spheres. This keeps the computation workload for collision detection simple. While the visual representation can be more complex than the collision detection representation, it doesn’t have to be. So for this first draft, I’ll just do a super simple Phoebe visual representation, one also suitable for collision calculations if I get into that later.

I started with Phoebe’s Onshape CAD file.

Phoebe CAD Full

Taking the critical dimensions, I created a simplified version in Onshape CAD using just rectangular boxes and cylinders. This made it fairly straightforward to translate into URDF.

Phoebe CAD Simplified

By measuring the dimensions in CAD, I could declare a few primitives with URDF and see what it looks like in RViz for comparison against CAD. Once the visual appearance is roughly correct, it’s time to tune the details and make sure they work for ROS functional purposes.
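Here is a fragment of what that translation can look like in URDF. The dimensions below are placeholders rather than Phoebe’s measured values:

    <robot name="phoebe">
      <link name="base_link">
        <visual>
          <geometry>
            <box size="0.2 0.16 0.05"/>   <!-- simplified chassis as a rectangular box -->
          </geometry>
        </visual>
      </link>
      <link name="left_wheel">
        <visual>
          <geometry>
            <cylinder radius="0.03" length="0.02"/>   <!-- wheel as a cylinder -->
          </geometry>
        </visual>
      </link>
      <joint name="left_wheel_joint" type="fixed">
        <parent link="base_link"/>
        <child link="left_wheel"/>
        <origin xyz="0 0.09 0.03" rpy="1.5708 0 0"/>   <!-- roll 90 degrees so the cylinder axis lines up with the axle -->
      </joint>
    </robot>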

Phoebe RViz Simplified

(Cross-posted to Hackaday.io)

Next Phoebe Project Goal: ROS Navigation

rosorg-logo1

When I started working on my own TurtleBot variant (before I even decided to call it Phoebe) my intention was to build a hardware platform to get first-hand experience with ROS fundamentals. Phoebe’s Hackaday.io project page subtitle declared itself as a ROS robot for <$250 capable of SLAM. Now that Phoebe can map surroundings using the standard ROS SLAM library ‘gmapping‘, that goal has been satisfied. What’s next?

One disappointment I found with existing ROS SLAM libraries is that the tutorials I’ve seen (such as this and this) expect a human to drive the robot during mapping. I had incorrectly assumed the robot would autonomously explore its space, but “simultaneous localization and mapping” only promises localization and mapping – nothing about deciding which areas to map, or how to go about it. That is left to the human operator.

When I played with SLAM code earlier, I decided against driving the robot manually and instead invoked an existing module that takes a random walk through available space. A search on the ROS Answers web site for something more sophisticated than a random walk turned up multiple pointers to the explore module, but that code hasn’t been maintained since ROS “groovy”, four versions ago. So one path forward is to take up the challenge of either updating explore or writing my own explorer.

That might be interesting, but once a map is built, what do we do with it? The standard ROS answer is the robot navigation stack. This collection of modules is what gives a ROS robot the ability to plan a path through a map, watch its progress through that plan, and update the plan in reaction to unexpected elements in the environment.

At the moment I believe it would be best to learn about the standard navigation stack and get that up and running on Phoebe. I might return to the map exploration problem later, and if so, seeing how map data is used for navigation will give me better insight into what would make a better map explorer.

(Cross-posted to Hackaday.io)

Phoebe Accessory: HDMI Plug

In most ROS demonstrations, the robots are running through a pristine laboratory environment. Phoebe is built to roam my home, which is neither a laboratory nor pristine. This became a problem when Phoebe ran across some dust bunnies and picked them up with its leading edge.

When choosing an orientation for Raspberry Pi 3 on Phoebe’s electronics tray, I chose to make the HDMI port accessible so I could connect a monitor as necessary. This resulted in that port facing forward along with the micro-USB power port and the headphone jack. All three of these ports were plugged up with debris when Phoebe explored some paths less well-traveled.

After I cleaned up the mess, all three ports appeared to work, but I was worried about Phoebe encountering some less fluffy obstacles. The audio jack was not a high priority, as Raspberry Pi default audio is notoriously noisy and I haven’t needed it. The power jack could be easily bypassed by sending power via the GPIO pins (as I’m doing right now). That leaves the HDMI port, which would be quite inconvenient to lose. If I needed a screen on a Pi with a damaged HDMI port, I’d have to buy or borrow a screen that connects to the alternate DSI port, like the official Raspberry Pi touchscreen.

Fortunately, there are little plastic plugs that come with certain HDMI peripherals for protection during shipping. In my case, I had a small red HDMI plug that came with my MSI video card. I installed it on Phoebe’s Raspberry Pi to protect the HDMI port against future debris encounters. Now Phoebe has a red nose. If it should glow I might have to rename my robot to Rudolph the Red Nosed Robot.

But it doesn’t glow, so Phoebe won’t get a name change.

Phoebe HDMI Plug

(Cross-posted to Hackaday.io)

Phoebe Accessory: Battery Voltage Monitor

And now, a few notes on some optional accessories. These aren’t required for anyone building their own Phoebe, but are nice to have.

The first item is a battery voltage meter and alarm. While Phoebe can monitor battery voltage in software via the Roboclaw API, I also wanted an always-available physical readout of battery voltage. On Sawppy I thought I just needed to show the battery’s output voltage, but the number is only good if I can read it. During Sawppy’s all-day outing at JPL, the California sunlight was too bright to read the number and I couldn’t tell when my battery dropped below the recommended level for lithium chemistry batteries.

Searching for a better solution, I found these battery voltage alarms (*). Not only do they display voltage, when the level gets too low they also sound a buzzer. Judging by its product description, these were designed for remote-control aircraft where it’s not convenient to read a small number up in the air.

The downside is that the alarm is designed to be audible while up in the air and buried inside a fuselage. When it is on the ground and right in front of my face, it is a piercing shriek. Which isn’t so bad if it only occurs during low battery… but it also sounds a test beep when I first plug it in. It is loud. Very loud. To save my eardrums, the alarm buzzer has been muffled with some cotton pulled from a cotton swab. It’s still loud, but no longer gives me a headache afterwards.

I’ve also soldered a JST-XH connector onto the unpolarized input pins to fit my battery’s balance charging plug. Having a polarized connector helps make sure I don’t plug the battery in backwards. Those exposed pins are also a short-circuit risk, which I crudely mitigated by wrapping a layer of servo tape around them. Finally, servo tape is also used to secure the alarm to Phoebe’s backbone.

Now I can drive Phoebe around the house, even out of sight, confident that if I ever run the battery too low I’ll be notified with an alarm.

(Cross-posted to Hackaday.io)


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Robot Disorientation Traced To Timing Mismatch

Once the Roboclaw ROS Node‘s wheel parameters were updated to match the new faster motors, Phoebe went on a mapping run. And the results were terrible! This is not a complete surprise. Based on previous experimentation with this LIDAR module, I’ve seen its output data distorted by motion. It doesn’t take a lot of motion – even a normal human walking pace is enough to cause visible distortion. So now that Phoebe’s motors are ten times faster than before, that extra speed also adds extra distortion.

While this is a problem, it might not be the only problem behind the poor map. I decided to dig into an anomaly I noticed while using RViz to calibrate wheel data against LIDAR data: there’s some movement that can’t be entirely explained by a LIDAR spinning at 240 RPM.

The idea behind this odometry vs. LIDAR plot on RViz is to see if wheel odometry data agrees with LIDAR data. My favorite calibration surface is a door – it is a nice flat surface for LIDAR signal return, and I could swing the door at various angles for testing. In theory, when everything lines up, movement in the calculated odometry would match LIDAR observed behavior, and everything that is static in the real world (like the door) would be static in the plot as well.

In order to tune the base_width parameter, I looked at the position of the door before turning and its position after turning, adjusting base_width until they lined up, indicating odometry matched LIDAR. But during the turn, the door moved relative to the robot before finishing at the expected position.

When Phoebe started turning (represented by red arrow) the door jumped out of position. After Phoebe stopped turning, the door snapped back to position. This means non-moving objects appear to the robot as moving objects, which would confuse any mapping algorithm.

Odom LIDAR Mismatch No Hack

I chased down a few dead ends before realizing the problem is a timing issue: the timestamp on the LIDAR data messages don’t line up with the timestamp on odometry messages. At the moment I don’t have enough data to tell who is at fault, the LIDAR node or the Roboclaw node, just that there is a time difference. Experimentation indicated the timing error is roughly on the order of 100 milliseconds.

Continuing the experiment, I hard-coded a 100ms timestamp delta. This is only a hack which does not address the underlying problem. With this modification, the door still moves around but at least it doesn’t jump out of place as much.

Odom LIDAR Mismatch 100ms hack

This timing error went unnoticed when Phoebe was crawling extremely slowly, but at Phoebe’s higher speed the error could no longer be ignored. Ideally all of the objects on the RViz plot would stay still, but we clearly see nonuniform distortion during motion. There may be room for further improvement, but I don’t expect I’ll ever get ideal performance from such an inexpensive LIDAR unit. Phoebe may end up just having to go slowly while mapping.

(Cross-posted to Hackaday.io)

Using RViz to Validate Motor Movement Against LIDAR Data

The Roboclaw ROS node is responsible for calculating odometry information based on encoder values read from each wheel. In order to translate them into standard ROS units, it needs two parameters:

  • ticks_per_meter : To calculate physical distance traversed by each wheel, the code needs to know how many encoder counts it takes for the wheels to travel one meter.
  • base_width : To calculate how much the robot has turned, the code needs to know how far apart the two drive wheels are placed.
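In a launch file, both end up as simple parameters on the Roboclaw node. A sketch of the idea, where the package name, node name, and numbers are placeholders rather than Phoebe’s actual values:

    <node pkg="roboclaw_node" type="roboclaw_node.py" name="roboclaw_node" output="screen">
      <param name="ticks_per_meter" value="4300"/>   <!-- encoder counts per meter of wheel travel -->
      <param name="base_width"      value="0.22"/>   <!-- distance between the two drive wheels, in meters -->
    </node>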

Both of these values needed updating after upgrading Phoebe to the second chassis. I got impatient with the slow speed of the first draft, so the motor gearboxes were swapped for ones that deliver less torque and precision but a much faster top speed. This change of gearing requires a new ticks_per_meter value. And the second chassis is slightly wider than the first, which obviously changes the base_width value. Both of these could be calculated on paper, but that is only a starting point. The real world is always a little different from the theoretical and needs a little adjustment.

The easiest place to start is ticks_per_meter. Phoebe is placed on a flat surface, next to a ruler, and commanded to drive straight forward for a short distance. During this activity, the odometry data is monitored with rostopic echo /odom to see how far Phoebe thinks it has actually gone. If the ruler said Phoebe didn’t go as far as it thought it did, increase ticks_per_meter. If Phoebe overshot, reduce that value.
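As a worked example with invented numbers: if rostopic echo /odom reports 0.50 meters of travel while the ruler shows only 0.45 meters of actual movement, then it takes more encoder ticks per real meter than currently assumed, so the existing ticks_per_meter value should be scaled up by roughly 0.50 ÷ 0.45 ≈ 1.11.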

Once wheel travel was verified with a ruler, LIDAR is added to the picture. RViz is commanded to plot odometry data and LIDAR data together, and Phoebe is placed facing a door serving as a large flat surface for reference. The red arrow represents where Phoebe thinks it is, facing the horizontal line representing its LIDAR’s view of the door.

Phoebe Door Test 1 Start

Then Phoebe was commanded to move backwards 0.5m. If odometry data agrees with LIDAR data, the movement away from door and the door’s distance from robot would match, canceling each other out so the line representing the door would not move on the RViz plot. It looks like the ruler calibration worked out well, as we’re only a tiny bit off.

Phoebe Door Test 2 Back 0.5m

Once distance was verified, we move on to rotation. Command Phoebe to make a 90 degree turn clockwise and see if LIDAR plot agrees. Again, ideally the turn calculated from odometry would agree with the LIDAR plot, leaving the door in roughly the same place on the RViz plot. Ideally.

Phoebe Door Test 3 90 Deg Right Bad

In this case, however, the door shows a minor clockwise rotation. This change of position in the LIDAR data indicates Phoebe didn’t turn as far as it thought it did. To make Phoebe’s calculations better align with actual motion, we can increase the base_width parameter. And if the door had rotated the other way (meaning Phoebe turned further than it thought it did), the parameter should be decreased.
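The intuition comes from the usual differential drive dead reckoning: the computed heading change is roughly (right wheel travel − left wheel travel) ÷ base_width. For the same amount of wheel travel, a larger base_width yields a smaller computed rotation, so increasing it reins in an odometry estimate that over-states the turn, and decreasing it does the opposite.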

(Cross-posted to Hackaday.io)

Phoebe vs. Office Chair Round 2

Phoebe was built to roam my house, but the first draft chassis was unable to do so effectively due to a few problems that the second chassis aimed to solve. The first one was ground clearance, which was solved by raising the main chassis and sloping the bottom of the electronics tray. Sloping that leading edge gives Phoebe a better approach angle for smoothly transitioning between floor surfaces.

The second major problem was the LIDAR scanner’s height: it was too high to see the legs of an office chair. Hence the other major goal of the second chassis was to lower the LIDAR mounting point and hopefully bring an office chair’s legs into its plane of view.

Placing the newly rebuilt Phoebe next to the chair looks promising at first glance. Unlike the taller first chassis, the LIDAR’s horizontal plane of sight is now low enough it should be able to see the legs.

Phoebe vs Chair Round 2

The proof is in the occupancy grid, and the RViz plot shows that Phoebe can now see the legs of the chair blocking its way.

Phoebe Sees Chair Legs

It’s not a very solid detection, though. Something about the surface texture and/or angle of the plastic results in a weak laser return. And there’s the risk of a leg going undetected when approached from the end, as the dark, sloped, rounded end of the chair leg is nearly invisible to LIDAR.

But it’s a huge improvement from before, where the LIDAR was too high to see any part of the starfish pattern. It’s good enough for us to proceed with the next task: integrate Phoebe’s new faster wheel drive motors into the system.

(Cross-posted to Hackaday.io)

Phoebe Chassis 2 Electronics Tray

Phoebe’s first chassis stacked vertically: motors and wheels on the bottom, electronics in the middle, and LIDAR up top. That had to change for chassis 2 due to the desire to lower the height of the LIDAR for better obstacle visibility. The electronics were squeezed to the front, where they now occupy a tray dedicated to all electronic components. This tray was originally separated because the chassis would otherwise be too big to print on my printer. But as it turns out, the separation also made it more convenient to iterate through ways to install electronics without having to reprint everything.

The unpredictability came from wiring: I didn’t want to cut the wires attached to components (the LIDAR, the motor+encoder units, and the battery) to length, so there needs to be room to coil extra wire. I also needed to run power wires to voltage regulators: one producing 3 volts for the LIDAR spin motor, and a second adjusted to 5 volts for the Raspberry Pi. The Roboclaw routes battery power directly to the motors, and has its own voltage regulator to drive its internal logic circuits plus the motor encoders.

Phoebe Chassis 2 Electronics Tray Iterations

It took two iterations to get everything to fit nicely, but once I started driving Phoebe chassis 2 around I found a new problem: approach angle. Chassis 2 has a higher ground clearance than its predecessor, and I held the same ground clearance for the electronics tray. However, because the tray is hanging out in front, having the same clearance is not enough when transitioning between different floor heights. When going from linoleum to carpet, thick carpet can get caught on the tray’s front lip, preventing further forward progress. And when transitioning from carpet to linoleum, the front lip contacts the linoleum while the rear caster is still sitting on carpet. Together they lift the two drive wheels off the ground and Phoebe is stuck, helplessly spinning its drive wheels.

The solution is to angle the tray upwards so the bottom of the tray becomes a skid plate. This helps transitioning from linoleum to carpet, and does not pull the drive wheels off the ground when going from carpet to linoleum.

phoebe-v2-side-view-isometric with arrow

And thus we inadvertently find the third benefit of printing the electronics tray as a separate piece: we can print it at a different angle, with its flat bottom on the print bed, avoiding the need for print supports to generate the sloped surface.

With these changes, Phoebe can now roam through my entire house, freely traversing across the various terrains of a normal household.

Phoebe Chassis 2 Carpet to Linoleum

(Cross-posted to Hackaday.io)

Phoebe Chassis 2 Backbone

Once I aborted plans to split Phoebe’s second chassis design into top and bottom decks, most of the workload was concentrated into a main backbone structure that will support all three wheels – two motorized driven wheels and one caster wheel. It will also support the battery, which is the heaviest single component, and the scanning LIDAR unit salvaged from a Neato robot vacuum.

Aside from their size and weight, the common thread with these components is that I don’t expect them to change very much in Phoebe’s future. They are the core components of a TurtleBot: differential drive for mobility and a LIDAR to sense its surroundings. If either of those primary items changes, it’s really an entirely different robot and no longer an iteration of Phoebe.

What I do expect to evolve at a much higher rate are the electronics that will control the motors and read the sensors (both motor encoder and LIDAR.) They will be mounted on a separate electronics equipment tray which will be mounted to the front end of chassis 2 backbone. More detail on that later.

For rigidity I had planned to make everything a single piece, but I wasn’t able to figure out a good way to make a 3D-printable structure that can support the LIDAR module above the motors and still leave enough space for those motors to be installed. So the LIDAR front support became a separate C-shaped piece that is clipped onto the backbone after motors are installed.

Phoebe 2 Mechanical Backbone

Other than that concession to practicality, Phoebe’s new backbone is a single rigid structure that links all wheels together and supports everything except the electronics. Once I had all the major connection points sketched out, I put effort into aesthetics design and making the backbone look more like one smoothly blended and integrated unit. The arch connecting all three wheels reminded me of a similar arch aboard Star Trek: The Next Generation‘s Enterprise-D bridge. (It held computer displays for the officers on duty standing behind Captain Picard.)

Printing this design requires support structures for that arch, and took over 8 hours to print. (Plus another half hour for the separate C-shaped clip.) I’m pleased with the results and, as expected, it has held up well through multiple iterations of the electronics tray.

(Cross-posted to Hackaday.io)

Phoebe Chassis 2: Dividing Top/Bottom vs. Front/Back

With the help of Onshape in-context modeling, I started creating a chassis for Phoebe to support an arrangement that fits every component together compactly like a 3D puzzle.

It was immediately obvious that I would not be able to print the entire chassis in one piece, as the footprint is greater than my 3D printer’s 200mm x 200mm print area. As I typically design parts to be printed without supports, I started with an approach that would split the chassis into top/bottom pieces. The top deck would support Phoebe’s caster wheel and Neato LIDAR. The bottom deck would support all the electronics. Then the battery compartment and motors would be bolted in between the top and bottom decks.

Phoebe Bottom Deck Draft

Here’s one of the drafts of the bottom deck, with a representative sample motor, battery, and Pi. I was not terribly confident about placement of the electronics. I could model the individual parts in CAD, but there will be a lot of wires going between them. Wires take up space, and they flex and push against each other, making them difficult to model accurately in CAD.

I looked at the electronics tray here and foresaw printing several iterations as I worked through wiring challenges. The bottom deck also includes part of the battery tray and motor mounts. Since these two things aren’t expected to change, printing each iteration would waste a lot of time going over the same thing. This was the first strike against a top/bottom split.

The second strike is physical strength: since the top deck would house the caster wheel sticking out the back, it has leverage to put a lot of stress on however the top and bottom decks are joined together, no matter whether I used glue, bolts, or something else. Foreseeing the structural loads, I decided it made more sense to have a single strong backbone connecting both motorized wheels and the caster wheel.

This will require printing with supports, but the resulting strength should be worth the effort, and it isn’t likely to change as I evolve through the electronics. So I abandoned the top/bottom split and changed to a front-back split, with the strong single piece supporting all mechanical parts in the back.

(Cross-posted to Hackaday.io)

Onshape In-Context Modeling For Phoebe’s Second Chassis

Digitally laying out major components of a project in 3D space is something I’ve done for many projects, from my FreeNAS Box, to Luggable PC, to Sawppy the Rover. Doing it again to figure out a more compact layout for Phoebe’s second chassis wasn’t a big deal in itself. However, this time the exercise will have a much more direct impact, thanks to a relatively new feature in Onshape.

For my past exercises, once I had decided upon a layout I would take measurements of the relative positions of components and the dimensions of spaces between them. I would then copy those numbers to new drawings and build parts from those drawings. This workflow is functional but feels silly. The layout information is already in the computer, so why can’t I use it directly in the drawings for components?

I’m not sure what the answer is, but whatever the reasons may be, they are no longer relevant: modern CAD software now offers the ability to take assemblies of parts and use information from the assembly in drawings. This goes by various names. SolidWorks documentation refers to it as top-down design. Onshape calls their version in-context modeling. Whatever the name, it’s a system that allowed me to reverse my design process. In the first chassis, I built a simple plate and bolted parts on it as I went. Now with the help of in-context modeling, I’ve arranged all the components in a game of 3D puzzle before creating a chassis to deliver that arrangement.

Using in-context modeling, I don’t have to copy & paste dimensions and risk introducing errors in the process. I also have the option to move parts around my layout and have all design dimensions update automatically. That last part doesn’t work quite as well as advertised, though I’m not sure what’s a fundamental problem and what are just minor bugs they’ll fix later. But it works well enough today for me to believe in-context modeling will have a role in all my future projects.

In Context Editing 2

(Cross-posted to Hackaday.io)

Phoebe’s Component Layout Is A 3D Jigsaw Puzzle

Phoebe’s first chassis was just a rough draft to get first-hand experience getting all the parts my TurtleBot variant needed to talk and work with each other. What that exposure taught me is that I need to improve packaging efficiency and create a much more compact robot. Only then could I satisfy the competing requirements of increasing ground clearance and lowering LIDAR sensor height.

To work on this puzzle in three dimensions, I started by holding parts up against each other. But I quickly ran out of hands to track all their related positions so I moved on to do it digitally. First I created 3D representations of the major parts. They didn’t have to be very detailed, just enough for me to block out the space they’ll need. Then they were all imported into a single Onshape assembly so I could explore how to fit them together.

In Context Editing

I turned the caster forward, as if the robot was travelling backwards, because that position represents the maximum amount of space it needs. My battery is the heaviest single component, so for best balance it needs to be mounted somewhere between the drive wheels and the caster. Relative to the first draft chassis, the battery was rotated to allow more ground clearance, but that also pushed the caster a little further back than before.

In the first chassis, electronic components like the Roboclaw motor controller and Raspberry Pi 3 were sandwiched above the motors and below the LIDAR. They’ve been moved to the front in order to lower LIDAR height. The lowest point of the LIDAR module – its spinning motor – was dropped in between wheel drive motors. This required turning the LIDAR 180 degrees – what used to be “front” is now “back” – but we should be able to describe that frame of reference by updating its corresponding ROS component transform.

(Cross-posted to Hackaday.io)

Speedy Phoebe: Swapping Gearbox For 370 Motors

The first rough draft chassis for Phoebe worked well enough for me to understand some pretty serious problems with that design. Now that I have a better idea what I’m doing, it’s time to start working on a new chassis incorporating lessons learned. And since all the components will be taken apart anyway, it would be a good time to address another problem: Phoebe’s speed. More precisely, the lack thereof.

Phoebe’s motor + encoder + gearbox unit was borrowed from the retired parts bin for SGVHAK Rover. Since they were originally purchased to handle steering, priority was on precision and torque rather than speed. It worked well enough for Phoebe to move around, but their slow speed meant it took quite some time to map a room.

The motor mounts used for Phoebe’s first draft chassis were repurposed from a self-balancing robot toy, which had a similar motor coupled with a different-ratio gearbox. That motor was not suitable for ROS work because it has no encoder, but perhaps I could swap its gearbox onto the motor that does have an encoder.

Identical output shaft and mount

Here’s the “before” picture: self-balancing robot motor + gearbox on the left, former SGVHAK rover steering encoder + motor + gearbox on the right. The reason I was able to reuse the self-balancing robot’s motor mount and wheel is that both gearboxes have the same output shaft diameter and mount points. While the output shafts are identical, the steering gearbox is noticeably longer than the balancing robot gearbox.

Both Are 370 Motors

Both of these motors conform to a generic commodity form factor called “370”. A search for “370 motor” on Alibaba will find many companies making different motors with different options of voltage, speed, etc. All are physically similar in size. Maybe even identical? As for what “370” means… my best guess is that it originally referred to the overall length of the motor body, roughly 37.0 millimeters. It doesn’t specifically constrain the remaining motor dimensions, but “370 motor” has probably become a de-facto standard. (Like 608 bearings.)

After a few screws were removed, both gearboxes were easily disassembled. We can see a few neat things: the plate mounting the gearbox to the motor had multiple holes to accommodate three different patterns. Either these gearboxes were designed to fit on multiple different motors, or some 370 motors are made with different bolt patterns than others.

Gearboxes Removed

Fortunately, both motors (one with encoder, one without) seem to have the same bolt pattern. And more importantly – the gear mounted on the motor output shaft seems to be identical as well! I don’t have to pull the gear off one shaft and mount it on another, which is great because that process tends to leave the gear weaker. With identical gears already mounted on the motor output shaft, I can literally bolt on the other gearbox and complete the swap.

After - Gearboxes swapped

Voila! The motor with encoder now has a different gear ratio that should allow Phoebe to run a lot faster. The slow gearbox was advertised as a 227:1 ratio. I don’t have a specification sheet for the fast gearbox, but turning one shaft and counting revolutions of the other indicates roughly a 20:1 ratio. So theoretically Phoebe’s top speed has increased ten-fold. Would that be too fast and cause Phoebe to run out of control? Would it be unable to slow to a sufficiently low crawl speed to cautiously explore new worlds? We won’t know until we try!

(Cross-posted to Hackaday.io)