Speedy Phoebe: Swapping Gearbox For 370 Motors

The first rough draft chassis for Phoebe worked well enough for me to understand some pretty serious problems with that design. Now that I have a better idea what I’m doing, it’s time to start working on a new chassis incorporating lessons learned. And since all the components will be taken apart anyway, it would be a good time to address another problem: Phoebe’s speed. More precisely, the lack thereof.

Phoebe’s motor + encoder + gearbox units were borrowed from the retired parts bin for SGVHAK Rover. Since they were originally purchased to handle steering, the priority was precision and torque rather than speed. They worked well enough for Phoebe to move around, but their slow speed meant it took quite some time to map a room.

The motor mounts used for Phoebe’s first draft chassis were repurposed from a self-balancing robot toy, which had a similar motor coupled with a different-ratio gearbox. That motor was not suitable for ROS work because it had no encoder, but perhaps I could swap its gearbox onto the motor that does have an encoder.

Identical output shaft and mount

Here’s the “Before” picture: self-balancing robot motor + gearbox on the left, former SGVHAK rover steering encoder + motor + gearbox on the right. I was able to reuse the self-balancing robot’s motor mount and wheel because both gearboxes have the same output shaft diameter and mounting points. But while the output shafts are identical in diameter, the steering gearbox is noticeably longer than the balancing robot gearbox.

Both Are 370 Motors

Both of these motors conform to a generic commodity form factor called “370”. A search for “370 motor” on Alibaba will find many companies making motors with different options for voltage, speed, etc. All are physically similar in size. Maybe even identical? As for what “370” means… my best guess is that it originally referred to the overall length of the motor body, roughly 37.0 millimeters. The designation doesn’t say anything about the remaining motor dimensions, but “370 motor” has become a de-facto standard anyway. (Like 608 bearings.)

After a few screws were removed, both gearboxes were easily disassembled. There are a few neat things to see: the plate mounting the gearbox to the motor has multiple holes to accommodate three different bolt patterns. Either these gearboxes were designed to fit several different motors, or some 370 motors are made with different bolt patterns than others.

Gearboxes Removed

Fortunately, both motors (one with encoder, one without) seem to have the same bolt pattern. And more importantly – the gear mounted on the motor output shaft seems to be identical as well! I don’t have to pull the gear off one shaft and mount it on another, which is great because that process tends to leave the gear weaker. With identical gears already mounted on the motor output shaft, I can literally bolt on the other gearbox and complete the swap.

After - Gearboxes swapped

Voila! The motor with encoder now has a different gear ratio that should allow Phoebe to run a lot faster. The slow gearbox was advertised as a 227:1 ratio. I don’t have a specification sheet for the fast gearbox, but turning one shaft and counting revolutions of the other indicates roughly a 20:1 ratio. So theoretically Phoebe’s top speed has increased more than ten-fold. Would that be too fast and cause Phoebe to run out of control? Would it be unable to slow to a sufficiently low crawl for Phoebe to cautiously explore new worlds? We won’t know until we try!
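
For the curious, here’s the back-of-envelope math behind that ten-fold claim (the 20:1 figure is just my rough hand-counted estimate):

```python
# Rough estimate of speed increase from the gearbox swap. The motor spins at
# the same RPM either way, so wheel speed scales inversely with gear ratio.
old_ratio = 227.0   # advertised ratio of the slow steering gearbox
new_ratio = 20.0    # hand-estimated ratio of the balancing robot gearbox

print("Theoretical top speed increase: {:.1f}x".format(old_ratio / new_ratio))
# prints roughly 11.3x
```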

(Cross-posted to Hackaday.io)

Phoebe’s Nemesis: Office Chair

A little real-world experience with Phoebe has revealed several problems with my first rough draft chassis design. The second problem on the list is Phoebe’s LIDAR height: it sits too high to detect certain obstacles, like an office chair. In the picture below, Phoebe has entered a perilous situation without realizing it. Because of the LIDAR’s height, Phoebe only sees the chair’s center post and thinks it is a safe distance away, blissfully ignorant of the chair legs that have completely blocked its path.

Phoebe Faces Office Chair

Here is the RViz plot of the situation, representing Phoebe’s perspective. A red arrow indicates Phoebe’s position and direction. Light gray represents cells in the map occupancy grid thought to be open space, and black cells are occupied by an obstacle. The office chair’s center post shows up as two black squares at the tip of the red arrow, and the chair’s legs are absent because Phoebe never saw them with its LIDAR.

Phoebe and Office Chair Post

This presents an obvious problem for collision avoidance: Phoebe can’t avoid chair legs it can’t see. The mounting height for Phoebe’s LIDAR has to be lowered in order to detect office chair legs.

Now that I’ve seen this problem firsthand, I realize it would also be an issue for the TurtleBot 3 Burger. It has a compact footprint with its parts stacked upwards, so it can’t see office chair legs either. That’s OK as long as the robot is constrained to environments where walls are vertical and tall, like the maze in the TurtleBot 3 navigation demo. Phoebe would work well in such constrained environments too, but I’m not interested in constrained environments. I want Phoebe to roam my house.

Which leads us to the Waffle Pi, the other TurtleBot 3 model. It has a larger footprint than the Burger, but its squat shape allows the LIDAR to be mounted lower while still having a clear view all around the top of the robot.

So I need to raise the bottom of Phoebe for ground clearance, and also lower the top for LIDAR mount. If the LIDAR can be low enough to look just over the top of the wheels, that should be good enough to see an office chair’s legs. Will I find a way to fit all of Phoebe’s components into this reduced height range? That’s the challenge at hand.

(Cross-posted to Hackaday.io)

Phoebe’s Nemesis: Floor Transitions

Right now my TurtleBot-based robot Phoebe is running around on a very rough first draft of a chassis design. It was put together literally in an afternoon in the interest of time: just throw the parts together and see if the idea even works. Well, it did! And I’m starting to find faults with this first draft chassis that I want to address in the next, much better thought-out version.

The first fault is the lack of ground clearance. When I switched my mentality from the rough-terrain-capable Sawppy rover to a flat-ground TurtleBot like Phoebe, I didn’t think the latter would need much ground clearance at all. As a result, Phoebe’s battery pack hung between the driving wheels and caster, with only a few millimeters of clearance between the bottom of the battery tray and the ground.

Phoebe Ground Clearance Flat

If I’m not climbing rocks, I asked myself, why would I need ground clearance?

Well, I’ve found my answer: my home has rooms with carpet, rooms with linoleum, and rooms with tile. The transitions between these surfaces are not completely flat. They’re trivial for a walking human being, but for poor little Phoebe they are huge obstacles. Driving across the doorway from carpet to linoleum would leave Phoebe stuck on its battery belly.

Phoebe Ground Clearance Threshold

“More ground clearance” is a goal for Phoebe’s next chassis.

(Cross-posted to Hackaday.io)

ROS Notes: Map Resolution

Now that I’m driving my TurtleBot-derived robot Phoebe around and mapping my house, I’m starting to get some first-hand experience with robotic mapping. One of the most fascinating bits of revelation concerns map resolution.

When a robot launches the Gmapping module, one of the parameters (delta) dictates the granularity of the occupancy grid in meters. For example, setting it to 0.05 (the value used in TurtleBot 3 mapping demo) means each square in the grid is 0.05 meters or 5 cm on each side.
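
For reference, here is a minimal sketch of setting that parameter from Python. The /slam_gmapping node name is my assumption based on the standard gmapping package; in practice this value normally lives in the mapping launch file.

```python
#!/usr/bin/env python
# Sketch: set gmapping's occupancy grid resolution ("delta") on the parameter
# server before the mapping node starts. The node name is an assumption.
import rospy

rospy.init_node('set_map_resolution', anonymous=True)
rospy.set_param('/slam_gmapping/delta', 0.05)  # grid cell size in meters (5 cm)
```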

This feels reasonable for a robot that roams around a household. Large objects like walls would be fine, and the smallest common household obstacles, like table legs, can reasonably fill a 5 cm x 5 cm cell on the occupancy grid. If the grid cells were any larger, the map would have trouble properly accounting for chair and table legs.

Low Resolution Sharp Map 5cm

So if we make the grid cells smaller, we would get better maps, right?

It’s actually not that simple.

The first issue stems from computational load. Increasing resolution drastically increases the amount of memory consumed to track the occupancy grid, and increases the computation required to keep grid cells updated. The increase in memory consumption is easy to calculate: halving the grid granularity from 5 cm to 2.5 cm turns each 5 cm square into four 2.5 cm squares, quadrupling the memory required for the occupancy grid. Tracking and maintaining this bigger map is also a lot more work. In my experience the mapping module has a much harder time matching LIDAR scan data to the map, causing occasional skips in data processing that end up reducing map quality.
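
Here’s a quick sketch of that scaling for a hypothetical 10 m x 10 m map, just to put numbers on it:

```python
# Occupancy grid cell counts for a hypothetical 10 m x 10 m map at several
# resolutions: halving the cell size quadruples the number of cells.
map_width_m, map_height_m = 10.0, 10.0

for delta in (0.05, 0.025, 0.01):  # 5 cm, 2.5 cm, and 1 cm cells
    cells = int(round(map_width_m / delta)) * int(round(map_height_m / delta))
    print("delta = {} m -> {:,} cells".format(delta, cells))

# delta = 0.05 m -> 40,000 cells
# delta = 0.025 m -> 160,000 cells
# delta = 0.01 m -> 1,000,000 cells
```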

The second issue stems from sensor precision. An inexpensive LIDAR like my unit salvaged from a Neato robot vacuum isn’t terribly precise, returning noisy distance readings that vary over time even when the robot and the obstacle are both standing still. When the noise exceeds the map granularity, the occupancy grid starts getting “fuzzy”. For example, a solid wall might no longer register as a single surface, but as several nearby surfaces.

High Resolution Fuzzy Map 1cm

As a result of those two factors, arbitrarily increasing occupancy map resolution can drastically increase the cost without a worthwhile improvement in return. This downside of going “too small” was less obvious to me than the downside of going “too large.” There’s a “just right” point in between that makes the best trade-offs. Finding the right map granularity to match robots and their tasks is going to be an interesting challenge.

(Cross-posted to Hackaday.io)

Phoebe The Cartographer

Once odometry calculation math in the Roboclaw ROS driver was fixed, I could drive Phoebe (my TurtleBot variant) around the house and watch laser and odometry data plotted in RViz. It is exciting to see the data stream starting to resemble that of a real functioning autonomous robot! And just like all real robots… the real world does not match the ideal world. Our specific problem of the day is odometry drift: Phoebe’s wheel encoders are not perfectly accurate. Whether from wheel slippage, small debris on the ground, or whatever else, they cause the reported distance to be slightly different from actual distance traveled. These small errors accumulate over time, so the position calculated from odometry becomes less and less accurate as Phoebe drives.

The solution to odometry drift is to supplement encoder data with other sensors, and use that additional information to correct for position drift. In the case of Phoebe and her TurtleBot 3 inspiration, that comes courtesy of the scanning LIDAR. If Phoebe tracks LIDAR readings over time and builds up a map, that information can also be used to locate Phoebe on the map. This class of algorithms is called SLAM, for Simultaneous Localization and Mapping. And because they’re fundamentally similar robots, it was straightforward to translate TurtleBot 3’s SLAM demo to my Phoebe.

There are several different SLAM implementations available as ROS modules. I’ll start with Gmapping because that’s what the TurtleBot 3 demo used. As input, this module needs LIDAR data in the form of the ROS topic /scan, plus the transform tree published via /tf, where it finds the geometric relationship between odometry (which I just fixed), base, and laser. As output, it generates an “occupancy grid”, a big table representing the robot’s environment in terms of open space, obstacle, or unknown. And most importantly for our purposes: it generates a transform from the map coordinate frame to the odom coordinate frame. This coordinate transform, generated by comparing LIDAR data against the map, is the correction factor to be applied on top of the odometry-calculated position.
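
To make that last point concrete, here is a minimal sketch (not part of gmapping or the driver) of watching the map -> odom correction with the tf listener API; frame names follow the usual ROS conventions.

```python
#!/usr/bin/env python
# Sketch: watch the map -> odom correction transform that gmapping publishes.
# Purely illustrative; frame names follow common ROS convention.
import rospy
import tf

rospy.init_node('watch_map_correction')
listener = tf.TransformListener()
rate = rospy.Rate(1)

while not rospy.is_shutdown():
    try:
        # The translation/rotation gmapping layers on top of odometry's estimate.
        trans, rot = listener.lookupTransform('map', 'odom', rospy.Time(0))
        rospy.loginfo("map->odom correction: %s %s", trans, rot)
    except (tf.LookupException, tf.ConnectivityException,
            tf.ExtrapolationException):
        pass  # transform not available yet
    rate.sleep()
```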

Once all the pieces are in place, Phoebe can start mapping out its environment and also correct for small errors in odometry position as it drifts.

SLAM achievement: Unlocked!

Phoebe GMapping

(Cross-posted to Hackaday.io)

Roboclaw ROS Driver: Odometry Calculation Reversal

I’m making good progress on integrating ROS into my TurtleBot variant called Phoebe. I’ve just configured RViz to plot Phoebe’s location as reported by the Roboclaw ROS driver’s odometry topic /odom. On the same plot, I’ve also placed data reported by Phoebe’s scanning LIDAR. Everything looked good when I commanded Phoebe to drive forward, but the geometry went askew as soon as Phoebe started turning. The LIDAR dots rotated one way, and the odometry arrow rotated another. Something is wrong in the geometry calculation code.

For this particular problem I knew exactly where to start looking. When I assembled Phoebe, I knew I needed to wire the motors the way the Roboclaw ROS driver expected. The README didn’t explicitly say which motor went on which side, so I went into the source code and got conflicting answers. In the motor driving routine (cmd_vel_callback), right wheel motion is calculated and sent to motor 1, and left wheel motion goes to motor 2.

However, this doesn’t match the encoder calculation code. The method EncodeOdom::update_publish expects its parameters to be (enc_left, enc_right). In method Node::run, the encoder value from motor 1 is saved in variable enc1, motor 2’s in enc2, and then update_publish(enc1, enc2) is called. That treats the encoder value of motor 1 as enc_left instead of enc_right, the opposite of what we want.

So the fact that data on /odom was going the wrong way was not a surprise, but why didn’t the LIDAR transform suffer the same problem? This was traced to an earlier change submitted as a fix for the reversed transform problem: that commit added a negative sign in front of the calculated angle when broadcasting the odom -> base_link transform. But that only masked the underlying problem rather than fixing it. The transform might look right, but /odom data is still wrong, and variables like self.last_enc_right actually hold the left-side value.

The correct fix is to reverse the parameter order when calling EncodeOdom::update_publish, which correctly assigns encoder 1’s count to the right motor and encoder 2’s to the left. With the underlying problem fixed, the negative sign is no longer needed and can be deleted.
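
To see why swapped encoder arguments flip the robot’s reported rotation (and why negating the broadcast angle only hid it), here is a minimal differential-drive heading calculation with made-up calibration numbers, not the driver’s actual code:

```python
# Minimal differential-drive odometry sketch showing why swapping left/right
# encoder counts reverses the computed heading. Calibration values are made up.
TICKS_PER_METER = 4342.2   # hypothetical encoder ticks per meter of travel
BASE_WIDTH = 0.315         # hypothetical distance between wheels, in meters

def heading_change(delta_ticks_left, delta_ticks_right):
    """Change in robot heading (radians) over one odometry update."""
    dist_left = delta_ticks_left / TICKS_PER_METER
    dist_right = delta_ticks_right / TICKS_PER_METER
    return (dist_right - dist_left) / BASE_WIDTH

# Robot turning left: the right wheel travels farther than the left.
left, right = 100, 200
print(heading_change(left, right))  # positive angle: correctly reports a left turn
print(heading_change(right, left))  # negative angle: same motion with swapped
                                    # arguments reports a right turn instead
```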

With this fix, Phoebe’s laser plot and odometry plot in RViz appear to agree with each other and both correspond correctly to the direction Phoebe is turning. I’ve submitted this fix as a pull request on the Roboclaw ROS driver.

Driving Miss Phoebe (Not Self-Driving… Yet)

Now that the Roboclaw ROS node is configured for Phoebe TurtleBot’s physical layout, and the code is running reliably instead of crashing, I can drive it around as a remote control car. This isn’t the end goal, of course. If all I wanted was a remote control car there were far cheaper and easier ways to get here. Right now the goal is just to verify functionality of the drive system and that it is properly integrated into the ROS infrastructure.

In order to send driving commands to Phoebe, I need something that publishes to the command velocity topic /cmd_vel. On an autonomous robot, the navigation code would be in charge of sending these commands. But since it’s a pretty common task to move a robot around manually, there are standard ROS packages for driving a robot via human commands. The most basic is teleop_twist_keyboard, which allows driving by pressing keys on a keyboard. Alternatively, there is teleop_twist_joy for driving via a joystick.
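
Under the hood, those teleop nodes just publish geometry_msgs/Twist messages. Here is a minimal hand-rolled sketch of the same thing, with arbitrary speed values, in case a quick scripted test is ever handy:

```python
#!/usr/bin/env python
# Minimal sketch: publish a constant drive command to /cmd_vel, the same
# message type the teleop nodes send. Speed values here are arbitrary.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('phoebe_test_drive')
pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
rate = rospy.Rate(10)  # publish at 10 Hz

cmd = Twist()
cmd.linear.x = 0.1    # creep forward at 0.1 m/s
cmd.angular.z = 0.0   # no turning

while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()
```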

Those remote control (or “teleoperation” in ROS parlance) nodes worked right off the bat, which is great. But I quickly got bored with driving Phoebe around on the carpet in front of me. First I launched RViz to visualize scanning LIDAR data as I did before. This was enough for me to drive Phoebe beyond my line of sight, watching its surroundings in the form of laser scan dots. After I verified that still worked, I stepped up the difficulty: I wanted RViz to plot laser data on top of odometry data, in order to verify that the Roboclaw ROS node is generating odometry data correctly.

To do this, I needed to participate in the ROS coordinate transform stack, the infrastructure that tracks all the frames of reference relating robot components to each other in physical space. The Roboclaw ROS node publishes a transform relating the robot’s position reference frame base_link to the odometry origin reference frame odom. The LIDAR ROS node publishes its data relative to its own neato_laser reference frame. As the robot builder, it was my job to write a transform relating the neato_laser frame to Phoebe’s base_link frame. Fortunately, the ROS transform tutorials cover this exact task, and I quickly got my desired RViz plot.
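
A minimal version of that transform might look like the sketch below, assuming the LIDAR sits some distance straight above the center of base_link; the offsets are placeholders, not Phoebe’s measured values.

```python
#!/usr/bin/env python
# Sketch: broadcast a fixed transform from base_link to the neato_laser frame.
# Offsets are placeholders; real values depend on where the LIDAR is mounted.
import rospy
import tf

rospy.init_node('phoebe_laser_tf')
broadcaster = tf.TransformBroadcaster()
rate = rospy.Rate(10)

while not rospy.is_shutdown():
    broadcaster.sendTransform(
        (0.0, 0.0, 0.12),        # x, y, z offset in meters (placeholder)
        (0.0, 0.0, 0.0, 1.0),    # identity quaternion: no rotation
        rospy.Time.now(),
        'neato_laser',           # child frame
        'base_link')             # parent frame
    rate.sleep()
```

The tf package’s static_transform_publisher can do the same job from a launch file without writing any code.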

RViz with LaserScan and Odometry

It looks like the LIDAR scan plot from earlier, but now there’s an arrow indicating Phoebe’s position and direction. The bigger change, not visible in this screenshot, is that RViz is now plotting in the odometry frame. This means we no longer watch strictly from Phoebe’s viewpoint, where the robot stays at the center of the screen. Instead the plot stays fixed in the odometry frame, and Phoebe moves relative to the map.

I drove Phoebe forward, and was happy to see the laser scan stayed still relative to the map and the red arrow representing Phoebe moved forward. But when I started turning Phoebe, the red arrow turned one way and the LIDAR plot moved the opposite way.

Something is wrong in the coordinate transform data, time for some debugging…

Roboclaw ROS Driver: Add Thread Synchronization

I now turn my attention to finding the root cause of random sporadic failures when the Roboclaw ROS driver makes calls into the Roboclaw API. The most common symptom of these failures is the TypeError I addressed earlier, but avoiding the crash from TypeError is only a band-aid. There were still other issues. Sometimes Phoebe’s movement stutters. And even more mysteriously, sometimes when Phoebe was supposed to be standing still, it would twitch for a fraction of a second.

Looking into the ROS logs, the node never intended to send a movement command. But it does send a continuous stream of “run at speed X” commands at a rate of ten per second, whether it is trying to move the robot or not. When staying still, that stream continues at the same rate, constantly sending “run at speed zero”. The fact that my robot twitches when it’s supposed to be standing still tells me the “run at speed zero” command is occasionally corrupted into a “run at speed X” command, which starts the motor moving for a tenth of a second until it is stopped by the next uncorrupted “run at speed zero” command.

Any time there is a random sporadic failure, my instinct is to look for places where threading collisions may be taking place. Programming for multi-threaded environments can get tricky. When I wrote code for SGVHAK Rover, the intent was to make that code easy to understand and pick up. In that spirit, I explicitly kept everything on a single thread to avoid multi-threading issues.

But now we’re playing in the major leagues and there’s no avoiding multiple threads. ROS itself runs across multiple processes and threads, so it can scale to robots that have multiple onboard computers, each with multiple processing cores. Fortunately, the ROS implementation takes care of almost everything; each ROS node just has to make sure it’s doing the right thing with its own private data.

There are at least three different ROS topics involved in this Roboclaw ROS node:

  1. Driving commands received via /cmd_vel
  2. Odometry computation broadcast to /odom
  3. Diagnostics information

Each ROS topic is processed in its own thread, so given three topics, we should expect at least three different threads that might all call into the Roboclaw API at the same time to perform their tasks. Armed with this knowledge, I looked for code managing cross-thread access. My plan was to review that code to see if I could find any obvious problems with it.

That plan was changed when I found no code managing cross-thread access. I guess its absence qualifies as an obvious problem and it would certainly explain the kind of behavior I saw.

Not being a Python expert, I cruised StackOverflow for a pattern I could use to implement Python thread synchronization. I decided it was most straightforward to use the with keyword described on one of the later replies on this “Semaphores on Python” thread. Using this pattern makes the code change delta very straightforward to read.

There were a few initial calls into the Roboclaw API to set things up; I left those alone. But as soon as the code starts kicking off work on other threads (specifically, when it starts the diagnostics thread), every following call into the Roboclaw API is synchronized through a threading.Lock() object. With this modification, we can guarantee that only one thread performs serial communication with the Roboclaw motor controller at any given time, avoiding data corruption from multiple threads trying to talk to the serial port at once.
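
The pattern itself looks something like this minimal sketch; the method names on the Roboclaw object are assumptions for illustration, not the driver’s actual code:

```python
# Sketch of the synchronization pattern: every call that touches the shared
# serial port goes through the same threading.Lock via a "with" block.
# The Roboclaw method names here are assumptions for illustration.
import threading

class SynchronizedRoboclaw(object):
    def __init__(self, roboclaw, address):
        self.roboclaw = roboclaw
        self.address = address
        self.lock = threading.Lock()   # guards all serial communication

    def drive(self, speed_m1, speed_m2):
        with self.lock:                # lock is released even if the call raises
            self.roboclaw.SpeedM1M2(self.address, speed_m1, speed_m2)

    def read_encoders(self):
        with self.lock:
            enc1 = self.roboclaw.ReadEncM1(self.address)
            enc2 = self.roboclaw.ReadEncM2(self.address)
        return enc1, enc2
```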

Phoebe ran smoothly and reliably after this work. No more stutter in motion, and no more twitching when standing still. I’ve submitted the fix as a pull request.

LIDAR Completes First Draft of Phoebe TurtleBot

With the motors connected to the Roboclaw, their directions and encoders in sync, and PID values tuned, Phoebe can be driven around via the ROS /cmd_vel topic and report its movement via /odom. However, Phoebe has no awareness of its surroundings, which is where the LIDAR module comes in.

Salvaged from a Neato robot vacuum (and bought off eBay), it is the final major component to be installed on Phoebe. Since this is a rough first draft, the most expedient way to install the device is to drill a few holes for M3 standoffs, and mount the module on top of them. This allows the module clear lines of sight all around the robot, while sitting level with the ground. It is also installed as close to the center of the robot as practical. I don’t know if a center location is critical, but intuitively it seems to be a good thing to have. We’ll know more once we start driving it around and see what it does.

By this point the rough-draft nature of the project is very visible. The LIDAR’s spin motor is the part that sticks out furthest below the module, and it happens to sit right on top of the Raspberry Pi’s Ethernet port, the tallest point on a Pi. Raising the LIDAR high enough to keep the two from colliding left a lot of empty space between the modules. That space is not wasted at the moment, because the wiring mess is getting out of control and can use all the room it can get.

The next version should lay things out differently to make everything neater. In the meantime, it’s time to see if we can drive this robot around and watch its LIDAR plot. And once that basic process has been debugged, that should be everything necessary to enable ROS projects to give Phoebe some level of autonomy.

Phoebe TurtleBot Stage 3 LIDAR

(Cross-posted to Hackaday.io)

Phoebe Receives Raspberry Pi Brain After PID Tuning

Once the motors’ spin directions were sorted out, I connected both encoders to verify their A/B signals are in sync with motor direction. Again this is checked by commanding motor movement via the Ion Studio software and watching the reported encoder value.

When wired correctly, encoder counter will increase when motor is commanded to spin in the positive direction, and decrease when motor spins negative. If hooked up wrong, the encoder value will decrease when the motor spins positive, and vice versa. The fix is simple: power down the system, and swap the A/B quadrature encoder signal wires.

Once the motor direction is verified correct, and encoder wires verified to match motor direction, we can proceed to the final phase of Roboclaw setup: determine PID coefficients for motor control.

PID tuning is something of a black art. Fortunately, while a perfect tune is very difficult to obtain, it’s not that hard to get to “good enough.” Furthermore, Ion Studio features an “Auto Tune” option to automatically find functional PID coefficients. During SGVHAK Rover construction we had no luck getting it to work and resorted to tuning PID coefficients manually. Fortunately, this time around Ion Studio’s automatic PID tuning works. I’m not sure what changed, but I’m not going to complain.

Once PID coefficients have been written to Roboclaw NVRAM, we no longer need to use the Windows-based Ion Studio software. From here on out, we can use a Raspberry Pi to control our motors. The Pi 3 was mounted so its microSD card remains accessible, as well as its HDMI port and USB ports. This meant trading off access to GPIO pins but we’re not planning to use them just yet so that’s OK.

Software-wise, the Raspberry Pi 3’s microSD card has a full desktop installation of ROS Kinetic on top of Ubuntu MATE 16.04 compiled for Raspberry Pi. In addition to all the Robotis software for TurtleBot 3, it also has a clone of the Roboclaw ROS driver node, as well as a clone of the Neato LIDAR control node.

The wiring is not very neat or pretty but, again, this is just a rough first draft.

Phoebe TurtleBot Stage 2 Encoder Pi

(Cross-posted to Hackaday.io)

Establish Motor Directions For Phoebe TurtleBot

The first revision of Phoebe’s body frame has mounting points for the two drive wheels and the caster wheel. There are two larger holes to accommodate the drive motor wiring bundles, and four smaller holes to mount a battery tray beneath the frame. Since this is the first rough draft, I didn’t bother spending too much time overthinking further details. We’ll wing it and take notes along the way for the next revision.

Phoebe Frame First Draft

After the wheels were installed, there was much happiness because the top surface of the frame sat level with the ground, indicating the height compensation (for height difference between motorized wheels and caster in the back) was correct or at least close enough.

Next, two holes were drilled to mechanically mount the Roboclaw motor control module. Once it was secured, a small battery and both motors’ drive power wires were connected. The encoder data wires were not connected, just taped out of the way, as they were not yet needed for the first test: establishing the direction of motor rotation.

Phoebe TurtleBot Stage 1 PWM

The Roboclaw ROS node expects the robot’s right side motor to be connected as Motor #1, and the left as Motor #2. It also expects positive direction on both motors to correspond to forward motion.

I verified robot wiring using Ion Studio, the Windows-based utility published by the makers of Roboclaw. I used Ion Studio to command the motors via USB cable to verify the right motor rotates clockwise for positive motion, and the left motor counter-clockwise for positive motion. I got it right on the first try purely by accident, but it wouldn’t have been a big deal if one or both motors spun the wrong way. All I would have had to do is to swap the motor drive power wires to reverse their polarity.

(Cross-posted to Hackaday.io)

Test Frame To Help Arrange Phoebe’s Wheels

Since Phoebe will be a TurtleBot variant built out of stuff already in my parts bin, these parts won’t necessarily fit together well. The first thing to do is figure out how to make the wheels work together. A simple test frame will mount Phoebe’s two drive wheels and let me see how they cooperate. Besides, building a two-wheel test chassis is how I’ve started many robot projects, and that has worked out well so far. So let’s make another one, following in the grand tradition of the two-wheel test chassis built to test parts going into SGVHAK Rover and Sawppy Rover.

Phoebe TurtleBot Two Wheel Test Frame

For Phoebe, this simple test chassis established the following:

  • I used a caliper to measure wheel mounting bracket dimensions, and they are accurate enough to proceed. They are spaced the correct distance apart, and their diameter is large enough for M4 bolts to slide through without being so large that the resulting wheel wobbles.
  • The 5mm thick slender connecting members are too weak. The next frame will have greater thickness and generally beefier structure.
  • I wanted a 20cm track. (Left-right distance between wheel centers.) I measured the dimensions for my wheel assembly but the measurements were a little off. Now I know how much to adjust for the next frame.
  • And most importantly: this frame allowed direct comparison of drive wheel resting height against caster wheel height. They were both awkward shapes to measure with a ruler so having the flat surface of a frame makes the measurement easier. Their relative height difference needs to be accounted for in the next frame in order to have a robot body that is level with the ground.

(Cross-posted to Hackaday.io)

Cost to Build Phoebe From Scratch

I chose components for Phoebe (“PB” or Parts Bin) TurtleBot because they were already available to me in one context or another. But not everyone has the same stuff in their own parts hoard. As an exercise in completeness, below is an estimate of what it would cost to build Phoebe if every part had to be purchased. Naturally, any parts already on hand can be subtracted from the cost. (The expected audience here likely has at least a Raspberry Pi 3 and a battery to spare.)

Components for parts bin turtlebot phoebe

  • Onboard computer: A Raspberry Pi 3 with microSD card, case, and DC power supply will add up to roughly $50.
  • Laser scanner: LIDAR salvaged off Neato robot vacuum cleaners are on eBay with “Buy It Now” prices of $50-$75 at time of this writing. People living in a major metropolis like Los Angeles can find entire Neato vacuums on Craigslist in a similar price range.
  • Motor controller: A Roboclaw with 7A capacity can be purchased directly from the manufacturer for $70. It is overkill for this project, but it was their entry-level product and it was already on hand. Lower-cost alternatives probably exist.
  • Gearmotor + Encoder + Wheel: Buying the motors I’m using from Pololu would be $35 each without bracket or wheel. However, similar units including mounting bracket and wheel are available on Amazon for $20 each.
  • Caster wheel: A caster wheel can be salvaged off a piece of broken furniture for free. If you have to buy a caster, don’t pay more than $3.
  • Battery: The battery pack I’m using is available for $25, but it’s far more battery than necessary for this project. A far smaller pack for $10-15 would be sufficient.

Sum total: $238, which still does not include the following:

  • 3D printer filament.
  • Electrical connectors and wiring.
  • Bolts, nuts, and other assembly hardware.

But given room for improvement (cheaper motor controller and battery) a whole package to build Phoebe from scratch should be possible for under $250, less than half of a TurtleBot 3 Burger.

(Cross-posted to Hackaday.io)

New Project: Phoebe TurtleBot

While my understanding of the open source Robot Operating System is still far from complete, I feel like I’m far enough along to build a robot to run ROS. Doing so would help cement the concepts covered so far and serve as an anchor to help explore new areas in the future.

Sawppy Rover is standing by, and my long-term plan has always been to make Sawppy smarter with ROS. Sawppy’s six-wheeled rocker-bogie suspension system is great for driving over rough terrain but fully exploiting that capability autonomously requires ROS software complexity far beyond what I could handle right now. Sawppy is still the goal for my ROS journey, but I need something simpler as an intermediate step.

A TurtleBot is the official ROS entry-level robot. It is far simpler than Sawppy with just two driven wheels and restricted to traveling over flat floors. I’ve been playing with a simulated TurtleBot 3 and it has been extremely helpful for learning ROS. Robotis will happily sell a complete TurtleBot 3 Burger for $550. This represents a discounted bundle price less than the sum of MSRP for all of its individual components, but $550 is still a nontrivial chunk of change. Looking over its capabilities, though, I’m optimistic I could implement critical TurtleBot functionality using parts already on hand.

Components for parts bin turtlebot phoebe

  • Onboard computer: This one is easy. I have several Raspberry Pi 3 sitting around I could draft into the task, with all necessary accessories like a microSD card, a case, and power supply.
  • Laser scanner: Instead of the Robotis LDS-01, I’ll use a scanning LIDAR salvaged from a Neato robot vacuum that I bought off eBay for experimentation. Somebody has already written a driver package to report its data as a ROS /scan topic and I’ve already verified it sends data that the ROS visualization tool RViz can understand.
  • Motor controller: Robotis OpenCR is a very capable board with lots of expansion capabilities, but I have a RoboClaw left from SGVHAK Rover (JPL Open Source Rover) project so I’ll use that instead. Somebody has already written a driver package to accept commands via ROS topic /cmd_vel and report motion via /odom, though I haven’t tried it yet.
  • Gearmotor + Encoder: TurtleBot 3 Burger’s wheels are driven by Robotis Dynamixel XL430-W250-T serial bus servos via their OpenCR board. Encoders are critical for executing motor commands sent to /cmd_vel correctly and for reporting odometry information via /odom. Fortunately I have some Pololu gearmotors with encoders available. These were formerly SGVHAK Rover’s steering motors, but after one of them broke they were all replaced with beefier motors. I’m going to borrow two of the remaining functional units for this project.
  • Wheel: Robotis has a wheel designed to bolt onto their servos, and they have mounting hardware for their servos. I’m going to borrow the wheel and mounting hardware from a self-balancing robot toy gathering dust on a shelf. (Similar but not identical to this one.) That toy didn’t have encoders on its motors, but they have the same mounting points and output shaft as the Pololu gearmotor so it was an easy swap. The third wheel, a free wheeling caster, was salvaged from a retired office chair.
  • Chassis hardware: Robotis has designed a modular system (like an Erector Set) for building robot chassis like their TurtleBot variants. As for me… I have a 3D printer and I’m not afraid to use it.
  • Battery: I’ll be borrowing Sawppy’s 5200 mAh battery pack.

This forms the roster for my TurtleBot variant, with an incremental component cost of $0 thanks to the parts bin. “Parts Bin TurtleBot” is a mouthful to say and not a very friendly name. I looked at its acronym “PB-TB” and remembered R2-D2’s nickname “Artoo”. So I’m going to turn “PB” into Phoebe.

I hereby announce the birth of my TurtleBot variant, Phoebe!

(Cross-posted to Hackaday.io)

Dell Inspiron 11 3000 (3180) As Robot Brain Candidate

Well, I should have seen this coming. Right after I wrote that I wanted to be disciplined about buying hardware, that I wanted to wait until I knew what I actually needed, along comes a temptation to change plans. Now I have a Dell Inspiron 11 3000 (3180) on its way, even though I don’t yet know if it’ll be a good ROS brain for Sawppy the Rover.

Dell Notebook Inspiron 11 3000 3180

The temptation was Dell’s Labor Day sale. This machine in its bare-bones configuration has a MSRP of $200 and can frequently be found on sale for $170-$180. To kick off their sale event, Dell made a number of them available for $130 and that was too much to resist.

This particular hardware chassis is also sold as a Dell Chromebook, so the hardware specs are roughly in line with the Chromebook comments in my previous post. We’ll start with the least exciting item: the heart is a low-end dual-core x86 CPU, an AMD E2-9000e that’s basically the bottom of the totem pole for Intel-compatible processors. But it is a relatively modern 64-bit chip enabling options (like WSL) not available on the 32-bit-only CPUs inside my Acer Aspire or Latitude X1.

The CPU is paired with 4GB of RAM, far more than the 1GB of a Raspberry Pi and hopefully a comfortable amount for sensor data processing. Main storage is listed as 32GB of eMMC flash memory, which is better than a Pi’s microSD card, if only by a little. The more promising aspect of this chassis is that it is also sometimes sold with a cheap spinning-platter hard drive, so the chassis can accommodate either type of storage, as confirmed by the service manual. If I’m lucky (again), I might be able to swap in a standard solid state drive and put Ubuntu on it.

It has most of the peripherals expected of a modern laptop. Screen, keyboard, trackpad, and a webcam that might be repurposed for machine vision. The accessory that’s the most interesting for Sawppy is a USB 3 port necessary for a potential depth camera. As a 11″ laptop, it should easily fit within Sawppy’s equipment bay with its lid closed. The most annoying hardware tradeoff for its small size? This machine does not have a hard-wired Ethernet port, something even a Raspberry Pi has. I hope its on-board wireless networking is solid!

And lastly – while this computer has Chromebook-level hardware, this particular unit is sold with Windows 10 Home. Having the 64-bit edition installed from the factory should in theory allow Windows Subsystem for Linux. This way I have a backup option to run ROS even if I can’t replace the eMMC storage module with a SSD. (And not bold enough to outright destroy the Windows 10 installation on eMMC.)

Looking at the components in this package, this is a great deal: 4GB of DDR4 laptop memory is around $40 all on its own. A standalone license of Windows 10 Home has MSRP of $100. That puts us past the $130 price tag even before considering the rest of the laptop. If worse comes to worst, I could transfer the RAM module out to my Inspiron 15 for a memory boost.

But it shouldn’t come to that. I’m confident that even if this machine proves insufficient as Sawppy’s ROS brain, the journey to that enlightenment will be instructive enough to be worth the cost.

Anticipating Limitations of a Raspberry Pi 3 Robot Brain

As my investigation into ROS continues, it’s raising the concern that a self-contained autonomous robot will likely need a brain more powerful than a Raspberry Pi. The Pi is a very capable little computing platform and worked well as Sawppy’s brain when the rover was operated as a remote-control vehicle. But when the rover needs to start thinking on its own, would a little single-board computer prove to be limiting?

CPU

The most obvious point of concern is the low-power ARM CPU. The raw processing capability of the chip is actually fairly respectable, and probably won’t be the biggest problem. But there are two downsides to the chip:

  1. It has a very small memory cache, so the chip will have problems working with large data sets. (Say, a detailed map of the robot’s surroundings.) Without a large cache or high bandwidth memory, the CPU will sit idle as it starves for data.
  2. It has limited heat dissipation. Under sustained load, the CPU will heat up and reach the point where it will have to slow itself down to avoid overheating.

Both of these are consistent with the design objectives of the chip. It is very good at quickly completing a task using its high-speed CPU, then waiting for its next task. This type of workload is common to devices like cell phones. In contrast, a robot has to process sensor inputs, evaluate its current condition, and decide what to do about it in a constantly running loop. There’s no “wait for next task” downtime where the computer can cool down and clean up its memory cache.

Memory

The main memory is also a point of concern. There’s only 1GB of RAM on board a Raspberry Pi 3 with no option for expansion. This is already pretty cramped to run a modern operating system, never mind the robotic software we’d like to run on top. To mitigate limitations of small RAM, modern operating systems can page memory out to storage. But that just makes the next problem worse…

Storage

A Raspberry Pi uses commodity microSD flash memory as main storage. These devices are designed for usage scenarios like holding photos in a digital camera. Each bit of capacity is only expected to be written to a handful of times in its lifetime. But when serving as main storage of a Raspberry Pi actively running complex applications (or serving as paged memory) high traffic sections of the microSD may receive new write data several times a second, leading to premature failure.

Raspberry Pi in the TurtleBot 3

A Raspberry Pi 3 serves as the on-board brain of the TurtleBot 3 ‘Burger’ and ‘Waffle Pi’ variants. I had been curious how they got around the problems above, and the answer is that they’ve divided the workload of a robot brain across multiple computers. The Raspberry Pi 3 reads sensor data and outputs motor commands, but performs little computation itself. Sensor data is sent over the network to a desktop computer, which does the computation, evaluation, and decision making. Once an action is decided, the desktop computer sends motor commands over the network back to the Raspberry Pi.

This is a cost-effective approach because anyone doing robotics research will already have a powerful desktop computer where development is taking place. By offloading computation to said computer and keeping the robot’s on-board processing simple, it makes the robot a lot cheaper.

This is fine for development, but the fact TurtleBot 3 makers chose this approach reinforces the suspicion that an actual self-contained autonomous robot will need something more than a Raspberry Pi.

DIY Evaporative (“Swamp”) Cooler Build – Results

When my orange Home Depot bucket DIY swamp cooler started running, the stream of cool air blasting on my face was a very refreshing change from the oppressive heat of this southern California summer. However, this crude home-built unit does have some room for improvement, and I brought it into a local maker meet to seek ideas. Here it is at the meet sitting next to its inspiration, an earlier unit built out of a TrueValue white bucket.

Dual bucket swamp coolers

First problem: the cooling pad imparts a scent on the air stream. It’s not a bad smell – I describe it as bales of hay – but it gets pretty strong as the cooler runs. There’s a chance this will fade with use, but right now it definitely gets strong enough to be a turnoff. Especially when I am smelling it all the time.

Not helping with the above problem: water cycling through the unit sometimes drips out of holes. On solid surfaces, this collects as a puddle and has to be wiped up. But on carpeted surfaces if unchecked this water will soak into the carpet and underlying padding, making the room smell like bales of hay even in the absence of the cooler.

Also not helping: when transporting the cooler (like, say, to the maker meet) water may slosh out of the side. If this water soaks into the car’s upholstery, the car will start to smell like hay, too.

An idea to address dripping is to put a fine mesh between the bucket and the pad, such as the one installed on the white unit. This reduces but does not completely eliminate the water issue. A second potential line of defense is to build some sort of small catch gutter around bucket perimeter to catch and drain water back into the bucket. A small catch rim would be enough to handle small drips.

Lastly, the rate of evaporation isn’t high enough to require a constantly running pump. For better energy efficiency, the pump only needs to run intermittently, but that requires smarts and control equipment absent from this bargain-basement build.

This was an instructive (and cheap) first attempt at a home-built swamp cooler. There may be an improved iteration in the future, but for now it works well enough.

 

DIY Evaporative (“Swamp”) Cooler Build

And once again, I’m reminded that summers in southern California can get really, really hot. Sitting in the heat trying to comprehend details of ROS gets very tiresome very quickly. Normal house fans can’t cut it anymore. Air conditioning is an obvious solution, but it racks up my electric bill. But in the relatively dry climate of California, there is another option: the evaporative cooler (“swamp cooler”).

I could buy one, but a local maker tried the DIY option and that looked like it might be interesting. The key find enabling such an experiment is a cooler pad designed for the purpose and available for just $5.

My first attempt was to put that pad inside an old 10-gallon pail already in the house and just collecting dust. Unfortunately, the plastic for the pail proved too brittle to withstand drilling. Whether it is from old age or if the compound is fundamentally brittle I can’t tell, and the actual reason doesn’t matter anyway.

Brittle pail

Plan B: Use a cheap Home Depot 5-gallon bucket. Using rulers to measure out spacing, ventilation holes were drilled in the bucket. It was tedious but surprisingly took less time than I had feared.

Holey Bucket

Once the holes were drilled, the pad was placed inside the bucket along with some water. A cheap pump designed for decorative water fountains was placed inside as well, with some hoses to circulate water and keep the cooling pad damp.

For air circulation: I first tried to adapt my house fan but it proved too unwieldy. Moving on to plan B, I used a computer cooling fan salvaged from a dead ATX power supply. It bolted directly to the bucket lid and, with aid of a simple 3D-printed adapter to a flexible dryer duct, its output air can be directed to where it does the most good. Here’s my cooler, next to its inspiration.

The output air stream feels a lot cooler than ambient air, so the project is an overall success. But there is room for improvement…

HTML with Bootstrap Control Interface for ROSBot

While learning ROS, I was confident that it would be possible to replicate the kind of functionality I had built for SGVHAK rover: that is, putting up an HTML-based user interface for the user and driving the robot hardware based on user input. In theory, the modular nature of ROS and its software support should mean it takes less time to build one. Or at least, it should for someone who has already invested in climbing the learning curve of ROS infrastructure.

At the time I didn’t know how long it would take to ramp up on ROS. I’m also a believer that it is educational to do something the hard way once to learn the ropes. So SGVHAK Rover received a pretty simple robot control interface built with minimal use of frameworks. Now that I’m ramping up on ROS, I’m debating whether it’s worthwhile to duplicate that functionality for self-education’s sake, or whether I want to go straight to something more functional than a remote control car.

This week I got confirmation that a ROS web interface is pretty simple to do: a recent post on Medium described one way of creating a web-based interface for a ROS robot. The web UI framework used in the tutorial is Bootstrap, and the sample robot is ROSBot. The choice of robot is no surprise, since the Medium post was written by the CEO of Husarion, maker of the robot. At an MSRP of $1,299 it is quite a bit out of my budget for my own ROS experimentation, at least for now. Still, the information on Medium may be useful if I tackle this project myself for a different robot, possibly SGVHAK rover or Sawppy.


JPL Open Source Rover is Officially Official

OSR Rocks

Back in January of this year I joined a team of pre-release beta testers for a project out of the nearby Jet Propulsion Laboratory (JPL). While not exactly a state secret, we were asked not to overtly broadcast or advertise the project until after JPL’s own publicity office started doing so. That publicity release happened two days ago, so the JPL Open Source Rover is now officially public.

Our team members were drawn from SGVHAK, so we’ve been calling our rover the SGVHAK rover instead of the JPL Open Source Rover. Past blog entries about SGVHAK’s customizations were described in contrast to a vague, undefined “baseline rover.” I’ve gone back and edited those references (well, at least the ones I could find) to point to JPL’s rover web site, which had gone live a few weeks ago but remained a “soft opening” until JPL’s publicity office made everything officially public.

After the SGVHAK team completed the rover beta build in March, I went off on my own to build Sawppy the Rover as a much more affordable alternative rover model. To hit that $500 price point, I had described Sawppy’s changes and trade-offs against the SGVHAK rover, when they were really against JPL’s open source rover. I’ve fixed up these old blog posts minimally: the references are now correct, though some of the sentence structures got a little awkward.

As part of JPL’s open source rover project, they had established a public web forum where people can post information about their builds. To share our story I’ve gone ahead and created a forum thread for SGVHAK rover, and a separate one for Sawppy.

I look forward to seeing what other people will build.