Hackaday Badge Power Source

When I got the Hackaday Belgrade 2018 badge playing music, I noticed the LCD screen brightness would visibly pulse whenever a note played. I thought it might be an intentional visual effect to go with the beat of the music, but I found no code doing anything of the sort. The next most obvious explanation, then, is a dip in screen supply voltage when the speaker amplifier draws power.

Hackaday Badge Power

If that is the case, the problem should be related to voltage regulation on the badge. Can we improve on this situation? I looked at the schematic for the voltage regulator and… hmm… there doesn’t seem to be one.

It looks like the badge runs directly on the pair of AA batteries: the positive terminal is the voltage supply rail, and the negative terminal is the ground plane. So nothing keeps the supply voltage constant when the battery level dips, and users see that dip as a change in LCD screen backlight brightness.

The lack of voltage regulation also means the most obvious power upgrade carries some risk. Last year’s Hackaday camera badge saw several owners upgrade from its pair of AA batteries to a single lithium-ion cell. We were cautioned against doing it, but some people went ahead anyway and seemed to succeed.

With the microcontroller knowledge I picked up over the past year, I now understand the warning: the PIC32 chips at the heart of both badges are 3.3V parts, and according to their datasheet they are only officially rated for operation up to 3.8V. A lithium-ion cell’s nominal voltage is 3.7V, which would be fine. But (and this is a BIG BUT) a fully charged lithium-ion cell delivers 4.2V directly, well into “At Your Own Risk” territory.

So yeah – last year some people connected a lithium cell and that seemed OK, but it’s going beyond spec. I expect that some people will again perform the same upgrade to their badge this year. Personally? I’m not going to do it.

If I need to power my badge with anything other than AA batteries, I’ll remove the AA pair and power the badge through its expansion header. The +V pin connects directly to the supply rail, and GND connects directly to the ground plane. Putting a 3.0-3.3V regulated voltage on those pins should power the badge nicely.

Note that the AA pair still needs to be reinstalled for a firmware upgrade whenever the external supply is disconnected, because the programmer (PICkit 3 or similar) cannot supply enough power to run the badge.

Hackaday Badge LCD Screen

The main user interface for the Hackaday Belgrade 2018 badge is the LCD screen up front and center. Looking at the badge’s main menu, we can tell it displays text characters: 40 columns wide and 20 rows high, according to DISP_BUFFER_WIDE and DISP_BUFFER_HIGH in hw.h. That is just a little short of an Apple II’s 40 columns by 24 rows.

Hackaday Badge Main Menu Straight

Based on the badge startup animation and the user program demo art, this screen is not strictly limited to character display. However, at first glance it’s hard to tell whether what we saw was creative text art or whether we can do general-purpose graphics on this screen. Where can we get this information? The datasheet for the screen, of course. The badge schematic gives us a model number to use in a web search.

Hackaday Badge LCD

And it was a very easy search! The display unit is from a company whose product model numbers encode the unit’s capabilities. It starts with NHD for the company name, Newhaven Display Inc., followed by 2.4 indicating the screen’s 2.4″ diagonal, then 240320 meaning a resolution of 240 by 320 pixels, and so on.

One unexpected attribute of this LCD module is that it has an integrated controller chip. The display module datasheet has all the relevant electrical details, but for the specifics of data flow and command set, it asks the user to go look in the datasheet for the Sitronix ST7789V controller.

Newhaven Display’s web site has an “Application Notes” section for their products. Clicking on the link for the 2.4″ display with ST7789 controller points to this fragment of C code, which looks a lot like some of the badge display interface code in disp.c.
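As a concrete illustration (this is not the badge’s actual routine, and the write_command()/write_data() helpers below are hypothetical stand-ins for the transfer functions in disp.c), the usual ST7789-style sequence for drawing is to select a rectangular window with the CASET and RASET commands, then stream pixel data with RAMWR:

#include <stdint.h>

static void write_command(uint8_t c) { /* send a command byte to the controller (stand-in) */ }
static void write_data(uint8_t d)    { /* send a data byte to the controller (stand-in) */ }

/* Select a rectangular window; pixel data written afterwards fills it. */
static void set_window(uint16_t x0, uint16_t y0, uint16_t x1, uint16_t y1)
{
    write_command(0x2A);                  /* CASET: column address range */
    write_data(x0 >> 8); write_data(x0 & 0xFF);
    write_data(x1 >> 8); write_data(x1 & 0xFF);

    write_command(0x2B);                  /* RASET: row address range */
    write_data(y0 >> 8); write_data(y0 & 0xFF);
    write_data(y1 >> 8); write_data(y1 & 0xFF);

    write_command(0x2C);                  /* RAMWR: subsequent data bytes fill the window */
}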

Also in disp.c is the text display code and a hard-coded basic font. So all the character display stuff is on the badge for us to hack. This is a very promising start to exploring the graphics capability of the badge. I’ll definitely return to dig deeper.

Hackaday Badge Music

Hackaday Badge Audio

After looking at the Hackaday Belgrade 2018 badge’s onboard RGB LED, I moved on to the audio subsystem to see how it accomplishes its three-voice audio functionality. My first guess was that music playback was handled by a peripheral of some sort. On the board schematic I saw that the speaker was connected to a chip labelled LM4890, so it seemed like an obvious candidate for an audio peripheral. However, after downloading and reading the LM4890 datasheet, I learned the chip is only an amplifier: it takes a low-powered audio waveform as input (via pins labeled +IN and -IN) and pushes that same waveform out at a speaker-appropriate power level. So yes, it is a dedicated audio peripheral, but not a tone or music generator.

So where’s the music coming from? On the schematic I see capacitors and resistors but nothing else that could generate sound waves, except perhaps whatever is connected to the PIC32’s pins D0 through D3. Perhaps the PIC32 has a built-in music peripheral?

Looking in the code, I started tracing from the BASIC side with the tune statement, handled by tune_statement in ubasic.c. It calls sound_play_notes in hw.c. A few more steps of straightforward call tracing ended at sound_set_generator, which flips some hardware control bits and puts the desired frequency in a hardware register. What are the results of these actions?

Searching on the specific keywords in sound_set_generator didn’t get me anywhere immediately. Reading the code more carefully led to a key insight: for sound generator 0, it deals with the number 2; for generator 1, the number 3; and for generator 2, the number 4. After running around in circles for a bit, I figured out these refer to PIC32 hardware timer peripherals. These bits control PIC hardware timers 2, 3, and 4, whose interrupts are handled by Timer2Handler, Timer3Handler, and Timer4Handler in hw.c. Every time a timer interrupt fires, the handler inverts a pin named GEN_0_PIN / GEN_1_PIN / GEN_2_PIN, defined as LATDbits.LATD1 / LATDbits.LATD2 / LATDbits.LATD3, which matches up with the PIC32 pins on the schematic.
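To illustrate the mechanism, here is a simplified sketch rather than the badge’s exact code; the peripheral clock value, interrupt priority, and function name are my assumptions. A PIC32 timer is set to interrupt at twice the desired note frequency, and the handler simply inverts the output pin:

#include <xc.h>
#include <sys/attribs.h>
#include <stdint.h>

#define GEN_0_PIN  LATDbits.LATD1        /* voice 0 output pin, per hw.h */
#define PBCLK_HZ   48000000UL            /* assumed peripheral bus clock */

/* Toggle the pin on every interrupt; two toggles make one square-wave cycle. */
void __ISR(_TIMER_2_VECTOR, IPL4SOFT) Timer2Handler(void)
{
    GEN_0_PIN = !GEN_0_PIN;
    IFS0bits.T2IF = 0;                   /* clear the Timer 2 interrupt flag */
}

/* Program Timer 2 so interrupts fire at twice the requested frequency.
   (Interrupt enable/priority setup omitted; low notes would need a timer
   prescaler, also omitted here for brevity.) */
void voice0_set_frequency(uint32_t freq_hz)
{
    T2CONbits.ON = 0;                    /* stop the timer while reconfiguring */
    PR2 = (uint16_t)(PBCLK_HZ / (2UL * freq_hz) - 1UL);  /* period register, 1:1 prescale */
    TMR2 = 0;
    T2CONbits.ON = 1;                    /* start Timer 2 */
}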

So it’s not a music peripheral like I originally guessed. They are three of the PIC32’s generic timer peripherals, each used to toggle a pin on and off at a set frequency. These three timers are responsible for the three voices, whose waveforms are merged and sent into an LM4890 chip (lower center of picture below) to drive the speaker (center of picture).

Hackaday Badge Audio

Hackaday Badge RGB LED

The canonical introductory activity in microcontroller programming is to blink an LED. The Hackaday Belgrade 2018 badge makes this easy because there’s already an LED on board. Actually three LEDs – a red, a green, and a blue inside a single integrated unit.

Badge RGB LED

To make this even easier to access, the LED can be commanded from the onboard BASIC interpreter, which has been enhanced with the badge-specific command led. This makes blinking the LED nearly trivial, and it’s a great way to get people started in a manner that is as non-intimidating as possible.

The custom led command in the BASIC interpreter is handled by the function led_statement inside ubasic.c, which calls the function set_led inside hw.c. Custom user programs written in C can call set_led as well, or copy code from set_led to manipulate the LED hardware directly. Either way, the result is setting the state of a few predefined PIC hardware pins. In hw.h, we see the following:

#define LED_R LATDbits.LATD6
#define LED_G LATFbits.LATF1
#define LED_B LATDbits.LATD7

Belgrade Badge LEDs

These definitions match what we see on the schematic: each of the three controller pins is wired in series with a current-limiting resistor and the corresponding LED.

So if someone wants to blink the LED on/off, they are all set. The infrastructure exists to do so from either BASIC or from C.
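For instance, a bare-bones blink from C could look something like the sketch below. The crude busy-wait is just for illustration (its duration depends on clock configuration), and the real badge code has its own timing utilities.

#include <xc.h>
#include <stdint.h>

#define LED_R LATDbits.LATD6              /* same definition as in hw.h */

static void crude_delay(volatile uint32_t count)
{
    while (count--) { }                   /* busy-wait; duration depends on CPU clock */
}

void blink_red(void)
{
    while (1) {
        LED_R = 1;                        /* red element on */
        crude_delay(500000);
        LED_R = 0;                        /* red element off */
        crude_delay(500000);
    }
}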

However, if they want to do something more sophisticated than just on or off, such as dimming, pulsing, or mixing the three LEDs to create custom colors, the existing infrastructure is not enough. Creating a light intensity level somewhere between full on and full off will require additional code. The PIC is perfectly capable of generating this pulse-width modulated (PWM) activity on an output pin; it would take just a bit of code and should be one of the easier custom coding projects to tackle.
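Here is a minimal sketch of one approach, software PWM, assuming a hook into an existing fast periodic timer interrupt (the hook itself is not shown): a counter advances on every interrupt, and the LED stays on only while the counter is below the requested duty cycle.

#include <xc.h>
#include <stdint.h>

#define LED_R LATDbits.LATD6              /* from hw.h */

static volatile uint8_t led_r_duty = 64;  /* 0 = off, 255 = fully on */

/* Call from an existing fast periodic timer interrupt (assumed hook).
   With a few hundred calls per PWM period, the flicker is invisible. */
void soft_pwm_tick(void)
{
    static uint8_t phase = 0;
    phase++;                              /* wraps around at 256 */
    LED_R = (phase < led_r_duty) ? 1 : 0;
}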

Hackaday Badge User Program Template

As part of the retro computing theme, the Hackaday Badge offers a BASIC interpreter and an emulated Z80 computer running CP/M. However, there’s also provision for people who want to get closer to the hardware. This takes the form of a “User Program” option on the main menu, which points to a sample C program meant for modification and experimentation. This C program has access to all the badge system infrastructure used by the aforementioned BASIC interpreter and Z80 emulation.

Since I had the luxury of a badge on hand, the easy first step was to launch the sample program and see what it does. It printed some text on screen along with a prompt for a key press. Once I pressed a key (no need to hit ENTER) the program switched over to a graphics drawing demonstration.

Hackaday Badge User Program

The colorful patterns cycled through with a very visible scan rate, taking roughly two seconds to update pixels from the top to bottom of the screen. My first reaction was: “Gosh, I hope 0.5 frames per second is not the fastest it can go.”

Once I saw it in action, it was time to dive into the source code. The text I saw was drawn using the same commands that draw the main menu: clear screen, set color, set X/Y position, and output text.

The first bit of novelty was processing the key press. Unlike the menu, the keyboard check here is non-blocking and interleaved with text drawing commands, which continue executing while waiting for a key press. This will be useful in things like game loops, where we want the action to keep going even if the user hasn’t pressed anything.
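The pattern looks roughly like this; the function names here are hypothetical stand-ins for the badge’s own keyboard and drawing routines.

#include <stdint.h>

/* Stand-in for the badge's keyboard check: returns 0 when no key is waiting.
   This stub pretends a key arrives on the 100th poll so the sketch terminates. */
static uint8_t nonblocking_get_key(void)
{
    static int polls = 0;
    return (++polls >= 100) ? 'A' : 0;
}

static void draw_next_frame(void) { /* draw one step of the animation (stand-in) */ }

void wait_for_key_while_animating(void)
{
    uint8_t key = 0;
    while (key == 0) {
        draw_next_frame();                /* keep the action going... */
        key = nonblocking_get_key();      /* ...while polling for input */
    }
    /* act on 'key' here */
}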

After the key press comes the drawing demo, which uses a bitwise operator to update screen contents on every pass. And here we have good news: not only is there an explicit delay in the loop (a code comment says “less than 1 ms”), the screen update also takes place one pixel at a time, the least efficient method possible.

So the graphics demo update rate is definitely NOT the fastest the badge can go. How fast can we push it? That’s something to test in the near future.

Hackaday Badge Main Menu

When exploring a new codebase, it’s a great luxury to also have it in running form for reference, and that’s the luxury I have with the Hackaday badge code project plus a physical badge on hand. What’s the first point of interaction with running code? The main menu! So that’s where I decided to start looking at details of the code.

Hackaday Badge Main Menu Straight

From the main() function, the main menu is handled by the function badge_menu() in badge.c. The first thing it calls is showmenu() in the same file, which draws everything visible onscreen for the main menu: title bar, screen border, menu entries, and the user input prompt. This is a great reference for writing code to output text on screen.

Most of the code in badge_menu() reads user key presses and builds up the typed command in menu_buff. Upon pressing ENTER, the command is checked against the list of known commands. This is a chunk of code that can easily be recycled for processing user text input.

When a user enters a command that matches none of the recognized list items, the badge selects one of a set of error messages at random. The pseudo-random sequence is seeded with the standard srand() call, using a PIC timer counter value captured at the time of the user’s first key press. Seeding with a time value is common practice, but it’s usually done with a real-time clock on the assumption that the current time is unpredictable. Here, the unpredictability comes from how long a human takes to push the first key after power-up; every person has a slightly different reaction time.
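In code, the trick boils down to something like this; which hardware timer register the badge actually reads is an assumption on my part.

#include <xc.h>
#include <stdlib.h>

/* Call once, at the moment the first key press is detected. The free-running
   timer count depends on how long the human took to press a key, which is
   different every power-up, so it serves as an unpredictable seed. */
void seed_rng_from_keypress(void)
{
    srand(TMR2);                          /* any free-running timer register works */
}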

When a user command is recognized, badge_menu() calls into the corresponding code to make things happen. The menu entries are straightforward, but there are also a number of “easter egg” behaviors. Rather than a direct string comparison, which would spoil the surprise by embedding the secret phrase in the source code, the responses are keyed against a hash of the string.
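The badge’s actual hash function may differ, but the idea looks like the sketch below, using the well-known djb2 string hash: only the hash of the secret phrase is stored, so reading the source does not give away the phrase itself. The stored hash value here is made up.

#include <stdint.h>

static uint32_t djb2_hash(const char *s)
{
    uint32_t h = 5381;
    while (*s)
        h = h * 33 + (uint8_t)*s++;       /* classic djb2 update step */
    return h;
}

static int is_secret_command(const char *menu_buff)
{
    const uint32_t SECRET_HASH = 0x0b88a991u;   /* hypothetical stored value */
    return djb2_hash(menu_buff) == SECRET_HASH;
}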

Hackaday Badge Code Exploration in MPLAB X IDE

MPLAB X logo

Now that the Hackaday badge project compiles successfully on my computer, it’s time to look around and get oriented with the structure of this code project. Part of the orientation is actually getting re-oriented with Microchip’s own MPLAB X IDE for developing software running on their chips, like the PIC32MX chip that’s at the center of the badge.

I only use MPLAB X when dealing with PIC code. While it’s not exactly my favorite, I would agree it is sufficient to be a productive tool. The features most relevant to me right now are for code navigation: MPLAB parses the project files enough to know how pieces of code are linked, and lets me traverse those links easily.

For exploration, the following two key combinations are super useful:

Control + B: Go to declaration/definition

Alt + Left: Go back

While it is possible to use a text search to do both of these things, having an IDE that understands the project and makes navigation simple is a real time saver. With these keystrokes I can take a deeper look inside a particular function to see what it does, repeating to trace calls further if necessary. And when I’ve had enough of a particular area of code, I can go back to where I was before I started digging.

But of course, these tools are only useful once I have a starting point. Looking over the project files, I thought main.c sounded like a great place to start, and indeed it was. There is just a short snippet of code in the main() function, but it is the root of all functionality.

hw_init();
badge_init();
if (KEY_BRK==0) post();
if ((SHOW_SPLASH)&(K_SHIFTR==1)) boot_animation();
badge_menu();

  • hw_init() initializes all the PIC settings upon startup: what the pins do, which peripherals are activated, default values, and so on.
  • badge_init() seemed redundant, but Control+B shows its comment saying this is the work done whenever the badge wakes up from sleep. So hw_init() is for a cold boot, and badge_init() is for resuming from sleep.
  • If a specific key is pressed upon power-up, there’s post(). The code looks like some sort of self-test, which implies POST = Power-On Self-Test.
  • The badge does have a little startup animation, which is apparently launched by boot_animation() if a compile-time flag and a runtime key both agree it should be run.
  • Finally, badge_menu() runs the loop for the badge main menu. This call never returns.

Most of main.c actually consists of comments, which invite hackers to look around and find the items discussed, using Control + Shift + F to search for strings from those comments.

I will absolutely accept that invitation.

Hackaday Badge requires PIC32 Legacy Peripheral Library

The Hackaday Superconference is in a few weeks, and as part of preparing for the conference, I have a badge from the Hackaday event this past May in Belgrade. Supposedly the upcoming Supercon badge will be a very close successor to this badge so I’m going to dig in and understand as much as I can about it.

The first task is to get the badge firmware project up and running. There is a repository up on GitHub. I see the device is built around Microchip’s PIC32 line of processors, so obviously I needed to get my MPLAB X IDE updated and running.

When I tried to build the project as-is, my first errors were related to C standard compliance. I’ve seen these before in the context of working with Microchip’s 8-bit chips, and they could be addressed the same way.

Then I ran into the second compiler error:

fatal error: peripheral/adc10.h: No such file or directory

An internet search found this thread on Microchip’s developer forums, which indicated I needed to download something called “PIC32 Legacy Peripheral Libraries”, a separate download link on the same page as the XC32 compiler.

PIC32 Legacy Peripheral Library

It is an archive file that unpacks into an executable installer. Everything was relatively straightforward except for the installation path. By default it puts all the library files under my home directory, using the version number of an old compiler (on my Ubuntu machine, that translated to /home/roger/microchip/xc32/v1.40), which I guess could work given some project path updates. But it made more sense to install into the directory of my currently installed compiler so the project paths don’t have to change (on my Ubuntu machine, /opt/microchip/xc32/v2.10).

Once installed, the project built successfully!

And after I did this investigative work, I found that there were already instructions telling me I’d need the legacy library. So this turned out to be a failure to RTFM but I learned something in the process, so all good.

Robot Brain Candidate: Up Board

When I did my brief survey of potential robotic brains earlier, I was dismissive of single-board computers competing with the Raspberry Pi. Every one I knew about had sales pitches about superior performance relative to the Pi, but none could match the broad adoption, and hence software library support, of a Raspberry Pi. At the time the only SBCs I thought might be worthwhile were the Nvidia Jetson boards with their specialized hardware. Other than that, I believed the growth path for robot brains that can run ROS was pretty much restricted to x64-based platforms like Chromebooks, Intel NUCs, and full-fledged laptop computers.

What I didn’t know at the time was that someone had actually put an Intel CPU on a Raspberry Pi-sized circuit board computer: the Up board.

UPSlide3Right-EVT-3
Image from http://www.up-board.org

Well, now. This is interesting!

At first glance they even worked to keep the footprint of a Raspberry Pi, including the 40-pin GPIO header and the USB, Ethernet, and HDMI ports. However, the power and audio jacks are different, and the camera and display headers are gone.

It claims to run Windows 10, though it’s not clear if they mean the restricted IoT edition or the full desktop OS. Either way it shouldn’t be too much of a hurdle to get Ubuntu running ROS on one of these things. While the 40-pin GPIO claims to match a Raspberry Pi, it’s not clear how those pins are accessed from an operating system not designed for a Raspberry Pi.

And even more encouragingly: the makers of this board are not content to be a one-hit wonder. They’ve branched out to other tiny form factors that give us the ability to run x86 software.

The only downside is that the advantage is size, not computational power. None of the CPUs I’ve seen mentioned are very fast; at best, they are roughly equivalent to the one in my Dell Inspiron 11 3180, just tinier. Still, these boards offer some promising approaches to robot hardware. It’s worth revisiting if I get everything running on my cheap Dell but need a smaller board.

ROS Notes: Hector SLAM Creates 2D Map From 3D Motion

rosorg-logo1

While I was trying to figure out how best to declare ROS coordinate transform frames of reference for Phoebe, I came across a chart on the hector_slam page detailing multiple frames of reference for a robot base. It turned out to be unnecessary for the GMapping algorithm I’m currently using for Phoebe, but it made me curious enough to take a closer look.

I have yet to try using hector_slam on Phoebe. I’m only aware of it as an alternative SLAM algorithm to gmapping. The two systems make different trade-offs, and one might work better than the other under different circumstances. I knew hector_slam was the usual answer when people asked whether it’s possible to perform SLAM without wheel odometry data, but I suspected there had to be more to it. The requirement for multiple frames was my entry point to understanding what’s going on.

My key takeaway after reading the paper: gmapping is designed for a robot working strictly in flat 2D. In contrast, hector_slam is designed for platforms that may move about in three dimensions: not just flat 2D, but also ground vehicles traversing rough terrain (accounting for pitch and roll motions) and even robotic aircraft. Not requiring wheel odometry data is obviously a big part of supporting air vehicles!

But if hector_slam is so much more capable, why isn’t everyone using it? That’s when we get to the flip side: for good results it requires a LIDAR with a high scan rate and high accuracy. Locating the robot without wheel odometry is made possible by frequent distance data refreshes (40 Hz was mentioned in the paper). Unfortunately, this also dims the prospects for use aboard Phoebe, as the Neato LIDAR is neither fast (4 Hz) nor accurate.

And despite its capability to process 3D data from robotic platforms with 3D motion, hector_slam generates a 2D map. This implies navigation algorithms using the map will not have access to the captured 3D data. When plotting a path on such a map, a short but rough route would look better to the planner than a longer, smooth, flat route. This makes me skeptical hector_slam will be interesting for Sawppy, but that’s to be determined later.

 

Phoebe 1.0 Complete

Phoebe Chassis 2

I started the Phoebe project with the goal of building something to apply what I’ve learned about ROS: get some hands-on experience and learn the ropes. Now that Phoebe can map and autonomously navigate its environment, this is a good place to pause and evaluate potential paths forward. (Also: I have other demands on my time, so I need to pause my Phoebe work anyway… and now is a great time.)

Option #1: Better Refinement

Phoebe can map its surroundings and then navigate that environment using the map. This is on par with the baseline functionality of a TurtleBot 3, though neither the mapping nor the navigation is as polished as on a TurtleBot built by people who know what they are doing. Getting there would mean tuning Phoebe’s ROS module parameters to improve performance, and rooting out the small bugs hiding in the system. I’m sure the ~100ms timing difference mystery is only the tip of the iceberg.

Risk: This is “the hard part” of not just building a robot, but building a good robot. And I know myself. Without a clear goal and visible progress towards that goal, I’m liable to get distracted or discouraged, trailing off and never really accomplishing anything.

Option #2: More ROS Functionality

I had been disappointed that the SLAM and navigation tutorials I’ve seen to date require a human to direct robot exploration. I had thought automated exploration would be part of SLAM but I was wrong. Thanks to helpful comments by Hackaday.io user Humpelstilzchen (who is building a pretty cool ROS robot too) I’ve now learned autonomous exploration is built on top of SLAM and Navigation.

So now that Phoebe can do SLAM and can navigate, adding one of the autonomous exploration modules would be the obvious next step.

Risk: It’s another ROS learning curve to climb.

Option #3: More Phoebe Functionality

Phoebe has wheel encoders and a LIDAR as input, and it might be interesting to add more. Ideas have included:

  • Obstacle detection to augment LIDAR, such as
    • Ultrasound distance sensor.
    • Infrared distance sensor (must avoid interference with LIDAR).
    • Bumpers with microswitches to detect collision.
  • IMU (inertial measurement unit).
  • Raspberry Pi camera or other video feed.

  • Risk: Over-complicating Phoebe, which was always intended to be a minimal-cost baseline entry into the world of ROS, following in the footsteps of the ROS TurtleBot.


Options 1 and 2 take place strictly in software, which means the mechanical chassis will remain untouched.

Option 3 changes Phoebe hardware, and that would start deviating from TurtleBot. There’s value in being TurtleBot-compatible and hence value in taking a snapshot at this point in time.

Given the above review, I declare the mechanical construction of Phoebe the TurtleBot complete as version 1.0. As part of this, I’ve also updated the README file on Phoebe’s GitHub repository to describe its content, because I know I’ll start forgetting!

Phoebe Is Navigating Autonomously

I’ve been making progress (slowly but surely) through the ROS navigation stack tutorial to get it running on Phoebe, and I have finally reached the finish line.

After all the configuration YAML files were created, they were tied together into a launch file as parameters for the ROS node move_base. For now I’m keeping the pieces in independent launch files, so move_base is run separately from Phoebe’s chassis functionality launch file and from AMCL (launched using its default amcl_diff.launch).

After they were all running, I created a new RViz configuration to visualize the local costmap and the AMCL particle cloud. And it was a huge mess! I was disheartened for a few seconds before I remembered seeing a similar mess when I first looked at navigation on a Gazebo simulation of a TurtleBot 3 Burger. Before anything would work, I had to set the initial “2D Pose Estimate” to locate Phoebe on the map.

Once that was done, I set a “2D Nav Goal” via RViz, and Phoebe started moving! In RViz I could see the map along with the LIDAR scan plots and Phoebe’s digital representation from the URDF, all familiar from earlier. New to the navigation view is the planned path plotted in green, taking into account the local costmap in gray. AMCL contributed the rest of the information on screen, with individual estimates drawn as little yellow arrows and the estimated position in red.

Phoebe Nav2D 2

It’s pretty exciting to have a robot with basic intelligence for path planning, and not just a fancy remote control car.

Of course, there’s a lot of tuning to be done before things actually work well. Phoebe is super cautious and conservative about navigating obstacles, exhibiting a lot of halting and retrying behavior in narrower passageways even when there are still 10-15cm of clearance on each side. I’m confident there are parameters I could tune to improve this.

Less obvious is what I need to adjust to increase Phoebe’s confidence in relatively wide-open areas, where Phoebe would occasionally brake to a halt and hunt around a bit before resuming travel even with plenty of space around. I didn’t see an obstacle pop up on the local costmap, so it’s not clear what triggered this behavior.

(Cross-posted to Hackaday.io)

Navigation Stack Setup for Phoebe

rosorg-logo1

Section 1 “Robot Setup” of this ROS Navigation tutorial page confirmed Phoebe met all the basic requirements for the standard ROS navigation stack. Section 2 “Navigation Stack Setup” is where I need to tell that navigation stack how to run on Phoebe.

I had already created a ROS package for Phoebe earlier to track all of my necessary support files, so getting navigation up and running is a matter of creating a new launch file in my existing directory for launch files. To date all of my ROS node configuration has been done in the launch file, but ROS navigation requires additional configuration files in YAML format.

First up in the tutorial were the configuration values common to both the local and global costmaps. This is where I saw the robot footprint definition, and I was a little sad it’s not pulled from the URDF I just put together. Since Phoebe’s footprint is fairly close to a circle, I went with the robot_radius option instead of declaring a footprint as an array of [x,y] coordinates. The inflation_radius parameter sounds like an interesting one to experiment with later, pending Phoebe performance. The observation_sources parameter is interesting – it implies the navigation stack can utilize multiple sources simultaneously. I want to come back later and see if it can use a Kinect sensor for navigation. For now, Phoebe has just a LIDAR, so that’s how I configured it.

For global costmap parameters, the tutorial values look equally applicable to Phoebe so I copied them as-is. For the local costmap, I reduced the width and height of the costmap window, because Phoebe doesn’t travel fast enough to need to look at 6 meters of surroundings, and I hoped reducing to 2 meters would reduce computation workload.

For base local planner parameters, I reduced maximum velocity until I have confidence Phoebe isn’t going to get into trouble speeding. The key modification here from tutorial values is changing holonomic_robot from true to false. Phoebe is a differential drive robot and can’t strafe sideways as a true holonomic robot can.
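Putting the pieces above together, the YAML fragments end up looking roughly like the sketch below; the specific numbers are my illustrative guesses for Phoebe rather than tuned values.

# costmap_common_params.yaml -- shared by global and local costmaps
robot_radius: 0.12
inflation_radius: 0.25
observation_sources: laser_scan_sensor
laser_scan_sensor: {sensor_frame: neato_lidar, data_type: LaserScan, topic: scan, marking: true, clearing: true}

# local_costmap_params.yaml -- reduced window, as discussed above
local_costmap:
  rolling_window: true
  width: 2.0
  height: 2.0
  resolution: 0.05

# base_local_planner_params.yaml -- slow speeds, differential drive
TrajectoryPlannerROS:
  max_vel_x: 0.2
  min_vel_x: 0.05
  max_vel_theta: 1.0
  holonomic_robot: false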

The final piece of section 2 is AMCL configuration. Earlier I tried running AMCL on Phoebe without specifying any parameters (defaults for everything) and it seemed to run without error messages, but I don’t yet have the experience to tell good AMCL behavior from bad. Reading this tutorial, I see the AMCL package has pre-configured launch files. The tutorial calls for amcl_omni.launch; since Phoebe is a differential drive robot, I should use amcl_diff.launch instead. The RViz plot looks different than when I ran AMCL with all default parameters but, again, I don’t yet have the experience to tell whether it’s an improvement. Let’s see how this runs before modifying parameters.

(Cross-posted to Hackaday.io.)

Checking If Phoebe Meets ROS Navigation Requirements

Now that basic coordinate transform frames have been configured with the help of URDF and the robot state publisher, I moved on to the next document: the robot setup page. This one is actually listed slightly out of order on the ROS navigation page, third behind the Basic Navigation Tuning Guide. I had started reading the tuning guide and saw that its introduction assumes people have read the robot setup page. It’s not clear why they are listed out of order, but clearly robot setup needs to come first.

Right up front in Section 1 “Robot Setup” was a very helpful diagram labelled “Navigation Stack Setup” showing the major building blocks of an autonomously navigating ROS robot. Even better, these blocks are color-coded by source: white blocks are part of the ROS navigation stack, gray blocks are optional components outside of that stack, and blue indicates robot-specific code to interface with the navigation stack.

overview_tf
Navigation Stack Setup diagram from ROS documentation

This gives me a convenient checklist to make sure Phoebe has everything necessary for ROS navigation. Clockwise from the right, they are:

  • Sensor source – check! Phoebe has a Neato LIDAR publishing laser scan sensor messages.
  • Base controller – check! Phoebe has a Roboclaw ROS node executing movement commands.
  • Odometry source – check! This is also provided by the Roboclaw ROS node reading from encoders.
  • Sensor transforms – check! This is what we just updated, from a hard-coded published transform to one published by robot state publisher based on information in Phoebe’s URDF.

That was the easy part. Section 2 was more opaque to this ROS beginner. It gave an overview of the configuration necessary for a robot to run navigation, but the overview assumes a level of ROS knowledge that’s at the limit of what I actually have in my head right now. It’ll probably take a few rounds of trial and error before I get everything up and running.

(Cross-posted to Hackaday.io)

Phoebe Digital Avatar in RViz

Now that Phoebe’s URDF has been figured out, it has been added to the RViz visualization of Phoebe during GMapping runs. Before this point, Phoebe’s position and orientation (called a ‘pose‘ in ROS) were represented by a red arrow on the map. That was sufficient to get us this far, but a generic arrow is not enough for proper navigation because it doesn’t represent the space occupied by Phoebe. Now, with the URDF, the volume of space occupied by Phoebe is also visually represented on the map.

This is important for a human operator to gauge whether Phoebe can fit in certain spaces. While I was driving Phoebe around manually, it was a guessing game whether the red arrow would fit through a gap. Now, with Phoebe’s digital avatar on the map, it’s a lot easier to gauge clearance.

I’m not sure if the ROS navigation stack will use Phoebe’s URDF in the same way. The primary reason the navigation tutorial pointed me to URDF is to get Phoebe’s transforms published properly in the tf tree using the robot state publisher tool. It’s pretty clear robot footprint information will be important for navigation for the same reason it was useful to human operation; I just don’t know if it’s the URDF doing that work or if I’ll end up defining the robot footprint some other way. (UPDATE: I’ve since learned that, for the purposes of ROS navigation, the robot footprint is indeed defined some other way.)

In the meantime, here’s Phoebe by my favorite door to use for distance reference and calibration.

Phoebe By Door Posing Like URDF

And here’s the RViz plot with a digital representation of Phoebe by the door, showing the following:

  • LIDAR data in the form of a line of rainbow colored dots, drawn at the height of the Neato LIDAR unit. Each dot represents a LIDAR reading, with color representing the intensity of each return signal.
  • Black blocks on the occupancy map, representing space occupied by the door. Drawn at Z height of zero representing ground.
  • Light gray on the occupancy map representing unoccupied space.
  • Dark gray on the occupancy map representing unexplored space.

Phoebe By Door

(Cross-posted to Hackaday.io)

Phoebe URDF: Fixing Functional Problems

Once I had a decent looking URDF for Phoebe up and running, I added it into the Phoebe launch files and started working on the problems exposed by putting it to work.

The first problems were the drive wheels. Visually, they were stuck at the origin and didn’t move with the rest of the robot. Looking through error messages, I realized ROS expected me to read wheel encoder values and publish them as joint states. Since I hadn’t done so, the wheels (attached with “continuous” joints) didn’t know their location. Until I get around to processing wheel encoder values, I changed the joint type to “fixed” to attach them rigidly to the chassis.

Looking at the model from multiple angles, I realized I forgot the caster wheel. Since it’s not driven, it is represented as a simple sphere and also attached via a fixed joint.

That’s enough to start driving around as a single unit, but the robot’s movement in RViz was reversed front/back relative to the LIDAR data plot. This was caused by the fact that I forgot to tell ROS the LIDAR is pointed backwards on the robot. Once I had done so, the 180 degree yaw became visible in the object axis visualization: the LIDAR’s X-axis (red cylinder) points backwards instead of forwards like all the other axes.

Phoebe RViz Axes Arrows No Name

The final set of changes might be more cosmetic than functional. When reading about differential drive robots in ROS, it was brought up several times that the robot’s X/Y origin base_link needs to be lined up with the robot’s pivot axis. However, it wasn’t clear where the Z axis is supposed to be. Perhaps this is different for each ROS mapping module? The hector_slam algorithm defines several frames, but they don’t appear to be supported by gmapping.

I first defined Phoebe’s origin as the center point between its two drive wheel axles. When rendered in RViz, this means the Z plane intersects the middle of the robot. It seems to work well, but the visualization looks a bit odd. Intuitively I want the Z plane to represent the ground, so I decided to drop the robot origin to ground level. In the object visualization, this is visible as the purple arrowheads all pointing at a center point below the robot. If I later learn this was a bad move, I’ll have to change it back.

All these changes combined gave me a minimal Phoebe URDF that works for RViz visualization of Phoebe’s behavior.

(Cross-posted to Hackaday.io)

Describe Phoebe For ROS Using URDF

Now that I’ve decided to bring up the ROS navigation stack for Phoebe, where do I start? Well, the ROS wiki page for a subject is always a good place to start, as it usually has a tutorial, and ROS navigation is no exception.

The first recommended page is actually a familiar sight – the brief overview on tf was required reading back when I first assembled the chassis. At the time, I could get away with a very simple static publisher, because I just had to tell ROS how and where my Neato LIDAR is mounted on the robot chassis. But now I need to advance to the next step and publish the robot state. And this means describing Phoebe in more detail for ROS using an XML syntax called URDF (Unified Robot Description Format).

So in order to bring up ROS navigation on Phoebe, the navigation wiki page pointed me to the robot state publisher and also the ROS URDF tutorial. To learn one thing I had to learn another; the typical bootstrapping process when learning something new.

For the purposes of robot physics simulation, the robot should be described using very basic geometry: a combination of rectangular solids, cylinders, and spheres. This keeps the computation workload for collision detection simple. While the visual representation can be more complex than the collision detection representation, it doesn’t have to be. So for this first draft, I’ll just do a super simple Phoebe for visual representation, suitable for use in collision calculations if I get into that later.

I started with Phoebe’s Onshape CAD file.

Phoebe CAD Full

Taking the critical dimensions, I created a simplified version in Onshape using just rectangular boxes and cylinders, which makes the model fairly straightforward to translate into URDF.

Phoebe CAD Simplified

By measuring the dimensions in CAD, I could declare a few primitives with URDF and see what it looks like in RViz for comparison against CAD. Once the visual appearance is roughly correct, it’s time to tune the details and make sure they work for ROS functional purposes.
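To give a flavor of what that translation looks like, here is a heavily trimmed URDF sketch with made-up dimensions; the real Phoebe URDF has more links and uses the numbers measured from CAD.

<robot name="phoebe">
  <link name="base_link">
    <visual>
      <geometry><box size="0.20 0.15 0.05"/></geometry>
    </visual>
  </link>
  <link name="neato_lidar">
    <visual>
      <geometry><cylinder radius="0.05" length="0.03"/></geometry>
    </visual>
  </link>
  <!-- LIDAR placed above the chassis; offsets here are illustrative only -->
  <joint name="lidar_mount" type="fixed">
    <parent link="base_link"/>
    <child link="neato_lidar"/>
    <origin xyz="-0.05 0 0.10" rpy="0 0 0"/>
  </joint>
</robot>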

Phoebe RViz Simplified

(Cross-posted to Hackaday.io)

Next Phoebe Project Goal: ROS Navigation

rosorg-logo1

When I started working on my own TurtleBot variant (before I even decided to call it Phoebe), my intention was to build a hardware platform to get first-hand experience with ROS fundamentals. Phoebe’s Hackaday.io project page subtitle declared it a ROS robot for <$250 capable of SLAM. Now that Phoebe can map surroundings using the standard ROS SLAM library ‘gmapping‘, that goal has been satisfied. What’s next?

One disappointment I found with existing ROS SLAM libraries is that the tutorials I’ve seen (such as this and this) expect a human to drive the robot during mapping. I had incorrectly assumed the robot would autonomously explore its space, but “simultaneous localization and mapping” only promises localization and mapping – nothing about deciding which areas to map, or how to go about it. That is left to the human operator.

When I played with SLAM code earlier, I decided against driving the robot manually and instead invoked an existing module that takes a random walk through available space. A search on the ROS Answers web site for something more sophisticated than a random walk resulted in multiple pointers to the explore module, but that code hasn’t been maintained since ROS “groovy”, four versions ago. So one path forward is to take up the challenge of either updating explore or writing my own explorer.

That might be interesting, but once a map is built, what do we do with it? The standard ROS answer is the robot navigation stack. This collection of modules is what gives a ROS robot the ability to plan a path through a map, watch its progress through that plan, and update the plan in reaction to unexpected elements in the environment.

At the moment I believe it would be best to learn about the standard navigation stack and get it up and running on Phoebe. I might return to the map exploration problem later; if so, seeing how map data is used for navigation will give me better insight into what would make a better map explorer.

(Cross-posted to Hackaday.io)

Phoebe Accessory: HDMI Plug

In most ROS demonstrations, the robots are running through a pristine laboratory environment. Phoebe is built to roam my home, which is neither a laboratory nor pristine. This became a problem when Phoebe ran across some dust bunnies and picked them up with its leading edge.

When choosing an orientation for the Raspberry Pi 3 on Phoebe’s electronics tray, I chose to make the HDMI port accessible so I could connect a monitor as necessary. This resulted in that port facing forward, along with the micro-USB power port and the headphone jack. All three of these ports were plugged up with debris when Phoebe explored some paths less well-traveled.

After I cleaned up the mess, all three ports appeared to work, but I was worried about Phoebe encountering less fluffy obstacles. The audio jack is not a high priority, as Raspberry Pi default audio is notoriously noisy and I haven’t needed it. The power jack can easily be bypassed by sending power via the GPIO pins (as I’m doing right now). That leaves the HDMI port, which would be quite inconvenient if damaged. If I needed a screen on a Pi with a damaged HDMI port, I’d have to buy or borrow a screen that goes into the alternate DSI port, like the official Raspberry Pi touchscreen.

Fortunately, there are little plastic plugs that come with certain HDMI peripherals for protection during shipping. In my case, I had a small red HDMI plug that came with my MSI video card. I installed it on Phoebe’s Raspberry Pi to protect the HDMI port against future debris encounters. Now Phoebe has a red nose. If it should glow I might have to rename my robot to Rudolph the Red Nosed Robot.

But it doesn’t glow, so Phoebe won’t get a name change.

Phoebe HDMI Plug

(Cross-posted to Hackaday.io)

Phoebe Accessory: Battery Voltage Monitor

And now, a few notes on some optional accessories. These aren’t required for anyone building their own Phoebe, but are nice to have.

The first item is a battery voltage meter and alarm. While Phoebe can monitor battery voltage in software via the Roboclaw API, I also wanted an always-available physical readout of battery voltage. On Sawppy I thought I just needed to show the battery’s output voltage, but the number is only good if I can read it. During Sawppy’s all-day outing at JPL, California sunlight was too bright to read the number, and I couldn’t tell when my battery dropped below the recommended level for lithium chemistry batteries.

Searching for a better solution, I found these battery voltage alarms. Not only do they display voltage, they also sound a buzzer when the level gets too low. Judging by the product description, they were designed for remote-control aircraft, where it’s not convenient to read a small display up in the air.

Phoebe Voltmeter

The downside is that the alarm is designed to be audible while up in the air and buried inside a fuselage. When it is on the ground and right in front of my face, it is a piercing shriek. Which isn’t so bad if it only occurs during low battery… but it also sounds a test beep when I first plug it in. It is loud. Very loud. To save my eardrums, the alarm buzzer has been muffled with some cotton pulled from a cotton swab. It’s still loud, but no longer gives me a headache afterwards.

I’ve also soldered a JST-XH connector onto the unpolarized input pins to fit my battery’s balance charging plug. Having a polarized connector helps make sure I don’t plug the battery in backwards. Those exposed pins are also a short-circuit risk, which I crudely mitigated by wrapping a layer of servo tape around them. Finally, more servo tape secures the alarm to Phoebe’s backbone.

Now I can drive Phoebe around the house, even out of sight, confident that if I ever run the battery too low I’ll be notified with an alarm.

(Cross-posted to Hackaday.io)