Laser+Speaker Lissajous Proof of Concept

With LRWave 1.0 complete, I could focus on the mechanical bits of a Lissajous machine driven by that web app. The goal is to build a more accessible Lissajous machine, one without the risk posed by the high voltages involved in driving a CRT. It will not look as good as a CRT, but that's the tradeoff:

  • The high-voltage electron beam of a CRT is replaced by a far lower voltage laser diode.
  • The CRT deflection yokes are replaced by audio speakers.

The proof of concept rig is driven by the same thrift store amplifier used in the successful CRT Lissajous curve demo. This time it will be driving speakers, which is what it was designed for, instead of CRT deflection yokes. The speakers came from the same source as that CRT: a Sony KP-53S35 rear projection television we took apart for parts so we could embark on projects like this.

Hypothesis: If we attach a mirror to a speaker, then point a laser beam at that mirror, the reflected beam will be displaced by the movement of that speaker. By using two speakers and adjusting the beam path through them, we can steer a laser beam along two orthogonal axes, X and Y, via the stereo audio waveform generated by LRWave.

For the initial test, mirrors were taped directly onto the speaker cones and arranged so the laser beam was projected onto the ceiling. This produced a satisfactory Lissajous curve. Then the mirror configuration was changed to test another hypothesis: instead of attaching the mirror directly to the speaker cone, tape it so it sits between the fixed speaker frame and the moving speaker cone. This was expected to provide greater beam deflection, which it did in the pictured test rig. However, the resulting Lissajous curves were distorted due to flex in the plastic mirrors and the not-very-secure tape.

RPTV Speakers and masking tape

Experimenting with maximum deflection range, I pushed the speakers too far and burned one up. For a brief few seconds the laser beams were visible, scattered by the smoke of an overloaded speaker coil.

  1. I could see the laser beams, cool!
  2. Um… why am I able to see the laser beams?
  3. [sniff sniff]
  4. Oh no, the magic smoke is escaping!

The Lissajous curve collapsed into a flat line as one deflection axis stopped deflecting, and that ended experimentation for the day.

LRWave 1.0 Complete

The (mostly cosmetic) fine tuning has been done and the LRWave web app is in a good enough state to declare version 1.0 complete. Now I can move onward to the hardware components of my laser Lissajous project.

LRWaveV1.0

Some tidbits worthy of note are:

  • MDC Web theme colors can be applied via CSS styles as per documentation. What’s (currently) missing from the documentation is the requirement to also add @import "@material/theme/mdc-theme"; to my style sheet. In the absence of that import directive, the theme CSS styles have no effect, causing me to bang my forehead against a non-responsive brick wall. Now the following visual elements finally work as designed:
    • App has a dark background, because I’m building this for a laser Lissajous curve project and I expect to use this app mostly in the dark.
    • The column titles “Left” and “Right” now have theme colors.
    • The waveform drop-down box options are now visible and not white-on-white. (Change is not visible in screenshot.)
  • The Web Audio API implementation in Chrome would cause audible pops when pausing or resuming playback. This was true even when I wrote code to fade the volume (fade out to zero upon pause, fade in from zero upon resume). Since this fade code added complexity to my app while failing to eliminate the audible pops, I did not commit it.
  • The browser tab favicon now works… I am not sure why, but the next item might be related:
  • I’ve added an instance of the icon to my app’s background. Now there’s a little app icon at top center of the app background. Perhaps this change helped kick the browser favicon code into action?
  • Responsive layout is much improved – the app was always fine in portrait mode in a compact phone resolution, but now it no longer looks embarrassing on larger displays.
  • Added support for the Safari browser. (Safari uses webkitAudioContext while other browsers use AudioContext; see the sketch after this list.)
  • Wrote up a README.md for the project.
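
For reference, the Safari accommodation mentioned in the list boils down to something like this sketch (not the exact LRWave source, just the general pattern):

    // Use the standard AudioContext where available, falling back to
    // Safari's prefixed webkitAudioContext.
    const AudioContextClass = window.AudioContext || window.webkitAudioContext;
    const audioCtx = new AudioContextClass();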

The whole project was done via Cloud 9 IDE. My AWS cost for this experiment was a grand total of $0.43.

LRWave EC2 Cost

Github project: https://github.com/Roger-random/lrwave

Published live version: https://roger-random.github.io/lrwave/

LRWave Core Functions Complete

After a few hours of JavaScript coding, all the core functionality I set out to implement is running, at least at a basic level. (This link will go to the live version of LRWave, which should continue to improve as I learn.)

  • Two-channel (left and right) function generator (see the sketch after this list)
  • Frequency adjustment
    • User can type in anything from 0 to 24 kHz
    • Buttons to adjust frequency up/down in 1 Hz steps.
  • Waveform selection available in Web Audio API
    • Sine
    • Square
    • Sawtooth
    • Triangle
  • Gain adjustment
    • User can type in anything from 0% to 100%
  • Play/pause button.
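
A rough sketch of the Web Audio graph behind these features is below. This is illustrative rather than the actual LRWave source; the frequency and gain values are placeholders.

    // One oscillator and gain node per channel, merged into a stereo output.
    const audioCtx = new (window.AudioContext || window.webkitAudioContext)();

    function makeChannel(frequency, gain) {
      const osc = audioCtx.createOscillator();
      osc.type = 'sine';              // 'square', 'sawtooth', 'triangle' also available
      osc.frequency.value = frequency;
      const gainNode = audioCtx.createGain();
      gainNode.gain.value = gain;
      osc.connect(gainNode);
      return { osc, gainNode };
    }

    const left = makeChannel(440, 0.5);   // placeholder values
    const right = makeChannel(442, 0.5);

    const merger = audioCtx.createChannelMerger(2);
    left.gainNode.connect(merger, 0, 0);  // merger input 0 = left channel
    right.gainNode.connect(merger, 0, 1); // merger input 1 = right channel
    merger.connect(audioCtx.destination);

    left.osc.start();
    right.osc.start();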

LRWave on Nexus 5

Since I hate web pages that start playing sound upon load without user interaction, I don’t start playing sound until the user has pushed the play button. It turns out I had accidentally followed the guidance from Google, which will not allow pages using the Web Audio API to play sound before user input. I don’t always agree with Google, but I’m glad we’re aligned here.
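
In practice, that means the audio context stays suspended until the play button's handler resumes it, along these lines (element and helper names are illustrative, not the actual LRWave code):

    // Chrome creates the AudioContext in a 'suspended' state until a user gesture.
    playButton.addEventListener('click', async () => {
      await audioCtx.resume();  // allowed here because it runs inside a user gesture
      startOscillators();       // hypothetical helper that creates and starts the nodes
    });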

The JavaScript portion of this project has been relatively easy. I found the Web Audio API straightforward to use in my simple little function generator app. There are some refinements to be made: there are audible pops as settings are updated. Right now I perform those updates immediately, but they should ramp from the old value to the new in order to avoid abrupt changes. Everything else dealing with the user interface is standard event handler code.
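
One way to ramp a setting instead of jumping straight to the new value is AudioParam scheduling, roughly like this sketch (the helper name and the 50 ms ramp duration are my own placeholders):

    // Glide an AudioParam (frequency, gain, etc.) to a new value over rampSeconds.
    function rampParam(param, newValue, audioCtx, rampSeconds = 0.05) {
      const now = audioCtx.currentTime;
      param.cancelScheduledValues(now);
      param.setValueAtTime(param.value, now);
      param.linearRampToValueAtTime(newValue, now + rampSeconds);
    }

    // Example: rampParam(leftOscillator.frequency, 443, audioCtx);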

What I found challenging is the aesthetics side of things. Proper HTML layout is still a struggle for me, but I know projects like this will eventually lead to mastery (or at least competence) with responsive web layout. In the meantime, my app looks fine on a phone in portrait orientation, but things start to get weird quickly as the browser window size grows.

Individual components in MDC Web have worked well so far, with the exception of the slider control. I tried to use it for gain but it didn’t behave as I expected, so after half an hour of scratching my head I set it aside for later. Another difficulty was MDC theming. I wanted the background to be black, and I haven’t yet found my mistake when trying to do it within the MDC theming module.

On the web browser interaction side, I designed an icon and added a reference in the header. Google Chrome recognizes it enough to use the icon on the recently opened sites window, but it doesn’t use the icon on the browser tab and I don’t understand why.

Lots of learning so far, but more learning plus refinement ahead…

Building An MDC Web App With Only The Parts I Need

Material Design logo

One criticism of using Materialize CSS was that we were pulling down the entire library and all of its resources for our project, whether we were using them or not. Aside from the obvious inefficient use of bandwidth, it also presented a challenge when we wanted SGVHAK Rover to be usable independently without an internet connection. That meant we had to make a copy of the entire library in our project to serve locally.

To avoid this problem, MDC Web (Material Design Components for Web) is designed so projects can pull in features piecemeal. Now each web app only has to download the parts it needs. The reason this isn’t typically done is another inefficiency: there is overhead per HTTP transaction, so we can’t have MDC Web components come down in tiny individual pieces – the overhead would easily wipe out any space savings.

To avoid that problem, MDC Web uses webpack to bundle all the individual pieces into a consolidated file that only requires a single HTTP download transaction. So instead of an HTML file demanding tens to hundreds of individual CSS and JavaScript files, we have a single HTML file that loads a single bundled CSS file and a single bundled JavaScript file. These bundles are standalone pieces that can be served by any web server, independent of the MDC Web webpack infrastructure. Maybe even copied to a rover.
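
The bundling idea boils down to a webpack configuration that rolls everything reachable from a single entry point into one output file. The actual MDC Web starter configuration does more than this minimal sketch (it also compiles Sass into bundle.css), but the shape is similar:

    // webpack.config.js (minimal sketch, not the actual MDC Web starter config)
    const path = require('path');

    module.exports = {
      entry: './app.js',                        // single entry point
      output: {
        path: path.resolve(__dirname, 'dist'),  // where the bundle lands
        filename: 'bundle.js'                   // one file to download
      }
    };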

I was looking forward to this functionality and was happy to see it was covered in Section 5 of the Getting Started guide. After putting in the configuration required, it was a matter of typing npm run build to generate my compact representation. I copied the generated bundle.css and bundle.js files and my index.html to a separate directory for the next test.

When I served up my Getting Started project via the development server npm start, Chrome developer console showed that the simple button required downloading 462KB of support files. That’s roughly the same order of magnitude as what it would take to download Materialize and all supporting information, and I was eager to see improvement.

I then went to the separate directory with the built bundles. To ensure that I’m indeed free from npm and webpack, I served these bundle files using an entirely different infrastructure: Python 3’s development web server. The environment variables $PORT and $IP were already set for a Cloud 9 environment and were reused:

python3 -m http.server $PORT --bind $IP

In the browser developer console, I could see that these files – and only these files – were downloaded. They added up to 32KB which was larger than I had hoped for in such a simple example, but maybe there’s room for further optimization. In any case, that’s definitely a tremendous improvement over 462KB.

Using Cloud 9 To Explore Web Development

I’ve started going through the Getting Started guide for Google’s Material Design Components for Web (MDC Web). This is a prerequisite for Material tutorials and I’ve already run into one problem traced to an out-of-date installation of Node.js. Given that I’m learning a new piece of software, I’m sure I’ll run into more problems that require modifying my computer. As mistakes are likely due to my learning status, I’m wary of potentially messing up my main Ubuntu environment.

What I need right now is an enclosed sandbox for experimentation, and I’ve already set up virtual machines I could use for this purpose. But since this is a web-based project, I decided to go all-in on the web world and try doing it online with a web-based development environment: Cloud 9.

Cloud 9 logo

I’ve played with Cloud 9 before, back when it was a startup with big dreams. It has since been acquired by Amazon and folded into the wide portfolio of Amazon Web Services (AWS). As far as I knew, Cloud 9 had always run on AWS behind the scenes, but now a user is exposed to the underlying mechanisms. It means we now have better control over the virtual machines we’re running, which is mostly good. It also means users have to pay for those virtual machine resources, which is fairly inexpensive (I expect to pay less than $1 for this experiment) but isn’t as good as the “free” it used to be. The saddest part is that it’s no longer “point-click-go” simple to get started. Getting Cloud 9 properly set up means climbing the learning curve for managing AWS security and permissions, which can be substantial.

Access Permissions

AWS has an entire document focused on authorization and access control for Cloud 9. For someone like myself, who just wants to play with Cloud 9 but also wants to safely partition it off from the rest of my AWS account, the easiest thing to do is to create a new user account within my AWS dashboard. This account can be assigned a predefined access policy called AWSCloud9User, and that’ll be enough to get started. When logged in to this dedicated account I can be confident mistakes won’t accidentally damage anything else I have in AWS.

Network Permissions

With the power of fine-grained virtual machine control comes the responsibility of configuring it to act the way we want. When I last used Cloud 9, launching a piece of web hosting software on my VM meant it was immediately accessible from the internet. That meant I could bring it up on another browser window at my desk to see how it looks. However, this is no longer the default behavior.

When running my VM in the form of an Amazon EC2 instance like now, it has its own network firewall settings to deal with. Not only that, the VM is in its own private network (Amazon VPC) which has its own network firewall settings. Both of these firewalls must be configured to allow external access if I’m to host web content (as when exploring MDC Web) and wish to see it on my own machine.

There’s a lot of documentation online for using Cloud 9. The specific configuration settings that need to be changed are found under “Previewing Running Applications” in the section “Share a Running Application over the Internet”.

First Step In Material Design Adventure Foiled By Ubuntu’s Default Old NodeJS

With the decision to tackle a new web-based software project, the next decision is which web-based UI framework to build the app in. My last web-based software project was to build a UI for SGVHAK rover at the beginning of the year. In the fast-paced world of front-end web development, that’s ancient history.

The rover UI project used the Materialize CSS library to create an interface that follows Google’s Material Design guidelines. At the time, Google offered web developers a library called “Material Design Lite” but with the caveat that they were revamping the entire web development experience. There was little point in climbing the learning curve for a deprecated library like MDL. Since using Materialize CSS was close to using Bootstrap, a known quantity, the choice was appropriate for the situation.

Now, at the end of the year, we have Google’s promised revamp in the form of Material Design Components for Web. I am still a fan of Material so my signal generator utility project LRWave will be my learning project to use Google’s new library.

Diving right into the Getting Started Guide during a local coding meetup, I got as far as the end of “Step 1”, executing npm start, when I ran into my first error:

ERROR in ./app.scss
    Module build failed: SyntaxError: Unexpected token {
        at exports.runInThisContext (vm.js:53:16)
        at Module._compile (module.js:374:25)
        at Object.Module._extensions..js (module.js:417:10)
        at Module.load (module.js:344:32)
        at Function.Module._load (module.js:301:12)
        at Module.require (module.js:354:17)
        at require (internal/module.js:12:17)

[…]

webpack error

Since app.scss was only three lines for Step 1, it was trivial to verify there was no typo to account for an unexpected “{“. A web search for this error message implicated an out-of-date installation of NodeJS, which triggered a memory from when I first started experimenting with NodeJS on my installation of Ubuntu 16.04. When trying to run node at the command line, a fresh install of Ubuntu 16.04 would tell the user:

The program 'node' is currently not installed. You can install it by typing:
sudo apt install nodejs-legacy

That suffix “legacy” is a pretty good hint this thing is old. Running node --version returned v4.2.6. Checking the NodeJS website, today’s LTS is v10.14.2 and the latest is 11.4.0. So yes, 4.2.6 is super old! Fortunately there’s a pointer to a more up-to-date set of binaries maintained by Nodesource that play well with Ubuntu’s built-in package management system. Following those directions automatically uninstalled the legacy binaries and replaced them with up-to-date versions.
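
If memory serves, the Nodesource directions for the then-current LTS amounted to a setup script plus a package install, roughly:

    # Register the Nodesource repository for the 10.x LTS line, then install.
    curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
    sudo apt-get install -y nodejs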

Once I was running on non-ancient binaries, I could continue following MDC Web’s Getting Started guide on my computer. But this experience motivated me to look into a different option…

New Project: LRWave

LRWave logo

And now we switch gears to address something that came up earlier. During my successful collaboration session with [Emily] to draw Lissajous curves on an old CRT, one of the deflection coils was driven by a stereo amplifier based on signals from an app running on my phone. What I had wanted to try was to drive both deflection axes from the amplifier using different signals sent to the left and right stereo channels. Unfortunately, the simple tone generator app I had used doesn’t allow independent channel control. Well, somebody should write an app to do this, and the best way to make sure the app does what I want is to write it myself. Thus formed a convenient excuse for me to dive back into the software world for my next project.

In this day and age, the easiest way to distribute a piece of code is to make it a web app of some flavor. The basic way is to create a sound file (MP3 or similar) on the server with the desired waveform and download it to the browser for playback. But this is pretty clunky. It would be great if there were a way to generate and manipulate audio waveforms using JavaScript running in the browser, no server-side code required.

I started with the HTML5 <audio> tag, which would allow implementing the “basic way” above of playing a sound clip generated on the server, but not the preferred path. For the more advanced approach to work, there would need to be a browser-side API that is supported by modern browsers, which implies a W3C standard of some sort. I started dreaming up technical terms that might let me find such an API, and after a few searches with fancy words like “frequency generation API” I found the much more simply named Web Audio API. Double-checking on the “Can I Use” reference site, we can see it’s widely implemented enough to be interesting.
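
A few lines in the browser console are enough to confirm the concept: the browser synthesizes the tone itself, with no server-generated sound file involved. (A sketch with placeholder values; modern browsers also require a user gesture before audio will actually play.)

    // Synthesize a 440 Hz sine tone entirely in the browser (sketch).
    const ctx = new (window.AudioContext || window.webkitAudioContext)();
    const osc = ctx.createOscillator();
    osc.frequency.value = 440;
    osc.connect(ctx.destination);
    osc.start();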

Since it’s a simple utility, it has a simple utilitarian name of “LRWave.” Just something to generate different wave forms to be sent out to left and right channels. Following the simplicity of the app, its logo is two instances of the Google Material speaker icon facing left and right. In between them are two different waves from a public domain SVG file signifying the intent that it sends different waves to each speaker.

Sawppy Post-Faire Cleanup

When I work on Sawppy, I test and run indoors. At DTLA Maker Faire Sawppy ran all over, both indoors and out. Most of the time people were playing with Sawppy on a piece of artificial turf at Maguire Gardens. This is an outdoor space where people walk their dogs, raising obvious sanitation concerns about running Sawppy on my home carpet after the event.

Well, after a long day of work, who doesn’t enjoy kicking off their shoes and soaking their feet? I could give Sawppy the same royal treatment. All six wheels were removed and soaked in a tub filled with a mixture of water and household bleach. A retired toothbrush was used to scrub off dirt particles clinging to the wheel. Hopefully this removed most of the contaminants Sawppy might have picked up during the event.

Sawppy kicks off shoes

It was also a good time to perform an inspection to see how Sawppy held up mechanically. In addition to the set screw mentioned yesterday, a few chassis mounting screws had fallen out and needed to be replaced. I designed plenty of redundancy into these mounts, so there was little risk of Sawppy falling apart.

Sawppy lost fasteners

After a few hours of soaking, the wheels were hung up to dry like old socks. What has six rover wheels but is not a rover? This laundry rack.

Sawppy laundry

(Cross-posted to Hackaday.io)

Sawppy at DTLA Mini Maker Faire

Yesterday Sawppy went on an adventure to the downtown Los Angeles Mini Maker Faire. There Sawppy found a receptive and appreciative audience. There were a lot of enchanted kids, interested parents, and other makers who might be building their own Sawppy rovers.

The morning started out with Sawppy sitting on a table alongside a few different builds of the JPL open source rover. Eric’s build is on the left in black and white; the Santa Susana High School build is on the right with purple printed parts.

Taking Sawppy around and talking to individuals about Sawppy was a lot of fun and something I’ve done in other contexts before. I have hopes for a few of the contacts to develop into something cool for Sawppy’s future. What’s new this time was that I also signed up to give a short 15-minute presentation about Sawppy and that took more work and preparation. Thanks to the 2-minute “lightning talk” opportunities at Hackaday LA the past few months I’m less nervous about public speaking than I used to be, but I still got pretty stressed about it. I’m sure it’s a matter of practice and the more I can take advantage of such opportunities the better I’ll get.

Roger Sawppy

Outside of the presentation, Sawppy and I spent most of our time on the astroturf across the walkway from the officially assigned display area. It was a hilly part of the park, which meant there were no tables or booths set up there, and it was a good place to demonstrate rover suspension in action. I had a spare phone set up as Sawppy’s controller and handed it to anyone who wanted to pilot Sawppy for a bit.

Sawppy on lawn

Most were content to run around the turf. Some of the little ones tried to run Sawppy into their siblings. A few ran into the bushes beyond the turf for a more rugged demonstration of Sawppy chassis. A perpetual favorite is to have Sawppy climb over shoes.

Sawppy running over feet

Thanks to refinements to improve robustness over the past few months, Sawppy came out of the experience with only a slightly wobbly left rear wheel that was easily repaired by tightening the set screw on the left rear steering servo coupler. A great improvement over earlier outings!

(Cross-posted to Hackaday.io)

A Photo Studio Under The Desk

A conversation about Pixelblaze digressed into photography and how I had taken some of the pictures I used. Some on this blog, some on Hackaday.io, and some elsewhere. It was a quick little project and today I’ll walk through it, illustrated with some pictures.

The problem I wanted to solve is one shared by many other makers: how to take good pictures to show off projects to the world. The dream solution is a full-blown photo studio with control over lighting, casting the subject against a neutral backdrop so all the focus is on the subject, free of background distractions. In reality, few of us can set aside the room required by a serious photo studio. Especially makers: every square foot consumed by a photo studio is a square foot not used for making!

My solution was to put a tiny photo studio under a computer desk. Most of the time the desk will be used for normal desk duties, home of my Luggable PC among other equipment. When I sit at the desk, I put my legs under the table.

Under Table Photo Studio 1 - Computer desk

Aside from the curtain rods sticking conspicuously out the side, all the elements of my tiny photo studio can be stowed out of the way. The white fabric is a curtain from IKEA, mounted to the rod closest to the camera and draped over the far side rod.

Under Table Photo Studio 2 - Stowed with chairs

Bolted under the desk is a cheap LED light fixture from Costco. The segment of curtain draped between the two rods serves to diffuse light from this fixture.

Under Table Photo Studio 3 - light fixture

The curtain rods are suspended under the table using simple 3D-printed brackets held onto existing table leg brackets.

Under Table Photo Studio 4 - curtain rod bracket

When it’s time to take some pictures, the chairs are moved out of the way.

Under Table Photo Studio 5 - stowed no chairs

A few magnets salvaged from hard drives hold the folded-up fabric to the far side table leg brackets. Moving the magnets aside allows the extra fabric to drop to the floor. The segment of curtain between the far side rod and the floor serves as the backdrop.

Under Table Photo Studio 6 - released

The remainder of the curtain can now be unfurled to serve as the photo studio floor. Sometimes I take the few minutes necessary to smooth wrinkles out of the fabric, sometimes I don’t.

Under Table Photo Studio 7 - unfurled

When the light fixture is turned on, it’s showtime!

Under Table Photo Studio 8 - lights camera action

This little photo studio under the computer desk is where I took most of the pictures illustrating Sawppy assembly, the animated GIF illustrating Luggable PC Mark I assembly, plus many other pictures on this blog, most recently the picture of Supercon goodie bag contents.


Sawppy Will Be At DTLA Mini Maker Faire

The Downtown Los Angeles (DTLA) Mini Maker Faire, hosted at the Los Angeles Public Library central location, is coming up this weekend and my rover Sawppy will be among the many maker projects at the event.

DTLA Mini Maker Faire Website

Sawppy will be one of several rovers present. JPL’s Open Source Rover team should be there with their original build, SGVHAK will be there with the beta build rover I contributed to (the one that inspired my Sawppy), and they’ll all be hanging out together.

The JPL team will also be giving a brief presentation in the KLOS Children’s Theater upstairs about their rover project, followed by an even briefer presentation by me on building Sawppy. Both of these talks are listed on the workshop schedule though (as far as I know) there is no hands-on workshop activity planned. Sawppy will be present and running for people to see up close, but no assembly (and certainly no disassembly!) is planned. I may bring an extra corner steering unit for people to play with, and they’ll be welcome to take that apart and put it back together, but not much beyond that.

(Cross-posted to Hackaday.io)

Sawppy Sees Brief Internet Fame

A few days ago I noticed a sudden spike in internet traffic to Sawppy – page views on my personal blog, Sawppy’s Hackaday.io project page, the Github repo, and YouTube video all rose dramatically. It took a little digging around various statistics reporting pages to figure out where the interest was coming from. Answer: someone had submitted Sawppy to Hacker News giving Sawppy a brief taste of internet fame.

Given the general attention span of the internet at large, the traffic disappeared just as quickly as it came. But in that brief moment in time, a few thousand people spared a few seconds (or more) of their lives to look over Sawppy and that’s more than what I had before.

Sawppy Spike

And this bit of exposure might lead to other interesting projects down the line. It seems to have caught the eye of someone with interest in the Pi Wars robot competition. Sawppy’s current configuration is indeed controlled by a Raspberry Pi, but according to contest rules Sawppy is too big to fit as-is. I’m not sure a six-wheeled rocker-bogie suspension would be useful for any contest objectives (challenges) in Pi Wars. But it would absolutely make my day to see one of the competitors downscale Sawppy to fit in the size envelope, thereby creating a “Sawppy Jr.”

(Cross-posted to Hackaday.io)

Phoebe 1.0 Complete

Phoebe Chassis 2

I started the Phoebe project with the goal of building something to apply what I’ve learned about ROS: get some hands-on experience, learn the ropes. Now that Phoebe can map and autonomously navigate its environment, this is a good place to pause and evaluate potential paths forward. (Also: I have other demands on my time so I need to pause my Phoebe work anyway… and now is a great time.)

Option #1: Better Refinement

Phoebe can map surroundings and then, using that map, navigate that environment. This level of functionality is on par with the baseline functionality of TurtleBot 3, though neither the mapping nor the navigation is quite as polished as performed by a TurtleBot built by people who know what they are doing. For that, Phoebe’s ROS modules need their parameters tuned to improve performance. There are also small bugs hiding in the system that need to be rooted out. I’m sure the ~100ms timing difference mystery is only the tip of the iceberg.

Risk: This is “the hard part” of not just building a robot, but building a good robot. And I know myself. Without a clear goal and visible progress towards that goal, I’m liable to get distracted or discouraged, trailing off and never really accomplishing anything.

Option #2: More ROS Functionality

I had been disappointed that the SLAM and navigation tutorials I’ve seen to date require a human to direct robot exploration. I had thought automated exploration would be part of SLAM but I was wrong. Thanks to helpful comments by Hackaday.io user Humpelstilzchen (who is building a pretty cool ROS robot too) I’ve now learned autonomous exploration is built on top of SLAM and Navigation.

So now that Phoebe can do SLAM and can navigate, adding one of the autonomous exploration modules would be the obvious next step.

Risk: It’s another ROS learning curve to climb.

Option #3: More Phoebe Functionality

Phoebe has wheel encoders and a LIDAR as input, and it might be interesting to add more. Ideas have included:

  • Obstacle detection to augment LIDAR, such as
    • Ultrasound distance sensor.
    • Infrared distance sensor (must avoid interference with LIDAR).
    • Bumpers with microswitches to detect collision.
  • IMU (inertial measurement unit).
  • Raspberry Pi camera or other video feed.

Risk: Over-complicating Phoebe, which was always intended to be a minimal-cost baseline entry into the world of ROS, following in the footsteps of the ROS TurtleBot.


Options 1 and 2 take place strictly in software, which means the mechanical chassis will remain untouched.

Option 3 changes Phoebe hardware, and that would start deviating from TurtleBot. There’s value in being TurtleBot-compatible and hence value in taking a snapshot at this point in time.

Given the above review, I declare the mechanical construction project of Phoebe the TurtleBot complete for version 1.0. As part of this, I’ve also updated the README file on Phoebe’s Github repository to describe its content. Because I know I’ll start forgetting!

Phoebe Is Navigating Autonomously

I’ve been making progress (slowly but surely) through the ROS navigation stack tutorial to get it running on Phoebe, and I’ve finally reached the finish line.

After all the configuration YAML files were created, they were tied together in a launch file as parameters to the ROS node move_base. For now I’m keeping the pieces in independent launch files, so move_base is run independently of Phoebe’s chassis functionality launch file and AMCL (launched using its default amcl_diff.launch).
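
The launch file follows the pattern from the tutorial: a single move_base node loading the YAML files as parameters. A sketch is below; the package name and file paths are placeholders rather than Phoebe's actual package layout.

    <launch>
      <node pkg="move_base" type="move_base" name="move_base" output="screen">
        <!-- the common costmap parameters are loaded twice, once per costmap namespace -->
        <rosparam file="$(find phoebe)/param/costmap_common_params.yaml" command="load" ns="global_costmap" />
        <rosparam file="$(find phoebe)/param/costmap_common_params.yaml" command="load" ns="local_costmap" />
        <rosparam file="$(find phoebe)/param/local_costmap_params.yaml" command="load" />
        <rosparam file="$(find phoebe)/param/global_costmap_params.yaml" command="load" />
        <rosparam file="$(find phoebe)/param/base_local_planner_params.yaml" command="load" />
      </node>
    </launch>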

After they were all running, a new RViz configuration was created to visualize local costmap and amcl particle cloud. And it was a huge mess! I was disheartened for a few seconds before I remembered seeing a similar mess when I first looked at navigation on a Gazebo simulation of TurtleBot 3 Burger. Before anything would work, I had to set the initial “2D Pose Estimate” to locate Phoebe on the map.

Once that was done, I set a “2D Nav Goal” via RViz, and Phoebe started moving! Looking at RViz I could see the map along with LIDAR scan plots and Phoebe’s digital representation from URDF. Those are all familiar from earlier. New to the navigation map is a planned path plotted in green, taking into account the local costmap in gray. AMCL contributed the rest of the information on screen, with individual estimates drawn as little yellow arrows and the estimated position in red.

Phoebe Nav2D 2

It’s pretty exciting to have a robot with basic intelligence for path planning, and not just a fancy remote control car.

Of course, there’s a lot of tuning to be done before things actually work well. Phoebe is super cautious and conservative about navigating obstacles, exhibiting a lot of halting and retrying behavior in narrower passageways even when there are still 10-15cm of clearance on each side. I’m confident there are parameters I could tune to improve this.

Less obvious is what I need to adjust to increase Phoebe’s confidence in relatively wide open areas: Phoebe would occasionally brake to a halt and hunt around a bit before resuming travel even when there’s plenty of space. I didn’t see an obstacle pop up on the local costmap, so it’s not clear what triggered this behavior.

(Cross-posted to Hackaday.io)

Navigation Stack Setup for Phoebe

Section 1 “Robot Setup” of this ROS Navigation tutorial page confirmed Phoebe met all the basic requirements for the standard ROS navigation stack. Section 2 “Navigation Stack Setup” is where I need to tell that navigation stack how to run on Phoebe.

I had already created a ROS package for Phoebe earlier to track all of my necessary support files, so getting navigation up and running is a matter of creating a new launch file in my existing directory for launch files. To date all of my ROS node configuration has been done in the launch file, but ROS navigation requires additional configuration files in YAML format.

First up in the tutorial were the configuration values common to both the local and global costmaps. This is where I saw the robot footprint definition, and I was a little sad it’s not pulled from the URDF I just put together. Since Phoebe’s footprint is somewhat close to a circle, I went with the robot_radius option instead of declaring a footprint with an array of [x,y] coordinates. The inflation_radius parameter sounds like an interesting one to experiment with later pending Phoebe performance. The observation_sources parameter is interesting – it implies the navigation stack can utilize multiple sources simultaneously. I want to come back later and see if it can use a Kinect sensor for navigation. For now, Phoebe has just a LIDAR so that’s how I configured it.
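
A costmap_common_params.yaml along those lines would look something like the sketch below, with robot_radius substituted for the footprint array. The numbers, frame name, and topic name are placeholders rather than measured Phoebe values.

    obstacle_range: 2.5
    raytrace_range: 3.0
    robot_radius: 0.11          # placeholder: approximate Phoebe radius in meters
    inflation_radius: 0.25      # the parameter to experiment with later
    observation_sources: laser_scan_sensor
    laser_scan_sensor: {sensor_frame: lidar, data_type: LaserScan, topic: scan, marking: true, clearing: true}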

For global costmap parameters, the tutorial values look equally applicable to Phoebe so I copied them as-is. For the local costmap, I reduced the width and height of the costmap window, because Phoebe doesn’t travel fast enough to need to look at 6 meters of surroundings, and I hoped reducing to 2 meters would reduce computation workload.
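
The corresponding local_costmap_params.yaml sketch, with the rolling window shrunk from the tutorial's 6 meters down to 2 (frame names depend on Phoebe's actual tf tree):

    local_costmap:
      global_frame: odom
      robot_base_frame: base_link
      update_frequency: 5.0
      publish_frequency: 2.0
      static_map: false
      rolling_window: true
      width: 2.0                # reduced from the tutorial's 6.0 meters
      height: 2.0
      resolution: 0.05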

For base local planner parameters, I reduced maximum velocity until I have confidence Phoebe isn’t going to get into trouble speeding. The key modification here from tutorial values is changing holonomic_robot from true to false. Phoebe is a differential drive robot and can’t strafe sideways as a true holonomic robot can.
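
The resulting base_local_planner_params.yaml differs from the tutorial mainly in the reduced velocities and the holonomic_robot flag. A sketch, with placeholder rather than tuned velocity values:

    TrajectoryPlannerROS:
      max_vel_x: 0.25           # placeholder: reduced from the tutorial's 0.45
      min_vel_x: 0.05
      max_vel_theta: 1.0
      min_in_place_vel_theta: 0.4
      acc_lim_x: 2.5
      acc_lim_y: 2.5
      acc_lim_theta: 3.2
      holonomic_robot: false    # Phoebe is differential drive and cannot strafe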

The final piece of section 2 is AMCL configuration. Earlier I had tried running AMCL on Phoebe without specifying any parameters (using defaults for everything) and it seemed to run without error messages, but I don’t yet have the experience to tell good AMCL behavior from bad. Reading this tutorial, I see the AMCL package has pre-configured launch files. The tutorial called up amcl_omni.launch. Since Phoebe is a differential drive robot, I should use amcl_diff.launch instead. The RViz plot looks different than when I ran AMCL with all default parameters, but again, I don’t yet have the experience to tell if it’s an improvement or not. Let’s see how this runs before modifying parameters.
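
Launching the stock differential-drive AMCL example is a one-line include, assuming the examples directory in the amcl package is where I remember it:

    <launch>
      <!-- AMCL pre-configured for differential drive robots, package defaults otherwise -->
      <include file="$(find amcl)/examples/amcl_diff.launch" />
    </launch>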

(Cross-posted to Hackaday.io.)

Checking If Phoebe Meets ROS Navigation Requirements

Now that basic coordinate transform frames have been configured with the help of URDF and the robot state publisher, I moved on to the next document: the robot setup page. This one is actually listed slightly out of order on the ROS navigation page, third behind the Basic Navigation Tuning Guide. I had started reading the Tuning Guide and saw in its introduction that it assumes people have read the robot setup page. It’s not clear why they are out of order, but clearly robot setup needs to come first.

Right up front in Section 1 “Robot Setup” was a very helpful diagram labelled “Navigation Stack Setup” showing major building blocks for an autonomously navigating ROS robot. Even better, these blocks are color-coded as to their source. White blocks are part of the ROS navigation stack, gray parts are optional components outside of that stack, and blue indicates robot-specific code to interface with navigation stack.

Navigation Stack Setup diagram from ROS documentation

This gives me a convenient checklist to make sure Phoebe has everything necessary for ROS navigation. Clockwise from the right, they are:

  • Sensor source – check! Phoebe has a Neato LIDAR publishing laser scan sensor messages.
  • Base controller – check! Phoebe has a Roboclaw ROS node executing movement commands.
  • Odometry source – check! This is also provided by the Roboclaw ROS node reading from encoders.
  • Sensor transforms – check! This is what we just updated, from a hard-coded published transform to one published by robot state publisher based on information in Phoebe’s URDF.

That was the easy part. Section 2 was more opaque to this ROS beginner. It gave an overview of the configuration necessary for a robot to run navigation, but the overview assumes a level of ROS knowledge that’s at the limit of what I actually have in my head right now. It’ll probably take a few rounds of trial and error before I get everything up and running.

(Cross-posted to Hackaday.io)

Phoebe Digital Avatar in RViz

Now that Phoebe’s URDF has been figured out, it has been added to the RViz visualization of Phoebe during GMapping runs. Before this point, Phoebe’s position and orientation (called a ‘pose‘ in ROS) were represented by a red arrow on the map. It’s been sufficient to get us this far, but a generic arrow is not enough for proper navigation because it doesn’t represent the space occupied by Phoebe. Now, with the URDF, the volume of space occupied by Phoebe is also visually represented on the map.

This is important for a human operator to gauge whether Phoebe can fit in certain spaces. While I was driving Phoebe around manually, it was a guessing game whether the red arrow would fit through a gap. Now with Phoebe’s digital avatar on the map, it’s a lot easier to gauge clearance.

I’m not sure if the ROS navigation stack will use Phoebe’s URDF in the same way. The primary reason the navigation tutorial pointed me to URDF is to get Phoebe’s transforms published properly in the tf tree using the robot state publisher tool. It’s pretty clear robot footprint information will be important for robot navigation for the same reason it was useful to human operation, I just don’t know if it’s the URDF doing that work or if I’ll end up defining robot footprint some other way. (UPDATE: I’ve since learned that, for the purposes of ROS navigation, robot footprint is defined some other way.)

In the meantime, here’s Phoebe by my favorite door to use for distance reference and calibration.

Phoebe By Door Posing Like URDF

And here’s the RViz plot with a digital representation of Phoebe by the door, showing the following:

  • LIDAR data in the form of a line of rainbow colored dots, drawn at the height of the Neato LIDAR unit. Each dot represents a LIDAR reading, with color representing the intensity of each return signal.
  • Black blocks on the occupancy map, representing space occupied by the door. Drawn at Z height of zero representing ground.
  • Light gray on the occupancy map representing unoccupied space.
  • Dark gray on the occupancy map representing unexplored space.

Phoebe By Door

(Cross-posted to Hackaday.io)

Phoebe URDF: Fixing Functional Problems

Once I had a decent looking URDF for Phoebe up and running, I added it into the Phoebe launch files and started working on the problems exposed by putting it to work.

The first problems were the drive wheels. Visually, they were stuck at the origin and didn’t move with the rest of the robot. Looking through error messages, I realized ROS had expected me to read wheel encoder values and publish them as joint states. Since I hadn’t done so, the wheels (attached with a “continuous” joint) didn’t know their location. Until I get around to processing wheel encoder values, the joint type was changed to “fixed” to attach them to the chassis.

Looking at the model from multiple angles, I realized I forgot the caster wheel. Since it’s not driven, it is represented as a simple sphere and also attached via a fixed joint.
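
In URDF terms, the stopgap looks something like this sketch, with each wheel and the caster attached to the chassis by a fixed joint (link names and offsets are illustrative):

    <!-- wheel attached rigidly for now; becomes "continuous" once encoder values are published as joint states -->
    <joint name="left_wheel_joint" type="fixed">
      <parent link="base_link"/>
      <child link="left_wheel"/>
      <origin xyz="0 0.1 0.03" rpy="1.5708 0 0"/>
    </joint>

    <!-- caster represented as a simple sphere, also on a fixed joint -->
    <link name="caster">
      <visual>
        <geometry>
          <sphere radius="0.015"/>
        </geometry>
      </visual>
    </link>
    <joint name="caster_joint" type="fixed">
      <parent link="base_link"/>
      <child link="caster"/>
      <origin xyz="-0.12 0 0.015"/>
    </joint>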

That’s enough to start driving around as a single unit, but the robot movement in RViz was reversed front/back relative to the LIDAR data plot. This was caused by the fact that I forgot to tell ROS the LIDAR is pointed backwards on the robot. Once I had done so, the 180 degree yaw is visible in the object axis visualization: the LIDAR’s X-axis (red cylinder) is pointing backwards instead of forwards like all the other axes.

Phoebe RViz Axes Arrows No Name
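
Telling ROS about the backwards LIDAR mounting is just a yaw in the joint origin, roughly like this (link names and offsets are again illustrative):

    <joint name="lidar_joint" type="fixed">
      <parent link="base_link"/>
      <child link="neato_lidar"/>
      <!-- 180 degree yaw because the LIDAR is mounted pointing backwards -->
      <origin xyz="-0.05 0 0.1" rpy="0 0 3.14159"/>
    </joint>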

The final set of changes might be more cosmetic than functional. When reading about differential drive robots in ROS, it was brought up several times that the robot’s X/Y origin base_link needs to be lined up with the pivoting axis of the robot. However, it wasn’t clear where the Z axis is supposed to be. Perhaps this is different for each ROS mapping module? The algorithm hector_slam defines several frames, but they don’t appear to be supported by gmapping.

I first defined Phoebe origin as the center point between its two drive wheel axles. When rendered in RViz, this means the Z plane intersects the middle of the robot. It seems to work well, but the visualization looks a bit odd. Intuitively I want the Z plane to represent the ground, so I decided to drop the robot origin to ground level. In the object visualization, this is visible as the purple arrow heads all pointing at a center point below the robot. If I learn this was a bad move later, I’ll have to change it back.

All these changes combined gave me a Phoebe URDF with a minimal but functional representation for RViz visualization of Phoebe’s behavior.

(Cross-posted to Hackaday.io)

Describe Phoebe For ROS Using URDF

Now that I’ve decided to bring up the ROS navigation stack for Phoebe, where do I start? Well, the ROS Wiki page for the subject is always a good place to start, as they tend to have a tutorial for the subject. ROS navigation is no exception.

The first recommended page is actually a familiar sight – the brief overview on tf was required reading back when I first assembled the chassis. At the time, I could get away with a very simple static publisher, because I just had to tell ROS how and where my Neato LIDAR is mounted on my robot chassis. But now I guess I need to advance to the next step and publish robot state. And this means describing Phoebe in more detail for ROS using an XML syntax called URDF (Unified Robot Description Format).

So in order to bring up ROS navigation on Phoebe, the navigation wiki page has pointed me to robot state publisher and also the ROS URDF Tutorial. To learn one thing I had to learn another, the typical bootstrap process when learning something new.

For the purposes of robot physics simulation, the robot should be described using very basic geometry: a combination of rectangular solids, cylinders, and spheres. This keeps the computation workload for collision detection simple. While the visual representation can be more complex than the collision detection representation, it doesn’t have to be. So for this first draft, I’ll just do a super simple Phoebe for visual representation, suitable for use in collision calculations if I get into that later.

I started with Phoebe’s Onshape CAD file.

Phoebe CAD Full

Taking the critical dimensions, I created a simplified version in Onshape CAD using just rectangular boxes and cylinders. This makes it fairly straightforward to translate into URDF.

Phoebe CAD Simplified

By measuring the dimensions in CAD, I could declare a few primitives with URDF and see what it looks like in RViz for comparison against CAD. Once the visual appearance is roughly correct, it’s time to tune the details and make sure they work for ROS functional purposes.
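
A first-draft URDF in that spirit is just a handful of primitives, for example a box for the chassis and a cylinder per wheel. The sketch below uses placeholder dimensions, not Phoebe's actual measurements:

    <?xml version="1.0"?>
    <robot name="phoebe">
      <!-- chassis as a single box, placeholder dimensions in meters -->
      <link name="base_link">
        <visual>
          <geometry>
            <box size="0.24 0.18 0.06"/>
          </geometry>
        </visual>
      </link>

      <!-- one drive wheel as a cylinder; the right wheel would mirror this -->
      <link name="left_wheel">
        <visual>
          <geometry>
            <cylinder radius="0.035" length="0.025"/>
          </geometry>
        </visual>
      </link>
      <joint name="left_wheel_joint" type="continuous">
        <parent link="base_link"/>
        <child link="left_wheel"/>
        <origin xyz="0 0.1 0" rpy="1.5708 0 0"/>
        <axis xyz="0 0 1"/>  <!-- spins about the cylinder's own axis -->
      </joint>
    </robot>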

Phoebe RViz Simplified

(Cross-posted to Hackaday.io)

Next Phoebe Project Goal: ROS Navigation

When I started working on my own TurtleBot variant (before I even decided to call it Phoebe) my intention was to build a hardware platform to get first hand experience with ROS fundamentals. Phoebe’s Hackaday.io project page subtitle declared itself as a ROS robot for <$250 capable of SLAM. Now that Phoebe can map surroundings using standard ROS SLAM library ‘gmapping‘, that goal has been satisfied. What’s next?

One disappointment I found with existing ROS SLAM libraries is that the tutorials I’ve seen (such as this and this) expect a human to drive the robot during mapping. I had incorrectly assumed the robot would autonomously explore its space, but “simultaneous localization and mapping” only promises localization and mapping – nothing about deciding which areas to map, and how to go about it. That is left to the human operator.

When I played with SLAM code earlier, I decided against driving the robot manually and instead invoked an existing module that takes a random walk through available space. A search on the ROS Answers web site for something more sophisticated than a random walk resulted in multiple pointers to the explore module, but that code hasn’t been maintained since ROS “groovy”, four versions ago. So one path forward is to take up the challenge of either updating explore or writing my own explorer.

That might be interesting, but once a map is built, what do we do with it? The standard ROS answer is the robot navigation stack. This collection of modules is what gives a ROS robot the ability to plan a path through a map, watch its progress through that plan, and update the plan in reaction to unexpected elements in the environment.

At the moment I believe it would be best to learn about the standard navigation stack and get that up and running on Phoebe. I might return to the map exploration problem later, and if so, seeing how map data is used for navigation will give me better insights into what would make a better map explorer.

(Cross-posted to Hackaday.io)