Party Bling in 30 Minutes: LED Blinky Collar

It’s good to have grand long-term projects like Sawppy, but sometimes it’s nice to take a side trip and bang out a quick fun project. The KISS Tindies were originally supposed to be such a project but grew beyond their original scope. Today’s project had to be short by necessity. At less than 30 minutes, it’s one of my shortest projects ever.

Collar LED blinky final curved

The motivation for this project was an office party, but I didn’t know what the crowd was going to be like. My fallback outfit for such unknowns is a long-sleeved dress shirt and a sport jacket. If it turns out to be formal, I’ll be underdressed, but at least I’ll have a jacket on. If it turns out to be semi-formal, I should fit in. If it is casual, I can take off the jacket. But these are people in the electronics industry, so there’s a chance I will find a room full of people wearing flashing LEDs. Less than an hour before I had to leave, I decided that instead of my usual necktie I would substitute a little bit of LED bling of my own.

The objective is to take the same self-blinking LEDs I used on my KISS Tindies and install them under my shirt collar. Since these LEDs can be obnoxiously bright (especially in dark rooms) the light will be indirect, bouncing off fabric underneath my collar. This way I don’t blind whoever I’m trying to hold a conversation with.

When I bought the self-blinking LEDs for the KISS Tindies project, I bought a bag of 100 so there’s plenty left to play with. For a battery holder I’ll use the same design I created for the Tindies out of copper wire. There’s no time to 3D print a structure, so I’m just going to use paper. Copper foil tape will form circuitry on that sheet of paper. Here’s the initial prototype. I folded the paper in half to give it more strength. I had also cut out a chunk of paper to help the battery holder stay in place.

collar led blinky prototype parts

Assembling these parts resulted in a successfully blinking LED, good enough to proceed.

collar led blinky prototype works

The final version used a longer sheet of paper. I measured my shirt collar and cut a sheet of paper so the ends would sit roughly 3mm inside the collar. This was longer than a normal sheet of paper, so I had to pull a sheet of legal-size paper out of my paper recycling bin. I think it was the legal disclosure form for a pre-approved credit card offer.

collar led blinky final

The LEDs sit a few centimeters inside the paper’s edge. The other side of the paper had extra copper tape to shield the light from shining through. I wanted the light to reflect inside my collar, not show through it. The first test showed a few circular spotlights on my shirt, so I added a sheet of Scotch tape to diffuse light. Once I was happy with the layout of this contraption, I soldered all components to copper foil for reliability.

Less than 30 minutes from start to finish, I had a blinky LED accessory for my shirt.

collar-led-blinky

As it turned out, there was only one other person wearing electronics in the form of some electroluminescent wire. My blinky LED collar was more subtle about announcing itself, but it was noticed by enough people to make me happy.

(Cross-posted to Hackaday.io)

Sawppy Odometry Candidate: Flow Breakout Board

When I presented Sawppy the Rover at Robotics Society of Southern California, one of the things I brought up as an open problem is how to determine Sawppy’s movement through its environment. Wheel odometry is sufficient for a robot like Phoebe traveling on flat ground, but when Sawppy travels on rough terrain, things can get messy in more ways than one.

In the question-and-answer session some people brought up the idea of calculating odometry by visual means, much in the way a modern optical computer mouse determines its movement on a desk. This is something I could whip up with a downward-pointing webcam and open source software, but there are also pieces of hardware designed specifically to perform this task. One example is the PMW3901 chip, which I could experiment with using breakout boards like this item on Tindie.

However, that visual calculation is only part of the challenge, because translating what that camera sees into a physical dimension requires one more piece of data: the distance from the camera to the surface it is looking at. Depending on application, this distance might be a known quantity. But for robotic applications where the distance may vary, a distance sensor would be required.

As a follow-up to my presentation, RSSC’s online discussion forum brought up the Flow Breakout Board. This is an interesting candidate for helping Sawppy gain awareness of how it is moving through its environment (or failing to do so, as the case may be): a small lightweight module that puts the aforementioned PMW3901 chip alongside a VL53L0x distance sensor.

flow_breakout_585px-1

The breakout board only handles the electrical connections – an external computer or microcontroller will be necessary to make the whole system sing. That external module will need to communicate with the PMW3901 via SPI and, separately, the VL53L0x via I2C. Then it will need to perform the math to calculate actual X-Y distance traveled. This in itself isn’t a problem.
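In fact it’s mostly a small-angle conversion: scale the flow counts by the height above the surface. Here’s a rough sketch of the idea in Python – the counts-per-radian constant and the read functions are placeholders I made up, and real numbers would have to come from the PMW3901 datasheet and calibration:

# Rough sketch: convert optical flow counts plus a range reading into
# physical X-Y displacement. COUNTS_PER_RADIAN and the read_* helpers are
# placeholders, not real values or APIs.
COUNTS_PER_RADIAN = 500.0  # assumed calibration constant

def flow_to_displacement(delta_x_counts, delta_y_counts, height_m):
    """Small-angle approximation: displacement = angle * height above surface."""
    dx_m = (delta_x_counts / COUNTS_PER_RADIAN) * height_m
    dy_m = (delta_y_counts / COUNTS_PER_RADIAN) * height_m
    return dx_m, dy_m

# Hypothetical usage:
#   dx, dy = read_pmw3901_flow()       # over SPI
#   height = read_vl53l0x_range_m()    # over I2C
#   x_m, y_m = flow_to_displacement(dx, dy, height)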

The problem comes from the fact that the PMW3901 was designed to be used on small multirotor aircraft to help them hold position. Several design decisions that make sense for its intended purpose turn out to be problems for Sawppy.

  1. This chip is designed to help hold position, which is why it is not concerned with knowing the height above the surface or the physical dimension of that translation: the sensor only needs to detect movement so the aircraft can be brought back to position.
  2. Multirotor aircraft all have built-in gyroscopes to stabilize themselves, so they already detect rotation about their Z axis. Sawppy has no such sensor and would not be able to calculate its position in global space if it doesn’t know how much it has turned in place.
  3. Multirotor aircraft fly in the air, so the designed working range of 80mm to infinity is perfectly fine. However, Sawppy has only 160mm between the bottom of the equipment bay and nominal floor distance. Traversing over obstacles more than 80mm tall, or rough terrain that brings the surface within 80mm of the sensor, would leave this sensor disoriented.

This is a very cool sensor module that has a lot of possibilities, and despite its potential problems it has been added to the list of things to try for Sawppy in the future.

Give The People What They Want: Wire Straightener Now On Thingiverse

My wire straightener project was focused on simplicity and reliability. There are no mechanical adjustments for different gauge wires or to correct for a 3D printer’s dimensional accuracy (or lack thereof.) Every adjustment had to be made in CAD by changing the relevant dimensions and printing a test unit. This requires more work up front, but once all the dimensions are dialed in, the single piece tool will never fall apart and will never need readjustment.

spool holder with two stage straightener 1600x1200

It also means the raw STL files generated by Onshape for my printer would probably not work properly for anyone else. For starters, they were tailored to my specific spool of 18 gauge copper wire. According to Google, 18 gauge translates to a diameter of 1.02mm. My calipers say my spool is 1.00 +/- 0.01 mm, slightly smaller than specified. The STL is then processed into G-Code by Simplify3D, my printing slicer. And finally that G-Code is translated into plastic by my printer, with all its individual quirks.
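As a quick sanity check of that 1.02mm figure, here is the standard AWG-to-diameter formula worked out in a few lines of Python:

# Standard AWG formula: d(mm) = 0.127 * 92^((36 - n) / 39)
def awg_to_mm(n):
    return 0.127 * 92 ** ((36 - n) / 39.0)

print(round(awg_to_mm(18), 2))  # prints 1.02, matching the number from Google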

So while I was happy to share my Onshape CAD file, I resisted sharing the STL because it almost certainly would not work correctly and I don’t want people to have a bad experience with my design. But people ask for it anyway, over and over.

I have since changed my mind on the topic of posting the STL. I will post the STL, but never by itself. I will also post information describing why the STL is probably not going to work, link to the Onshape CAD, and explain what people need to do to make their own. I foresee the following possibilities:

  1. People who don’t read the instructions will print the file as-is:
    • If it works for them – great!
    • If it doesn’t:
      • Abandon with “This design is stupid and it sucks.” – Well, let’s face it, I was not going to reach this audience anyway.
      • “Maybe I should go back and read the instructions.”
  2. People who read the instructions:
    • Successfully fine-tune parameters to make their own straightener – great!
    • Tried to follow directions, but encountered problems and need help – I’m happy to help.

Unless I’ve failed to consider something horrible, these possibilities have more upsides than downsides, so let’s try it. I’m going to share the STL files on the Hackaday.io project page, and I’ve created a Thingiverse page for it as well.

(Cross-posted to Hackaday.io)

Intel RealSense T265 Tracking Camera

In the middle of these experiments with a Xbox 360 Kinect as a robot depth sensor, Intel announced a new product that’s along similar lines and a tempting avenue for robotic exploration: the Intel RealSense T265 Tracking Camera. Here’s a picture from Intel’s website announcing the product:

intel_realsense_tracking_camera_t265_hero

T265 is not a direct replacement for the Kinect, at least not as a depth sensing camera. For that, we need to look at Intel’s D415 and D435. They would be fun to play with, too, but I already had the Kinect so I’m learning on what I have before I spend money.

So if the T265 is not a Kinect replacement, how is it interesting? It can act as a complement to a depth sensing camera. The point of the thing is not to capture the environment – it is to track the motion and position within that environment. Yes, there is the option for an image output stream, but the primary data output of this device is a position and orientation.

This type of camera-based “inside-out” tracking is used by the Windows Mixed Reality headsets to determine their user’s head position and orientation. These sensors require low latency and high accuracy to avoid VR motion sickness, and they have obvious applications in robotics. Now Intel’s T265 offers that capability in a standalone device.

According to Intel, the implementation is based on a pair of video cameras and an inertial measurement unit (IMU). Data feeds into internal electronics running a V-SLAM (visual simultaneous localization and mapping) algorithm aided by a Movidius neural network chip. This process generates the position+orientation output. It seems pretty impressive to me that it is done in such a small form factor and at high speed (or at least low latency) with 1.5 watts of power.

At $200, it is a tempting toy for experimentation. Before I spend that money, though, I’ll want to read more about how to interface with this device. The USB 2 connection is not surprising, but there’s a phrase that I don’t yet understand: “non volatile memory to boot the device” makes it sound like the host is responsible for some portion of the device’s boot process, which isn’t like any other sensor I’ve worked with before.
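From skimming Intel’s librealsense documentation, reading the pose stream from Python looks like it would be something along these lines. I haven’t bought the hardware yet, so treat this as my reading of the docs rather than tested code:

# Untested sketch based on Intel's librealsense Python wrapper (pyrealsense2).
import pyrealsense2 as rs

pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.pose)   # T265 pose stream: position + orientation
pipe.start(cfg)

try:
    while True:
        frames = pipe.wait_for_frames()
        pose = frames.get_pose_frame()
        if pose:
            data = pose.get_pose_data()
            print("position:", data.translation, "orientation:", data.rotation)
finally:
    pipe.stop()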

Xbox 360 Kinect Needs A Substitute Rover

It was pretty cool to see RTAB-Map build a 3D map of its environment using data generated by my hands waving a Xbox 360 Kinect around. However, that isn’t very representative of rover operation. When I wave it around manually, the motions are mostly pan and tilt with not much translation. The optical flow of a video feed from a rover traveling along the ground would be dominated by forward travel, occasional panning as the vehicle turns, and limited tilt. So the next experiment is to put the Kinect on a rover to see how it acts.

This is analogous to what we did at SGVTech when I first brought in the LIDAR from a Neato vacuum: we placed it on top of SGVHAK Rover and drove it around the shop to see what it sees. Unfortunately, the SGVHAK Rover is currently in the middle of an upgrade and disassembled on a workbench. We’ll need something else to stand in for a rover chassis. Behold, the substitute rover:

kinect with office chair simulating rover

Yes, that is a Xbox 360 Kinect sensor bar taped on top of an office chair. The laptop talking to the Kinect can sit on the chair easily enough, but the separate 12V power supply took a bit more work. I had two identical two-cell lithium battery packs. Wiring those two ~7.4 volt packs in series gave me ~14.8 volts, which fed into a voltage regulator bringing it down to 12V for the Kinect. The whole battery power contraption is visible in this picture, taped to the laptop’s wrist rest next to the trackpad.

This gave us a wheeled platform for linear and rotational motion along the ground while keeping the Kinect at a constant height, which is more representative of the type of motion it will see mounted on a rover. Wheeling the chair around the shop, we found the visual odometry performance impressive: traveling in a line for about three meters resulted in only a few centimeters of error between its internal representation and reality.

We found this by turning the chair around to let the Kinect see where it came from and comparing the newly plotted dots against those it plotted three meters ago. But this raised a new question: was it reasonable to expect the RTAB-Map algorithm to match the new dots against the old? Using distance data to correct for odometry drift was one thing Phoebe could do in GMapping. I had hoped RTAB-Map would use new observations to correct for its own visual odometry drift. But instead, it started plotting features a few centimeters off from their original position, creating a “ghost” in the point cloud data. Maybe I’m using RTAB-Map wrong somewhere… this is worrisome behavior that needs to be understood.

Xbox 360 Kinect and RTAB-Map: Handheld 3D Environment Scanning

I brought my modified Xbox 360 Kinect and my laptop to this week’s SGVTech meetup. My goal for the evening was to show everyone what can be done with an old game console accessory and publicly available open source code. And the best place to start showcasing RTAB-Map is to go through the very first tutorial: Handheld Mapping with RGB-D sensor.

When I installed OpenKinect on my Ubuntu laptop, I was pleasantly surprised that it was offered as part of Ubuntu software repository making installation trivial. I had half expected that I would have to download the source code and struggle to compile without errors.

Getting handheld RGB-D mapping up and running under ROS using RTAB-Map turned out to be almost as easy. The necessary modules are all available in ROS software repositories, again sparing me the headache of understanding and fixing compiler errors. That is, as long as the computer already has ROS Kinetic installed, which is admittedly a bit of work.

But if someone is starting with a working installation of ROS Kinetic on Ubuntu, they only need to install three packages via sudo apt install: the freenect driver, RTAB-Map itself, and its rtabmap_ros wrapper.

Once they are installed, follow the instructions in the RTAB-Map handheld RGB-D mapping tutorial to execute two ROS launch files. The first one launches the ROS node matching the sensor device (in my case the Xbox 360 Kinect); the second one launches RTAB-Map itself along with a visualization GUI.

I had fun scanning the shop environment where we hold our meetups. I moved the sensor around, both panning left-right and tilting up-down, to get data from one side of the room. RTAB-Map created a pretty decent 3D representation of the shop. Here’s a camera view of one experiment. The Kinect is sitting on the workbench behind the laptop screen. The visualization GUI has the raw video image (upper left), an image with dots highlighting the features RTAB-Map is tracking (lower left), and a big window with the 3D point cloud compiled from Kinect data.

workshop shelves 3d reconstruction - camera

Here’s the screenshot. It is even more impressive in person because we could interact with the point cloud window, rotating and zooming in 3D space to see the area from angles the Kinect was never at. Speaking of which, look at the light teal line drawn in the lower right: it represents what RTAB-Map reconstructed as the path (in three dimensional space) I waved the Kinect through.

Workshop Shelves 3D reconstruction - screencap

RTAB-Map is a lot of fun to play with, and shows huge potential for robot project applications.

Trying RTAB-Map To Process Xbox 360 Kinect Data

xbox 360 kinect with modified plugs 12v usb

Depth sensor data from a Xbox 360 Kinect might be imperfect, but it is good enough to proceed with learning about how a machine can make sense of its environment using such data. This is an area of active research with lots of options for how I can sample the current state of the art. What looks the most interesting to me right now is RTAB-Map, from Mathieu Labbé of IntRoLab at Université de Sherbrooke (Quebec, Canada.)

rtab-map

RTAB stands for Real-Time Appearance-Based, which neatly captures the constraint (fast enough to be real-time) and technique (extract interesting attributes from appearance) of how the algorithm approaches mapping an environment. I learned of this algorithm by reading the research paper behind a Hackaday post of mine. The robot featured in that article used RTAB-Map to generate its internal representation of its world.

The more I read about RTAB-Map, the more I liked what I found. Its home page lists events dating back to 2014, and the Github code repository shows updates as recently as two days ago. It is encouraging to see continued work on this project, instead of something that was abandoned years ago when its author graduated. (Unfortunately quite common in the ROS ecosystem.)

Speaking of ROS, RTAB-Map itself is not tied to ROS, but the lab provides an rtabmap_ros module to interface with ROS. Its Github code repository has also seen recent updates, though it looks like a version for ROS Melodic has yet to be generated. This might be a problem later… but right now I’m still exploring ROS using Kinetic so I’m OK in the short term.

As for sensor support, RTAB-Map supports everything I know about, and many more that I don’t. I was not surprised to find support for OpenNI and OpenKinect (freenect). But I was quite pleased to see that it also supports the second generation Xbox One Kinect via freenect2. This covers all the sensors I care about in the foreseeable future.

The only downside is that RTAB-Map requires significantly more computing resources than what I’ve been playing with to date. This was fully expected and is not a criticism of RTAB-Map. I knew 3D data would be far more complex to process, but I didn’t know how much more. RTAB-Map will be my first solid data point. Right now my rover Sawppy is running on a Raspberry Pi 3. According to RTAB-Map’s author, a Raspberry Pi 3 could only perform updates four to five times a second. Earlier I had outlined the range of computing power I might summon for a rover brain upgrade. If I want to run RTAB-Map at a decent rate, it looks like I have to go past Chromebook level of hardware to Intel NUC level of power. Thankfully I don’t have to go to the expensive realm of NVIDIA hardware (either a Jetson board or a GPU) just yet.

Xbox 360 Kinect Depth Sensor Data via OpenKinect (freenect)

I want to try using my Xbox 360 Kinect as a robot sensor. After making the necessary electrical modifications, I decided to try to talk to my sensor bar via the OpenKinect (a.k.a. freenect a.k.a. libfreenect) driver software. Getting it up and running on my Ubuntu 16.04 installation was surprisingly easy: someone has put in the work to make it part of the standard Ubuntu software repository. Whoever it was, thank you!

Once installed, though, I wasn’t sure what to do next. I found documentation telling me to launch a test/demonstration viewer application called glview. That turned out to be old information: the test app is actually called freenect-glview. Also, it is no longer added to the default user search path, so I have to launch it with the full path /usr/bin/freenect-glview.

Once I got past that minor hurdle, I had on my screen a window showing two video feeds from my Kinect sensor: on the left, depth information represented by colors; on the right, normal human-vision color video. Here’s my Kinect pointed at its intended home: on board my rover Sawppy.

freenect-glview with sawppy

This gave me a good close look at Kinect depth data. What’s visible and represented by color is pretty good, but the black areas worry me. They represent places where the Kinect was not able to extract depth information. I didn’t expect it to be able to pick out fine surface details of Sawppy components, but I did expect it to see the whole chassis in some form. This was not the case, with areas of black all over Sawppy’s chassis.
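To put a number on how much of each frame comes back empty, the libfreenect Python bindings make it easy to grab a raw depth frame and count the holes. If I understand the default 11-bit depth format correctly, a value of 2047 means “no reading” – those are the black areas:

# Count how many pixels in one depth frame came back with no reading.
# Assumes libfreenect's Python bindings and the default 11-bit depth mode,
# where (as I understand it) 2047 is the "no data" value.
import freenect
import numpy as np

depth, _timestamp = freenect.sync_get_depth()
no_reading = np.count_nonzero(depth == 2047)
print("pixels without depth: %d of %d (%.1f%%)"
      % (no_reading, depth.size, 100.0 * no_reading / depth.size))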

Some observations of what a Xbox 360 Kinect could not see:

  • The top of Sawppy’s camera mast: neither the webcam nor the 3D-printed mount for that camera. This is the most concerning item because I have no hypothesis for why.
  • The bottom of Sawppy’s payload bay. This is unfortunate but understandable: it is a piece of laser-cut acrylic, which would reflect the Kinect’s projected pattern away from the receiving IR camera.
  • The battery pack in the rear has a smooth clear plastic package and would also not reflect much back to the camera.
  • Wiring bundles are enclosed in a braided sleeve, which would scatter the majority of the IR pattern; whatever does make it back to the receiving camera is probably jumbled.

None of these are deal-breakers on their own; they’re part of the challenges of building a robot that functions outside of a controlled environment. In addition to those, I’m also concerned about the frame-to-frame inconsistency of the depth data. Problematic areas are sometimes visible in one frame and disappear in the next. The noisiness of this information might confuse a robot trying to make sense of its environment with this data. It’s not visible in the screenshot above, but here’s an animated GIF showing a short snippet for illustration:

kinect-looking-at-sawppy-inconsistencies

Xbox 360 Kinect Driver: OpenNI or OpenKinect (freenect)?

The Kinect sensor bar from my Xbox 360 has long been retired from gaming duty. For its second career as a robot sensor, I have cut off its proprietary plug and rewired it for computer use. Once I verified the sensor bar was electrically compatible with a computer running Ubuntu, the first order of business was to turn fragile test connections into properly soldered wires protected by heat shrink tubing. Here’s my sensor bar with its new standard USB 2.0 connector and a JST-RCY connector for 12 volt power.

xbox 360 kinect with modified plugs 12v usb

With the electrical side settled, attention turns to software. The sensor bar can tell the computer it is a USB device, but we’ll need additional driver software to access all the data it can provide. I chose to start with the Xbox 360 Kinect because of its wider software support, which means I have multiple choices on which software stack to work with.

OpenNI is one option. This open source SDK is still around thanks to Occipital, one of the companies that partnered with PrimeSense. PrimeSense was the company that originally developed the technology behind the Xbox 360 Kinect sensor, but they have since been acquired by Apple and their technology incorporated into the iPhone X. Occipital itself is still in the depth sensor business with their Structure sensor bar, available standalone or incorporated into products like Misty.

OpenKinect is another option. It doesn’t have a clear corporate sponsor like OpenNI, and seems to have its roots in the winner of the Adafruit contest to create an open source Kinect driver. Confusingly, it is also sometimes called freenect or variants thereof. (Its software library is libfreenect, etc.)

Both of these appear to still be receiving maintenance updates, and both have been used in a lot of cool Kinect projects outside of Xbox 360 games, ensuring there will be a body of source code available as reference for using either. Neither is focused on ROS, but people have written ROS drivers for both OpenNI and OpenKinect (freenect). (And there’s even an effort to rationalize across both.)

One advantage of OpenNI is that it provides an abstraction layer for many different depth cameras built on PrimeSense technology, making code more portable across different hardware. This does not, however, include the second generation Xbox One Kinect, as that was built with a different (not PrimeSense) technology.

In contrast, OpenKinect is specific to the Xbox 360 Kinect sensor bar. It provides access to parts beyond the PrimeSense sensor: microphone array, tilt motor, and accelerometer.  While this means it doesn’t support the second generation Xbox One Kinect either, there’s a standalone sibling project libfreenect2 for meeting that need.

I don’t foresee using any other PrimeSense-based sensors, so OpenNI’s abstraction doesn’t draw me. The access to other hardware offered by OpenKinect does. Plus I do hope to upgrade to a Xbox One Kinect in the future, so I decided to start my Xbox 360 Kinect experimentation using OpenKinect.

Modify Xbox 360 Kinect for PC Use

I want to get some first hand experience working with depth cameras in a robotics context, and a little research implied the Xbox 360 Kinect sensor bar is the cheapest hardware that also has decent open source software support. So it’s time to dust off the Kinect sensor bar from my Halo 4 Edition Xbox 360.

I was a huge Halo fan and I purchased this Halo 4 console bundle as an upgrade from my first generation Xbox 360. My Halo enthusiasm has since faded and so has the Xbox 360. After I upgraded to Xbox One, I lent out this console (plus all accessories and games) to a friend with young children. Eventually the children lost interest in an antiquated console that didn’t play any of the cool new games and it resumed gathering dust. When I asked if I could reclaim my Kinect sensor bar, I was told to reclaim the whole works. The first accessory to undergo disassembly at SGVTech was the Xbox 360 Racing Steering Wheel. Now it is time for the second accessory: my Kinect sensor bar.

The sensor bar connected to my console via a proprietary connector. Most Xbox 360 accessories are wireless battery-powered devices, but the Kinect sends far more data than normal wireless controllers and requires much more power than rechargeable AA batteries can handle. Thus the proprietary connector is a combination of a 12 volt power supply and standard USB 2.0 data at 5 volts. To convert this sensor bar for computer use instead of a Xbox 360, the proprietary connector needs to be replaced by two separate connectors: a standard USB 2.0 plug plus a 12V power supply plug.

Having a functioning Xbox 360 made the task easier. First, by going into the Kinect diagnostics menu, I could verify the Kinect is in working condition before I start cutting things up. Second, after I severed the proprietary plug and splayed out wires in the cable, a multimeter was able to easily determine the wires for 12 volt (tan), 5 volt (red), and ground (black) by detecting the voltages placed on those wires by a running Xbox 360.

That left only the two USB data wires, colored green and white. Thankfully, this appears to be fairly standardized across USB cables. When I cut apart a USB 2.0 cable to use as my new plug, I found the same red, black, green, and white colors on wires. To test the easy thing first, I matched wire colors, kept them from shorting each other with small pieces of tape, and put 12V power on the tan wire using a bench power supply.

xbox 360 kinect on workbench

Since I was not confident in this wiring, I tested my suspect USB connection with my cheap laptop instead of my good one. Fortunately, the color matching appeared to work and the sensor bar enumerated properly. Ubuntu’s dmesg utility lists a Kinect sensor bar as a USB hub with three attached devices:

  1. Xbox NUI Motor: a small motor that can tilt the sensor bar up or down.
  2. Xbox Kinect Audio: on board microphone array.
  3. Xbox NUI Camera: this is our depth-sensing star!

xbox nui camera detected

[ 84.825198] usb 2-3.3: new high-speed USB device number 10 using xhci_hcd
[ 84.931728] usb 2-3.3: New USB device found, idVendor=045e, idProduct=02ae
[ 84.931733] usb 2-3.3: New USB device strings: Mfr=2, Product=1, SerialNumber=3
[ 84.931735] usb 2-3.3: Product: Xbox NUI Camera
[ 84.931737] usb 2-3.3: Manufacturer: Microsoft

 

ROS In Three Dimensions: Starting With Xbox 360 Kinect

The long-term goal driving my robotics investigations is to build something that is aware of its environment and can intelligently plan actions within it. (This is a goal shared by many other members of RSSC as well.) Building Phoebe gave me an introduction to ROS running in two dimensions, and now I have ambition to graduate to three. A robot working in three dimensions needs a sensor that works in three dimensions, so that’s where I’m going to start.

Phoebe started with a 2D laser scanner purchased off eBay that I learned to get up and running in ROS. Similarly, the cheapest 3D sensors that can be put on a ROS robot are repurposed Kinect sensor bars from Xbox game consoles. Even better, since I’ve been a Xbox gamer (more specifically a Halo and Forza gamer) I don’t need to visit eBay. I have my own Kinect to draft into this project. In fact, I have more than one: I have both the first generation Kinect sensor accessory for the Xbox 360, and the second generation that was released alongside the Xbox One.

xbox 360 kinect and xbox one kinect

The newer Xbox One Kinect is a superior sensor with a wider field of view, higher resolution, and better precision. But that doesn’t necessarily make it the best choice to start off with, because hardware capability is only part of the story.

When the Xbox 360 Kinect was launched, it was a completely novel device offering depth sensing at a fraction of the price of existing depth sensors. There was a lot of enthusiasm both in the context of video gaming and in hacking it for use outside of Xbox 360 games. Unfortunately, the breathless hype wrote checks that the reality of a low-cost depth camera couldn’t quite cash. By the time the Xbox One launched with an updated Kinect, interest had waned and far fewer open source projects aimed to work with the second generation Kinect.

The superior capabilities of the second generation sensor bar also brought downsides: it required more data bandwidth and hence a move to USB 3.0. At the time, the USB 3.0 ecosystem was still maturing and the new Kinect had problems working with certain USB 3.0 implementations. Even if the data could get into a computer, the sheer amount of it placed more demands on processing code. Coupled with reduced public interest, this meant software support for the second generation Kinect is less robust. A web search found a lot of people who encountered problems trying to get their second generation bar to work.

In the interest of learning the ropes and getting an introduction to the world of 3D sensing, I decided a larger and more stable software base is more interesting than raw hardware capabilities. I’ll use the first generation Xbox 360 Kinect sensor bar to climb the learning curve of building a three-dimensional solution in ROS. Once that is up and running, I can try to tame the more finicky second generation Kinect.

That UI in Jurassic Park Was A Real Thing

I don’t remember exactly when and where I saw the movie Jurassic Park, but I do remember it was during its theatrical release and I was with a group of people who were aware of the state of the art in computer graphics. We were there to admire the digital dinosaurs in that movie, which represented a tremendous leap ahead. Over twenty-five years later, the film is still recognized as a landmark in visual effects history. Yes, more than half of the dinosaurs in the film are physical effects, but the digital dinosaurs stood out.

Given the enthusiasm for computer generated effects, it naturally followed that this crowd was also computer literate. I believe we all knew our way around a UNIX prompt, so this was a crowd that erupted into laughter when the infamous “It’s a UNIX system! I know this!” line was spoken.

jurassic park unix

At the time, I was running jobs on a centralized machine via telnet. Graphical user interfaces on UNIX systems were rare on the machines I had worked with, never mind ones with animated three-dimensional graphics! I had assumed it was a bit of Hollywood window dressing, because admittedly text characters on a command line don’t work as well on the silver screen.

Well, that was a bad assumption. Maybe not all UNIX systems have interfaces that show the user flying over a three dimensional landscape of objects, but one did! I have just learned it was an actual user interface available for Silicon Graphics machines. The SGI logo is visible on the computer monitor, and their machines were a big part of rendering those digital dinosaurs we were drooling over. It made sense that the production crew would be aware of the visually attractive (optional) component for SGI’s UNIX variant, IRIX.

Inventiveness of Drug Smugglers

I started this blog as a way to document my adventures exploring interesting technologies that I couldn’t find the time for when engrossed in my previous job. The general idea is that I could establish a track record of tackling technical challenges: show that I could set a goal, quickly learn what I need to accomplish it, document the process, and move on to the next thing. One way for this journey to come to a satisfactory end would be if it led to a new career tackling similar challenges to solve problems.

What would that career look like? That is still fuzzy. I am confident there will be a space in this economy for innovation and creative problem solving, with skills that span intelligent software, electronics hardware, and mechanical fabrication. Los Angeles is home to enough industries and businesses (startups and established alike) that I expect to find something in due course.

One approach is to find people with similar skills and interests and see how they apply them to earn a living. Unfortunately, this is not commonly broadcast. I can understand the need to protect trade secrets and patents, but it is frustrating.

And then there are those who don’t broadcast their work because it is on the wrong side of the law. Drug smugglers employ much the same problem-solving mentality but apply it to the drug trade. I am not interested in joining that workforce, but I still admire what goes into these endeavors (as compiled by Business Insider). It would be better if there were legal and lucrative ways to apply such creativity, but the money of the drug trade is a powerful draw. Smuggling is a constant game of cat-and-mouse as old as there’s been profit in moving contraband from seller to buyer, and there’s no reason to believe it will end anytime soon.

Here’s one of the more impressive efforts: a submarine built for drug smuggling that was only caught because of an onboard fire. Given that these aren’t exactly OSHA certified workplaces, it’s sobering to think about sister ships that might have succumbed to an unforgiving ocean.

coast guard narcotics submarine firefighting

Not everything is so sophisticated. Some of the examples are pretty crude; others are ancient. I was especially amused by the catapult (I think it’s actually a trebuchet) used to fling drugs over a wall. It is a solution dating back to medieval times, a countermeasure to the equally medieval concept of a border wall.

Happy Octopus Eating Taco and Fries

As a short break from larger scale projects, I decided to get some more SMD soldering practice. When I was at Supercon I received these little soldering kits which were designed to the simple “Shitty Add-On” specification. It’s a way for people to get a simple start with PCB projects made with aesthetics in mind as well as functionality, headlined by sophisticated #badgelife creations.

As fitting their simple nature, all I have to start with are little zip lock bags of parts. The first hint toward assembly instructions was printed on the circuit board: the text @Michelle4904, which pointed to the Twitter page of Michelle Grau and a Google Doc of assembly instructions. One notable fact about these kits is that there were no spare parts to replace any that might accidentally fly off into space, which meant I had to be extra careful handling them. Fortunately, the parts were larger than those in my most recent SMD LED project, and while I did drop a few, they were large enough that they did not fly too far and I was able to recover them.

I started with the fries. It was slow going at first because I was very afraid of losing parts. But I built up a feel for handling them and things got gradually faster. After a bit of experimentation, I settled on a pattern of:

  1. Tin one pad with a small blob of solder.
  2. Place the SMD component next to the blob, and melt the blob again to connect them.
  3. Solder the other end of SMD component to other pad.

michelle4904 fries in progress

This is the rapid start portion of the learning curve – every LED felt faster and easier than the last. Soon the pack of fries was finished and illuminated. I was a little confused as to why the five LEDs were green; I had expected them to be yellow. Maybe they are seasoned fries with parsley or something?

Tangential note: when I visit Red Robin I like to upgrade to their garlic herbed fries.

michelle4904 fries illuminated

Once the fries were done, I then moved on to the taco which had a denser arrangement of components. It was an appropriate next step up in difficulty.

michelle4904 tacos in progress

Once completed, I had a taco with yellow LEDs for the shell (the same yellow I would have expected in the fries…), a red LED for tomato, a green LED for lettuce, and a white LED for sour cream. It’s a fun little project.

Tangential note: Ixtaco Taqueria is my favorite taco place in my neighborhood.

michelle4904 tacos illuminated

The last zip lock bag held a smiling octopus, and it was the easiest one to solder. It appears the original intention was for it to be a 1-to-4 expansion board for shitty add-ons, but if so, the connector genders are backwards.

michelle4904 octopus with connectors

No matter, I’ll just solder the taco and fries permanently to my happy octopus tentacles. In order to let this assembly of PCBs stand on its own, I soldered on one of the battery holders I designed for my KISS Tindies.

michelle4904 kit combo battery

And here’s a happy octopus enjoying Taco Tuesday and a side of fries.

michelle4904 kit combo illuminated

Strange Failure Of Monoprice Monitor 10734

Under examination at a recent SGVTech meet is a strange failure mode for a big monitor. Monoprice item #10734 is a 30-inch diagonal monitor with 2560×1600 resolution. The display panel is IPS type with an LED backlight – in short, it’s a pretty capable monitor. Unfortunately, it has suffered a failure of some sort that makes it pretty useless as a computer monitor: certain subpixels on the screen no longer respond to display input.

Here the monitor is supposed to display the black text “What” on a white background. But as we can see, some parts inside the letters that are supposed to be dark are still illuminated, most visibly here in some green subpixels. The misbehaving pixels are not random – they are in a regular pattern – but I’m at a loss as to why this is happening.

monoprice 10734 subpixel failure

And these misbehaving subpixels drift in their value. Over the course of several minutes they will shift slightly in response to adjacent pixels. The end result is that anything left on screen for more than a few minutes will leave a ghost behind, its colors remembered by the misbehaving subpixels. It looks like old school CRT screen burn-in, but it only takes a few minutes to create. Unplugging the monitor, waiting a few minutes, plugging it back in and turning it on does not eliminate the ghost. I need to wait a week before the ghosting fades to an undetectable level. This behavior means something, but I don’t know what.

In the hopes that there’s something obviously broken, out came the screwdrivers to take the monitor apart. We were not surprised to find there are minimal components inside the metal case: there is the big panel unit, connected to a small interface board which hosts the power plug and all the video input ports, connected to an even smaller circuit board hosting the user control buttons.

monoprice 10734 backless

The panel is made by LG, model number LM300WQ6(SL)(A1).

monoprice 10734 panel lg lm300wq6 sl a1

A web search found the LG datasheet for this panel. Because it is a 10-bit per color panel, and there are so many pixels, its data bandwidth requirements are enormous: four LVDS channels clocking over 70 MHz. There were no visibly damaged parts on its integrated circuit board. On the other end of the LVDS interface cables, there was no visible damage on the interface board either.

monoprice 10734 interface board
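Back-of-the-envelope arithmetic shows why the clock has to be that fast. Assuming a 60 Hz refresh rate and pixels split evenly across the four channels (both assumptions on my part), and ignoring blanking intervals, which only add overhead:

# Rough bandwidth estimate for a 2560x1600 panel at an assumed 60 Hz refresh,
# with pixels split evenly across four LVDS channels. Blanking intervals add
# more on top, pushing the real clock past 70 MHz.
width, height, refresh_hz, channels = 2560, 1600, 60, 4

pixels_per_second = width * height * refresh_hz       # ~245.8 Mpixel/s total
per_channel_mhz = pixels_per_second / channels / 1e6  # ~61.4 MHz of active pixels
print(per_channel_mhz)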

Before I opened this up and found the datasheet, I had thought maybe I could generate signals to feed this panel using a PIC or something. But the chips I have are grossly underpowered to talk to a 4-channel LVDS panel at the required speed. Perhaps this could be a FPGA project in the future? For now this misbehaving monitor will sit in a corner gathering dust, either until I get around to that project or until the next housecleaning pass when it departs as electronic recycling, whichever comes first.

ROS In Three Dimensions: Navigation and Planning Will Be Hard

backyard sawppy 1600

At this point my research has led me to the ROS module RTAB-Map, which can create a three dimensional representation of a robot’s environment. It seems very promising… but building such a representation is only half the battle. How would a robot make good use of this data? My research has not yet uncovered applicable solutions.

The easy thing to do is to fall back to two dimensions, which allows the use of the standard ROS navigation stack. The RTAB-Map ROS module appears to make this super easy, with the option to output a two dimensional occupancy grid just like what the navigation stack wants. It is a baseline for handling indoor environments, navigating from room to room and such.
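If I go that route, consuming the output should be nothing exotic. I believe rtabmap_ros publishes the grid as a standard nav_msgs/OccupancyGrid, so a listener would be a few lines of rospy (the topic name below is my assumption and may differ depending on launch configuration):

#!/usr/bin/env python
# Sketch of listening to RTAB-Map's 2D fallback output. The /rtabmap/grid_map
# topic name is an assumption; the message type is the standard occupancy grid.
import rospy
from nav_msgs.msg import OccupancyGrid

def on_grid(msg):
    info = msg.info
    rospy.loginfo("grid %dx%d at %.2f m/cell", info.width, info.height, info.resolution)

rospy.init_node("grid_listener")
rospy.Subscriber("/rtabmap/grid_map", OccupancyGrid, on_grid)
rospy.spin()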

But where’s the fun in that? I could already do that with a strictly two-dimensional Phoebe. Sawppy is a six wheel rover built for handling rougher terrain, and it would be far preferable to make Sawppy autonomous with ROS modules that can understand and navigate outdoor environments. But doing so successfully will require solving a lot of related problems that I don’t have answers for yet.

We can see a few challenges in the picture of Sawppy in a back yard environment:

  • Grass is a rough surface that would be very noisy to robot sensors due to individual blades of grass. With its six wheel drivetrain, Sawppy can almost treat grassy terrain as flat ground. But not quite! There are hidden dangers – like sprinkler heads – which could hamper movement and should be considered in path planning.
  • In the lower right corner we can see transition from grass to flat red brick. This would show as a transition to robot sensors as well, but deciding whether that transition is important will need to be part of path planning. It even introduces a new consideration in the form of direction: Sawppy has no problem dropping from grass to brick, but it takes effort to climb from brick back on to grass. This asymmetry in cost would need to be accounted for.
  • In the upper left corner we see a row of short bricks. An autonomous Sawppy would need to evaluate those short bricks and decide if they could be climbed, or if they are obstacles to be avoided. Experimentally I have found that they are obstacles, but how would Sawppy know that? Or more interestingly: how would Sawppy perform its own experiment autonomously?

So many interesting problems, so little time!

(Cross-posted to Hackaday.io)

Lightweight Google AMP Gaining Weight

Today I received a notification from Google AMP that the images I use in my posts are smaller than their recommended size. This came as quite a surprise to me – all this time I thought I was helping AMP’s mission to keep things lightweight for mobile browsers. Smaller images keep my blog posts from unnecessarily using up readers’ cell phone data plans, but maybe this is just a grumpy old man talking. It is clear that Google wants me to use up more bandwidth.

AMP stands for Accelerated Mobile Pages, an open source initiative that was launched by Google to make web pages that are quick to download and easy to render by cell phones. Cutting the fat also meant cutting web revenue for some publishers, because heavyweight interactive ads were forbidden. Speaking for myself, I am perfectly happy to leave those annoying “Shock the Monkey” ads behind.

As a WordPress.com blog writer I don’t usually worry about AMP, because WordPress.com automatically creates and serves an AMP-optimized version of my pages to appropriate readers. And since I don’t run ads on my page there’s little loss on my side. As a statistics junkie, I do miss out on knowing my AMP viewership numbers, because people who read AMP cached versions of my posts don’t interact with WordPress.com servers and don’t show up in my statistics. But that’s a minor detail. And in theory, having an AMP alternate is supposed to help my Google search rankings so I get more desktop visitors than I would otherwise. This might matter to people whose income depends on their web content; I have the privilege that I’m just writing this blog for fun.

Anyway, back to the warning about my content. While I leave AMP optimization up to WordPress.com, I do control the images I upload. And apparently I’ve been scaling them down too far for Google.

amp image recommend 1200 wide

I’m curious why they chose 1200 pixel width; that seems awfully large for a supposedly small lightweight experience. Most Chromebook screens are only around 1300 pixels wide, so a 1200 pixel wide image is almost full screen! Even normal desktop web browsers visiting this site retrieve only a 700 pixel wide version of my images. Because of that, I had been uploading images 1024 pixels wide and thought I had plenty of headroom. Now that I know Google’s not happy with 1024, I’ll increase to 1200 pixels wide going forward.

ROS In Three Dimensions: Data Structure and Sensor

rosorg-logo1

One of the ways a TurtleBot makes ROS easier and more approachable for beginners is by simplifying a robot’s world into two dimensions. It’s somewhat like the introductory chapters of a physics textbook, where all surfaces are frictionless and all collisions are perfectly inelastic. The world of a TurtleBot is perfectly flat and all obstacles have an infinite height. This simplification allows the robot’s environment to be represented as a 2D array called an occupancy grid.
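To make that concrete, here is a tiny sketch of an occupancy grid as nothing more than a 2D array, using the same cell values that ROS’s nav_msgs/OccupancyGrid uses (0 = free, 100 = occupied, -1 = unknown):

# A toy occupancy grid: a 2D array of cells that are free, occupied, or unknown,
# following the nav_msgs/OccupancyGrid value convention.
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 100, -1

grid = np.full((5, 5), UNKNOWN, dtype=np.int8)  # nothing explored yet
grid[1:4, 1:4] = FREE                           # a patch of known-clear floor
grid[2, 3] = OCCUPIED                           # one obstacle of "infinite height"
print(grid)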

Of course, the real world is more complicated. My TurtleBot clone Phoebe encountered several problems just trying to navigate my home. The real world does not have flat floors, and obstacles come in all shapes, sizes, and heights. Fortunately, researchers have been working on the problems encountered by robots venturing outside the simplified world; it’s a matter of reading research papers and following their citation links to find the tools.

One area of research improves upon the 2D occupancy grid by building data structures that can represent a robot’s environment in 3D. I’ve found several papers that built upon the octree concept, so that seems to be a good place to start.

But for a robot to build a representation of its environment in 3D, it needs 3D sensors. Phoebe’s Neato vacuum LIDAR works in a simplified 2D world but won’t cut it anymore in a 3D world. The most affordable entry point here is the Microsoft Kinect sensor bar from an old Xbox 360, which can function as an RGBD (red + green + blue + depth) input source for ROS.

Phoebe used GMapping for SLAM, but that takes 2D laser scan data and generates a 2D occupancy grid. Looking for a 3D SLAM algorithm that can digest RGBD camera data, I searched for “RGBD SLAM”, which led immediately to this straightforwardly named package. But of course, that’s not the only one around. I’ve also come across RTAB-Map, which seems to be better maintained and updated for recent ROS releases. And best of all, RTAB-Map has the ability to generate odometry data purely from the RGBD input stream, which might allow me to bypass the challenges of calculating Sawppy’s chassis odometry from unreliable servo angle readings.

(Cross-posted to Hackaday.io)

Sawppy on ROS: Open Problems

A great side effect of giving a presentation is that it requires me to gather my thoughts in order to present them to others. Since members of RSSC are familiar with ROS, I collected my scattered thoughts on ROS over the past few weeks and condensed the essence into a single slide that I’ve added to my presentation.

backyard sawppy 1600

From building and running my Phoebe robot, I learned the basics of ROS using a two-wheeled robot on flat terrain. Sticking to 2D simplifies a lot of robotics problems, and I thought it would help me expand to a six-wheeled rover on rough terrain. Well, I was right on the former, but the latter is still a big step to climb.

The bare basic responsibilities of a ROS TurtleBot chassis (and derivatives like Phoebe) are twofold: subscribe to the topic /cmd_vel and execute movement commands published to that topic, and, from the resulting movement, calculate and publish odometry data to the topic /odom.
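The plumbing for those two responsibilities is small; it’s the math inside that is the hard part. A minimal rospy skeleton might look like this sketch, with the actual Sawppy motion and odometry calculations left as stubs:

#!/usr/bin/env python
# Skeleton of a TurtleBot-style chassis node: listen to /cmd_vel, publish /odom.
# Translating Twist into Sawppy wheel commands and computing real odometry --
# the hard parts discussed below -- are left as stubs.
import rospy
from geometry_msgs.msg import Twist
from nav_msgs.msg import Odometry

def on_cmd_vel(msg):
    # msg.linear.x (m/s) and msg.angular.z (rad/s) would be turned into
    # individual wheel velocities and corner steering angles here.
    pass

rospy.init_node("sawppy_chassis")
rospy.Subscriber("/cmd_vel", Twist, on_cmd_vel)
odom_pub = rospy.Publisher("/odom", Odometry, queue_size=10)

rate = rospy.Rate(20)
while not rospy.is_shutdown():
    odom = Odometry()
    odom.header.stamp = rospy.Time.now()
    odom.header.frame_id = "odom"
    odom.child_frame_id = "base_link"
    # pose and twist would be filled in from (filtered) wheel sensor data
    odom_pub.publish(odom)
    rate.sleep()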

Executing commands sent to /cmd_vel is relatively straightforward when Sawppy is on level ground. It would not be terribly different from existing code. The challenge comes from uneven terrain with unpredictable traction. Some of Sawppy’s wheels might slip and the resulting motion might be very different from what was commanded. My experience with Phoebe showed that while it is possible to correct for minor drift, major sudden unexpected shifts in position or orientation (such as when Phoebe runs into an unseen obstacle) throw everything out of whack.

Given the potential for wheel slip on uneven terrain, calculating Sawppy odometry is a challenge. And that’s before we consider another problem: the inexpensive serial bus servos I use do not have fine-pitched rotation encoders, just a sensor for servo angle that only covers ~240 of 360 degrees. While I would be happy if it just didn’t return any data when out of range, it actually returns random data. I haven’t yet figured out a good way to filter the signal out of the noise, which would be required to calculate odometry.
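A naive first pass might be to simply reject readings outside the servo’s valid window and anything implying an impossibly fast jump, something like the sketch below. The threshold and the idea itself are unproven – this may well not be enough:

# Naive filter for the servo angle readings: discard values outside the
# sensor's ~240 degree window and implausibly large jumps since the last good
# reading. The jump threshold is a made-up number; whether this approach is
# good enough for odometry is exactly the open question above.
VALID_MIN_DEG, VALID_MAX_DEG = 0.0, 240.0
MAX_JUMP_DEG = 30.0  # assumed per-sample limit; depends on real wheel speed

last_good = None

def filter_angle(raw_deg):
    global last_good
    if not (VALID_MIN_DEG <= raw_deg <= VALID_MAX_DEG):
        return last_good   # out of the sensor's range: junk data
    if last_good is not None and abs(raw_deg - last_good) > MAX_JUMP_DEG:
        return last_good   # implausible jump: probably noise
    last_good = raw_deg
    return raw_deg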

And these are just the challenges within the chassis I built – there’s more out there!

(Cross-posted to Hackaday.io)

Sawppy Presented at January 2019 RSSC Meeting

Today I introduced my rover project Sawppy to members of the Robotics Society of Southern California. Before the presentations started, Sawppy sat on a table so interested people could come by for a closer look. My visual aid PowerPoint slide deck is available here.

sawppy at rssc

My presentation was an extended version of what I gave at Downtown LA Mini Maker Faire. Some of the additions came at the beginning: this time I was not following a JPL Open Source Rover presentation, so I had to give people the background story on ROV-E, JPL OSR, and SGVHAK rover to properly explain Sawppy’s inspiration. Some of the additions came at the end: there were technical details that I was able to discuss with a technical audience. (I’ll expand on them in future blog posts.)

I was very happy at the positive reception I received for Sawppy. The first talk of the morning covered autonomous robots, so I was afraid the audience would look down at Sawppy’s lack of autonomy. Thankfully that did not turn out to be a big deal. Many were impressed by the mechanical design and construction. Quite a few were also appreciative when I stressed my emphasis on keeping Sawppy affordable and accessible. In the Q&A session we covered a few issues that had easy solutions… if one had a metalworking machine shop. I insisted that Sawppy could be built without a machine shop, and that’s why I made some of the design decisions I did.

A few people were not aware of Onshape, and my presentation stirred their interest to look into it. There was also a surprising level of interest in my mention of the Monoprice Maker Select v2 as an affordable entry level 3D printer; enough hands were raised that I signed up to give a future talk about my experience.

(Cross-posted to Hackaday.io)