Window Shopping JeVois Machine Vision Camera

In the discussion period that followed my Sawppy presentation at RSSC, machine vision came up as a topic. When talking through problems and potential solutions, the JeVois camera was mentioned as one potential tool for machine vision tasks. I wrote down the name and resolved to look it up later. I have done so, and I like what I see.

The first thing that made me smile was the fact that it is a Kickstarter success story. I haven’t committed any of my own money to any Kickstarter project, but I’ve certainly read more about failed projects than successful ones. It’s nice when the occasional success story comes across my radar.

The camera module is of the type commonly used in cell phones, and behind the camera is a small machine vision computer, also built mostly from portable electronics components. The idea is a completely self-contained vision processing system: it requires only power input and delivers processed data output. Various machine vision tasks can be handled entirely inside the little module, as long as the user is realistic about the limited processing power available. It is less powerful, but also less expensive and smaller, than Google’s AIY Vision module.

The small size is impressive, and it led to my next note of happiness: it looks pretty well documented. When I looked at its size, I wondered how best to mount the camera on a project. It took less than five minutes to decipher the documentation hierarchy and find details on physical dimensions and how to mount the camera case. Similarly, my curiosity about power requirements was quickly answered with confirmation that its power draw does indeed exceed what a standard USB 2.0 port is required to provide (500 mA at 5 V, or 2.5 W).

Ease of programming was the next investigation. Some of the claims around this camera made it sound like its open source software stack can run on a developer’s PC and be debugged there before publishing to the camera. However, the few tutorials I skimmed through (one example here) all required an actual JeVois camera to run vision code. I interpret this to mean the JeVois software stack is indeed specific to the camera, and “develop on your PC first” only means developing vision algorithms on a PC in the general sense, before porting them to the JeVois software stack for deployment on the camera itself. If I find out I’m wrong, I’ll come back and update this paragraph.
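From what I can tell of the documentation, a JeVois vision module can be written in Python with OpenCV. Here is a minimal sketch of what one looks like, based on my skim of the tutorials; the class and method names follow the JeVois Python examples, but treat the details as my reading of the docs rather than verified code:

import libjevois as jevois
import cv2

class HelloJeVois:
    # JeVois calls process() once per camera frame.
    def process(self, inframe, outframe):
        # Grab the camera frame as an OpenCV BGR image.
        img = inframe.getCvBGR()
        # Placeholder "vision processing": draw a label on the frame.
        cv2.putText(img, "Hello JeVois", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
        # Send the processed frame out over USB.
        outframe.sendCvBGR(img)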

2017-04-27-15-14-36

When I looked on Hackaday, I saw that one of the writers thought the JeVois camera’s demo mode was a very effective piece of software. It should be quite effective at its job: getting users interested in digging deeper. Project-wise, I see a squirrel detector and a front door camera already online.

The JeVois camera has certainly earned a place on my “might be interesting to look into” list for more detailed investigation later.

Sawppy Odometry Candidate: Flow Breakout Board

When I presented Sawppy the Rover at the Robotics Society of Southern California, one of the things I brought up as an open problem is how to determine Sawppy’s movement through its environment. Wheel odometry is sufficient for a robot traveling on flat ground like Phoebe, but when Sawppy travels on rough terrain things can get messy in more ways than one.

In the question-and-answer session some people brought up the idea of calculating odometry by visual means, much in the way a modern optical computer mouse determines its movement on a desk. This is something I could whip up with a downward-pointing webcam and open source software, but there are also pieces of hardware designed specifically to perform this task. One example is the PMW3901 chip, which I could experiment with using breakout boards like this item on Tindie.

However, that visual calculation is only part of the challenge, because translating what the camera sees into a physical dimension requires one more piece of data: the distance from the camera to the surface it is looking at. Depending on the application, this distance might be a known quantity, but for robotic applications where the distance may vary, a distance sensor is required.

As a follow-up to my presentation, RSSC’s online discussion forum brought up the Flow Breakout Board: a small lightweight module that puts the aforementioned PMW3901 chip alongside a VL53L0x distance sensor. This is an interesting candidate for helping Sawppy gain awareness of how it is moving through its environment (or failing to do so, as the case may be).

flow_breakout_585px-1

The breakout board only handles the electrical connections – an external computer or microcontroller will be necessary to make the whole system sing. That external module will need to communicate with the PMW3901 via SPI and, separately, the VL53L0x via I2C. Then it will need to perform the math to calculate actual X-Y distance traveled. This in itself isn’t a problem.
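The math itself is simple scaling: the PMW3901 reports motion as unitless pixel counts, and the physical distance each count represents grows with the height reported by the VL53L0x. Here is a rough sketch of the idea in Python; the sensor-reading helpers and the calibration constant K are hypothetical stand-ins, since the real values depend on the chip’s field of view and would need to be calibrated:

def read_flow():
    # Placeholder: real code would read PMW3901 motion registers over SPI.
    return (12, -3)

def read_range_mm():
    # Placeholder: real code would read the VL53L0x over I2C.
    return 160

K = 0.00123  # made-up calibration constant (radians per flow count)

def flow_to_displacement_mm():
    dx_counts, dy_counts = read_flow()
    height_mm = read_range_mm()
    # Each flow count corresponds to a small angle; multiply by height
    # above the ground to convert that angle into physical distance.
    dx_mm = dx_counts * K * height_mm
    dy_mm = dy_counts * K * height_mm
    return dx_mm, dy_mm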

The problem comes from the fact that the PMW3901 was designed to be used on small multirotor aircraft to help them hold position. A few design decisions that make sense for that intended purpose turn out to be problems for Sawppy.

  1. This chip is designed to help hold position, so it was not concerned with knowing the height above the surface or the physical dimension of any translation: the sensor only needed to detect movement so the aircraft could be brought back into position.
  2. Multirotor aircraft all have built-in gyroscopes to stabilize themselves, so they already detect rotation about their Z axis. Sawppy has no such sensor and would not be able to calculate its position in global space without knowing how much it has turned in place.
  3. Multirotor aircraft fly in the air, so the sensor’s designed working range of 80mm to infinity is perfectly fine. However, Sawppy has only 160mm between the bottom of the equipment bay and the nominal floor. Traversing obstacles more than 80mm tall, or rough terrain that brings the surface within 80mm of the sensor, would leave this sensor disoriented.

This is a very cool sensor module that has a lot of possibilities, and despite its potential problems it has been added to the list of things to try for Sawppy in the future.

Intel RealSense T265 Tracking Camera

In the middle of these experiments with an Xbox 360 Kinect as robot depth sensor, Intel announced a new product along similar lines and a tempting avenue for robotic exploration: the Intel RealSense T265 Tracking Camera. Here’s a picture from Intel’s website announcing the product:

intel_realsense_tracking_camera_t265_hero

The T265 is not a direct replacement for the Kinect, at least not as a depth sensing camera. For that, we need to look at Intel’s D415 and D435. They would be fun to play with, too, but I already had the Kinect so I’m learning with what I have before I spend money.

So if the T265 is not a Kinect replacement, how is it interesting? It can act as a complement to a depth sensing camera. The point of the device is not to capture the environment – it is to track motion and position within that environment. Yes, there is the option of an image output stream, but the primary data output of this device is position and orientation.

This type of camera-based “inside-out” tracking is used by Windows Mixed Reality headsets to determine the user’s head position and orientation. These sensors require low latency and high accuracy to avoid VR motion sickness, and they have obvious applications in robotics. Now Intel’s T265 offers that capability in a standalone device.

According to Intel, the implementation is based on a pair of video cameras and an inertial measurement unit (IMU). Data feeds into internal electronics running a V-SLAM (visual simultaneous localization and mapping) algorithm aided by a Movidius neural network chip. This process generates the position and orientation output. It seems pretty impressive to me that all of this is done in such a small form factor, at high speed (or at least low latency), on 1.5 watts of power.
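If I do spend the money, I’d expect to talk to it via Intel’s librealsense SDK, which has Python bindings. Based on Intel’s published pose example, reading the T265 position and orientation stream should look roughly like the sketch below. I can’t verify it against real hardware yet, so treat this as my reading of the documentation:

import pyrealsense2 as rs

# Configure the pipeline to deliver the T265 pose stream.
pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.pose)
pipe.start(cfg)

try:
    for _ in range(100):
        frames = pipe.wait_for_frames()
        pose = frames.get_pose_frame()
        if pose:
            data = pose.get_pose_data()
            # Translation in meters, rotation as a quaternion.
            print("position:", data.translation, "orientation:", data.rotation)
finally:
    pipe.stop()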

At $200, it is a tempting toy for experimentation. Before I spend that money, though, I’ll want to read more about how to interface with this device. The USB 2 connection is not surprising, but there’s a phrase that I don’t yet understand: “non volatile memory to boot the device” makes it sound like the host is responsible for some portion of the device’s boot process, which isn’t like any other sensor I’ve worked with before.

Xbox 360 Kinect Depth Sensor Data via OpenKinect (freenect)

I want to try using my Xbox 360 Kinect as a robot sensor. After I made the necessary electrical modifications, I decided to try talking to my sensor bar via the OpenKinect (a.k.a. freenect, a.k.a. libfreenect) driver software. Getting it up and running on my Ubuntu 16.04 installation was surprisingly easy: someone has put in the work to make it part of the standard Ubuntu software repository. Whoever it was, thank you!

Once installed, though, I wasn’t sure what to do next. I found documentation telling me to launch a test/demonstration viewer application called glview. That turned out to be old information: the test app is actually called freenect-glview. Also, it is no longer added to the default user search path, so I had to launch it with the full path /usr/bin/freenect-glview.

Once I got past that minor hurdle, I had on my screen a window showing two video feeds from my Kinect sensor: on the left, depth information represented by colors; on the right, normal color video. Here’s my Kinect pointed at its intended home: on board my rover Sawppy.

freenect-glview with sawppy

This gave me a good close look at Kinect depth data. What’s visible and represented by color is pretty good, but the black areas worry me. They represent places where the Kinect was not able to extract depth information. I didn’t expect it to pick out fine surface details of Sawppy components, but I did expect it to see the whole chassis in some form. That was not the case: there were areas of black all over Sawppy’s chassis.
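Beyond the viewer app, the same driver exposes depth frames to code, and the libfreenect Python wrapper makes it easy to pull a frame and quantify those black areas. Here is a sketch of how I’d measure them, assuming the wrapper is installed and that the raw 11-bit value 2047 marks pixels with no depth reading (my understanding of libfreenect’s convention, not something I have verified):

import freenect
import numpy as np

# Grab one raw 11-bit depth frame (a 480x640 numpy array).
depth, timestamp = freenect.sync_get_depth()

# Assumption: 2047 is the "no reading" marker in raw 11-bit mode.
invalid = (depth == 2047)
fraction = invalid.sum() / invalid.size
print("Pixels without depth data: {:.1%}".format(fraction))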

Some observations of what an Xbox 360 Kinect could not see:

  • The top of Sawppy’s camera mast: neither the webcam nor the 3D-printed mount for that camera. This is the most concerning item because I have no hypothesis for why.
  • The bottom of Sawppy’s payload bay. This is unfortunate but understandable: it is a piece of laser cut acrylic which would reflect Kinect’s projected pattern away from the receiving IR camera.
  • The battery pack in the rear has a smooth clear plastic package and would also not reflect much back to the camera.
  • Wiring bundles are enclosed in a braided sleeve. It would scatter the majority of the IR pattern, and whatever makes it back to the receiving camera would probably be jumbled.

None of these are deal-breakers on their own; they’re part of the challenges of building a robot that functions outside of a controlled environment. In addition to those, I’m also concerned about the frame-to-frame inconsistency of the depth data. Problematic areas are sometimes visible in one frame and gone in the next. The noisiness of this information might confuse a robot trying to make sense of its environment with this data. It’s not visible in the screenshot above, but here’s an animated GIF showing a short snippet for illustration:

kinect-looking-at-sawppy-inconsistencies
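To put a number on that flicker, the same Python wrapper can compare which pixels have valid depth between consecutive frames; this again assumes the raw value 2047 marks missing data:

import freenect
import numpy as np

prev, _ = freenect.sync_get_depth()
for _ in range(30):
    depth, _ = freenect.sync_get_depth()
    # Pixels whose "has data" status flipped since the last frame.
    flicker = (depth == 2047) != (prev == 2047)
    print("{:.1%} of pixels changed validity".format(flicker.sum() / flicker.size))
    prev = depth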

Xbox 360 Kinect Driver: OpenNI or OpenKinect (freenect)?

The Kinect sensor bar from my Xbox 360 has long been retired from gaming duty. For its second career as robot sensor, I have cut off its proprietary plug and rewired it for computer use. Once I verified the sensor bar is electrically compatible with a computer running Ubuntu, the first order of business was to turn fragile test connections into properly soldered wires protected by heat shrink tube. Here’s my sensor bar with its new standard USB 2.0 connector and a JST-RCY connector for 12 volt power.

xbox 360 kinect with modified plugs 12v usb

With the electrical side settled, attention turns to software. The sensor bar can tell the computer it is a USB device, but we’ll need additional driver software to access all the data it can provide. I chose to start with the Xbox 360 Kinect because of its wider software support, which gives me multiple choices of software stack to work with.

OpenNI is one option. This open source SDK is still around thanks to Occipital, one of the companies that partnered with PrimeSense. PrimeSense originally developed the technology behind the Xbox 360 Kinect sensor, but they have since been acquired by Apple and their technology incorporated into the iPhone X. Occipital itself is still in the depth sensor business with their Structure sensor bar, available standalone or incorporated into products like Misty.

OpenKinect is another option. It doesn’t have a clear corporate sponsor like OpenNI, and seems to have its roots in the winner of the Adafruit contest to create an open source Kinect driver. Confusingly, it is also sometimes called freenect or variants thereof. (Its software library is libfreenect, etc.)

Both of these appear to still be receiving maintenance updates, and both have been used in a lot of cool Kinect projects outside of Xbox 360 games, ensuring there will be a body of source code available as reference for using either. Neither is focused on ROS, but people have written ROS drivers for both OpenNI and OpenKinect (freenect). (And there’s even an effort to rationalize across both.)

One advantage of OpenNI is that it provides an abstraction layer for many different depth cameras built on PrimeSense technology, making code more portable across different hardware. This does not, however, include the second generation Xbox One Kinect, as that was built with a different (not PrimeSense) technology.

In contrast, OpenKinect is specific to the Xbox 360 Kinect sensor bar. It provides access to parts beyond the PrimeSense sensor: the microphone array, tilt motor, and accelerometer. While this means it doesn’t support the second generation Xbox One Kinect either, there’s a standalone sibling project, libfreenect2, for meeting that need.

I don’t foresee using any other PrimeSense-based sensors, so OpenNI’s abstraction doesn’t draw me; the access to other hardware offered by OpenKinect does. Plus I do hope to upgrade to an Xbox One Kinect in the future, so I decided to start my Xbox 360 Kinect experimentation using OpenKinect.

Modify Xbox 360 Kinect for PC Use

I want to get some first-hand experience working with depth cameras in a robotics context, and a little research suggested the Xbox 360 Kinect sensor bar is the cheapest hardware that also has decent open source software support. So it’s time to dust off the Kinect sensor bar from my Halo 4 Edition Xbox 360.

I was a huge Halo fan and I purchased this Halo 4 console bundle as an upgrade from my first generation Xbox 360. My Halo enthusiasm has since faded and so has the Xbox 360. After I upgraded to Xbox One, I lent out this console (plus all accessories and games) to a friend with young children. Eventually the children lost interest in an antiquated console that didn’t play any of the cool new games and it resumed gathering dust. When I asked if I could reclaim my Kinect sensor bar, I was told to reclaim the whole works. The first accessory to undergo disassembly at SGVTech was the Xbox 360 Racing Steering Wheel. Now it is time for the second accessory: my Kinect sensor bar.

The sensor bar connected to my console via a proprietary connector. Most Xbox 360 accessories are wireless battery-powered devices, but the Kinect sends far more data than normal wireless controllers and requires much more power than rechargeable AA batteries can handle. Thus the proprietary connector combines a 12 volt power supply with standard USB 2.0 data at 5 volts. To convert this sensor bar for computer use instead of an Xbox 360, the proprietary connector needs to be replaced by two separate connectors: a standard USB 2.0 plug plus a 12V power supply plug.

Having a functioning Xbox 360 made the task easier. First, by going into the Kinect diagnostics menu, I could verify the Kinect was in working condition before I started cutting things up. Second, after I severed the proprietary plug and splayed out the wires in the cable, a multimeter easily identified the wires for 12 volts (tan), 5 volts (red), and ground (black) by detecting the voltages placed on those wires by a running Xbox 360.

That left only the two USB data wires, colored green and white. Thankfully, these colors appear to be fairly standardized across USB cables. When I cut apart a USB 2.0 cable to use as my new plug, I found the same red, black, green, and white wire colors inside. To test the easy thing first, I matched wire colors, kept them from shorting each other with small pieces of tape, and put 12V power on the tan wire using a bench power supply.

xbox 360 kinect on workbench

Since I was not confident in this wiring, I used my cheap laptop to test my suspect USB wiring instead of my good laptop. Fortunately, the color matching worked and the sensor bar enumerated properly. Ubuntu’s dmesg utility lists the Kinect sensor bar as a USB hub with three attached devices:

  1. Xbox NUI Motor: a small motor that can tilt the sensor bar up or down.
  2. Xbox Kinect Audio: on board microphone array.
  3. Xbox NUI Camera: this is our depth-sensing star!

xbox nui camera detected

[ 84.825198] usb 2-3.3: new high-speed USB device number 10 using xhci_hcd
[ 84.931728] usb 2-3.3: New USB device found, idVendor=045e, idProduct=02ae
[ 84.931733] usb 2-3.3: New USB device strings: Mfr=2, Product=1, SerialNumber=3
[ 84.931735] usb 2-3.3: Product: Xbox NUI Camera
[ 84.931737] usb 2-3.3: Manufacturer: Microsoft


ROS In Three Dimensions: Starting With Xbox 360 Kinect

The long-term goal driving my robotics investigations is to build something that has an awareness of its environment and can intelligently plan actions within it. (This is a goal shared by many other members of RSSC as well.) Building Phoebe gave me an introduction to ROS running in two dimensions, and now I have ambitions to graduate to three. A robot working in three dimensions needs a sensor that works in three dimensions, so that’s where I’m going to start.

Phoebe started with a 2D laser scanner purchased off eBay that I learned to get up and running in ROS. Similarly, the cheapest 3D sensor that can be put on a ROS robot is a repurposed Kinect sensor bar from an Xbox game console. Even better, since I’ve been an Xbox gamer (more specifically a Halo and Forza gamer) I don’t need to visit eBay. I have my own Kinect to draft into this project. In fact, I have more than one: both the first generation Kinect sensor accessory for the Xbox 360, and the second generation that was released alongside the Xbox One.

xbox 360 kinect and xbox one kinect

The newer Xbox One Kinect is a superior sensor with a wider field of view, higher resolution, and better precision. But that doesn’t necessarily make it the best choice to start off with, because hardware capability is only part of the story.

When the Xbox 360 Kinect was launched, it was a completely novel device offering depth sensing at a fraction of the price of existing depth sensors. There was a lot of enthusiasm, both in the context of video gaming and of hacking them for use outside of Xbox 360 games. Unfortunately, the breathless hype wrote checks that the reality of a low-cost depth camera couldn’t quite cash. By the time the Xbox One launched with an updated Kinect, interest had waned and far fewer open source projects aimed to work with the second generation Kinect.

The superior capabilities of the second generation sensor bar also brought downsides: it required more data bandwidth and hence a move to USB 3.0. At the time, the USB 3.0 ecosystem was still maturing, and the new Kinect had problems working with certain USB 3.0 implementations. Even when the data could get into a computer, the sheer amount of it placed more demands on processing code. Coupled with reduced public interest, this meant software support for the second generation Kinect is less robust. A web search found a lot of people who encountered problems trying to get their second generation bar to work.

In the interest of learning the ropes and getting an introduction to the world of 3D sensing, I decided a larger and more stable software base is more interesting than raw hardware capabilities. I’ll use the first generation Xbox 360 Kinect sensor bar to climb the learning curve of building a three-dimensional solution in ROS. Once that is up and running, I can try to tame the more finicky second generation Kinect.

Happy Octopus Eating Taco and Fries

As a short break from larger scale projects, I decided to get some more SMD soldering practice. At Supercon I received these little soldering kits, designed to the simple “Shitty Add-On” specification. It’s a way for people to get a simple start with PCB projects made with aesthetics in mind as well as functionality, headlined by sophisticated #badgelife creations.

As befits their simple nature, all I had to start with were little zip lock bags of parts. The first hint towards assembly instructions was printed on the circuit board: the text @Michelle4904, which pointed to the Twitter page of Michelle Grau and a Google Doc of assembly instructions. One notable fact about these kits is that there were no extra parts to replace any that might accidentally fly off into space, which meant I had to be extra careful handling them. Fortunately, the parts were larger than those in my most recent SMD LED project, and while I did drop a few, they were large enough that they did not fly far and I was able to recover them.

I started with the fries. It was slow going at first because I was very afraid of losing parts, but I gradually built up a feel for handling them and things got faster. After a bit of experimentation, I settled on a pattern of:

  1. Tin one pad with a small blob of solder.
  2. Place the SMD component next to the blob, and melt the blob again to connect them.
  3. Solder the other end of SMD component to other pad.

michelle4904 fries in progress

This is the rapid-start portion of the learning curve – every LED felt faster and easier than the last. Soon the pack of fries was finished and illuminated. I was a little confused as to why the five LEDs were green; I had expected them to be yellow. Maybe they are seasoned fries with parsley or something?

Tangential note: when I visit Red Robin I like to upgrade to their garlic herbed fries.

michelle4904 fries illuminated

Once the fries were done, I moved on to the taco, which had a denser arrangement of components. It was an appropriate next step up in difficulty.

michelle4904 tacos in progress

Once completed, the taco has yellow LEDs for the shell (the same yellow I would have expected in the fries…), a red LED for tomato, a green LED for lettuce, and a white LED for sour cream. It’s a fun little project.

Tangential note: Ixtaco Taqueria is my favorite taco place in my neighborhood.

michelle4904 tacos illuminated

The last zip lock bag held a smiling octopus, and it was the easiest one to solder. It appears the original intention was to be a 1-to-4 expansion board for shitty add-ons, but if so, the connector genders are backwards.

michelle4904 octopus with connectors

No matter, I’ll just solder the taco and fries permanently to my happy octopus’s tentacles. To let this assembly of PCBs stand on its own, I soldered on one of the battery holders I designed for my KISS Tindies.

michelle4904 kit combo battery

And here’s a happy octopus enjoying Taco Tuesday and a side of fries.

michelle4904 kit combo illuminated

Strange Failure Of Monoprice Monitor 10734

Under examination at a recent SGVTech meet was a strange failure mode for a big monitor. Monoprice item #10734 is a 30-inch diagonal monitor with 2560×1600 resolution. The display panel is IPS type and has an LED backlight – in short, it’s a pretty capable monitor. Unfortunately, it has suffered a failure of some sort that makes it pretty useless as a computer monitor: certain subpixels on the screen no longer respond to display input.

Here the monitor is supposed to display the black text “What” on a white background. But as we can see, some parts inside the letters that are supposed to be dark are still illuminated, most visibly here as green subpixels. The misbehaving pixels are not random – they form a regular pattern – but I’m at a loss as to why this is happening.

monoprice 10734 subpixel failure

And these misbehaving subpixels drift in their values. Over the course of several minutes they shift slightly in response to adjacent pixels. The end result is that anything left on screen for more than a few minutes leaves a ghost behind, its colors remembered by the misbehaving subpixels. It looks like old-school CRT screen burn-in, but takes only a few minutes to create. Unplugging the monitor, waiting a few minutes, plugging it back in, and turning it on does not eliminate the ghost; I need to wait a week before the ghosting fades to an undetectable level. This behavior means something, but I don’t know what.

In the hope that there was something obviously broken, out came the screwdrivers to take the monitor apart. We were not surprised to find minimal components inside the metal case: the big panel unit, connected to a small interface board which hosts the power plug and all the video input ports, connected in turn to an even smaller circuit board hosting the user control buttons.

monoprice 10734 backless

The panel is made by LG, model number LM300WQ6(SL)(A1).

monoprice 10734 panel lg lm300wq6 sl a1

A web search found the LG datasheet for this panel. Because it is a 10-bit-per-color panel with so many pixels, its data bandwidth requirements are enormous: four LVDS channels clocked at over 70 MHz. There were no visibly damaged parts on its integrated circuit board, and on the other end of the LVDS interface cables, no visible damage on the interface board either.

monoprice 10734 interface board
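A rough back-of-envelope calculation shows where that clock rate comes from. Assuming a 60 Hz refresh rate and ignoring blanking intervals (both are my assumptions, not numbers from the datasheet), the pixel rate divided across four channels is already in that neighborhood:

# Rough estimate of per-channel LVDS pixel clock for this panel.
width, height = 2560, 1600
refresh_hz = 60          # assumed refresh rate
channels = 4             # LVDS channels per the datasheet

pixels_per_second = width * height * refresh_hz   # ~245.8 million
per_channel_mhz = pixels_per_second / channels / 1e6
print("{:.0f} MHz per channel before blanking overhead".format(per_channel_mhz))
# Prints ~61 MHz; add blanking intervals and 70+ MHz is plausible.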

Before I opened this up and found the datasheet, I had thought maybe I could generate signals to feed this panel using a PIC or something. But the chips I have are grossly underpowered for talking to a 4-channel LVDS panel at the required speed. Perhaps this could be an FPGA project in the future? For now this misbehaving monitor will sit in a corner gathering dust, waiting either for the time I get around to that project, or for the next housecleaning pass when it departs as electronic recycling, whichever comes first.

Freeform Fun with Salvaged SMD LEDs

There’s a freeform circuit contest going on at Hackaday right now. I’m not eligible to enter, but I can still have fun on my own. I haven’t had any experience creating freeform circuit sculptures, and now is as good a time as any to play around for a bit.

Where should I start? The sensible thing would be to start simple with a few large through-hole light-emitting diodes (LEDs), but I didn’t do that. I decided to start higher up the difficulty scale because of a few other events. The first is that I learned to salvage surface mount devices (SMD) from circuit boards with a cheap hot air gun originally designed for paint stripping. I had pulled a few SMD LEDs from a retired landline telephone, and they were sitting in a jar.

The second is the arrival of a fine soldering iron tip. I had ordered it in anticipation of trying to repair a damaged ESP32 module. I thought I should practice using the new tip on something expendable before tackling an actual project, and a freeform exercise with salvaged SMD LEDs seemed like a great opportunity to do so.

Since I’m a beginner at both freeform soldering and SMD soldering, the results were predictably terrible. I will win no prizes for fine workmanship here! But everyone has to start somewhere, and there will be many opportunities to practice in the future.

7 survivors light

It’s just a simple circuit with seven LEDs in parallel, hanging off the lead of their shared current-limiting resistor. Not visible here is another aspect of learning to work with surface mount components: they are really, really small. I had actually salvaged nine LEDs; two of the nine flew off somewhere in my workshop and didn’t make it into the final product.

9 salvaged LEDs

For comparison, here is the “I ❤ SMD” soldering kit that was my gentle introduction to surface mount soldering. The LEDs in that soldering kit were significantly larger and easier to manipulate than those I salvaged from an obsolete landline telephone.

SMD intro size comparison

Moral of the story: for future projects practicing SMD assembly, be sure to have spare components on hand to replace those that fly off into space.

Sony KP-53S35 Signal Board “A” Components

Here are the prizes earned from an afternoon spent desoldering parts from a Sony KP-53S35’s signal board “A”.

Signal board A top before

The most visually striking components were the shiny metal boxes in the corner. This is where signal from the TV antenna enters the system. The RF F-type connectors on the back panel are connected to these modules via their RCA-type connectors. Since this TV tuned in analog TV broadcast signals that have long since been retired, I doubt these parts are good for any functional purpose anymore. But they are still shiny and likely to end up in a nonfunctional sculpture project.

Near these modules on the signal board was circuit board “P”. It was the only module installed as a plug-in card, which caught our eye. Why would Sony design an easily removable module? There were two candidate explanations: (1) easy replacement because it was expected to fail frequently, or (2) easy replacement because it is something meant to be swapped out. Since the module worked flawlessly for 21 years, it’s probably the latter. A web search for the two main ICs on board found that the Philips TDA8315T is an NTSC decoder, which confirmed hypothesis #2: this “P” board is designed to be easily swapped so the TV can support other broadcast standards.

The RCA jacks are simple and quite likely to find use in another project.

Miscellaneous ICs and other modules were removed mostly as practice. I may look up their identifiers to see if anything is useful, but some of the parts (like the chips with the Sony logo on top) are going to be proprietary and not expected to be worth the effort of figuring out what they do.

The largest surface mount chip – which I used as hot air SMD removal practice – was labeled BH3856FS and is an audio processing chip handling volume and tone control. Looking at the flip side of the circuit board, we can see it has a large supporting cast of components clustered near it. It might be fun to see if I can power it up for a simple “Hello World” circuit, but returning it to full operation depends on the next item:

What’s far more interesting is nearby: the TDA7262 is a stereo audio amplifier with 20W per channel. This might be powerful enough to drive deflection coils to create Lissajous curves. The possibility was enough to make me spend the time and effort to remove its heat sinks gently and also recover all nearby components that might support it. I think it would be a lot of fun to get this chip back up and running in a CRT Lissajous curve project, either with or without its former partner, the BH3856FS audio chip above.

Lissajous Curve Is An Ideal CRT Learning Project

Lissajous curve with shorter exposure

It was satisfying to see our CRT test rig showing Lissajous curves. [Emily] and I both contributed components for this cobbled-together contraption, drawing from our respective bins of parts. While the curves have their own beauty, there are also good technical reasons why they make such a great learning project for working with salvaged cathode ray tubes, mainly in the things we don’t have to do:

Focus: We weren’t able to focus our beam in our first work session, so we couldn’t count on a sharp focus, and we appreciate that Lissajous curves still look good when blurry. Thankfully, we did manage better focus for better pictures, but it was not required.

Modulation: To create a raster image, we must have control over beam brightness as we scan across the screen. Even for arcade-style vector graphics, we need to be able to turn the beam off when moving from one shape to another. In contrast, Lissajous curves are happy with an always-on dot of constant brightness.

Deflection: To create a raster image, we’d need a high level of control over the tube’s deflection coils: a constant horizontal sweep across the screen as well as vertical scanning. HSYNC, VSYNC, all that good stuff. In contrast, driving deflection coils for Lissajous curves requires far gentler and smoother changes in coil current.

Geometry: Unlike modern flat panel displays, CRTs can have geometry distortions: pincushion, trapezoid, tilt. They’re all annoying to adjust and correct in order to deliver a good raster image. Fortunately, a Lissajous curve suffering from geometry distortions still looks pretty good, allowing us to ignore the issue for the time being.
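For reference, a Lissajous curve is just two sine waves driven into the two deflection axes, with the pattern determined by their frequency ratio and phase offset. A quick sketch in Python shows the idea; the frequencies and phase here are arbitrary example values:

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2 * np.pi, 2000)
a, b = 3, 2              # frequency ratio of the two deflection signals
delta = np.pi / 2        # phase offset between them

x = np.sin(a * t + delta)   # horizontal deflection
y = np.sin(b * t)           # vertical deflection

plt.plot(x, y)
plt.title("3:2 Lissajous curve")
plt.show()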

There is a long way to go before we know enough to drive these tubes to their maximum potential. For one thing, we are running at a tiny fraction of maximum brightness. The tube’s previous life in a rear projection television was a hard one, visible in the above picture as a trapezoid burned into its phosphor layer. Driven hard enough to require liquid cooling, it would be so bright as to be painful to look at, and that’s with the beam scanning across the entire screen. A Lissajous curve covers only a small fraction of that screen area, so concentrating a full-power beam in such a small area would raise concerns of phosphor damage. As pretty as Lissajous curves are, I don’t want them permanently burned into the phosphor. But we don’t have to worry about that until we get beam power figured out.

CRT Test Rig Produced Lissajous Curves

Last night’s CRT exploration adventures with [Emily] produced beautiful Lissajous curves on screen that looked great to the eye but were a challenge to capture. (Cameras in general have a hard time getting proper focus and exposure on CRT phosphors.) Here’s a picture taken with an exposure time of 1/200th of a second, showing phosphor brightness decay along a simple curve.

Lissajous curve with shorter exposure

Due to this brightness decay, more complex curves required a longer exposure time to capture. This picture was taken with a 1/50th second exposure but still only captured about half of the curve.

Lissajous curve with longer exposure

Our test setup was a jury-rigged nest of wires: not at all portable, and definitely unsafe for public consumption. It required a space where everyone present is a mature adult who understands that high voltage parts are no joke and stays clear. (And more pragmatically, if an accident should occur, there will be other people present to call for immediate medical attention.)

CRT Test Rig angled view

Our beam power section consisted of two subsystems. The first is a battery that supplies low power (8 volts and less than 1 watt) to heat the filament; using a battery keeps it electrically isolated from everything else. The second subsystem supplies high voltage to drive the CRT, and we keep a respectful distance from these parts when powered on.

CRT Test Rig beam power system

Connected to the tail end of the tube is the connector we freed from its original circuit board, wired with a simplified version of what was on that board. Several pins were connected to ground, some directly and others via resistors. The two wires disappearing off the top of the picture are for the heated filament. Two wires for experimentation are brought out and left unconnected in this picture: the red connects to the “screen grid” (which we don’t understand yet) and the black connects to an IC which we also don’t understand yet.

This is a rough exploratory circuit with known flaws, and not just the two wires we haven’t yet connected to anything: when we connected its ground to the transformer’s ground, the tube flared bright for a fraction of a second before going dark. We only got a dot when connecting transformer ground to the filament heater negative, which was unexpected and really just tells us we still have a lot to learn. On the upside, something in this circuit allowed our “focus” wire to do its job this time, unlike our previous session.

CRT Test Rig tube wiring

But that’s to be figured out later. Tonight’s entertainment is our beam control section, which sits safely away from the high voltage bits, so we can play with it while the tube is running.

CRT Test Rig beam control system

Controlling vertical deflection is an old Tektronix function generator. This is a proper piece of laboratory equipment producing precise and consistent signals. However, its maximum voltage output of 20V is not enough to give us full vertical deflection. And since we only had one, we needed something else to control horizontal deflection.

That “something else” was a hack. The big black box is a “300W” stereo amplifier, procured from the local thrift store for $15. Designed to drive speaker coils, tonight it drove a CRT yoke’s horizontal deflection coil instead. It was more than up to the task of providing full deflection; in fact, we had to turn the volume down to almost minimum for tonight’s experiments. A cell phone running a simple tone generator app provided the input signal. Not being a precision laboratory instrument, the signal generated was occasionally jittery, but it was enough for us to have fun producing Lissajous curves!


Old TV Picture Tubes Light Again

When we tore apart an old rear projection television a few weeks ago, I did not expect those picture tubes to ever light up again. We took everything apart quickly within a narrow time window, so we didn’t have time to be careful about keeping the electronics driving those CRTs intact. Those electronics are in pieces now, and in that writeup I said the tubes were beautiful glass work that I hoped to display as such in the future.

Well, there has been a change in plans.

On the same day as that teardown, [Emily] was gifted an old non-functioning camcorder. She has since taken that apart, too. The first component to see reuse was its tiny black and white viewfinder CRT. As she dug deeper into the world of old CRTs, [Emily] came across this YouTube video by [Keystone Science] going over the basics of a cathode-ray tube, and shared it with me. We were inspired to try lighting these tubes up again (without their original electronics) at yesterday’s SGVTech meetup.

The first step was to straighten out the pins at the rear end of our salvaged CRTs – they got a bit banged up in transport. A quick web search failed to find details on how to drive these tubes, but probing with a meter gave us a few candidates for exploration.

Probing CRT pins

  • A pair of wires had around 8 ohms of resistance, the highest of all wire pairs that gave a reading. This is likely the heating filament.
  • A few other wire pairs gave readings we didn’t understand, but several of them had some relation to a common pin. That common pin was thus our best candidate for the cathode.

We knew the anode is connected to the side of the CRT, so we had all the basics necessary to put a blurry dot on screen. A bench power supply was connected to the eight-ohm load, and a few seconds later we could see a dull glow. Then a high voltage transformer was powered up across our anode and candidate cathode.

RPTV picture tube and transformer

After a bit of debugging, we had our blurry green dot! We proceeded to power up the other two tubes, which gave us a blue dot and a red dot. The colors look good to us, but apparently they’re not quite right, because during our TV disassembly we saw color filters on the red and green tubes. (The blue tube had no color filter.)

During the course of the evening, the quality of our dot varied. Most of the time it was a blur approximately 5mm in diameter. On one occasion it bloomed out to 3cm in diameter and we didn’t know what had caused it. Likewise, we had a pinpoint bright dot for a few seconds without any correlation to activity we could recall. As far as driving a CRT goes, we know enough to be respectful of the high voltage involved, but obviously we still have a lot more to learn. It’s just as well we don’t know how to focus the dot: in the absence of a sweep, a constant bright focused dot would likely burn a hole in the center of the screen’s phosphor layer.

A first step towards moving the beam was to put some power on the magnetic deflection yokes. These coils of wire were hooked up to a function generator, and we were able to get movement along one axis. Its maximum output of +/- 20V could only deflect a small fraction of the screen size, but it was something.

We didn’t have a second function generator on hand, but we got movement along the other axis using magnets. They were taped to a shaft that was then chucked into a cordless drill. Holding the spinning drill near the deflection yoke induced movement along that axis, and combined with the function generator, it allowed us to make a few curves on screen.

RPTV Red curves

Tinkering projects with visual results are always rewarding. With this success, there might yet be life ahead for these tubes as something other than pretty glass. A search found a hobbyist’s project driving a CRT as an XY vector arcade monitor. That project page also linked to an excellent description of vector CRTs as used in old Atari arcade machines. Lots to learn!

Fun With Tiny CRT

When we took apart the big old rear projection television, the same family also had an old VHS camcorder from the 1980s slated for disposal. [mle_makes] took it off their hands and merrily started taking it apart for fun components. The first component to be brought to our weekly SGVHAK meetup was the viewfinder’s tiny CRT. I brought the box of Sony KP-53S35 salvaged RPTV parts on the same day, so we could place the two picture tubes side by side with a ruler between them.

Tiny CRT 1 - Side by side with RPTV tube

While the big tube had 21 years of TV watching burned into its surface, the little CRT looks to be in good shape. (Also, the RPTV tube was likely driven far harder to generate the necessary brightness.) And since the little tube was part of a battery-powered device (12 volt lead-acid!), it flickered to life from a DC power supply.

Viewed from the top, we are reminded how much of a space savings modern LCDs gave us: both of these tubes are far longer than their picture’s diagonal size.

Tiny CRT 2 - Length comparison with RPTV tube

The little tube’s image was remarkably crisp and bright when viewed in person, a fact extremely difficult to capture in a photograph. The 525 scan lines of an NTSC signal packed into such a tiny picture meant this little tube was pushing roughly 600 dpi of resolution!

Tiny CRT 3 - tape measure

All of these images on the tube were generated by an old video conference camera, which had a composite video output port wired to the tube’s control board. Here’s one of the test setups, using a scrap piece of paper with a simple smiley face drawn on it in Sharpie marker.

Tiny CRT 4 - camera test setup

The best picture of the tube was taken when I narrowed the aperture for a greater depth of field, so the camera was free to focus on something other than the actual picture and still get halfway decent results. (I think it is focused on the edges of the glass here.) An admirable amount of paper texture was conveyed by this tube.

Tiny CRT 5 - camera test image

A few weeks after this initial tiny CRT demo, it became the centerpiece of this Freeform Mini CRT Sculpture on instructables.com.

Shouldn’t Simple LIDAR Be Cheaper By Now?

While waiting on my 3D printer to print a simple base for the laser distance scanner I salvaged from a Neato robot vacuum, I went online to read more about this contraption. The more I read about it, the more I was puzzled by its price: shouldn’t these simple geometry-based distance scanners be a lot cheaper by now?

The journey started with this Engadget review from 2010, when Neato’s XV-11 was first introduced to fanfare that I apparently missed at the time. The laser scanner was a critical product differentiator for Neato, separating them from market leader iRobot’s Roomba vacuums. It was an advantage that was easy to explain and easy for users to see in action, both of which helped justify their price premium.

Of course the rest of the market responded, and now high-end robot vacuums all have mapping capability of some sort, pushing Neato to introduce other features like internet connectivity and remote control via a phone app. In 2016 Ars Technica reviewed these new features and found them immature. But more interesting to my technical brain is that Ars linked to a paper on Neato’s laser scanner design. Presented at the May 2008 IEEE International Conference on Robotics and Automation under the title A Low-Cost Laser Distance Sensor, and listing multiple people from Neato Robotics as authors, it gave an insight into these spinning domes, including this picture of the internals.

Revo LDS
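The design in the paper is triangulation: a laser and a camera sit a fixed baseline distance apart, and the distance to a target falls out of where the laser dot lands on the camera’s sensor. A toy version of the math in Python, with made-up numbers for the baseline and focal length (the paper has the real values):

# Toy triangulation math for a laser + camera rangefinder.
baseline_mm = 50.0        # made-up distance between laser and camera
focal_length_px = 700.0   # made-up camera focal length in pixels

def distance_mm(dot_offset_px):
    # The farther the target, the smaller the laser dot's offset
    # from the camera's optical axis.
    return focal_length_px * baseline_mm / dot_offset_px

for offset in (70.0, 35.0, 7.0):
    print("offset {:5.1f} px -> distance {:7.1f} mm".format(
        offset, distance_mm(offset)))

Note how a fixed one-pixel measurement error costs more accuracy at long range: range resolution of a triangulation sensor degrades with the square of distance.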

But even more interesting than the fascinating technology outlined in the paper is the suggested economic advantage. The big claim is right in the abstract:

The build cost of this device, using COTS electronics and custom mechanical tooling, is under $30.

Considering that Neato robot vacuums have been in mass production for almost ten years, and that there’s been ample time for clones and imitators to come on the market, it’s quite odd that these devices still cost significantly more than $30. If the claim in the paper is true, we should have this type of sensor for a few bucks by now, not $180 for an entry-level unit. If they actually cost $20-$30, it would make ROS far more accessible. So what happened on the path to a cheap laser scanner for everyone?

It’s also interesting that some other robot vacuum makers – including iRobot themselves – have implemented mapping via other means. Or at least, there’s no obvious laser scanner dome on top of some mapping-capable Neato competitors. What are they using, and are similar techniques available as ROS components? I hope to come across some answers in the near future.

Learning How To Use Pololu Stepper Driver Modules

My first experience with stepper motors was with this very inexpensive Amazon offering. I’ve since learned that these stepper motors are termed “unipolar”, which incurs some trade-offs. From the price tag I knew they were cheap, and from the description I knew they were easy to control from a simple program. What I did not know about were the fairly significant headwinds waiting for anyone who wishes to get beyond the basics.

The simple driver module that goes with these simple motors only works for straightforward on/off control. When I tried to modulate the power to be somewhere between on and off, mysterious electromagnetic effects caused erratic motor behavior. At the time I decided to postpone solving the issue and look into it later. Well, now is later, and I’m going to solve my problem by ignoring unipolar motors entirely, because it’s more productive to look at the bipolar stepper motors used by pretty much every halfway decent piece of hardware.

The motors themselves are more expensive, and so are the drivers. Fortunately, economies of scale mean “more expensive” is still only a few dollars. Pololu sells a line of stepper motor driver modules that are popular with the 3D printing crowd (or at least that’s where I learned of them). The module’s physical form factor and pinout have become something of a de facto industry standard. And a bipolar stepper motor for experimentation is equally easy to obtain, as pretty much any stepper motor salvaged from consumer electronics will be a bipolar motor. For the purposes of my experiment, this motor came from a dead inkjet printer’s paper-feed mechanism.

Hooking up the electronics is a fairly straightforward exercise in reading the data sheet and following instructions. The only thing I neglected was a capacitor across the motor power input pins, something pointed out to me when I brought this experimental rig to a local maker meet. Fortunately, I had been playing with a small enough motor that the absence of said capacitor didn’t fry everything.

All I needed to do was generate two data signals: direction and step. This is a fairly common interface – even industrial-type stepper motor controllers accept similar inputs – so a Pololu module is a great way to start. I created a small program running on an 8-bit PIC microcontroller to generate these pulses, and the motor was off and running. It was super easy to get started, and this setup is enough for me to play around with and build a basic understanding of stepper motor behavior: how they trade torque for speed, and how they respond to higher voltage and amperage. It’s a good foundation for designing future robotics projects.
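My actual program ran on the PIC, but the step/direction idea is the same on any controller. Purely as an illustration, here is what the equivalent pulse train would look like in Python on, say, a Raspberry Pi; the pin numbers are arbitrary and the timing is unoptimized:

import time
import RPi.GPIO as GPIO

STEP_PIN = 20  # arbitrary example pin numbers
DIR_PIN = 21

GPIO.setmode(GPIO.BCM)
GPIO.setup(STEP_PIN, GPIO.OUT)
GPIO.setup(DIR_PIN, GPIO.OUT)

GPIO.output(DIR_PIN, GPIO.HIGH)  # pick a direction
for _ in range(200):             # 200 full steps = one revolution of a 1.8-degree motor
    GPIO.output(STEP_PIN, GPIO.HIGH)   # driver advances one step per rising edge
    time.sleep(0.001)
    GPIO.output(STEP_PIN, GPIO.LOW)
    time.sleep(0.001)
GPIO.cleanup()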

Pololu Experiment

Components on the breadboard, from left to right:

  1. Breadboard Power Supply
  2. Pololu A4983 Stepper Driver
  3. PIC16F18345 with program to generate step/direction based on potentiometer value.
  4. LEDs hooked up in parallel with step and direction signals.
  5. Potentiometer

A Gentle Introduction To Surface Mount Soldering

In my electronics projects to date, I’ve avoided surface mount devices (SMD) as much as I could. They require custom circuit boards because, in the absence of through-hole legs, they don’t work on prototyping breadboards. They’re small, which makes them difficult to handle without specialized tools: microscopes to see them, fine-tipped tweezers to handle them, and fine-tipped soldering irons to solder their tiny connections.

That avoidance came to a crashing end at Layer One, where I had to face SMD head-on or be left out of the fun of the Layer One badge add-on kit. The tools were provided at the event, as well as some guidance, so I got past the very beginning of the learning curve. It doesn’t make me an expert by any means; that would require more practice.

In the spirit of keeping the momentum going, I decided to check out a beginner-friendly SMD soldering electronics kit. The “I Can Surface Mount Solder” kit was designed by someone who also wanted a gentle introduction to SMD and decided to design a circuit for the purpose. All the information is open source, so I could make my own. And catering to lazy people like myself, the designer has also put kits up for sale on the maker marketplace Tindie.

There’s a volume discount for buying ten or more, with no increase in shipping, so I bought ten to share at my local hobbyist meetup. I knew I wasn’t the only one who wanted to practice SMD with something simple. Before the event I had one taker for a kit besides myself. During the meet, a third kit was put together by a SGVHAK regular, and two more were assembled by people who had never attended a SGVHAK meet before – they came because they read the meeting information on Meetup.com and wanted to try SMD soldering. I count this as a publicity win.

The kit itself was far easier to put together than the Layer One LED add-on kit. The SMD components were about the largest sizes available, so they could be seen by the naked eye, and while we still needed tweezers to handle them, we could solder them with regular-sized soldering tips. The only real technical challenge was determining the correct orientation of the lone red LED, something that took us a while to figure out. Fortunately we all determined the direction correctly before soldering.

At the end of the night, we had five little pulsing heartbeat pendants and five people who had the satisfaction of a successful SMD soldering project.

I Heart SMD

Monoprice Maker Ultimate (Wanhao Duplicator i6) Kills Another Relay

I was willing to stop at “good enough for now” on modifying my open-box Monoprice Maker Select because I needed printers up and running. In the process of designing and iterating Sawppy‘s 3D-printed components, I kept both printers busy pumping out prototypes to see how the designs in my mind survived the translation into real-world pieces.

Sometimes there was enough work to keep the printers busy around the clock, and this was too much stress for the control boards inside these affordable printers – an inevitable trade-off between price tag and robustness. In the case of my Monoprice Maker Ultimate, the weakest link in the chain is the main motor relay that controls power going to all the motors (both stepper motors and fan motors) and heaters.

This relay has failed once before, and under the constant workload, another one has kicked the bucket. It started failing intermittently, which showed up as brief interruptions in motor power. Since the electronics are not powered through this path, these brief interruptions ruined prints, making them look like the motor drive belt had skipped a few teeth, when the reality was the motors stopping briefly while the electronics continued onward.

Last time this happened, I kept trying to diagnose belt skipping, wasting a lot of time looking over mechanical parts that were working fine. This time I recognized the symptoms and pulled out the control board before the printer failed completely.

Since it hadn’t completely burned out yet, the relay’s exterior didn’t look bad – only a minor discoloration that might have been overlooked if we didn’t know exactly where to look.

Relay exterior discoloration

Cutting away the relay’s blue enclosure exposed a familiar sight: the interior is fried.

D6 Second Failed Relay

It’s always easier to do something the second time, but addressing my second fried relay was still time spent not working on the project itself.

Solarbotics Photopopper 4.2 Photovore

And now, a completely unnecessary distraction.

While digging through the parts pile for stuff to help build Sawppy’s wiring, I came across an electronics kit that has been sitting, waiting to be built, for over a decade. Every time I came across this kit I decided “I’ll build it later.” And even though I’m in the middle of building rover wiring, I looked at this kit today and thought: “I keep saying I’ll build this someday… that day is today.”

Photopopper Bag

I’m not sure how old the kit is, but I’m quite confident it’s over a decade. The only date visible is the last revision date on the manual: August 25, 2003. The manual says it is version 4.2; the Solarbotics catalog is now up to version 5.0. I thought its age wouldn’t be a problem. It’s not like electronic components decay, right?

Photopopper Contents

It’s a simple little kit of through-hole solder components. The first step – as in all kits new and old – is to lay out all the parts and check them against the parts manifest in the instructions. Once all parts are accounted for, assembly can begin. Given the growth in my electronics skills over the past years, this “Skill Level 3” kit is now a breeze to assemble.

It took only about half an hour to reach the point where all the power components were connected. The manual instructed me to take the partially built robot someplace bright to give the solar powered system a test. I placed it under sunlight and… nothing.

First I ran through the troubleshooting steps outlined in the manual, and none of them helped. So it was again time to deploy skills I didn’t have years ago, this time for electronics debugging. Out came the multimeter, and I started probing the circuit.

Diagnosis: the electrolytic capacitor is dead. Remember when I said electronic components don’t decay with age? Well, that was wrong, because electrolytic capacitors do. Back into the parts pile I went, and I was able to find another 4700 µF capacitor. Problem: it is a physically far larger device. In this picture, the dead capacitor from the kit is on the right, dwarfed by the new functional capacitor on the left.

Photopopper Capacitor

Since the functional capacitor is much larger physically, it couldn’t fit in the same space under my photopopper; the big capacitor would have to go on top. The dead capacitor had also served as the third leg of the photopopper, so that job was reassigned to a little piece of wire. Now the robot sits on that wire along with its two wheels.

Photopoppin

And now… it moves! The little solar panel charges up the capacitor, and every few seconds that power is dumped into one of the electric motors, scooting the photopopper a tiny bit forward – a process that repeats for as long as there’s light shining on the solar panel. I’m sure it would move farther or maybe faster if the original-sized capacitor were in place; this poor photopopper has to carry a big heavy barrel of a capacitor on its back.

Technically the kit has not been completely assembled: there are also two touch sensors that help the photopopper detect walls and steer away from them. But given that it is moving, and that I wanted to get back to assembling wiring for Sawppy the Rover, I’m content to leave the photopopper as is. I’ll pull it out every once in a while to put it under sunlight and let it scoot around.