Evaluating Microchip HV5812 For VFD Projects

[Emily] and I started our vacuum fluorescent display (VFD) project because there was an interesting unit available, with a look distinctly different from modern LED. We just had to salvage it out of an obsolete piece of electronics and figure out how to make it work. We now have a prototype VFD driver circuit up and running, and we can command it to light up arbitrary combinations of segments at arbitrary times from a Python program running on an attached Raspberry Pi. This is a satisfying milestone: it marks completion of our first-generation hardware and lets us shift focus to what to actually put on that display.

The first few experiments with VFD patterns confirmed that we really like how a VFD looks! As people who love to take things apart to see how they work, we enjoy seeing all the components of a VFD through its glass case. Their intricate internals qualify them as desktop sculpture just sitting there; making them light up is icing on the cake.

With this early success and desire for more, chances are good that we’ll embark on additional VFD projects in the future. For our first VFD project we chose to stick with generic chips for the sake of learning the basic principles, but if we’re going to start building more we should look at using chips designed for the purpose.

According to Digi-Key’s online catalog, there are dedicated vacuum fluorescent drivers available from Maxim and Microchip. None of Maxim’s chips are available in hobbyist-friendly through-hole designs, but two of Microchip’s three lines are. HV5812P-G is the 20-channel model in 28-pin DIP format, and HV518P-G is the 32-channel counterpart in 40-pin DIP format. Curiously, for 60% more channels, the HV518P-G costs over double the price. So it made sense to start with the HV5812.

With data and clock pins for straightforward serial data input, it was designed to be easy to drive from pretty much any microcontroller. The only thing that caught my attention was that logic input lines are expected to be 5V, with a minimum of 3.5V required to be interpreted as logic high. This means we can’t drive it directly from 3.3V hosts like a Raspberry Pi or an ESP32. We’d need level shifters or a 5V-capable part like a PIC to act as an intermediary.
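Mechanically, that serial input is just a shift register: present a bit, pulse the clock, repeat, then strobe to latch. Here is a rough host-side sketch of the idea in Python; the helper names and the GPIO-as-callables structure are my own construction, and actual bit order, polarity, and timing would need to be checked against the datasheet before trusting it on real hardware.

```python
# Sketch of shifting 20 channel bits into an HV5812-style driver.
# The data/clock/strobe sequence follows the generic serial shift
# register pattern; GPIO operations are injected as callables so the
# logic stays testable away from hardware.

def segments_to_bits(lit_channels, channel_count=20):
    """Build the MSB-first bit list for a set of lit channel numbers."""
    word = 0
    for channel in lit_channels:
        if not 0 <= channel < channel_count:
            raise ValueError(f"channel {channel} out of range")
        word |= 1 << channel
    return [(word >> (channel_count - 1 - i)) & 1 for i in range(channel_count)]

def shift_out(bits, set_data, pulse_clock, pulse_strobe):
    """Present each bit on the data line, clock it in, then latch."""
    for bit in bits:
        set_data(bit)
        pulse_clock()
    pulse_strobe()
```

On a Raspberry Pi (behind a level shifter), the three callables would wrap GPIO output calls for the data, clock, and strobe lines.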

It looks promising enough — and priced cheaply enough — to be a consideration for potential follow-on VFD projects. So we’ll add that to the Digi-Key shopping cart and see where things go from there.

A Close-Up Look At VFD Internals

When we first pulled the vacuum fluorescent display (VFD) from an old Canon tuner timer unit, we could see a lot of intricate detail inside the sealed glass. We had work to do that day – probing the pinout and such – but part of its overall allure comes from the beauty of the details visible within. It remains something of interest; I just had to remember to bring my camera with a close-up macro lens to take a few photos.

VFD under macro lens off

One of the reasons a VFD is so interesting to look at is that the actual illuminating elements sit beneath other parts which contribute to the process. Closest to the camera are the heating filaments, visible as four horizontal wires. This is where electrons begin their journey to trigger fluorescence.

Between these filaments and the individual illuminating segments are the control grids, visible as a very fine mesh mostly – but not entirely – built on a pattern of hexagons.

And beyond the control grids sit the individual phosphor-coated segments, which we illuminate on command using our prototype driver board. (Once it is fully debugged, at least.) These phosphor elements are what actually emit the light visible to the user. The grid and filament are thin, which helps them block as little of this light as possible.

Fortunately an illuminated VFD segment emits plenty of light to make it through the fine mesh of grids and fine wires of filament. From a distance those fine elements aren’t even visible, but up close they provide a sophisticated look that can’t be matched by the simplicity of a modern segmented LED unit.

VFD under macro lens on

Original NEC VSL0010-A VFD Power Source

Now that we have a better understanding of how a NEC VSL0010-A vacuum fluorescent display (VFD) works, having figured out its control pinout with the help of an inkjet power supply, we returned to the carcass we salvaged that VFD out of. Knowing each pin’s function, we picked the pair that supplied 2.5V AC for filament power to trace, expecting they were least likely to pass through or be shared by other devices. We traced through multiple circuit boards back to the main power transformer output plug. We think it’s the two gray wires on the left side of this picture, but our volt meter probes were too big to reach these visible contact points. And the potential risk of high voltage made us wary of poking bare wires into that connector as we did for the inkjet power supply.

NEC VSL0010-A VFD Power Supply - Probes Too Fat

Our solution came as a side benefit of a decision made earlier for other reasons. Since we were new to VFD technology, our curiosity-fueled exploratory session was undertaken with an inexpensive Harbor Freight meter instead of the nice Fluke in the shop. Originally the motivation was to reduce risk: we wouldn’t cry if we fried the Harbor Freight meter. But now we see a secondary benefit: with such an inexpensive device, we also feel free to modify its probes for the project at hand. Off we go to the bench grinder!

NEC VSL0010-A VFD Power Supply - Probes Meet Grinder

A few taps on the grinding wheel, and we had much slimmer probes that could reach in toward those contacts.

NEC VSL0010-A VFD Power Supply - Probes Now Thin

Suitably modified, we could get to work.

NEC VSL0010-A VFD Power Supply - Probes At Work

We were able to confirm the leftmost pair of wires, with gray insulation, is our nominal 2.5V AC supply for the VFD filament. The full set of output wires from this transformer, listed by color of their insulation, are:

  • Gray pair (leftmost in picture): 2.6V AC
  • Brown pair (spanning left and right sides): 41V AC
  • Dark blue pair (center in picture): 17.2V AC
  • Black pair (rightmost in picture): 26.6V AC

There was also a single light-blue wire adjacent to the pair of dark blue wires. Probing with a volt meter indicated it was a center tap between the dark blue pair.

NEC VSL0010-A VFD Power Supply Transformer

Once that was determined, we extracted the transformer as a single usable unit: there was a fuse holder and an extra power plug between it and the device’s AC power cord. We’re optimistic this assembly will find a role in whatever project that VFD eventually ends up in. The 2.6V AC can warm the filament, and rectified 26.6V AC should work well for the VFD grid and segments. With proper rectification and filtering, a microcontroller can run off one of these rails as well. It’ll be more complex than driving an LED display unit, but it’ll be worth it for that distinctive VFD glow.
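As a back-of-the-envelope sanity check on that rectified rail (a rough sketch only: it assumes a full bridge with ~0.7V diode drops and ignores ripple and sag under load):

```python
import math

# Rough estimate of DC voltage from rectifying the 26.6V AC winding:
# AC peak minus two conducting diode drops for a full bridge. A real
# supply will sag below this under load.

def bridge_rectified_peak(v_rms, diode_forward_v=0.7):
    """Unloaded filter capacitor voltage behind a full bridge rectifier."""
    return v_rms * math.sqrt(2) - 2 * diode_forward_v

print(round(bridge_rectified_peak(26.6), 1))  # about 36V unloaded
```

That lands comfortably in the right neighborhood for the ~30V DC the VFD grid and segments want once the rail sags under load.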

HP Inkjet Printer Power Supply For NEC VSL0010-A VFD

One of the reasons LED has overtaken VFD in electronics is reduced power requirements: not just the raw wattage consumed, but also the varying voltage levels required to drive a VFD. The NEC VSL0010-A VFD whose pinout we just probed ran on 2.5V AC and ~30V DC. In contrast, most LEDs can run on the same 5V or 3.3V DC power plane as their digital drive logic, vastly simplifying design.

We didn’t have a low voltage AC source handy for probing, so we used 2.5V DC. We expected this to have only cosmetic effects: one side of our VFD will be brighter than the other, since one side will have a filament-to-grid/element voltage difference of 30V while the other will only have 27.5V.

But putting 2.5V DC on the filament occupied the only bench power supply we had available at the time. What would we use for our 30V DC power source? The answer came from our parts pile of previously disassembled electronics, in this case a retired HP inkjet printer’s power supply module labeled with the number CM751-60190.

HP CM751-60190 AC Power Adapter

According to the label, this module could deliver DC at 32V and 12V. Looking at its three-conductor output plug, it was easy to conclude we had one wire for ground, one wire for 32V, and one wire for 12V. But that easy conclusion would be wrong. Look closer at the label…

HP CM751-60190 AC Power Adapter pinout

We do indeed have a ground wire in the center, but there is only one power supply wire, labelled +32V/+12V. It actually delivers “32 or 12” volts, not “32 and 12” volts. That last pin on the left has an icon. What did that mean? Our hint comes from the power output specifications: +32V 1095mA or +12V 170mA. We deduced the icon is a moon, indicating a way to toggle a low-power sleep mode where the power supply delivers only 12V * 170mA = 2W vs. the full 32V * 1095mA = 35W.

With that hypothesis in hand, it’s time to hook up some wires and test its behavior.

HP CM751-60190 AC Power Adapter test

When the “sleep mode” pin is left floating, voltage output is 32V DC. When that pin is grounded, voltage output drops to 12V DC. Since we’re looking for 32V DC to drive our VFD grid and elements, it’s easy enough to leave the sleep wire unconnected and solder new leads to the remaining two wires to obtain 32V DC for our VFD adventures.

HP CM751-60190 AC Power Adapter new wires

Sleuthing NEC VSL0010-A VFD Control Pinout

Vacuum Fluorescent Display (VFD) technology used to be the dominant form of electronics display. But once LEDs became cheap and bright enough, they displaced VFDs across much of the electronics industry. Now a VFD is associated with vintage technology, and its distinctive glow has become a novelty in and of itself. Our star attraction today served as the display for a timer and tuner unit that plugs into the tape handling unit of a Canon VC-10 camera to turn it into a VCR. A VFD is very age-appropriate for a device that tunes into now-obsolete NTSC video broadcasts for recording to now-obsolete VHS magnetic tape.

Obviously, in this age of on-demand internet streaming video, there’s little point in bringing the whole system back to life. But the VFD appears to be in good shape, so in pursuit of that VFD glow, a salvage operation began at an SGVHAK meetup.

NEC VSL0010-A VFD Before

We had the luxury of probing it while running, aided by the fact we can see much of its implementation inside the vacuum chamber through clear glass. The far right and left pins are visibly connected to filament wires; probing those pins showed approximately 2.5V AC. We can also see eight grids, each with a visible connection to its corresponding pin. That leaves ten pins to control elements within a grid. Probing the grid and element pins indicated they were driven by roughly 30V DC. (It was hard to be sure because we didn’t have a constant-on element to probe… like all VCRs, it was blinking 12:00.)

This was enough of a preliminary scouting report for us to proceed with desoldering.

NEC VSL0010-A VFD Unsoldering

Its solder predates finicky RoHS formulations, so it was quickly freed.

NEC VSL0010-A VFD Freed

Now we can see its back side and, more importantly, its part number, which immediately went into a web search on how to control it.

NEC VSL0010-A VFD Rear

The top hit on this query was a StackExchange thread started by someone who had also salvaged one of these displays and wanted to get it up and running with an Arduino. Sadly the answers were unhelpful and not at all supportive, discouraging the effort with “don’t bother with it”.

We shrugged, undeterred, and continued working to figure it out by ourselves.

NEC VSL0010-A VFD Front

If presented with an unknown VFD in isolation, the biggest unknown would have been what voltage levels to use. But since we had that information from probing earlier, we could proceed with confidence that we wouldn’t burn up our VFD. We powered up the filament, then powered up one of the pins visibly connected to a grid, and touched each of the remaining ten non-grid pins to see what lit up. For this part of the experiment, we got our 32V DC from the power supply unit of an HP inkjet printer.

We then repeated the ten-element probe for each grid, writing down what we found along the way.

NEC VSL0010-A VFD Annotated

We hope to make use of this newfound knowledge in a future project, and we hope this blog post will be found by someone in the future and help them return a VFD to its former glowing glory.

Window Shopping JeVois Machine Vision Camera

In the discussion period that followed my Sawppy presentation at RSSC, the conversation turned to machine vision. Among the potential tools mentioned for machine vision problems was the JeVois camera. I wrote down the name and resolved to look it up later. I have done so, and I like what I see.

First thing that made me smile was the fact it was a Kickstarter success story. I haven’t committed any of my own money to any Kickstarter project, but I’ve certainly read more about failed projects than successful ones. It’s nice when the occasional success story comes across my radar.

The camera module is of the type commonly used in cell phones, and behind the camera is a small machine vision computer, again built mostly of portable electronics components. The idea is to have a completely self-contained vision processing system, requiring only power input and delivering processed data output. Various machine vision tasks can be handled completely inside the little module as long as the user is realistic about the limited processing power available. It is less powerful but also less expensive and smaller than Google’s AIY Vision module.

The small size is impressive, which led to my next note of happiness: it looks pretty well documented. When I looked at its size, I had wondered how best to mount the camera on a project. It took less than 5 minutes to decipher the documentation hierarchy and find details on physical dimensions and how to mount the camera case. Similarly, my curiosity about power requirements was quickly answered with confirmation that its power draw does indeed exceed the baseline USB 500mA.

Ease of programming was the next investigation. Some of the claims around this camera made it sound like its open source software stack can run on a developer’s PC and be debugged there before publishing to the camera. However, the few tutorials I skimmed through (one example here) all required an actual JeVois camera to run vision code. I interpret this to mean the JeVois software stack is indeed specific to the camera, and “develop on your PC first” only means developing vision algorithms on a PC in a general sense before porting them to the JeVois software stack for deployment on the camera itself. If I find out I’m wrong, I’ll come back and update this paragraph.

When I looked on Hackaday, I saw that one of the writers thought the JeVois camera’s demo mode was a very effective piece of software. It should be quite effective at its job: getting users interested in digging deeper. Project-wise, I see a squirrel detector and a front door camera already online.

The JeVois camera has certainly earned a place on my “might be interesting to look into” list for more detailed later investigation.

Sawppy Odometry Candidate: Flow Breakout Board

When I presented Sawppy the Rover at the Robotics Society of Southern California, one of the things I brought up as an open problem was how to determine Sawppy’s movement through its environment. Wheel odometry is sufficient for a robot traveling on flat ground like Phoebe, but when Sawppy travels on rough terrain things can get messy in more ways than one.

In the question-and-answer session some people brought up the idea of calculating odometry by visual means, much in the way a modern optical computer mouse determines its movement on a desk. This is something I could whip up with a downward pointing webcam and open source software, but there are also pieces of hardware designed specifically to perform this task. One example is the PMW3901 chip, which I could experiment with using breakout boards like this item on Tindie.

However, that visual calculation is only part of the challenge, because translating what that camera sees into a physical dimension requires one more piece of data: the distance from the camera to the surface it is looking at. Depending on application, this distance might be a known quantity. But for robotic applications where the distance may vary, a distance sensor would be required.

As a follow-up to my presentation, RSSC’s online discussion forum brought up the Flow Breakout Board: a small lightweight module that pairs the aforementioned PMW3901 chip with a VL53L0x distance sensor. This is an interesting candidate for helping Sawppy gain awareness of how it is moving through its environment (or failing to do so, as the case may be).

flow_breakout_585px-1

The breakout board only handles the electrical connections – an external computer or microcontroller will be necessary to make the whole system sing. That external module will need to communicate with the PMW3901 via SPI and, separately, the VL53L0x via I2C. Then it will need to perform the math to calculate actual X-Y distance traveled. This in itself isn’t a problem.

The problem comes from the fact the PMW3901 was designed to be used on small multirotor aircraft to aid them in holding position. Several design decisions that make sense for its intended purpose turn out to be problems for Sawppy.

  1. This chip is designed to help hold position, which is why it is not concerned with knowing the height above the surface or the physical dimension of a translation: the sensor only needs to detect movement so the aircraft can be brought back to position.
  2. Multirotor aircraft all have built-in gyroscopes to stabilize themselves, so they already detect rotation about their Z axis. Sawppy has no such sensor and would not be able to calculate its position in global space without knowing how much it has turned in place.
  3. Multirotor aircraft fly in the air, so the designed working range of 80mm to infinity is perfectly fine. However, Sawppy has only 160mm between the bottom of the equipment bay and the nominal floor. If traversing over obstacles more than 80mm tall, or rough terrain bringing the surface within 80mm of the sensor, this sensor would become disoriented.
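Point 1 is why the distance sensor matters: converting raw flow counts into ground displacement requires knowing the height above the surface. Here is a sketch of the math an external module would have to do; the field-of-view and counts-per-frame figures below are placeholder assumptions for illustration, not datasheet values.

```python
import math

# Converting optical flow counts into ground displacement using the
# measured height. FOV and resolution numbers are illustrative
# placeholders, not taken from the PMW3901 datasheet.
SENSOR_FOV_DEG = 42.0
COUNTS_ACROSS_FOV = 35

def flow_to_meters(delta_counts, height_m):
    """Approximate ground translation for one axis of a flow reading."""
    radians_per_count = math.radians(SENSOR_FOV_DEG) / COUNTS_ACROSS_FOV
    return delta_counts * radians_per_count * height_m

# The same flow reading means very different motion at different heights:
print(flow_to_meters(10, 0.16))  # at Sawppy's nominal 160mm clearance
print(flow_to_meters(10, 0.08))  # half the height, half the distance
```

The key takeaway is the linear dependence on height: without the VL53L0x reading, the flow counts alone cannot be turned into a physical distance.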

This is a very cool sensor module that has a lot of possibilities, and despite its potential problems it has been added to the list of things to try for Sawppy in the future.

Intel RealSense T265 Tracking Camera

In the middle of these experiments with a Xbox 360 Kinect as robot depth sensor, Intel announced a new product that’s along similar lines and a tempting venue for robotic exploration: the Intel RealSense T265 Tracking Camera. Here’s a picture from Intel’s website announcing the product:

intel_realsense_tracking_camera_t265_hero

The T265 is not a direct replacement for the Kinect, at least not as a depth sensing camera. For that, we would need to look at Intel’s D415 and D435. They would be fun to play with, too, but I already had the Kinect so I’m learning with what I have before I spend money.

So if the T265 is not a Kinect replacement, how is it interesting? It can act as a complement to a depth sensing camera. The point of the thing is not to capture the environment – it is to track its own motion and position within that environment. Yes, there is the option of an image output stream, but the primary data output of this device is a position and orientation.

This type of camera-based “inside-out” tracking is used by Windows Mixed Reality headsets to determine their user’s head position and orientation. These sensors require low latency and high accuracy to avoid VR motion sickness, and have obvious applications in robotics. Now Intel’s T265 offers that capability in a standalone device.

According to Intel, the implementation is based on a pair of video cameras and an inertial measurement unit (IMU). Data feeds into internal electronics running a V-SLAM (visual simultaneous localization and mapping) algorithm aided by a Movidius neural network chip. This process generates the position+orientation output. It seems pretty impressive to me that it is done in such a small form factor and at high speed (or at least low latency) with 1.5 watts of power.

At $200, it is a tempting toy for experimentation. Before I spend that money, though, I’ll want to read more about how to interface with this device. The USB 2 connection is not surprising, but there’s a phrase that I don’t yet understand: “non volatile memory to boot the device” makes it sound like the host is responsible for some portion of the device’s boot process, which isn’t like any other sensor I’ve worked with before.

Xbox 360 Kinect Depth Sensor Data via OpenKinect (freenect)

I want to try using my Xbox 360 Kinect as a robot sensor. After I’ve made the necessary electrical modifications, I decided to try to talk to my sensor bar via OpenKinect (a.k.a. freenect a.k.a. libfreenect) driver software. Getting it up and running on my Ubuntu 16.04 installation was surprisingly easy: someone has put in the work to make it a part of standard Ubuntu software repository. Whoever it was, thank you!

Once installed, though, I wasn’t sure what to do next. I found documentation telling me to launch a test/demonstration viewer application called glview. That turned out to be old information; the test app is actually called freenect-glview. Also, it is no longer added to the default user search path, so I had to launch it with the full path /usr/bin/freenect-glview.

Once I got past that minor hurdle, I had on my screen a window showing two video feeds from my Kinect sensor: on the left, depth information represented by colors, and on the right, normal human-vision color video. Here’s my Kinect pointed at its intended home: on board my rover Sawppy.

freenect-glview with sawppy

This gave me a good close look at Kinect depth data. What’s visible and represented by color is pretty good, but the black areas worry me. They represent places where the Kinect was not able to extract depth information. I didn’t expect it to be able to pick out fine surface details of Sawppy components, but I did expect it to see the whole chassis in some form. This was not the case, with areas of black all over Sawppy’s chassis.
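One way to quantify those black areas is to count the pixels with no depth reading. This is a quick sketch assuming the python-freenect bindings and the 11-bit depth mode, where (as I understand it) the value 2047 marks missing data; that sentinel is worth verifying against whichever depth format is actually in use.

```python
import numpy as np

# Count pixels where the Kinect reported no depth. In freenect's 11-bit
# depth mode, 2047 is (as assumed here) the sentinel for missing data.

def invalid_fraction(depth_frame, no_reading=2047):
    """Fraction of pixels where the sensor reported no depth."""
    depth_frame = np.asarray(depth_frame)
    return np.count_nonzero(depth_frame == no_reading) / depth_frame.size

# On hardware, a frame would come from the python-freenect bindings:
#   import freenect
#   depth, _ = freenect.sync_get_depth()
#   print(invalid_fraction(depth))
```

Logging this fraction over many frames would also give a rough measure of the frame-to-frame inconsistency.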

Some observations of what a Xbox 360 Kinect could not see:

  • The top of Sawppy’s camera mast: neither the webcam nor the 3D-printed mount for that camera. This part is the most concerning one because I have no hypothesis as to why.
  • The bottom of Sawppy’s payload bay. This is unfortunate but understandable: it is a piece of laser cut acrylic which would reflect Kinect’s projected pattern away from the receiving IR camera.
  • The battery pack in rear has a smooth clear plastic package and would also not reflect much back to the camera.
  • Wiring bundles are enclosed in a braided sleeve. It would scatter the majority of the IR pattern, and what does make it back to the receiving camera would probably be jumbled.

None of these are deal-breakers on their own; they’re part of the challenges of building a robot that functions outside of a controlled environment. In addition to those, I’m also concerned about the frame-to-frame inconsistency of depth data. Problematic areas are sometimes visible in one frame and disappear in the next. The noisiness of this information might confuse a robot trying to make sense of its environment with this data. It’s not visible in the screenshot above, but here’s an animated GIF showing a short snippet for illustration:

kinect-looking-at-sawppy-inconsistencies

Xbox 360 Kinect Driver: OpenNI or OpenKinect (freenect)?

The Kinect sensor bar from my Xbox 360 has long been retired from gaming duty. For its second career as robot sensor, I have cut off its proprietary plug and rewired it for computer use. Once I’ve verified the sensor bar is electrically compatible with a computer running Ubuntu, the first order of business was to turn fragile test connections into properly soldered wires protected by heat shrink tube. Here’s my sensor bar with its new standard USB 2.0 connector and a JST-RCY connector for 12 volt power.

xbox 360 kinect with modified plugs 12v usb

With the electrical side settled, attention turns to software. The sensor bar can tell the computer it is a USB device, but we’ll need additional driver software to access all the data it can provide. I chose to start with the Xbox 360 Kinect because of its wider software support, which means I have multiple choices on which software stack to work with.

OpenNI is one option. This open source SDK is still around thanks to Occipital, one of the companies that partnered with PrimeSense. PrimeSense was the company that originally developed the technology behind the Xbox 360 Kinect sensor, but they have since been acquired by Apple and their technology incorporated into the iPhone X. Occipital itself is still in the depth sensor business with their Structure sensor bar, available standalone or incorporated into products like Misty.

OpenKinect is another option. It doesn’t have a clear corporate sponsor like OpenNI, and seems to have its roots in the winner of the Adafruit contest to create an open source Kinect driver. Confusingly, it is also sometimes called freenect or variants thereof. (Its software library is libfreenect, etc.)

Both of these appear to still be receiving maintenance updates, and both have been used in a lot of cool Kinect projects outside of Xbox 360 games, ensuring there will be a body of source code available as reference for using either. Neither is focused on ROS, but people have written ROS drivers for both OpenNI and OpenKinect (freenect). (And there’s even an effort to rationalize across both.)

One advantage of OpenNI is that it provides an abstraction layer for many different depth cameras built on PrimeSense technology, making code more portable across different hardware. This does not, however, include the second generation Xbox One Kinect, as that was built with a different (not PrimeSense) technology.

In contrast, OpenKinect is specific to the Xbox 360 Kinect sensor bar. It provides access to parts beyond the PrimeSense sensor: microphone array, tilt motor, and accelerometer. While this means it doesn’t support the second generation Xbox One Kinect either, there’s a standalone sibling project libfreenect2 for meeting that need.

I don’t foresee using any other PrimeSense-based sensors, so OpenNI’s abstraction doesn’t draw me. The access to other hardware offered by OpenKinect does. Plus I do hope to upgrade to a Xbox One Kinect in the future, so I decided to start my Xbox 360 Kinect experimentation using OpenKinect.

Modify Xbox 360 Kinect for PC Use

I want to get some first hand experience working with depth cameras in a robotics context, and a little research implied the Xbox 360 Kinect sensor bar is the cheapest hardware that also has decent open source software support. So it’s time to dust off the Kinect sensor bar from my Halo 4 Edition Xbox 360.

I was a huge Halo fan and I purchased this Halo 4 console bundle as an upgrade from my first generation Xbox 360. My Halo enthusiasm has since faded and so has the Xbox 360. After I upgraded to Xbox One, I lent out this console (plus all accessories and games) to a friend with young children. Eventually the children lost interest in an antiquated console that didn’t play any of the cool new games and it resumed gathering dust. When I asked if I could reclaim my Kinect sensor bar, I was told to reclaim the whole works. The first accessory to undergo disassembly at SGVTech was the Xbox 360 Racing Steering Wheel. Now it is time for the second accessory: my Kinect sensor bar.

The sensor bar connected to my console via a proprietary connector. Most Xbox 360 accessories are wireless battery-powered devices, but the Kinect sends far more data than normal wireless controllers and requires much more power than rechargeable AA batteries can handle. Thus the proprietary connector is a combination of a 12 volt power supply alongside standard USB 2.0 data at 5 volts. To convert this sensor bar for computer use instead of a Xbox 360, the proprietary connector needs to be replaced by two separate connectors: A standard USB 2.0 plug plus a 12V power supply plug.

Having a functioning Xbox 360 made the task easier. First, by going into the Kinect diagnostics menu, I could verify the Kinect is in working condition before I start cutting things up. Second, after I severed the proprietary plug and splayed out wires in the cable, a multimeter was able to easily determine the wires for 12 volt (tan), 5 volt (red), and ground (black) by detecting the voltages placed on those wires by a running Xbox 360.

That left only the two USB data wires, colored green and white. Thankfully, this appears to be fairly standardized across USB cables. When I cut apart a USB 2.0 cable to use as my new plug, I found the same red, black, green, and white colors on wires. To test the easy thing first, I matched wire colors, kept them from shorting each other with small pieces of tape, and put 12V power on the tan wire using a bench power supply.

xbox 360 kinect on workbench

Since I was not confident in this wiring, I tested the suspect USB connection with my cheap laptop instead of my good one. Fortunately, the color matching worked and the sensor bar enumerated properly. Ubuntu’s dmesg utility lists a Kinect sensor bar as a USB hub with three attached devices:

  1. Xbox NUI Motor: a small motor that can tilt the sensor bar up or down.
  2. Xbox Kinect Audio: on board microphone array.
  3. Xbox NUI Camera: this is our depth-sensing star!

xbox nui camera detected

[ 84.825198] usb 2-3.3: new high-speed USB device number 10 using xhci_hcd
[ 84.931728] usb 2-3.3: New USB device found, idVendor=045e, idProduct=02ae
[ 84.931733] usb 2-3.3: New USB device strings: Mfr=2, Product=1, SerialNumber=3
[ 84.931735] usb 2-3.3: Product: Xbox NUI Camera
[ 84.931737] usb 2-3.3: Manufacturer: Microsoft

ROS In Three Dimensions: Starting With Xbox 360 Kinect

The long-term goal driving my robotics investigations is to build something that has an awareness of its environment and can intelligently plan actions within it. (This is a goal shared by many other members of RSSC as well.) Building Phoebe gave me an introduction to ROS running in two dimensions, and now I have ambition to graduate to three. A robot working in three dimensions needs a sensor that works in three dimensions, so that’s where I’m going to start.

Phoebe started with a 2D laser scanner purchased off eBay that I learned to get up and running in ROS. Similarly, the cheapest 3D sensors that can be put on a ROS robot are repurposed Kinect sensor bars from Xbox game consoles. Even better, since I’ve been an Xbox gamer (more specifically a Halo and Forza gamer) I don’t need to visit eBay; I have my own Kinect to draft into this project. In fact, I have more than one: both the first generation Kinect sensor accessory for the Xbox 360, and the second generation that was released alongside the Xbox One.

xbox 360 kinect and xbox one kinect

The newer Xbox One Kinect is a superior sensor with a wider field of view, higher resolution, and better precision. But that doesn’t necessarily make it the best choice to start off with, because hardware capability is only part of the story.

When the Xbox 360 Kinect was launched, it was a completely novel new device offering depth sensing at a fraction of the price of existing depth sensors. There was a lot of enthusiasm both in the context of video gaming and hacking them to be used outside of Xbox 360 games. Unfortunately, the breathless hype wrote checks that the reality of a low-cost depth camera couldn’t quite cash. By the time Xbox One launched with an updated Kinect, interest had waned and far fewer open source projects aimed to work with a second generation Kinect.

The superior capabilities of the second generation sensor bar also brought downsides: it required more data bandwidth and hence a move to USB 3.0. At the time, the USB 3.0 ecosystem was still maturing, and the new Kinect had problems working with certain USB 3.0 implementations. Even when the data could get into a computer, the sheer amount of it placed more demands on processing code. Coupled with reduced public interest, this meant software support for the second generation Kinect was less robust. A web search found a lot of people who encountered problems trying to get their second generation bar to work.

In the interest of learning the ropes and getting an introduction to the world of 3D sensing, I decided a larger and more stable software base is more interesting than raw hardware capabilities. I’ll use the first generation Xbox 360 Kinect sensor bar to climb the learning curve of building a three-dimensional solution in ROS. Once that is up and running, I can try to tame the more finicky second generation Kinect.
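Part of that learning curve is understanding what the sensor actually reports. Once depth frames are streaming, each pixel arrives as a raw 11-bit disparity value rather than a distance. Here is a minimal sketch of converting those readings to meters, using a first-order fit published by the OpenKinect community; the coefficients are that community's approximation, not this particular sensor's factory calibration.

```python
# Approximate conversion from the Xbox 360 Kinect's raw 11-bit
# disparity values to distance in meters. Coefficients are the
# OpenKinect community's first-order fit -- an approximation,
# not a per-unit factory calibration.
def kinect_raw_to_meters(raw):
    # The linear fit is only meaningful below ~1084, where its
    # denominator crosses zero; 2047 is the sensor's "no reading" value.
    if not 0 <= raw < 1084:
        raise ValueError("raw disparity outside the fit's useful range")
    return 1.0 / (raw * -0.0030711016 + 3.3309495161)

if __name__ == "__main__":
    for raw in (400, 600, 800, 1000):
        print(f"raw={raw:4d} -> {kinect_raw_to_meters(raw):5.2f} m")
```

Larger raw values map to larger distances, and the relationship is strongly nonlinear: depth resolution is much better up close than at the far end of the sensor's range.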

Happy Octopus Eating Taco and Fries

As a short break from larger scale projects, I decided to get some more SMD soldering practice. At Supercon I received these little soldering kits, which were designed to the simple “Shitty Add-On” specification. It's a way for people to get a simple start with PCB projects made with aesthetics in mind as well as functionality, a scene headlined by sophisticated #badgelife creations.

As befits their simple nature, all I had to start with were little zip lock bags of parts. The first hint towards assembly instructions was printed on the circuit board: the text @Michelle4904, which pointed to the Twitter page of Michelle Grau and a Google Doc of assembly instructions. One notable aspect of these kits is that there were no extra parts to replace any that might accidentally fly off into space, which meant I had to be extra careful handling them. Fortunately, the parts were larger than those in my most recent SMD LED project, and while I did drop a few, they were large enough that they did not fly far and I was able to recover them.

I started with the fries. It was slow going at first because I was very afraid of losing parts, but I gradually built up a feel for handling them and things got faster. After a bit of experimentation, I settled on a pattern of:

  1. Tin one pad with a small blob of solder.
  2. Place the SMD component next to the blob, and melt the blob again to connect them.
  3. Solder the other end of SMD component to other pad.

michelle4904 fries in progress

This is the rapid start portion of the learning curve – every LED felt faster and easier than the last. Soon the pack of fries was finished and illuminated. I was a little confused as to why the five LEDs were green; I had expected them to be yellow. Maybe they are seasoned fries with parsley or something?

Tangential note: when I visit Red Robin I like to upgrade to their garlic herbed fries.

michelle4904 fries illuminated

Once the fries were done, I then moved on to the taco which had a denser arrangement of components. It was an appropriate next step up in difficulty.

michelle4904 tacos in progress

Once completed, I had a taco with yellow LEDs for the shell (the same yellow I would have expected in the fries…), a red LED for tomato, a green LED for lettuce, and a white LED for sour cream. It's a fun little project.

Tangential note: Ixtaco Taqueria is my favorite taco place in my neighborhood.

michelle4904 tacos illuminated

The last zip lock bag held a smiling octopus, and it was the easiest one to solder. It appears the original intention was for it to be a 1-to-4 expansion board for shitty add-ons, but if so, the connector genders are backwards.

michelle4904 octopus with connectors

No matter, I'll just solder the taco and fries permanently to my happy octopus's tentacles. To let this assembly of PCBs stand on its own, I soldered on one of the battery holders I designed for my KISS Tindies.

michelle4904 kit combo battery

And here’s a happy octopus enjoying Taco Tuesday and a side of fries.

michelle4904 kit combo illuminated

Strange Failure Of Monoprice Monitor 10734

Under examination at a recent SGVTech meet was a strange failure mode for a big monitor. Monoprice item #10734 is a 30-inch diagonal monitor with 2560×1600 resolution. The display panel is IPS type with an LED backlight – in short, it's a pretty capable monitor. Unfortunately, it has suffered a failure of some sort that makes it pretty useless as a computer monitor: certain subpixels on the screen no longer respond to display input.

Here the monitor is supposed to display black text “What” on a white background. But as we can see, some parts inside the letters that are supposed to be dark are still illuminated, most visibly here in some green subpixels. The misbehaving pixels are not random; they follow a regular pattern, but I'm at a loss as to why this is happening.

monoprice 10734 subpixel failure

And these misbehaving subpixels drift in their values. Over the course of several minutes they shift slightly in response to adjacent pixels. The end result is that anything left on screen for more than a few minutes leaves a ghost behind, its colors remembered by the misbehaving subpixels. It looks like old school CRT burn-in, but takes only a few minutes to create. Unplugging the monitor, waiting a few minutes, then plugging it back in and turning it on does not eliminate the ghost. I need to wait a week before the ghosting fades to an undetectable level. This behavior means something, but I don't know what.

In the hopes that there’s something obviously broken, out came the screwdrivers to take the monitor apart. We were not surprised to find there are minimal components inside the metal case: there is the big panel unit, connected to a small interface board which hosts the power plug and all the video input ports, connected to an even smaller circuit board hosting the user control buttons.

monoprice 10734 backless

The panel is made by LG, model number LM300WQ6(SL)(A1).

monoprice 10734 panel lg lm300wq6 sl a1

A web search found the LG datasheet for this panel. Because it is a 10-bit-per-color panel, and there are so many pixels, its data bandwidth requirements are enormous: four LVDS channels clocking over 70 MHz. There were no visibly damaged parts on its integrated circuit board. On the other end of the LVDS interface cables, there was no visible damage on the interface board either.

monoprice 10734 interface board

Before I opened this up and found the datasheet, I had thought maybe I could generate signals to feed this panel using a PIC or something. But the chips I have are grossly underpowered for talking to a 4-channel LVDS panel at the required speed. Perhaps this could be an FPGA project in the future? For now this misbehaving monitor will sit in a corner gathering dust, waiting either for the time I get around to that project or for the next housecleaning pass, when it departs as electronic recycling – whichever comes first.
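For a sense of scale, the panel's bandwidth demands can be sanity-checked with simple arithmetic. This is my own rough estimate, not the datasheet's timing: the 60 Hz refresh rate is an assumption, and real LVDS clocks run higher than the figure below because blanking intervals between lines and frames are left out.

```python
# Back-of-envelope check of why this panel needs four fast LVDS channels.
# The 60 Hz refresh rate is assumed; blanking overhead is ignored, which
# is why the real per-channel clock (70+ MHz per the datasheet) is higher.
width, height = 2560, 1600
bits_per_pixel = 3 * 10                        # 10 bits each for R, G, B

active_pixel_rate = width * height * 60        # active pixels per second
per_channel_mhz = active_pixel_rate / 4 / 1e6  # pixels split across 4 LVDS channels

print(f"active pixel rate: {active_pixel_rate / 1e6:.2f} Mpixel/s")
print(f"per LVDS channel:  {per_channel_mhz:.2f} MHz before blanking")
print(f"payload bandwidth: {active_pixel_rate * bits_per_pixel / 1e9:.2f} Gbit/s")
```

Even before blanking overhead, that's over 7 Gbit/s of pixel data – far beyond what any hobbyist microcontroller can bit-bang.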

Freeform Fun with Salvaged SMD LEDs

There’s a freeform circuit contest going on at Hackaday right now. I’m not eligible to enter, but I can still have fun on my own. I haven’t had any experience creating freeform circuit sculptures and now is as good as time as any to play around for a bit.

Where should I start? The sensible thing would be to start simple with a few large through-hole light-emitting diodes (LEDs), but I didn't do that. I decided to start higher up on the difficulty scale because of a few other events. The first is that I learned to salvage surface mount devices (SMD) from circuit boards with a cheap hot air gun originally designed for paint stripping. I had pulled a few SMD LEDs from a retired landline telephone, and they were sitting in a jar.

The second was the arrival of a fine soldering iron tip. I had ordered it in anticipation of trying to repair a damaged ESP32 module. I thought I should practice using the new tip on something expendable before tackling an actual project, and a freeform exercise with salvaged SMD LEDs seemed like a great opportunity to do so.

As a beginner at free form soldering, and a beginner at SMD soldering, the results were predictably terrible. I will win no prizes for fine workmanship here! But everyone has to start somewhere, and there will be many opportunities for practice in the future.

7 survivors light

It’s just a simple circuit with seven LEDs in parallel, on the lead of their shared current-limiting resistor. Not visible here is another aspect of learning to work with surface mount components: they are really, really small. I had actually salvaged nine LEDs. Two of the nine LEDs have flown off somewhere in my workshop and didn’t make it to the final product.

9 salvaged LEDs

For comparison, here is the “I ❤ SMD” soldering kit that was my gentle introduction to surface mount soldering. The LEDs in that kit were significantly larger and easier to manipulate than those I salvaged from an obsolete landline telephone.

SMD intro size comparison

Moral of the story: for future projects practicing SMD assembly, be sure to have spare components on hand to replace those that fly off into space.

Sony KP-53S35 Signal Board “A” Components

Here are the prizes earned from an afternoon spent desoldering parts from a Sony KP-53S35's signal board “A”.

Signal board A top before

The most visually striking components were the shiny metal boxes in the corner. This is where the signal from the TV antenna enters the system. The RF F-type connectors on the back panel are connected to these modules via RCA-type connectors. Since this TV tuned in analog TV broadcast signals that have long since been retired, I doubt these parts are good for any functional purpose anymore. But they are still shiny and likely to end up in a nonfunctional sculpture project.

Near these modules on the signal board was circuit board “P”. It was the only module installed as a plug-in card, which caught our eye. Why would Sony design an easily removable module? There were two candidate explanations: (1) easy replacement because it was expected to fail frequently, or (2) easy replacement because it is something meant to be swapped out. Since the module worked flawlessly for 21 years, it's probably the latter. A web search for the two main ICs on board found that the Philips TDA8315T is an NTSC decoder, which confirmed hypothesis #2: this “P” board was designed to be easily swapped so the TV could support other broadcast standards.

The RCA jacks are simple and quite likely to find use in another project.

Miscellaneous ICs and other modules were removed mostly as practice. I may look up their identifiers to see if anything is useful, but some of the parts (like the chips with the Sony logo on top) are going to be proprietary and probably not worth the effort to figure out.

The largest surface mount chip – which I used as hot air SMD removal practice – was labeled BH3856FS and is an audio processing chip handling volume and tone control. Looking at the flip side of the circuit board, we can see it has a large supporting cast of components clustered near it. It might be fun to see if I can power it up for a simple “Hello World” circuit, but returning it to full operation depends on the next item:

What’s far more interesting is nearby: the TDA7262 is a stereo audio amplifier with 20W per channel. This might be powerful enough to drive deflection coils to create Lissajous curves. The possibility was enough to make me spent the time and effort to remove its heat sinks gently and also recover all nearby components that might support it. I think it would be a lot of fun to get this guy back up and running in a CRT Lissajous curve project. Either with or without its former partner, the BH3856FS audio chip above.

Lissajous Curve Is An Ideal CRT Learning Project

Lissajous curve with shorter exposure

It was satisfying to see our CRT test rig showing Lissajous curves. [Emily] and I both contributed components for this cobbled-together contraption, drawing from our respective bins of parts. While the curves have their own beauty, there are also good technical reasons why this makes such a great learning project for working with salvaged cathode ray tubes – mainly the things we don't have to do:

Focus: We weren’t able to focus our beam in our first work session. We couldn’t count on sharp focus so we appreciate that Lissajous curves still look good when blurry. Thankfully, we did manage better focus for better pictures, but it was not required.

Modulation: To create a raster image, we must have control over beam brightness as we scan the screen. Even if doing arcade vector graphics, we need to be able to turn the beam off when moving from one shape to another. In contrast Lissajous curves are happy with an always-on dot of constant brightness.

Deflection: To create a raster image, we'd need a high level of control over the tube's deflection coils: a constant horizontal sweep across the screen as well as vertical scanning. HSYNC, VSYNC, all that good stuff. In contrast, driving deflection coils for Lissajous curves requires far gentler and smoother changes in coil current.

Geometry: Unlike modern flat panel displays, CRTs can have geometry distortions: pincushion, trapezoidal, tilt – all annoying to adjust and correct in order to deliver a good raster image. Fortunately, a Lissajous curve suffering from geometry distortions still looks pretty good, allowing us to ignore the issue for the time being.
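The curves themselves come from feeding the two deflection axes sine waves at different frequencies: x = sin(a·t + δ), y = sin(b·t). When a:b is a ratio of small integers the curve closes on itself into a stable figure, which is why one signal source per axis is all it takes. A quick sketch of the math – the 3:2 ratio and phase offset here are arbitrary example values:

```python
import math

def lissajous(a, b, delta, n=1000):
    """Sample one full period of x = sin(a*t + delta), y = sin(b*t),
    for integer frequency ratios a:b (the period is then 2*pi)."""
    return [
        (math.sin(a * t + delta), math.sin(b * t))
        for t in (2 * math.pi * i / n for i in range(n + 1))
    ]

# A classic 3:2 figure with a 90 degree phase offset.
points = lissajous(a=3, b=2, delta=math.pi / 2)
```

Sent to an XY display (or a pair of deflection coils), these points trace a closed figure; sweeping `delta` slowly makes the figure appear to tumble, and irrational frequency ratios never close, filling the screen instead.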

There is a long way to go before we know enough to drive these tubes at their maximum potential. For one thing, this one is running at a tiny fraction of its maximum brightness. The tube's previous life in a rear projection television was a hard one, visible in the above picture as a trapezoid burned into its phosphor layer. Driven hard enough to require liquid cooling, it would be so bright as to be painful to look at – and that's with the beam scanning across the entire screen. A Lissajous curve covers only a small fraction of that screen area, so concentrating a full-power beam in such a small area would raise concerns of phosphor damage. As pretty as Lissajous curves are, I don't want them permanently burned into the phosphor. But we don't have to worry about that until we get beam power figured out.

CRT Test Rig Produced Lissajous Curves

Last night’s CRT exploration adventures with [Emily] produced beautiful Lissajous curves on-screen that looked great to the eye but were a challenge to capture. (Cameras in general have a hard time getting proper focus and exposure for CRT phosphors.) Here’s a picture taken with exposure time of 1/200th of a second, showing phosphor brightness decay in a simple curve.

Lissajous curve with shorter exposure

Due to this brightness decay, more complex curves required a longer exposure time to capture. This picture was taken with a 1/50th second exposure but only captured about half of the curve.

Lissajous curve with longer exposure

Our test setup was a jury-rigged nest of wires – not at all portable and definitely unsafe for public consumption. It required a space where everyone present is a mature adult who understands that high voltage parts are no joke and stays clear. (And more pragmatically, should an accident occur, there will be other people present to call for immediate medical attention.)

CRT Test Rig angled view

Our beam power section consisted of two subsystems. The first is a battery that supplies low power (8 volts and less than 1 watt) to heat the filament. Using a battery keeps it electrically isolated from everything else. The second subsystem supplies high voltage to drive the CRT, and we keep a respectful distance from these parts when powered on.

CRT Test Rig beam power system

Connected to the tail end of the tube is the connector we freed from its original circuit board, wired with a simplified version of what was on that board. Several pins were connected to ground, some directly and others via resistors. The two wires disappearing off the top of the picture are for the heated filament. Two wires for experimentation are brought out and left unconnected in this picture: the red connects to the “screen grid” (which we don't understand yet) and the black connects to an IC which we also don't understand yet.

This is a rough exploratory circuit with known flaws – not just the two wires that we haven't yet connected to anything, but also the fact that when we connected its ground to the transformer's ground, the tube flared bright for a fraction of a second before going dark. We only got a dot when connecting the transformer ground to the filament heater's negative terminal, which was unexpected and really just tells us we still have a lot to learn. On the upside, something in this circuit allowed our “focus” wire to do its job this time, unlike in our previous session.

CRT Test Rig tube wiring

But that’s to be figured out later. Tonight’s entertainment is our beam control section, which sits safely away from the high voltage bits and we can play with these while our tube is running.

CRT Test Rig beam control system

Controlling vertical deflection is an old Tektronix function generator. This is a proper piece of laboratory equipment producing precise and consistent signals. However, its maximum voltage output of 20V is not enough to give us full vertical deflection. And since we only had one, we needed something else to control horizontal deflection.

That “something else” was a hack. The big black box is a “300W” stereo amplifier, procured from the local thrift store for $15. Designed to drive speaker coils, tonight it drove a CRT yoke's horizontal deflection coil instead. It was more than up to the task of providing full deflection – in fact, we had to turn the volume down to almost minimum for tonight's experiments. A cell phone running a simple tone generator app provided the input signal. Not being a precision laboratory instrument, the signal it generated was occasionally jittery, but it was enough for us to have fun producing Lissajous curves!

Old TV Picture Tubes Light Up Again

When we tore apart an old rear projection television a few weeks ago, I did not expect those picture tubes would ever light up again. We took everything apart quickly within a narrow time window, so we didn't have time to carefully keep the electronics driving those CRTs intact. Those electronics are in pieces now, and in that writeup I said the tubes were beautiful glass work that I hoped to display as such in the future.

Well, there has been a change in plans.

On the same day as that teardown, [Emily] was gifted an old non-functioning camcorder. She has since taken that apart, too. The first component to see reuse was its tiny black and white viewfinder CRT. And as she dug deeper into the world of old CRTs, [Emily] came across this YouTube video by [Keystone Science] going over the basics of a cathode-ray tube and shared it with me. We were inspired to try lighting these tubes up again (without their original electronics) at yesterday’s SGVTech meetup.

The first step was to straighten out the pins at the rear end of our salvaged CRTs – they got a bit banged up in transport. A quick web search failed to find details on how to drive these tubes but probing with a meter gave us a few candidates for exploration.

Probing CRT pins

  • A pair of wires had around 8 ohms of resistance, the highest of all wire pairs that gave a reading. This is most likely the heating filament.
  • A few other wire pairs gave readings we didn't understand, but several of them related to a common pin. That common pin was thus our best candidate for the cathode.

We knew the anode is connected to the side of the CRT, so we had all the basics necessary to put a blurry dot on screen. A bench power supply was connected to the eight ohm load, and a few seconds later we could see a dull glow. Then a high voltage transformer was powered up across our anode and candidate cathode.

RPTV picture tube and transformer

After a bit of debugging, we have our blurry green dot! We proceeded to power up the other two tubes, which gave us a blue dot and a red dot. The colors look good to us, but apparently they’re not quite the right colors because during our TV disassembly we saw some color filters on the red and green tubes. (The blue tube had no color filter.)

During the course of the evening, the quality of our dot varied. Most of the time it was a blur approximately 5mm in diameter. On one occasion it bloomed out to 3cm in diameter and we didn't know what caused it. Likewise, we had a pinpoint bright dot for a few seconds that didn't correlate to any activity we could recall. As far as driving a CRT goes, we know enough to be respectful of the high voltage involved, but obviously we still have a lot more to learn. It's just as well we don't know how to focus the dot: in the absence of sweep, a constant bright focused dot would likely burn a hole in the center of the screen's phosphor layer.

A first step towards moving the beam was to put some power into the magnetic deflection yoke. Its coils of wire were hooked up to a function generator, and we were able to get movement along one axis. The generator's maximum output of +/- 20V could only deflect across a small fraction of the screen, but it was something.

We didn’t have a second function generator on hand, but we got movement along another axis using magnets. They were taped to a shaft that was then put into a cordless drill. Holding the spinning drill near the control yoke induced movement along the other axis. Combined with the function generator, it allowed us to make a few curves on screen.

RPTV Red curves

Tinkering projects with visual results are always rewarding. With this success, there might yet be life ahead for these tubes as something other than pretty glass. A search found a hobbyist’s project to drive a CRT for an XY vector arcade monitor. That project page also linked to an excellent description of vector CRTs as used in old Atari arcade machines. Lots to learn!

Fun With Tiny CRT

When we took apart the big old rear projection television, the same family also had an old VHS camcorder from the 1980s slated for disposal. [mle_makes] took it off their hands and merrily started taking it apart for fun components. The first component to be brought to our weekly SGVHAK meetup was the viewfinder's tiny CRT. I brought the box of Sony KP-53S35 salvaged RPTV parts on the same day so we could place the two picture tubes side by side with a ruler between them.

Tiny CRT 1 - Side by side with RPTV tube

While the big tube had 21 years of TV watching burned into its surface, the little CRT looks to be in good shape. (The RPTV tube was also likely driven far harder to generate the necessary brightness.) And since the little tube was part of a battery-powered device (12 volt lead-acid!), it flickered to life with just a DC power supply.

Viewed from the top, we are reminded how much space modern LCDs save us. Both of these tubes are far longer than their pictures' diagonal sizes.

Tiny CRT 2 - Length comparison with RPTV tube

The little tube’s image was remarkably crisp and bright when viewed in person, a fact extremely difficult to capture in a photograph. The 525 scan lines of a NTSC signal meant this little tube was pushing 600 dpi of resolution!

Tiny CRT 3 - tape measure
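That resolution figure is worth a quick sanity check. The arithmetic below is mine, not from any datasheet: the visible line count is the usual NTSC approximation, and the raster height is an estimate for a viewfinder tube of this size.

```python
# Rough check of the "600 dpi" claim for a tiny viewfinder CRT.
# Both values below are estimates, not measured numbers.
visible_lines = 480      # NTSC scan lines remaining after vertical blanking
raster_height_in = 0.8   # assumed visible picture height, in inches

lines_per_inch = visible_lines / raster_height_in
print(f"~{lines_per_inch:.0f} lines per inch of vertical resolution")
```

Under these assumptions the vertical pitch works out to roughly 600 lines per inch, consistent with the claim; a taller raster would lower the figure proportionally.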

All of these images on the tube were generated from an old video conference camera, which had a composite video output port that was wired to the tube’s control board. Here’s one of the test setups, using a scrap piece of paper and a simple smiley face drawn on it with a Sharpie marker.

Tiny CRT 4 - camera test setup

The best picture of the tube was taken when I narrowed the aperture for greater depth of field, so the camera was free to focus on something other than the actual picture and still get halfway decent results. (I think it is focused on the edges of the glass here.) An admirable amount of paper texture was conveyed by this tube.

Tiny CRT 5 - camera test image

A few weeks after this initial tiny CRT demo, it became the centerpiece of this Freeform Mini CRT Sculpture on instructables.com.