Yesterday I documented my quest to track down a button I saw on a prop used in the movie Sneakers, which premiered 28 years ago. I eventually learned they were Omron B3J-2100s, and they are still available for purchase from Digi-Key. Given the age of the movie, plus the fact this little detail was not important to the plot, plus the fact the rows of buttons were on screen for only roughly five seconds, I expected my blog post to quickly disappear into the footnotes of the internet. Like everything else on this blog it was just a note from my personal explorations. Maybe it would receive an occasional visitor, here to learn how to get these buttons for themselves.
Judging by web traffic, I was quite mistaken. I knew that the movie made impressions on others like myself, but I underestimated how many of us were out there. And even more surprisingly, these buttons made an impression on people as well. I had no idea there were so many button connoisseurs out there whose appreciation for a switch goes beyond whether it can reliably close and open a circuit. My blog post and associated Tweet were picked up by the Adafruit blog, and someone even submitted it to Hacker News where it was as high as #13 for a brief time. Amazing.
With its popularity came feedback from many others. I found the prop in the movie was a Sequential Circuits Prophet 2002, but several people brought up the Roland TR-808 and there was also a mention of the Oberheim OB-X. They all used similar looking buttons for similar purposes: select options, with an associated LED to indicate the active item. However, despite the similarity (and the TR-808 uses color to great effect) they are not the same Omron B3J buttons used on a Prophet 2002.
I started posting a few corrections, but then I stopped. I realized that people were just sharing their own fond memories and there is no particular reason I have to point out they weren’t Omron B3Js. If someone is fond of a stylish button, what’s the point of taking away their joy? For the sake of pedantic correctness? Nah, we’re all connoisseurs of our own favorites. They have theirs, and I have Omron B3J.
Thanks to [garrettlarson] on Hacker News, we have a link to a YouTube clip where we can see the Prophet 2002 and its row of Omron B3J-2100s. (Go to ~1:20 if embedded time offset isn’t working.)
I’m a fan of physical, tactile buttons that provide visual feedback. I realize the current trend favors capacitive touch, but I love individual buttons I can find by feel. And one of the best looking buttons I’ve seen was in the 1992 movie Sneakers, when the blind character Whistler used a Braille-labeled device to add a sound effect representing the “thump” of a car going over the seams of a concrete bridge.
They were only on screen for a few seconds, but I was enamored with the black buttons, each with a corresponding red LED. The aesthetics reminded me of 2001, like the eye of HAL in a mini monolith. Or maybe Darth Vader, if the Sith lord were a button. When I first watched the movie many years ago, I thought they were neat and left it at that. But in recent years I’ve started building electronics projects. So when I rewatched the movie recently and saw them again, I decided to research these buttons.
The first step was to determine whether they were even a distinct component. All we saw was the front control panel of an unknown device. It was possible the buttons and LEDs were unrelated components sitting adjacent to each other on the circuit board, visually tied together only by pieces of plastic custom-made for the device. So the real first step was to identify that device. There was a label at the bottom of the panel below Whistler’s hand, but due to the shallow depth of field I could only make out the end as “… 2002 digital sampler”. Time to hit the internet and see if anyone recognized the machine.
My first stop was the Trivia section of the movie’s page on the Internet Movie Database, where people contribute random and minute pieces of information. Firearms enthusiasts can usually be counted on to name specific guns used in a film, and automotive enthusiasts frequently contribute the make and model of cars as well.
Sadly, the audio electronics enthusiasts have not seen fit to contribute to this page, so I went elsewhere on the internet trying various keyword combinations of “Sneakers”, “Whistler”, “sampler”, etc. The answer was found in a comment on a Hackaday post about the movie. I’ve complained a lot about the general quality of internet comments, but this time one person’s nitpicking correction was my rare nugget of gold.
Whistler’s device is a Sequential Circuits Prophet 2002 Digital Sampler rack. As befitting the movie character, the sampler’s control panel had Braille labels covering the default text. But otherwise it appears relatively unmodified for the movie. I wish the pictures were higher resolution, but their arrangement strongly implies the button and LED are part of a single subcomponent. The strongest evidence came from the presence of four vertical axis buttons, rotated 90 degrees from the rest.
Aside: On the far right of the control panel, we can see a sign of the era, a 3.5″ floppy drive for data storage.
Encouraged by this find, I started searching for Prophet 2002 buttons. I quickly found an eBay community offering replacement parts for Sequential Circuits products including these buttons. What’s intriguing to me is that these are sold in “New” condition, not surplus or salvaged from old units. I’m optimistically interpreting this as a hint these buttons might still be in production, decades after the Prophet 2002 was released in 1985.
Thanks to those eBay listings, I have seen a picture of the component by itself and it is exactly what I hoped it would be: the button’s exterior surface, the electric switch itself, and the LED are integrated into a single through-hole component. Given the tantalizing possibility it is still in active production and something I can buy for my own projects, I went next to electronics supplier Digi-Key.
Digi-Key carries 305,212 components under its “Switches” section, not practical for individual manual review. Fortunately there are subsections and I first tried “Tactile Switches” (5721 items) because those buttons look like they’d give a good tactile response. In the movie we also heard a satisfying click when the button was pressed, but I don’t know if that was added later by the film’s sound mixer.
Within the “Tactile Switches” section, I aggressively filtered by the most optimistic wish: that they are active and in stock.
The best candidate I found there was a more modern and refined variant of the same concept. The button is sculpted, and the illuminated portion sits flush with the surroundings. This would be a great choice if I were updating the design, but I am chasing a specific aesthetic, and this switch does not look like a monolith or Vader.
So that wasn’t too bad, but I wasn’t ready to stop. Alongside “Tactile Switches” are several other subsections worth investigating. I next went to “Pushbutton Switches” (175,722 items) and applied the following filters, again starting with the optimistic wish that they are active and in stock:
Part Status: Active
Stocking Options: In Stock
Type: Keyswitch, Illuminated
Illumination Type, Color: LED, Red
That filter cut the number of possibilities from 175,722 down to 21, which felt like an overly aggressive shot in the dark, and I expected I would have to adjust the search. But it wouldn’t hurt to take a quick look over those 21, and my eyes widened when I saw the list. Most of the 21 results had a very similar aesthetic and would make an acceptable substitute, but that would not be necessary, because I saw the Omron B3J-2100.
Yes, I’ve hit the jackpot! Even if that isn’t precisely the correct replacement for a Prophet 2002 sampler, it has the right aesthetics: a dark angular block with the round LED poking out. But now that I’ve found the component, I can perform web searches with its name to confirm that others have also decided Omron B3J is the correct replacement.
Omron’s B3J datasheet showed a list of models, where we can see variations on this design. The button is available in multiple colors, including this black unit and the blue also used by the Prophet 2002. The number and color of LEDs add to the possible combinations, from no LEDs (a few blue examples on a Prophet 2002 have no lights) to two lights in combinations of red, green, or yellow.
Sure, these switches are more expensive than the lowest-bidder options on Amazon. But the premium is a small price to pay when I’m chasing this specific aesthetic. When I want the look that started me on this little research project, only the Omron B3J-2100 will do. And yeah, I’m going to call them “Whistler buttons”.
Like most Kickstarters, the product description is written to make it sound like a fantastic dream come true. The difference between this and every other Kickstarter is that it is describing my dream of an affordable robot vision sensor coming true.
The Kickstarter is launching two related products. The first is OAK-1, a single camera backed by hardware acceleration for computer vision algorithms. This sounds like a supercharged competitor to machine vision cameras like the JeVois and OpenMV. However, it is less relevant to a mobile autonomous robot than its stablemate, the OAK-D.
Armed with two cameras for stereoscopic vision plus a third for full color high resolution image capture, the OAK-D promises a tremendous amount of capability for a relatively affordable $149 (at least for the current batch of backers). Everything from relatively straightforward stereo distance calculations to more sophisticated inferences (like image segmentation) aided by that distance information.
Relative to the $99 Google AIY Vision kit, the OAK-D has far more promise for helping a robot understand the structure of its environment. I hope it ships and delivers on all its promises, because then an OAK-D would become the camera of choice for autonomous robot projects, hands down. But even if not, it is still a way to capture stereo footage for calculation elsewhere, and only moderately overpriced for a three-camera peripheral. Or at least, that’s how I justified backing an OAK-D for my own experiments. The project has easily surpassed its funding goals, so now I have to wait and see if the team can deliver the product by December 2020 as promised.
As of this writing, Ubuntu 18 is not officially supported for Crouton. It’s not explicitly forbidden, but it does come with a warning: “May work with some effort.” I didn’t know exactly what the problem might be, but given how easy it is to erase and restart on a Chromebook I decided to try it and see what happens.
It failed with a hash sum mismatch during download. This wasn’t the kind of failure I expected from an unsupported build; a hash sum error during download seems more like a flawed or compromised download server. I didn’t understand enough about the underlying infrastructure to know what went wrong, never mind fixing it. So in an attempt to tackle a smaller problem with a smaller surface area, I backed off to the minimalist “cli-extra” install of Bionic, which skips the graphical user interface components. This path succeeded without errors, and I now have a command line interface that reported itself to be Ubuntu 18 Bionic.
As a quick test to see if hardware is visible to software running inside this environment, I plugged in a USB to serial adapter. I was happy to see dmesg reported the device was visible and accessible via /dev/ttyUSB0. Curiously, the device’s group owner showed up as serial instead of the usual dialout I see on Ubuntu installations.
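Checking that from a script is trivial; here is a minimal sketch of the kind of check I ran, assuming the adapter enumerated as /dev/ttyUSB0:

```python
import grp
import os

# Report which group owns the serial device node. On stock Ubuntu this is
# usually "dialout"; inside this Crouton chroot it reported "serial".
device = "/dev/ttyUSB0"
group_id = os.stat(device).st_gid
print(device, "belongs to group", grp.getgrgid(group_id).gr_name)
```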
A visible serial peripheral was promising enough for me to proceed and install ROS Melodic. I thought I’d try installation with Python 3 as the Python executable, but that went awry. I then repeated installation with the default Python 2. Since I have no GUI, I installed the ros-melodic-ros-base package. Its installation completed with no errors, allowing me to poke around and see how ROS works in this environment.
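As a sanity check that the installation was actually usable and not just installed, I like running roscore in one terminal and a trivial publisher node in another. This is just a minimal sketch of that kind of smoke test, using nothing beyond the stock rospy and std_msgs that come with ros-melodic-ros-base:

```python
#!/usr/bin/env python
# Minimal smoke test for a fresh ROS Melodic install: publish a string
# once per second. Run `roscore` in another terminal first, then
# `rostopic echo /chatter` to confirm messages are flowing.
import rospy
from std_msgs.msg import String

rospy.init_node("crouton_smoke_test")
pub = rospy.Publisher("chatter", String, queue_size=10)
rate = rospy.Rate(1)  # 1 Hz

while not rospy.is_shutdown():
    pub.publish(String(data="hello from Ubuntu 18 under Crouton"))
    rate.sleep()
```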
While I was at Costco for grocery shopping and checking out rechargeable batteries, I walked through the electronics section. For certain items, the actual merchandise is not available in the shopper-accessible warehouse. Instead the warehouse pallets hold sheets of cardboard that shoppers take to the cashier. Once paid, the receipt is shown to an attendant at a secure caged area who delivers the actual merchandise.
Familiar with this system, I was not surprised to see pallets stacked full of cardboard sheets in the camera section, and didn’t think much of it until my peripheral vision reported unexpected motion. GoPro camera packaging always advertises beautiful people having amazing adventures, but one of these was moving. Cardboard doesn’t do that.
Stopping to investigate, I found one of the cardboard sheets had been modified. A rectangular hole was cut, and a video-playing LCD screen complete with associated electronics was inserted. A USB flash drive presumably held the GoPro promotional video, and that was the extent of the modification. There was no rear enclosure, so it was easy for me to take a picture for further research once I returned home.
Given the information visible, I searched for Bluefin Technology “Ad Player” Model 20-3000-1232. This led to the manufacturer’s website and some minimal specifications. While the product label clearly marked the device as made in China, the web site lists an office in Georgia that I presume is their USA distributor. So I was surprised that I couldn’t seem to find this module for purchase online; the only units I found for sale were secondhand on eBay. Most surprisingly, typing the model number into Alibaba and AliExpress also came up empty! I infer this to mean the company only sells to other businesses and there’s no retail sales channel.
I had thought this device would make a promising platform for hacks, depending on price. A secondhand eBay Buy-It-Now price of $70 is not terribly promising; I had been hoping for something closer to $30. But until I find a retail source or decide to buy in bulk directly from the manufacturer, none of that matters.
Today I learned brushless DC (BLDC) motor controllers might tailor their motor start up procedure for their designed use case. This is notable because depending on the specialization, it might make them unsuitable for repurposing to other projects. This is not something I had experienced as my own projects have used either stepper motors, brushed DC motors, or self-contained modules like RC hobby servo motors. But another local maker tried to repurpose some brushless motors for a project, and made a discovery worthy of writing down for future reference.
The motors were sold as electric skateboard motors, similar but not identical to this Amazon item. (*) The rubber wheel was removed, and the motor mounted inside a 3D printed gearbox in a similar manner to the brushed DC motors inside SGVHAK rover wheels. The resulting assembly worked well enough on a workbench when driven by the controller module that came with the motor. But when placed under load, the motor was unable to start from standstill. It was stuck in an endless loop of try, fail, wait, repeat. We had to give the mechanism a push and start it moving before the skateboard motor controller could take over.
Unsatisfied with this behavior, the project moved on to a dedicated brushless motor control chip purchased from Digi-Key, with a circuit board designed around it. This custom BLDC controller module replaced the default unit. When starting under load, it would twitch for a few seconds, then give up and stop. It was only able to run the motor in open air or after a push. So while the actual behavior was different, for practical purposes the two controllers were equally useless for the project.
If it were just one controller, we could blame a faulty unit. But two completely different controllers exhibiting similar behavior in different ways tells us something else is going on. After some investigation, the conclusion is that both controllers are behaving by design. Neither controller was capable of starting a loaded brushless motor from standstill, because neither was intended to.
The first controller was tailored for electric skateboards. It does not need to be able to start moving a heavy load from standstill, because skateboard riders usually start off with a kick as they engage their electric throttle. In fact, its inability to move until the skateboard is already moving can be argued as a safety measure to ensure a board can’t take off unexpectedly.
The second controller, after some digging, was discovered to be designed for fans. Unsurprising, then, that it was able to start the motor spinning in air. And again the inability to start under load might even be a safety measure: an air moving fan encountering resistance on startup indicates an obstruction that must be removed.
While instructive, learning this lesson has put the project no closer to a solution. Motor start up behavior isn’t something typically stated up front when shopping for BLDC controllers, as seen in this Amazon “brushless motor controller” query result. (*) More research is required.
But at least we now know it is a factor.
(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.
After I managed to destroy a stepper motor during experimentation, Emily graciously donated another one to the cause. Hooking it up to my A4988 test board, I could tell this optical drive carriage had more power than the one I salvaged from a laptop drive. At least this one could win a fight against gravity. However, I was dismayed to find it is still quite weak in absolute terms, and this time I’m wary of cranking up the power for fear of destroying another motor.
So in order to set up something to learn how to wire up a homing switch for Grbl, I need to find switches that take less force to activate than the switches I already have on hand. Where might I find tiny switches that take tiny force to activate? The recently disassembled laptop optical drive!
Its control board had a few tiny surface mount switches. They were connected to small mechanical linkages throughout the drive so this control board can determine various states of the eject mechanism and such. There were a total of four switches and I put them under a heat gun in an effort to remove them.
This was a tricky procedure, as I had to melt the solder without melting the plastic. I put too much heat into two of the switches and destroyed them in the process. Fortunately, two of them were removed relatively undamaged. I put one of them under a meter to check for continuity, and it appeared to still work as a switch.
Once I extracted the optical assembly carriage from a Panasonic UJ-867 optical drive, the next step was to interface it with a Pololu A4988 driver board. And just as with the previous optical drive stepper motor, there are four visible pins indicating a bipolar motor suitable for control via the A4988. However, this motor is far smaller, befitting a laptop computer component.
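The nice thing about the A4988 is that the logic side is dead simple: hold a direction pin, pulse a step pin, and the driver handles the coil sequencing. Here is a rough sketch of that kind of test program, assuming a Raspberry Pi with hypothetical GPIO pin assignments (the delay values are arbitrary, and my actual test setup may differ):

```python
import time
import RPi.GPIO as GPIO

STEP_PIN = 20  # hypothetical BCM pin numbers; adjust to actual wiring
DIR_PIN = 21

GPIO.setmode(GPIO.BCM)
GPIO.setup(STEP_PIN, GPIO.OUT)
GPIO.setup(DIR_PIN, GPIO.OUT)

try:
    GPIO.output(DIR_PIN, GPIO.HIGH)      # pick a direction of travel
    for _ in range(200):                 # each pulse advances one (micro)step
        GPIO.output(STEP_PIN, GPIO.HIGH)
        time.sleep(0.001)
        GPIO.output(STEP_PIN, GPIO.LOW)
        time.sleep(0.001)
finally:
    GPIO.cleanup()
```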
The existing motor control cable actually passed through the spindle motor, meaning there was no convenient place to solder new wires on the flexible connector. So the cable was removed and new wires soldered in its place.
Given the fine pitch of these pins it was very difficult to avoid solder bridges. But it appeared to run standalone, so I reinstalled it into the carriage, where it still ran, but very weakly. Hardly any power at all. When I tilted it up so the axis of travel was vertical, the carriage couldn’t win its fight against gravity. Since the job is only to move an optical assembly, I didn’t expect these carriages to exert a great deal of force. But I’ve seen vertically mounted slot-loading optical drives, so I thought it should at least be able to fight against gravity.
A Dell laptop charger delivers 19.2V. I’m not sure how many volts this motor was intended to run at, but 12V seemed reasonable. Then I increased current beyond the 50mA of the previous motor. Increasing both voltage and amperage seemed to deliver more power, but it remained too weak to overcome gravity.
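For the record, the current limit on an A4988 carrier is set by its reference voltage trimpot, and the datasheet relationship is I_max = Vref / (8 × Rsense). A small helper to keep the arithmetic straight; the sense resistor value is an assumption that varies between board revisions, so check yours:

```python
def a4988_current_limit(vref_volts, r_sense_ohms=0.068):
    """Approximate A4988 coil current limit in amps.
    From the A4988 datasheet: I_max = Vref / (8 * Rsense).
    Pololu carriers have used 0.05 or 0.068 ohm sense resistors,
    so verify which your board has before trusting the number."""
    return vref_volts / (8.0 * r_sense_ohms)

print(a4988_current_limit(0.2))  # roughly 0.37 A with 0.068 ohm sense resistors
```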
As I was tilting the metal carriage assembly in my hand, I noticed it was warming up. Oh no! The motor! I checked the temperature with my finger, which was a mistake, as it was hot enough to be painful to human skin. I shut down my test program but it was too late; the carriage never moved again.
Lessons learned: don’t get too overzealous with power and check temperature more frequently.
With my RGB-XYZ 3D sweep test program, I’ve verified my LED helix is fully set up with a Pixelblaze controller programmed with its geometry wound around a 3D printed chassis. I have a blank canvas – what shall I create on it? A Pixelblaze by itself is capable of generating some pretty amazing patterns, but by default it has no sense of the world around it. It can only run a programmed sequence like my sweep test. I could focus on writing patterns for spectacular LED light shows, but I decided to dig deeper for sensor-reactive patterns.
There are a few provisions on board for analog and digital inputs, so patterns could react to buttons or knobs. Going beyond such simple input is the Sensor Expansion Board. It is an optional Pixelblaze add-on board which provides the following:
A microphone specifically designed to continue functioning in loud environments
An ambient light level sensor
5 additional analog inputs
A Pixelblaze fresh out of the package includes a few sound-reactive patterns that work with the microphone. They are fun to play with, but that ground has been covered. Seeking fresh under-explored territory and an opportunity to write something useful for future users, I looked at the other sensors available and was drawn to the accelerometer. With it, I could write patterns that react to direction of gravity relative to the sensor board. This should be interesting.
The sensor board is fully documented on Github, which includes a description of the protocol used to send data to a Pixelblaze. Or actually to any other microcontroller capable of decoding 3.3 volt serial data at 115200 baud, which should be all of them! In my case I’ll be using it with my Pixelblaze, and the first lesson here is that we only really need 3 pins out of its 7-pin expansion header: 3.3V power and ground obviously, and since the protocol is unidirectional, only one of the two serial transmit pins is used by the sensor board. The remaining pins are pass-throughs available for other purposes. I’ll explore those possibilities later; for now it’s time to get set up to experiment with the accelerometer.
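Since the data stream is one-way, a quick way to confirm the board is alive is to just watch the serial port from a PC before involving the Pixelblaze at all. A minimal sketch with pyserial; the frame header string below is my assumption for illustration, so check the actual framing and field layout against the GitHub documentation before parsing anything:

```python
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # wherever the sensor board's TX line is connected
HEADER = b"SB1.0"        # assumed frame header -- verify against the docs

with serial.Serial(PORT, 115200, timeout=1) as sensor:
    buffer = b""
    while True:
        buffer += sensor.read(256)
        index = buffer.find(HEADER)
        if index >= 0:
            # Start of a frame found; parsing the accelerometer, light
            # sensor, and analog inputs goes here per the protocol docs.
            print("frame header found at offset", index)
            buffer = buffer[index + len(HEADER):]
```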
The Death Clock logic is built around user action to trigger its little show for amusement. While we could easily incorporate a micro switch or some such simple mechanical input, Emily felt it would make more sense to have a capacitive touch sensor. This fits into the theme of the clock, sensing and reading a person’s body instead of merely detecting a mechanical movement. So we’ll need something to perform this touch sensing and she procured an Adafruit #1362, AT42QT1070 5-Pad Capacitive Touch Sensor Breakout Board for use. Inside the package was a set of leads for breadboard experimentation, so we soldered them on, put the works on a breadboard, and started playing with it.
Initially the board worked mostly as advertised on the Adafruit product page, but it is a lot more finicky than we had anticipated. We encountered frequent false positives (signaled touch when we hadn’t touched the contact) and false negatives (did not signal touch when we had touched the contact). Generally the unpredictability got worse as we used larger pieces of conductive material, either in the form of longer wires or larger metal blocks we could touch.
Digging into the datasheet linked from Adafruit’s site, we learned that the sensor runs through a self calibration routine upon powerup, and about a “guard” that can be connected to something not intended as touch contact in order to form a reference for intended touch contacts. The calibration routine explains why we got wild readings as we experimented with different touch pads – we should have power cycled the chip with each new arrangement to let it recalibrate.
After we started power-cycling the chip, we got slightly better results, but we still needed to keep conductive material to a minimum for reliable operation. We played with the guard key and failed to discern any noticeable improvement in touch sense reliability; perhaps we’re not using it right?
For Death Clock implementation we will try to keep the sensor board as close to the touch point as we can, and minimize wire length and electrical connections. Hopefully that’ll give us enough reliability.
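Reading the sensor from the clock’s brain is the easy part compared to taming the analog behavior. A sketch of the sort of polling loop we have in mind, assuming the breakout stays in its standalone mode with one key output pin wired to a Raspberry Pi GPIO; the pin number and the active-low polarity are assumptions to verify against the datasheet:

```python
import time
import RPi.GPIO as GPIO

TOUCH_PIN = 17  # hypothetical BCM pin wired to one key output of the breakout

GPIO.setmode(GPIO.BCM)
GPIO.setup(TOUCH_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

try:
    while True:
        # Assumes the key output pulls low while touched -- verify polarity.
        if GPIO.input(TOUCH_PIN) == GPIO.LOW:
            print("touch detected, start the Death Clock show")
            time.sleep(0.5)  # crude debounce so one touch fires once
        time.sleep(0.05)
finally:
    GPIO.cleanup()
```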
[Emily] and I started our vacuum fluorescent display (VFD) project because there was an interesting unit available, with a look distinctly different from modern LEDs. We just had to salvage it out of an obsolete piece of electronics and figure out how to make it work. We now have a prototype VFD driver circuit up and running, and we can command it to light up arbitrary combinations of segments at arbitrary times from a Python program running on an attached Raspberry Pi. This is a satisfying milestone marking completion of our first-generation hardware, allowing us to shift focus to what to actually put on that display.
The first few experiments with VFD patterns confirmed that we really like how a VFD looks! As people who love to take things apart to see how they work, we enjoy all the components of a VFD visible through its glass case. Their intricate internals qualify them as desktop sculpture just sitting there; making them light up is icing on the cake.
With this early success and desire for more, chances are good that we’ll embark on additional VFD projects in the future. For our first VFD project we chose to stick with generic chips for the sake of learning the basic principles, but if we’re going to start building more we should look at using chips designed for the purpose.
According to Digi-Key’s online catalog, there are dedicated vacuum fluorescent drivers available from Maxim and Microchip. None of Maxim’s chips are available in hobbyist-friendly through-hole designs, but two of Microchip’s three lines are. HV5812P-G is the 20-channel model in 28-pin DIP format, and HV518P-G is the 32-channel counterpart in 40-pin DIP format. Curiously, for ~50% more pins, the HV518P-G costs over double the price. So it made sense to start with the HV5812.
With data and clock pins for straightforward serial data input, it was designed to be easy to drive from pretty much any microcontroller. The only thing that caught my attention is that the logic input lines expect 5V, with a minimum of 3.5V required to be interpreted as logic high. This means we couldn’t drive it directly from 3.3V hosts like a Raspberry Pi or an ESP32. We’d need level shifters or a 5V-capable part like a PIC to act as an intermediary.
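From the datasheet’s description the drive sequence looks like any other shift register: clock in one bit per output channel, then pulse the strobe to latch everything onto the outputs at once. A minimal sketch of that idea; the pin names and bit order are my assumptions to verify against the datasheet, and the actual signals would go through level shifters (or come from a 5V host) per the logic level requirement above:

```python
import time

def hv5812_write(bits, set_data, set_clock, set_strobe):
    """Bit-bang one 20-bit frame into an HV5812-style serial driver.
    bits: sequence of 20 truthy/falsy values, one per output channel.
    set_data/set_clock/set_strobe: callables that drive the DATA, CLOCK,
    and STROBE pins (through level shifters to meet 5V logic levels)."""
    for bit in bits:                 # shift one bit per clock pulse
        set_data(1 if bit else 0)
        set_clock(1)
        time.sleep(1e-5)
        set_clock(0)
    set_strobe(1)                    # latch the shifted bits onto the outputs
    time.sleep(1e-5)
    set_strobe(0)
```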
It looks promising enough — and is priced cheaply enough — to be a consideration for potential follow-on VFD projects. So we’ll add that to the Digi-Key shopping cart and see where things go from there.
When we first pulled the vacuum fluorescent display (VFD) from an old Canon tuner timer unit, we could see a lot of intricate detail inside the sealed glass. We had work to do that day – probing the pinout and such – but part of its overall allure comes from the beauty of the details visible within. It is still something of interest; I just had to remember to bring my camera with a close-up macro lens to take a few photos.
One of the reasons a VFD is so interesting to look at is the fact the actual illuminating elements sit under other parts which contribute to the process. Closest to the camera are the heating filaments, visible as four horizontal wires. This is where electrons begin their journey to trigger fluorescence.
Between these filaments and the individual illuminating segments are the control grids, visible literally as a very fine mesh, mostly – but not entirely – built on a pattern of hexagons.
And beyond the control grids are the individual phosphor-coated segments that we command to illuminate using our prototype driver board. (Once it is fully debugged, at least.) These phosphor elements are what actually emit the light visible to the user. The grid and filament are thin, which helps them block as little of this light as possible.
Fortunately an illuminated VFD segment emits plenty of light to make it through the fine mesh of grids and fine wires of filament. From a distance those fine elements aren’t even visible, but up close they provide a sophisticated look that can’t be matched by the simplicity of a modern segmented LED unit.
Now that we have a better understanding of how a NEC VSL0010-A vacuum fluorescent display (VFD) works, having figured out its control pinout with the help of an inkjet power supply, we returned to the carcass we salvaged that VFD from. Knowing each pin’s function, we picked the pins that supplied 2.5V AC filament power to trace, since they seemed least likely to pass through or be shared by other devices. We traced through multiple circuit boards back to the main power transformer output plug. We think it’s the two gray wires on the left side of this picture, but our volt meter probes were too big to reach the visible contact points. And the potential risk of high voltage made us wary of poking bare wires into that connector as we did for the inkjet power supply.
Our solution came as a side benefit of a decision made earlier for other reasons. Since we were new to VFD technology, our curiosity-fueled exploratory session was undertaken with an inexpensive Harbor Freight meter instead of the nice Fluke in the shop. Originally the motivation was to reduce risk: we won’t cry if we fry the Harbor Freight meter. But now we see a secondary benefit: with such an inexpensive meter, we also feel free to modify its probes for the project at hand. Off we go to the bench grinder!
A few taps on the grinding wheel, and we have much slimmer probes that could reach in towards those contacts.
Suitably modified, we could get to work.
We were able to confirm the leftmost pair of wires, with gray insulation, is our 2.5VAC for VFD filament. The full set of output wires from this transformer, listed by color of their insulation, are:
Gray pair (leftmost in picture): 2.6V AC
Brown pair (spanning left and right sides): 41V AC
Dark blue pair (center in picture): 17.2V AC
Black pair (rightmost in picture): 26.6V AC
There was also a single light-blue wire adjacent to the pair of dark blue wires. Probing with volt meter indicated it was a center tap between the dark blue pair.
Once that was determined, we extracted the transformer as a single usable unit: there was a fuse holder and an extra power plug between it and the device’s AC power cord. We’re optimistic this assembly will find a role in whatever project that VFD eventually ends up in. 2.6V AC can warm the filament, rectified 26.6V AC should work well for the VFD grid and segments, and with proper rectification and filtering, a microcontroller can run off one of these rails. It’ll be more complex than driving an LED display unit, but it’ll be worth it for that distinctive VFD glow.
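As a back-of-envelope check on those numbers: a bridge rectifier and filter capacitor gives roughly the RMS voltage times the square root of two, minus a couple of diode drops, before any sag under load. A quick estimate, assuming that simple rectifier arrangement:

```python
import math

def rectified_dc(vac_rms, diode_drops=1.4):
    """Rough no-load DC after a bridge rectifier and filter capacitor:
    peak is RMS * sqrt(2), minus two diode drops. Ignores sag under load."""
    return vac_rms * math.sqrt(2) - diode_drops

print(round(rectified_dc(26.6), 1))  # black pair: ~36V for VFD grid/segments
print(round(rectified_dc(17.2), 1))  # dark blue pair: ~23V, could feed a regulator
```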
One of the reasons LEDs have overtaken VFDs in electronics is reduced power requirements. Not just in raw wattage of power consumed, but also in the varying voltage levels required to drive a VFD. The NEC VSL0010-A VFD whose pinout we just probed ran on 2.5V AC and ~30V DC. In contrast, most LEDs can run from the same 5V or 3.3V DC power plane as their digital drive logic, vastly simplifying design.
We didn’t have a low voltage AC source handy for probing, so we used 2.5V DC. We expected this to have only cosmetic effects. One side of our VFD will be brighter than the other, since one side will have a filament-to-grid/element voltage difference of 30V but the other will only have 27.5V.
But putting 2.5V DC on the filament occupied our only bench power supply available at the time. What will we use for our 30V DC power source? The answer came from our parts pile of previously disassembled electronics, in this case a retired HP inkjet printer’s power supply module labeled with the number CM751-60190.
According to the label, this module could deliver DC at 32V and 12V. Looking at its three-conductor output plug, it was easy to come to the conclusion we have one wire for ground, one wire for 32V, and one wire for 12V. But that easy conclusion would be wrong. Look closer at the label…
We do indeed have a ground wire in the center, but there is only one power supply wire labelled +32V/+12V. It actually delivers “32 or 12” volts, not “32 and 12” volts. That last pin on the left has an icon. What did that mean? Our hint comes from the power output specifications: +32V 1095mA or +12V 170mA. We deduced the icon is a moon, indicating a pin that toggles a low-power sleep mode where the supply delivers only 12V * 170mA = 2 watts instead of the full 32V * 1095mA = 35 W.
With that hypothesis in hand, it’s time to hook up some wires and test its behavior.
When the “sleep mode” pin is left floating, voltage output is 32VDC. When that pin is grounded, voltage output drops to 12VDC. Since we’re looking for 32VDC to drive our VFD grid and elements, it’s easy enough to leave the sleep wire unconnected and solder leads to the remaining two wires to obtain 32V DC for our VFD adventures.
Vacuum fluorescent display (VFD) technology used to be the dominant form of electronics display. But once LEDs became cheap and bright enough, they displaced VFDs across much of the electronics industry. Now a VFD is associated with vintage technology, and its distinctive glow has become a novelty in and of itself. Our star attraction today served as the display for a timer and tuner unit that plugs into the tape handling unit of a Canon VC-10 camera to turn it into a VCR. A VFD is very age-appropriate for a device that tunes into now-obsolete NTSC video broadcasts for recording to now-obsolete VHS magnetic tape.
Obviously, in this age of on-demand internet streaming video, there’s little point in bringing the whole system back to life. But the VFD appears to be in good shape, so in pursuit of that VFD glow, salvage operation began at a SGVHAK meetup.
We had the luxury of probing it while running, aided by the fact we can see much of its implementation inside the vacuum chamber through clear glass. The far right and left pins are visibly connected to filament wires; probing those pins showed approximately 2.5V AC. We can also see eight grids, each with a visible connection to its corresponding pin. That leaves ten pins to control elements within a grid. Probing the grid and element pins indicated they are driven by roughly 30V DC. (It was hard to be sure because we didn’t have a constant-on element to probe... like all VCRs, it was blinking 12:00.)
This was enough of a preliminary scouting report for us to proceed with desoldering.
Now we can see its back side and, more importantly, its part number, which immediately went into a web search on how to control it.
The top hit on this query was a StackExchange thread, started by someone who had also salvaged one of these displays and wanted to get it up and running with an Arduino. Sadly the answers were unhelpful and not at all supportive, discouraging the effort with “don’t bother with it”.
We shrugged, undeterred, and continued working to figure it out by ourselves.
If presented with an unknown VFD in isolation, the biggest unknown would have been what voltage levels to use. But since we had that information from probing earlier, we could proceed with confidence that we wouldn’t burn up our VFD. We powered up the filament, then powered up one of the pins visibly connected to a grid and touched each of the remaining ten non-grid pins to see what lit up. For this part of the experiment, we got our 32V DC from the power supply unit of an HP inkjet printer.
We then repeated the ten element probe for each grid, writing down what we’ve found along the way.
We hope to make use of this newfound knowledge in a future project, and we hope this blog post will be found by someone in the future and help them return a VFD to its former glowing glory.
In the discussion period that followed my Sawppy presentation at RSSC, the topic of machine vision came up. When discussing problems and potential solutions, the JeVois camera was mentioned as one of the potential tools for machine vision problems. I wrote down the name and resolved to look it up later. I have done so, and I like what I see.
First thing that made me smile was the fact it was a Kickstarter success story. I haven’t committed any of my own money to any Kickstarter project, but I’ve certainly read more about failed projects than successful ones. It’s nice when the occasional success story comes across my radar.
The camera module is of the type commonly used in cell phones, and behind the camera is a small machine vision computer, again built mostly of portable electronics components. The idea is to have a completely self-contained vision processing system, requiring only power input and delivering processed data output. Various machine vision tasks can be handled completely inside the little module as long as the user is realistic about the limited processing power available. It is less powerful but also less expensive and smaller than Google’s AIY Vision module.
The small size is impressive, and led to my next note of happiness: it looks pretty well documented. When I looked at its size, I had wondered how to best mount the camera on a project. It took less than five minutes to decipher the documentation hierarchy and find details on physical dimensions and how to mount the camera case. Similarly, my curiosity about power requirements was quickly answered with confirmation that its power draw does indeed exceed the 500mA baseline of a standard USB port.
Ease of programming was the next investigation. Some of the claims around this camera made it sound like its open source software stack can run on a developer’s PC and be debugged there before publishing to the camera. However, the few tutorials I skimmed through (one example here) all required an actual JeVois camera to run vision code. I interpret this to mean the JeVois software stack is indeed specific to the camera, and “develop on your PC first” only refers to developing vision algorithms on a PC before porting them to the JeVois software stack for deployment on the camera itself. If I find out I’m wrong, I’ll come back and update this paragraph.
In the question-and-answer session some people brought up the idea of calculating odometry by visual means, much in the way a modern optical computer mouse determines its movement on a desk. This is something I could whip up with a downward-pointing webcam and open source software, but there are also pieces of hardware designed specifically to perform this task. One example is the PMW3901 chip, which I could experiment with using breakout boards like this item on Tindie.
However, that visual calculation is only part of the challenge, because translating what that camera sees into a physical dimension requires one more piece of data: the distance from the camera to the surface it is looking at. Depending on application, this distance might be a known quantity. But for robotic applications where the distance may vary, a distance sensor would be required.
As a follow-up to my presentation, RSSC’s online discussion forum brought up the Flow Breakout Board. This is an interesting candidate for helping Sawppy gain awareness of how it is moving through its environment (or failing to do so, as the case may be): a small lightweight module that puts the aforementioned PMW3901 chip alongside a VL53L0x distance sensor.
The breakout board only handles the electrical connections – an external computer or microcontroller will be necessary to make the whole system sing. That external module will need to communicate with the PMW3901 via SPI and, separately, the VL53L0x via I2C. Then it will need to perform the math to calculate actual X-Y distance traveled. This in itself isn’t a problem.
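The math itself is simple once both sensors are talking. The flow sensor reports dimensionless motion counts proportional to how far the ground appears to slide across its field of view, so converting to real distance means scaling by the height reported by the VL53L0x. A sketch of that conversion; the counts-per-radian constant is a placeholder that would have to come from the datasheet or calibration:

```python
def flow_to_displacement(dx_counts, dy_counts, height_m, counts_per_radian=500.0):
    """Convert PMW3901 motion counts into approximate ground displacement.
    The sensor reports angular motion of the scene, so physical distance
    scales with height above the surface (from the VL53L0x reading).
    counts_per_radian is a placeholder calibration constant."""
    dx_m = (dx_counts / counts_per_radian) * height_m
    dy_m = (dy_counts / counts_per_radian) * height_m
    return dx_m, dy_m

# Example: 120 counts of X motion at Sawppy's nominal 0.16 m ride height
print(flow_to_displacement(120, 0, 0.16))
```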
The problem comes from the fact a PMW3901 was designed to be used on small multirotor aircraft to aid them in holding position. Two design decisions that make sense for its intended purpose turn out to be problems for Sawppy.
This chip is designed to help hold position, which is why it is not concerned with knowing the height above the surface or the physical dimensions of that translation: the sensor only needs to detect movement so the aircraft can be brought back into position.
Multirotor aircraft all have built-in gyroscopes to stabilize themselves, so they already detect rotation about their Z axis. Sawppy has no such sensor and would not be able to calculate its position in global space if it doesn’t know how much it has turned in place.
Multirotor aircraft fly in the air, so the designed working range of 80mm to infinity is perfectly fine. However, Sawppy has only 160mm between the bottom of the equipment bay and nominal floor distance. Traversing over obstacles more than 80mm tall, or rough terrain bringing the surface within 80mm of the sensor, would leave this sensor disoriented.
This is a very cool sensor module that has a lot of possibilities, and despite its potential problems it has been added to the list of things to try for Sawppy in the future.
In the middle of these experiments with an Xbox 360 Kinect as a robot depth sensor, Intel announced a new product along similar lines and a tempting venue for robotic exploration: the Intel RealSense T265 Tracking Camera. Here’s a picture from Intel’s website announcing the product:
T265 is not a direct replacement for the Kinect, at least not as a depth sensing camera. For that, we need to look at Intel’s D415 and D435. They would be fun to play with, too, but I already had the Kinect so I’m learning on what I have before I spend money.
So if the T265 is not a Kinect replacement, how is it interesting? It can act as a complement to a depth sensing camera. The point of the thing is not to capture the environment – it is to track the motion and position within that environment. Yes, there is the option for an image output stream, but the primary data output of this device is a position and orientation.
This type of camera-based “inside-out” tracking is used by Windows Mixed Reality headsets to determine the user’s head position and orientation. That job requires low latency and high accuracy to avoid VR motion sickness, and the same capability has obvious applications in robotics. Now Intel’s T265 offers it in a standalone device.
According to Intel, the implementation is based on a pair of video cameras and an inertial measurement unit (IMU). Data feeds into internal electronics running a V-SLAM (visual simultaneous localization and mapping) algorithm aided by a Movidius neural network chip. This process generates position and orientation output. It seems pretty impressive to me that it is done in such a small form factor and at high speed (or at least low latency) with 1.5 watts of power.
At $200, it is a tempting toy for experimentation. Before I spend that money, though, I’ll want to read more about how to interface with this device. The USB 2 connection is not surprising, but there’s a phrase that I don’t yet understand: “non volatile memory to boot the device” makes it sound like the host is responsible for some portion of the device’s boot process, which isn’t like any other sensor I’ve worked with before.
Once installed, though, I wasn’t sure what to do next. I found documentation telling me to launch a test/demonstration viewer application called glview. That turned out to be old information; the test app is actually called freenect-glview. Also, it is no longer added to the default user search path, so I had to launch it with the full path /usr/bin/freenect-glview.
Once I got past that minor hurdle, I have on my screen a window that showed two video feeds from my Kinect sensor: on the left, depth information represented by colors. And on the right, normal human vision color video. Here’s my Kinect pointed at its intended home: on board my rover Sawppy.
This gave me a good close look at Kinect depth data. What’s visible and represented by color is pretty good, but the black areas worry me. They represent places where the Kinect was not able to extract depth information. I didn’t expect it to be able to pick out fine surface details of Sawppy components, but I did expect it to see the whole chassis in some form. This was not the case, with areas of black all over Sawppy’s chassis.
Some observations of what an Xbox 360 Kinect could not see:
The top of Sawppy’s camera mast: neither the webcam nor the 3D-printed mount for that camera showed up. This part is the most concerning one because I have no hypothesis why.
The bottom of Sawppy’s payload bay. This is unfortunate but understandable: it is a piece of laser cut acrylic which would reflect Kinect’s projected pattern away from the receiving IR camera.
The battery pack in the rear has a smooth clear plastic package and would also not reflect much back to the camera.
Wiring bundles are enclosed in a braided sleeve. It would scatter the majority of the IR pattern, and what little makes it back to the receiving camera would probably be jumbled.
None of these are deal-breakers on their own; they’re part of the challenges of building a robot that functions outside of a controlled environment. In addition to those, I’m also concerned about the frame-to-frame inconsistency of depth data. Problematic areas are sometimes visible for a frame and disappear in the next. The noisiness of this information might confuse a robot trying to make sense of its environment with this data. It’s not visible in the screenshot above, but here’s an animated GIF showing a short snippet for illustration:
The Kinect sensor bar from my Xbox 360 has long been retired from gaming duty. For its second career as robot sensor, I have cut off its proprietary plug and rewired it for computer use. Once I’ve verified the sensor bar is electrically compatible with a computer running Ubuntu, the first order of business was to turn fragile test connections into properly soldered wires protected by heat shrink tube. Here’s my sensor bar with its new standard USB 2.0 connector and a JST-RCY connector for 12 volt power.
With the electrical side settled, attention turns to software. The sensor bar can tell the computer it is a USB device, but we’ll need additional driver software to access all the data it can provide. I chose to start with the Xbox 360 Kinect because of its wider software support, which means I have multiple choices on which software stack to work with.
OpenNI is one option. This open source SDK is still around thanks to Occipital, one of the companies that partnered with PrimeSense. PrimeSense was the company that originally developed the technology behind the Xbox 360 Kinect sensor, but they have since been acquired by Apple and their technology incorporated into the iPhone X. Occipital itself is still in the depth sensor business with their Structure sensor bar, available standalone or incorporated into products like Misty.
OpenKinect is another option. It doesn’t have a clear corporate sponsor like OpenNI, and seems to have its roots in the winner of the Adafruit contest to create an open source Kinect driver. Confusingly, it is also sometimes called freenect or variants thereof. (Its software library is libfreenect, etc.)
Both of these appear to still be receiving maintenance updates, and both have been used in a lot of cool Kinect projects outside of Xbox 360 games, ensuring there will be a body of source code available as reference for using either. Neither is focused on ROS, but people have written ROS drivers for both OpenNI and OpenKinect (freenect). (And there is even an effort to rationalize across both.)
One advantage of OpenNI is that it provides an abstraction layer for many different depth cameras built on PrimeSense technology, making code more portable across different hardware. This does not, however, include the second generation Xbox One Kinect, as that was built with a different (not PrimeSense) technology.
In contrast, OpenKinect is specific to the Xbox 360 Kinect sensor bar. It provides access to parts beyond the PrimeSense sensor: microphone array, tilt motor, and accelerometer. While this means it doesn’t support the second generation Xbox One Kinect either, there’s a standalone sibling project libfreenect2 for meeting that need.
I don’t foresee using any other PrimeSense-based sensors, so OpenNI’s abstraction doesn’t draw me. The access to other hardware offered by OpenKinect does. Plus I do hope to upgrade to an Xbox One Kinect in the future, so I decided to start my Xbox 360 Kinect experimentation using OpenKinect.
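As a first taste of OpenKinect, the libfreenect Python bindings make it easy to pull frames without writing any C. A minimal sketch, assuming the bindings and numpy are installed and, as I understand the library, that raw depth values are 11-bit with 2047 meaning “no reading”:

```python
import freenect  # Python bindings from the OpenKinect project
import numpy as np

# Grab one depth frame and one RGB frame using the synchronous helpers.
depth, _ = freenect.sync_get_depth()   # 480x640 array of raw depth values
video, _ = freenect.sync_get_video()   # 480x640x3 RGB array

# Pixels where the Kinect could not compute depth come back as 2047,
# corresponding to the black regions seen in freenect-glview.
invalid = np.count_nonzero(depth == 2047)
print("depth frame:", depth.shape, "pixels without depth data:", invalid)
```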