Arduino Mozzi Wilhelm Exercise

After completing a quick Mozzi exercise, I found a few problems that I wanted to fix in round 2.

The first problem was the use of an audio clip from Portal 2. While Valve Software is unlikely to pursue legal action against a hobbyist exercise project for using one short sound from their game, they indisputably own the rights to that sound. If I want this short code exercise to serve as any kind of example I can point people to, I should avoid using copyrighted work. Hunting around for a sound that would be recognizably popular but less encumbered by copyright restrictions, I settled on the famous stock sound effect known as the Wilhelm scream. Information about this effect, as well as the sound clip itself, can be found all over, making it a much better candidate.

The second problem was audible noise even when not playing sampled audio. Reading through the Mozzi sound code and its under-the-hood diagram, I don't understand why this noise is coming through. I explicitly wrote code to emit zero when there's no sound, which I thought meant silence, but something else is happening that I don't understand yet.

As a workaround, I call stopMozzi() when playback ends, and startMozzi() when the button is pressed. The upside is that the noise between playbacks disappears; the downside is that I now have two very loud pops, one each at the start and end of playback. If connected to a powerful audio amplifier, such a sudden impulse can destroy speakers. But I'll be using it with a small battery-powered amplifier chip, so the destruction might not be as immediate. I would prefer to have neither the noise nor the pop, but until I figure out how, I have to choose one of them. The decision today is for quick pops rather than ongoing noise.
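The workaround looks roughly like this. This is a minimal sketch using Mozzi's Sample class with a button on pin 2; the actual exercise plays Huffman-compressed audio, and names like wilhelm_DATA and the pin choice are placeholders, not from the real project:

```cpp
// Hypothetical sketch of the start/stop workaround. Assumes a sample
// table header generated by Mozzi's tools (wilhelm_DATA, wilhelm_NUM_CELLS
// are illustrative names) and a normally open button from pin 2 to ground.
#define CONTROL_RATE 64       // Mozzi convention: define before include
#include <MozziGuts.h>
#include <Sample.h>
#include "wilhelm_data.h"     // placeholder for the exported sample table

Sample<wilhelm_NUM_CELLS, AUDIO_RATE> wilhelm(wilhelm_DATA);
bool playing = false;

void setup() {
  pinMode(2, INPUT_PULLUP);
  // Mozzi is deliberately NOT started here; it starts on button press.
}

void updateControl() {
  if (!wilhelm.isPlaying()) {
    stopMozzi();              // end of playback: stops the noise (with a pop)
    playing = false;
  }
}

int updateAudio() {
  return wilhelm.next();      // emit the next audio sample
}

void loop() {
  if (!playing && digitalRead(2) == LOW) {
    playing = true;
    wilhelm.start();
    startMozzi(CONTROL_RATE); // start of playback (also a pop)
  }
  if (playing) {
    audioHook();              // Mozzi's audio pump, normally always called
  }
}
```

The pops come from the output pin jumping between its idle level and Mozzi's midpoint bias whenever startMozzi()/stopMozzi() run.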

This improved Arduino Mozzi exercise is publicly available on GitHub.

Arduino Mozzi Space Core Exercise

Temporarily stymied in my Teensy adventures, I dropped back to Arduino for a Mozzi exercise. I’ve helped Emily through bits and pieces of putting sampled audio on an Arduino using Mozzi, but I had yet to run through the whole process myself.

The hardware side was kept as simple as possible: there's only a single switch wired to be normally open and momentarily closed. For audio output, I used the wire salvaged from the Project MC2 Pixel Purse(*) that was briefly a hacker darling due to its clearance-sale price. (As of this writing, the price is up to $39.58, far above the $6 – $8 leading to its “buy to take it apart” fame.) Since this cable was designed to be plugged into a cell phone, it had a TRRS plug, which I rewired into an imitation of a monophonic TS plug by using the tip (T) contact as one wire and tying the remaining ring/ring/sleeve (RRS) contacts together as the other.

With the hardware sorted out, I dived into the software tasks. There were more than a few annoyances that made this task not very beginner-friendly. The Huffman audio compression utility audio2huff.py had its dependencies listed only in a comment block easily overlooked by beginners. They didn't all install the same way (purehuff required a download and install, while the others were installed via pip). And since they are a few years old and not actively maintained, they were all written for Python 2.

This will become more and more of a problem as we go, since official Python 2 support ended at the start of 2020. Older Linux distributions launch Python 2 when the user runs python at the command line, and those who want Python 3 need to run python3. This is getting flipped around: some environments now launch Python 3 when the user runs python, and those who need the old version have to run python2.

What happens when we run these utilities under Python 3? It would be nice if the error message were “Hey, this needs Python 2”, but that's not the reality. It is more likely to be something like “foobar is not iterable”, completely bewildering to beginners.
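For example, Python 2 code that indexes the result of zip() worked fine there, because zip() returned a plain list. Under Python 3 the same code fails with a message that never mentions Python versions:

```python
# Under Python 2, zip() returned a list, so indexing the result worked.
# Under Python 3, zip() returns a lazy iterator object, and indexing it
# fails with an error that says nothing about Python 2 vs 3.
pairs = zip([1, 2, 3], [4, 5, 6])
try:
    first = pairs[0]
except TypeError as err:
    print(err)  # 'zip' object is not subscriptable
```

A beginner staring at "'zip' object is not subscriptable" has no hint that the real fix is running the script under Python 2 (or porting it).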

To be fair, none of these problems are specific to Mozzi; there's a large body of work online that was written for Python 2 and is waiting for someone motivated enough to bring it up to date.

Anyway, after those problems were sorted out, I got my Arduino to play a sound when the button is pressed. My test sound clip was from the game Portal 2: the ecstatic “SPAAACE!” emitted by the corrupted Space Core when it finally got its wish to go flying through space.

This Arduino Mozzi exercise is publicly available on GitHub.

[UPDATE]: “Wilhelm” exercise (round 2) has some improvements.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases. But you shouldn’t buy a Pixel Purse until it drops back down to clearance prices.

First Experiment in Teensy Audio Foiled By CPU Instruction Set

The original line of Arduino boards is known for a lot of very flexible features that made it the champion of entry into physical computing. “Large storage space” is not one of those features, as the ATmega328P has only 32KB of flash. That is plenty for blinking little LEDs and a great many other projects, but severely limiting for anything that requires a nontrivial amount of data.

One such class of projects is playing back digital audio samples. Simple sounds like tones, beeps, and bloops take little data, but recorded digital audio consumes a great many bytes. I was introduced to the Mozzi Arduino sound synthesis library via exposure to Emily’s projects. While the emphasis is on synthesis, it does have provision to handle sampled audio. However, due to the small flash storage, even with compression it could only handle short sounds.

Looking around for an upgrade, I remembered the Teensy LC OSH Park edition I had purchased alongside an OSH Park order years ago, and thought it was worth an audio playback experiment. The CPU is much faster, and it has almost double the flash storage. Exploring the Arduino-compatibility layer called Teensyduino, I saw it offers a full audio API, complete with a graphical tool to help compose the tree of audio input/processing/output nodes. I was very impressed!

I decided to start small with just two nodes: sound is generated by playing sampled audio data from memory, and sent to the built-in DAC. Pressing “Export” generated the following boilerplate code:

#include <Audio.h>
#include <Wire.h>
#include <SPI.h>
#include <SD.h>
#include <SerialFlash.h>

// GUItool: begin automatically generated code
AudioPlayMemory          playMem1;       //xy=262,658
AudioOutputAnalog        dac1;           //xy=436,657
AudioConnection          patchCord1(playMem1, dac1);
// GUItool: end automatically generated code

void setup() {
  // put your setup code here, to run once:

}

void loop() {
  // put your main code here, to run repeatedly:

}

Sadly, this code did not compile. Instead, I encountered this error:

/tmp/ccLBGUGU.s: Assembler messages:
/tmp/ccLBGUGU.s:291: Error: selected processor does not support `smull r0,ip,r3,r5' in Thumb mode
/tmp/ccLBGUGU.s:292: Error: shifts in CMP/MOV instructions are only supported in unified syntax -- `mov ip,ip,asl r6'
/tmp/ccLBGUGU.s:293: Error: unshifted register required -- `orr r0,ip,r0,lsr r7'

A web search on this error message indicated that the chip on the Teensy LC simply does not support the instructions described: its Cortex-M0+ core lacks the long-multiply and shifted-operand instructions (like smull) that the audio library's optimized code relies on. I'd need a Teensy 3 or 4. To test this hypothesis, I changed the compiler to target Teensy 3 instead of LC and hit “Verify” again. This time, it was successful. But I don't have a Teensy 3 on hand, so I can't actually run that code.

Given this limitation I’ll have to find some other project for my Teensy LC. In the meantime, I’ll drop back to the tried and true Arduino for my Mozzi exercise.

Examining Composite Video Signal Generated By Microcontrollers

There was a several-years-long period of my life when I spent money to build a home theater. This was sometime after DVD became popular, because the motivation was my realization of how much better DVD picture quality was than VHS. With movies on VHS, noisy visual artifacts were a limitation of the analog magnetic medium. With movies on DVD, the media-imposed limitations were gone, and there were all these other limitations I could remove by spending money, lots of it, to do things like upgrade from composite video to S-Video connections.

Eventually home theater moved to all-digital HDMI, and I stopped spending big money because even the cheapest flat panels could completely eliminate classic CRT problems like color convergence. (My personal peeve.) I thought I had left the era of CRT and composite video behind, but throwing out my pile of analog interconnects and video equipment turned out to be premature.

Now I’ve found an interest in old-school video again, because it is accessible to the electronics hobbyist. It is much easier to build something that outputs a composite video signal than HDMI. Local fellow maker and tinkerer Emily likes the old-school tech for aesthetic reasons in addition to accessibility. So one day we got together at one of our regular SGVTech meets to dig a little deeper into this world.

Emily brought an old portable TV with composite video input, and two candidate Arduino sketches each purporting to generate composite video. (arduino-tvout and one other whose name I can’t remember now.) I brought my ESP32 dev module running Bitluni’s composite video demo. For reference Emily had an actual composite video camera, the composite video Wikipedia page and the reference document used by Bitluni for his demo.

All three were able to get the little TV to show a picture. However, they looked very different under the oscilloscope. The [name will be filled in once I remember] sketch had the wildest waveform, whose oscilloscope trace didn’t look anything like a composite video signal, but the proof is in the fact that an animated 3D vector graphic cube showed up on the TV anyway. The waveform generated by arduino-tvout was a little rougher than expected, but unlike the previous sketch, it was clearly recognizable as a composite video waveform on the oscilloscope and accepted by the TV. The waveform generated by Bitluni’s demo was the best fit with what we expected to see, and matched most closely with the output of the composite video camera.

Knowledge from tonight’s investigation will inform several of our project candidates.

 

Optical Drive Carriage, The Sequel

During my learning and exploration of stepper motor control, I managed to destroy an optical carriage I salvaged from a laptop drive. In order to continue experimentation I need another stepper motor linear actuator of some kind. I rummaged in my pile of parts and came up empty-handed, but I am fortunate to have another local resource: Emily’s pile of parts. My request was granted with an assembly that had an intact motor, drive screw, and linear carriage. The optical assembly in that carriage had been torn apart, but that is irrelevant for the purposes of this project.

Unlike the previous two stepper motors I played with, this one has exposed pads on the flexible control cable, so I tried to solder to them. Given my experience soldering fine-pitched parts, I knew it would be problematic to solder one at a time: the pads are so close together that heat from one will melt solder on an adjacent pad. My best bet was to set things up so all the wires could be soldered at the same time.

Emily salvaged stepper motor wiring

There is slightly less solder than I would have preferred on each of these joints, but several efforts to add solder created bridges across pins, requiring removal with a solder sucker, which reduced the amount of solder even more. Since there was enough for electrical conductivity, I left it as-is to continue my experiments.


Unrelated to stepper motors or Grbl:

While I was taking the above picture to document my work, I noticed I was being watched, and took a picture of the watcher. This little beauty was surveying my workbench, perched on top of a cardboard box of M3 fasteners from McMaster-Carr. The spider was only about 5mm long overall, including legs. Unfortunately, at the time of this shot I had set the camera for shallow depth of field to photograph the above solder joint. I adjusted it to bring more into focus, but this little one was camera-shy and jumped away before I could take a better shot. Still, I’m quite pleased with my camera’s macro lens.

Spider on McMaster Carr box

Panasonic UJ-867 Optical Carriage (Briefly) Under A4988 Control

Once I extracted the optical assembly carriage of a Panasonic UJ-867 optical drive, the next step was to interface it with a Pololu A4988 driver board. Just as with the previous optical drive stepper motor, there are four visible pins indicating a bipolar motor suitable for control via A4988. However, this motor is far smaller, as befits a laptop computer component.

Panasonic UJ-867 70 stepper motor connector

The existing motor control cable actually passed through the spindle motor, meaning there was no convenient place to solder new wires on the flexible connector. So the cable was removed and new wires were soldered in its place.

Panasonic UJ-867 80 stepper motor new wires

Given the fine pitch of these pins, it was very difficult to avoid solder bridges. But the motor appeared to run standalone, so I reinstalled it into the carriage. There it still ran, but was very weak, with hardly any power at all. When I tilted the assembly up so the axis of travel was vertical, the carriage couldn’t win its fight against gravity. Since the job is only to move an optical assembly, I didn’t expect these carriages to exert a great deal of force. But I’ve seen vertically mounted slot-loading optical drives, so I thought it should at least be able to fight against gravity.

A Dell laptop charger delivers 19.2V. I’m not sure how many volts this motor was intended to run at, but 12V seemed reasonable. Then I increased the current beyond the 50mA of the previous motor. Increasing both voltage and amperage seemed to deliver more power, but it remained too weak to overcome gravity.

As I tilted the metal carriage assembly in my hand, I noticed it was warming up. Oh no! The motor! I checked the temperature with my finger, which was a mistake, as it was hot enough to be painful to human skin. I shut down my test program, but it was too late: the carriage never moved again.

Lessons learned: don’t get too overzealous with power and check temperature more frequently.

And if I want to continue these experiments, I’ll need another stepper motor assembly.

Resuming Pololu Stepper Driver Adventures with Arduino and A4988

By the time I got around to playing with homing switches on a salvaged industrial XY stage, it was getting late. I only had a few minutes to perform one experiment: connect the normally open homing switch to X_LIMIT_PIN (GPIO02 as per cpu_map.h), set HOMING_CYCLE_0 to X_AXIS in config.h, and send the command to home the X-axis. The motor moved right past the switch into the hard stops, so I turned off the ESP32, marking an unsatisfying end to the work session.
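For reference, the settings involved look roughly like this, based on my reading of the Grbl ESP32 port's cpu_map.h and config.h at the time; exact macro spellings may differ between versions:

```cpp
// In cpu_map.h, the machine definition assigns the limit switch input
// (GPIO02 on this ESP32 board):
#define X_LIMIT_PIN GPIO_NUM_2

// In config.h, the first homing pass is restricted to the X axis
// for this experiment (Grbl expresses the axis as a bitmask):
#define HOMING_CYCLE_0 (1 << X_AXIS)

// With the switch wired normally open, no input inversion is needed.
// The homing cycle is then triggered from the serial console with: $H
```

The failed experiment above suggests either the switch input wasn't being read, or homing was configured for the wrong pin or axis; that's what the follow-up bench testing is meant to isolate.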

I wanted to be able to continue learning Grbl while at home, away from the salvaged hardware, so I dug up the A4988 motor control board I had played with briefly. It’s time to get a little more in depth with this thing. Motivated by my current state in the XY stage project, the first goal is to get a stepper motor to activate a homing switch. If I get that far, I’ll think about other projects. People have done some pretty creative things with little stepper motors controlled by Grbl; I could join that fun.

Rummaging through my pile of parts, the first stepper motor I retrieved was one pulled from an optical drive. This particular stepper motor had only the drive screw; the carriage had been lost. But with four exposed pins in two groups of two, it is a bipolar motor suitable for an A4988 motor control board. I just had to solder some wires to make it usable with a breadboard.

Since this stepper motor was a lot smaller than the one used in my previous A4988 stepper motor experience, I thought this was a good opportunity to learn how to tune the current limits on these modules, following instructions published by Pololu and using an Arduino as a test controller running code published on this page. I started with a limit of 100mA, but the motor got quite toasty at that level after running for a minute. I turned it down to 50mA, and it no longer got hot to the touch running that Arduino sketch.
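Pololu's published tuning procedure works by measuring the VREF voltage on the board's potentiometer, using the relation current limit = VREF / (8 × Rcs), where Rcs is the current sense resistor. Here is the arithmetic for the two limits above, assuming the 0.068 Ω sense resistors on current Pololu A4988 carriers (older boards used 0.050 Ω, so check yours before trusting these numbers):

```python
# VREF targets for a given A4988 current limit, per Pololu's formula:
#   current_limit = VREF / (8 * Rcs)  =>  VREF = 8 * Rcs * current_limit
def a4988_vref(current_limit_amps, rcs_ohms=0.068):
    return 8 * rcs_ohms * current_limit_amps

for ma in (100, 50):
    print(f"{ma} mA -> VREF = {a4988_vref(ma / 1000):.4f} V")
# 100 mA -> VREF = 0.0544 V
#  50 mA -> VREF = 0.0272 V
```

These are tiny voltages for such a tiny motor; a multimeter on a low DC range is enough to dial them in.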

This is a good start, but a motor with just a drive shaft is not useful for motion control. The next step is to find something that could push on a limit switch. I don’t seem to have anything handy, so it’s time to start digging into salvaged hardware.

ESP32 Grbl Controller Breadboard Prototype

An ESP32 plus Grbl motion control software seems like a good candidate for running an old industrial XY table, definitely promising enough to move forward with prototyping. I had originally intended to use an Olimex ESP32 DevKitC (*) as it was equipped with two rows of sockets, which are easy to connect with jumper wires while not leaving pins exposed to the risk of short circuits.

This plan was short-lived, because I quickly ran into a problem: the ATmega at the heart of an Arduino is a beefy 5V part that can supply up to 40 mA per pin. In contrast, the ESP32 is a rather delicate 3.3V part that should not exceed 12 mA per pin. The data sheet for the ZETA4 controller I want to connect to this board specifies a minimum of 3.5V to signal step and direction, which means I need external components to shift the ESP32 output up to what the ZETA4 expects. When I made this discovery I was momentarily tempted to switch back to an ATmega solution, but the siren call of higher performance carried me forward.

Since I would need external components anyway, the project brain switched to my HiLetgo ESP32 development board (*), which is mostly identical but came equipped with two rows of pins appropriate for a breadboard. Four level-shifting units were installed, each built around a 2N2222A transistor. They were connected to the step and direction pins for the X and Y axes, and each received an LED (and corresponding current-limiting resistor) to indicate activity.

Staying consistent with the scheme I used for Glow Flow, red LEDs indicate X-axis activity and green LEDs indicate Y. These LEDs allowed me to perform a quick test to verify the presence of blinking activity. Next step: connect them to the ZETA4 controller to see if the motors move as commanded.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Evaluate Grbl For XY Stage

I considered driving an old industrial XY table from a 3D printer controller board. I have a Melzi board on hand, but its onboard stepper drivers were too well integrated to be easily adapted to this project. I considered running Marlin on an Arduino directly, but if I’m going to build my own control board instead of using one already tailored for Marlin, it makes sense to look at other options. Given the popularity of the Arduino platform, there’s more than one motion control project out there for me to consider.

Enter Grbl.

Unlike Marlin, which is primarily focused on 3D printing, Grbl offers a more generalized motion control platform. Provisions already in the code include pen plotting, laser cutting, and 3-axis CNC milling. All of these could be done with Marlin as well, but with varying amounts of code modification necessary.

But an even bigger advantage in favor of Grbl is the existence of an ESP32 port. Marlin’s development team is working on a 32-bit hardware abstraction layer (HAL) to take it beyond ATmega chips. It looks like they’re up and running on ARM Cortex boards and have ambitions to bring it to the ESP32. But a Grbl ESP32 port is available today, stable enough for people to use in their projects.

The headline feature is, of course, speed. While AccelStepper topped out around 4 kHz pulses, Grbl on ATmega is good for approximately 30 kHz. Grbl on ESP32 claims “at least 4x the step rates”, which is likely a very conservative claim. After all, we’re comparing an 8 MHz 8-bit chip to a 240 MHz 32-bit chip, though running on the ESP32 does incur more overhead than bare-metal code on an ATmega, which matters in time-sensitive applications like motion control.

There are upsides to this real-time operating system (FreeRTOS) overhead, as it allows the ESP32 to handle things beyond motion control. The best part employs the WiFi module to present a web-based control interface. A control interface would have required additional hardware in a Grbl machine built around an ATmega, but with Grbl on ESP32 I just need a web browser to start testing.

And since I already have an ESP32 development board (*) on hand, this looks like a great avenue to explore. Let’s try running this XY stage with an ESP32 running Grbl.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Evaluate Retired Melzi Board for XY Stage

In an effort to put a salvaged industrial XY table back to work, the Arduino AccelStepper library was used as a quick test to see if the motors and controllers still responded to input signals. Things moved, which is a good sign, but the high precision of the Parker ZETA4 controller demanded step pulses at a far higher frequency than AccelStepper could deliver.

The search then moved on to something that could generate the pulses required. I’m confident the hardware is capable of more, as AccelStepper topped out at less than a 5 kHz signal on an 8 MHz chip. Pulsing a pin isn’t a task that should require over 1,000 instruction cycles. Given my familiarity with the 3D printer world, I started looking at Marlin, which runs on Arduino hardware.

The problem with running Marlin on my Arduino Nano is that I would have none of the associated accessories. No control panel, no SD reader, etc. However, I do have a full control board retired from one of my 3D printers. This board calls itself a Melzi Ardentissimo, and a search led to the Melzi page of the RepRap wiki. Thanks to the open nature of this design, I could trace through its PCB layout. Much to my disappointment, the step and direction signals connect straight from tiny pins on the main processor to the motor drivers without surfacing in an easily tapped fashion. The intent of this board is integration, and it would be quite some work to defeat that intent in order to decouple the processor from its integrated stepper drivers.

Fortunately, I’m not limited to the world of AVR ATmega chips, nor Marlin software. There’s another very capable processor on hand waiting for such project… an ESP32 running Grbl software.

Arduino AccelStepper Under The Scope

The standard Arduino library includes a stepper motor control library. The external AccelStepper library adds the capability for acceleration and deceleration curves. I thought that would be important for driving industrial equipment as the large pieces of metal impart far more momentum than what I’m familiar with in a 3D printer.

As it turns out, I was worrying about the wrong thing. I connected my bargain oscilloscope to the output pins of my (knockoff) Arduino Nano and watched the pulses go by as I issued different commands to AccelStepper. As I commanded faster and faster speeds, I could see those pulses come closer and closer together until they were almost, but not quite, 0.2 ms apart. Even running a test program in a tight loop, an Arduino running AccelStepper fails to reach a 5 kHz pulse rate.
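The tight-loop test was along these lines. This is a sketch assuming an A4988-style step/direction driver on pins 2 and 3 (the pin choices are mine, not from the original test):

```cpp
// Constant-speed AccelStepper test. In DRIVER mode, AccelStepper emits
// step pulses on one pin and direction on another.
#include <AccelStepper.h>

AccelStepper stepper(AccelStepper::DRIVER, 2, 3); // step pin 2, dir pin 3

void setup() {
  stepper.setMaxSpeed(10000); // ceiling, in steps per second
  stepper.setSpeed(5000);     // request a constant 5 kHz step rate
}

void loop() {
  // runSpeed() emits at most one step per call, timed in software.
  // The achievable rate is capped by how fast loop() can spin plus
  // AccelStepper's own bookkeeping, which is why the scope shows
  // pulses spaced slightly more than 0.2 ms apart.
  stepper.runSpeed();
}
```

Watching this sketch on the oscilloscope while varying setSpeed() is what revealed where the ceiling actually sits.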

A little web searching indicated my result is roughly comparable to what others report online: AccelStepper is good for roughly 3-4 kHz, and things get dicey beyond that. For the ZETA4 controller, configured to 25,000 microsteps per revolution, this is simply not fast enough to get decent speed out of the box.

How does this happen? Further web searching indicated it is because all the timing for AccelStepper is done in software, perhaps with the goal of leaving the very precious hardware timer resources free for other user code. That’s merely speculation about the design decisions involved. But one thing is certain: AccelStepper’s limits mean it is not sufficient for generating the high-frequency pulses required to drive this particular salvaged XY motion control table. I’ll have to look at something more specific to motion control, possibly a 3D printer control board.

Old Industrial XY Stage Moves Again On Arduino AccelStepper

A decades-old piece of industrial equipment failed and was retired, but its motion control table was still good and was salvaged for a potential future project. It is a far cry from the kind of motion control equipment I see on a hobbyist 3D printer: a much more rigid chassis, more precise linear guides, finer-pitch threads, a far beefier motor, all under the command of a much more capable control box. One example: the A4988 stepper motor driver popular in 3D printers can generate up to 16 microsteps in between whole steps of a stepper motor. In contrast, this Parker ZETA4 unit features up to 255 microsteps.

Pulling from the electronics hobbyist world, I set up my Arduino to run the AccelStepper library, generating stepper motor direction and pulse signals. It was connected to one axis of the XY stage for the first test. It was exciting when the motor started turning, but it was very, very loud! I doubted it was supposed to sound like that. A bit of examination found two problems: the ZETA4 was configured to take whole steps (no microsteps), which is always noisy, and I amplified this mistake by accidentally choosing a speed that caused a resonance.

Once the ZETA4 was configured to its default of 125 microsteps, things were much quieter and smoother. It also slowed down significantly, because each pulse on the step pin moved the carriage 1/125 as far as it did before. (And a full revolution of the motor is only 200 whole steps.) The obvious answer was to increase the frequency of those pulses, but I seemed to top out at around 5,000 pulses per second. This surprised me: the ATmega328 chip on board an Arduino runs at 8 MHz; I would have expected it to easily generate pulses faster than 5 kHz. Time to put this guy under the oscilloscope and see what we see.
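The arithmetic behind that slowdown is straightforward: 200 whole steps per revolution at 125 microsteps each means 25,000 pulses per motor revolution, so a 5 kHz pulse rate yields very little rotation.

```python
# Motor speed achievable at the observed pulse rate, using the numbers
# from this post: 200 full steps/rev, 125 microsteps per full step.
pulses_per_rev = 200 * 125          # 25,000 pulses per revolution
pulse_rate_hz = 5000                # the observed ~5 kHz ceiling

rev_per_sec = pulse_rate_hz / pulses_per_rev
print(rev_per_sec)       # 0.2 revolutions per second
print(rev_per_sec * 60)  # 12.0 RPM
```

Twelve RPM on the leadscrew is glacial for a motion stage, which is why the pulse rate ceiling became the next thing to investigate.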

Raspberry Pi Web Kiosk Boots Faster On Raspbian Than Ubuntu Core

My adventures with the lightweight Ubuntu Core operating system were motivated by a 14-year-old laptop with an Intel CPU. By multiple measures on the specification sheet, it is roughly equivalent to a modern Raspberry Pi (~1 GHz processor, ~1GB RAM, ~30GB storage, etc.). As a test drive, I had tried setting it up as a web kiosk and failed due to incomplete graphics driver support. (Though I did get it working on a modern laptop.)

Which leads to the next question: how well does Ubuntu Core perform the exact same web kiosk task on a Raspberry Pi? And how does that compare to doing the same thing on Raspbian? They’re both variants of Linux trimmed down for less powerful hardware: Raspbian is packed with features to help computing beginners get started, while Ubuntu Core has almost no features at all, offering a clean baseline.

My workbench is not a comparison test lab, so I didn’t have identical hardware for a rigorous comparison. The two monitors were close but not identical in resolution (1920×1080 for the LG, 1920×1200 for the Dell). The two Raspberry Pi units should be identical, as should the microSD cards, but they’re not powered by identical power supplies. To establish a baseline, I set them both up as Raspberry Pi web kiosks using my top search hit for “Raspberry Pi Web Kiosk”, then launched them side by side a few times to see how close their performance was from power-up to rendering a web page. (For this test, the Hackaday.com site.) This took approximately 40-45 seconds; sometimes one was faster and sometimes the other. For some runs, the two systems were within one second of each other; in others, they were almost 5 seconds apart. This established that the comparison method has a measurement error of several seconds.

With an error margin established, I removed one of the microSD cards and installed Ubuntu Core for Raspberry Pi 3 on it. Once the initial configuration was complete, I installed the snaps for the web kiosk tutorial I had followed earlier on the Dell laptops. I was very curious how fast a stripped-down version of Ubuntu would perform here.

The answer: not very, at least not in the boot-up process. From power-up to displaying the web page, the Ubuntu Core web kiosk took almost 80 seconds: double the time necessary for Raspbian, and that was with a full install of Raspbian Buster and zero optimizations. There are many tutorials online for building a Raspbian-based web kiosk with a stripped-down list of components, or even starting with the slimmer Raspbian Buster Lite operating system, which would only get faster.

There may be many good reasons to use Ubuntu Core on a Raspberry Pi (smaller attack surface, designed with security in mind, etc.) but this little comparison indicates boot-up speed is not one of them.

Ubuntu Core 18 Web Kiosk Experiment on Dell Inspiron 11 3180

While experimenting with Ubuntu Core 18 on a 14-year-old Dell Latitude X1, I ran into problems and wanted to verify it was a hardware support issue and not a mistake on my part. So I brought my much younger Inspiron 11 (3180) up on Ubuntu Core 18 as well. It verified that the issue hampering functionality on the Latitude X1 was indeed hardware support, and not my mistake.

After I got my answer, I thought that since I already had this Inspiron 11 up and running, I might as well continue experimenting on it. I followed through the rest of the steps in the tutorial for setting up a web kiosk on Ubuntu Core. Since this machine has recent hardware, I encountered no hardware issues and got a dedicated web kiosk machine up and running.

Browsing a few web sites, basic browser functionality seemed to be present. The first missing functionality I noticed was a lack of sound. A little poking confirmed that ALSA, the Linux audio system, is not installed as part of Ubuntu Core. If someone wants sound on their Ubuntu Core machine, they’ll have to install it. This fits with my expectation for a bare-minimum “Core” OS.

Another thing I noticed is the lack of persistent state. As far as I can tell, everything is ephemeral and lost upon reset. No cookies are preserved across sessions, and it appears the cache is flushed as well. Whether this is a bug or a feature depends on the application. It would be desirable for a public-use web terminal, where we really want to wipe everything and start over for every new user.

And it isn’t intended to be a general-use web browser anyway. The cursor can be hidden, and so can the navigation bar. I enabled the navigation bar expecting a normal browser toolbar, but it is actually a very minimalist bar with a few buttons like back and refresh. There is no URL input field, as appropriate for a kiosk dedicated to serving specific pages.

Sometimes this is exactly what I need, making Ubuntu Core an ideal bare-minimum OS for an Intel-based machine. But in this day and age, those aren’t our only options. Projects along these lines are also commonly built with a Raspberry Pi. How well does Ubuntu Core work on a Raspberry Pi, compared to the Pi’s standard Raspbian OS?

Dell Latitude X1 Running Ubuntu Core 18: No Graphics But CH341 USB Serial Works

It was a pleasant surprise to see Ubuntu Core 18 up and running on a 14-year-old Dell Latitude X1, and even more pleasant to see it is lightweight enough to be speedy and responsive on such old and slow hardware. But given its age, I knew not to expect everything to work on the stock i386 image. There’s no way to package a comprehensive set of device drivers in such a compact image, and I speculate that was never the intent: Ubuntu Core is targeted at embedded projects, where it is typical to generate an OS image custom-tailored to the specific hardware. So the fact it mostly works out of the box is a tremendous bonus, and the missing hardware support is not a bug.

That said, I’m not terribly interested in generating the custom device tree (or whatever mechanism Ubuntu Core uses) to utilize all peripherals on this Latitude X1. I’m more interested in working with what I already have on hand. During initial configuration I already learned that the wireless module did not work properly. What works, and what doesn’t?

Again, I’m not interested in an exhaustive list; I just wanted to find enough to enable interesting projects. Getting this far meant text output and keyboard input functioned, in addition to wired networking. The next thing to try was activating the graphics subsystem and mouse input. Looking on Ubuntu’s tutorial web site, I found the web kiosk example, which would test the hardware necessary to offer a useful set of web-related functionality.

Following the tutorial steps, I could get the machine to switch display modes, but it never got as far as showing a web browser, just a few lines I didn’t understand. At this point I wasn’t sure if I followed the procedures correctly or if the hardware failed, so I duplicated the steps with Ubuntu Core 18 running on my modern Dell Inspiron 11 (3180) laptop. I saw a web browser, so the procedures were correct and the hardware is at fault. Oh well.

Comparing what’s on screen after starting mir-kiosk on both machines, I saw that the gibberish lines on the X1 actually resemble the mouse arrow, but distorted and scattered across interleaved lines. This supports the hypothesis that video support in the stock Ubuntu Core 18 i386 image needs some tweaks before it can handle the video hardware on board a Latitude X1. The fact some lines showed up tells me it’s close, but I’m choosing not to invest the time to make it work.

The next idea was to test USB serial communications. I plugged in an Arduino Nano knockoff with the CH341 USB-serial chip and ran dmesg to learn the device was picked up, understood, and assigned a ttyUSB device path. This particular Arduino was already flashed with a sketch that sent data through the serial port, and as a crude test I used cat /dev/ttyUSB0 to see if anything would come up: it did! This is wonderful news. The Latitude X1 can act as the high-level processor counterpart to a lower-level controller communicating over USB serial, opening up project possibilities. I’ll have to think on that for a while.

Dell Latitude X1 Now Running Ubuntu Core 18

About two years ago, an old friend was returned to me: a 2005 vintage Dell Latitude X1. It struggled to run desktop software of 2017 but speed wasn’t the point – the impressive part was that it could run software of 2017 at all. It was never a speed demon even when new, as it sacrificed raw performance for its thin and light (for its day) construction. Over the past two years, I would occasionally dust it off just to see it still running, but as software got more complex it has struggled more and more to act as a modern personal computer.

When an up-to-date Ubuntu 16 LTS desktop edition takes over 10 minutes to boot to the login screen, I had to concede it’s well past time to stop trying to run this machine as a desktop computer. I hate to give up on my oddball hobby of keeping an old machine running modern, up-to-date operating systems, but an interesting idea came up to keep things going: how about running a lighter-weight, text-based operating system?

The overburdened desktop Ubuntu was erased to make room for Ubuntu 16.04.6 server. This is a much lighter-weight operating system. As one point of measure, it now takes only about 55 seconds from pressing the power button to a text login prompt. This is much more tolerable than >10 minutes to boot to a graphical login screen.

After I logged in, it gave me a notification that Ubuntu 18 server is available for upgrade. I’ve noticed that my desktop Ubuntu took longer to boot after upgrading from 16 to 18, and I was curious if the text-mode server edition would reflect the same. Since I had no data on this machine anyway, I upgraded to obtain that data point.

The verdict is yes, Ubuntu 18 server takes longer to boot as well. Power button to login prompt now takes 96 seconds, almost double the time for Ubuntu 16 server. Actually more than double, considering some portion of that time was hardware initialization and the GRUB boot selection timeout before the operating system even started loading.

That was disappointing, but there is an even more interesting experiment: What if, instead of treating this old computer as a server, I treat it as an embedded device? After all, its ~1 GHz CPU and ~1 GB RAM is roughly on par with a Raspberry Pi, and its ~30GB HDD is in the ballpark of microSD cards used in a Pi.

This line of thinking led me to the new Ubuntu Core option for bare-bones installations, targeting IoT and other embedded projects. There is an i386 image already available, ready to be installed on the hard drive of this 14-year-old Dell laptop. Since Ubuntu Core targets connected devices, I needed a network adapter for initial setup. The Latitude X1’s WiFi adapter is not supported, but fortunately its wired Ethernet port worked.

Once up and running, I timed its boot time from power switch to login prompt: 35 seconds. Subtracting non-OS overhead, booting Ubuntu 18 Core takes almost half the time of Ubuntu 16 Server, or approaching one quarter of Ubuntu 18 Server. Very nice.

Ubuntu 18 Core makes this old laptop interesting again. Here is a device offering computing power in the ballpark of a Raspberry Pi, plus a display, keyboard, and mouse. (There’s a battery, too, but its degraded capacity barely counts.) It is far too slow to be a general desktop machine, but now it is potentially a viable platform for an embedded device project.

The story of this old workhorse is not yet over…

Arduino Accelerometer Success On Second Try: Mozzi + MMA7660

When we left off earlier, Emily and I determined that getting her MMA7660 I2C accelerometer breakout board to work with Mozzi on an Arduino will require more work than it looked at first glance, and probably not worth the trouble. Emily has since moved on to a different accelerometer module, one which communicates via analog voltage values. But I had a harder time letting go. Some helpful comments on Twitter plus Randy Glenn’s comment on the previous blog post motivated me to take another run with different objectives.

The first change was a reduction in problem scope. I originally thought I’d update the MMA7660 library using fragments from Mozzi’s non-blocking I2C example. This time I went the other way: I modified Mozzi’s non-blocking I2C example by pulling in fragments from the MMA7660 library. This covers only a tiny subset of capability and code. Notably absent is any sort of error handling… not even that wacky use of goto that makes seasoned programmers’ eyes twitch.

The second change was one of attitude. I saw a lot of violations of good coding conventions and wanted to fix them all, especially problems that are horrible for code robustness, such as poor error handling. But talking with others who have played with Arduino longer than I have, I learned that code quality has not exactly been a barrier for people putting Arduino code online. Since I’m starting with a piece of Mozzi example code, all I had to do was make sure I didn’t make that code worse, and I’d be good to go.

Easing those two constraints (a.k.a. lowering expectations) made for a much easier task, one which was ultimately successful! The Mozzi example chirped at one of three different pitches depending on whether the ADXL345 reported an X, Y, or Z value over half of its range. (And stayed silent if none did.) Now it chirps in response to MMA7660 data instead. Here’s the short demonstration video I tweeted. (Be sure to turn on sound.)

There was one technical challenge in capturing that video clip: I didn’t have an audio amplifier on hand, so the tiny speaker was powered directly by the Arduino Nano and thus extremely quiet. To capture the chirps, I plugged a wired earbud headset with microphone into the cell phone I used for filming. Its microphone module was placed next to the quietly chirping speaker, visible in the picture/video. I prefer not to have such items in the field of view, but it’s what I had to do.

It turns out the MMA7660 is not appropriate for what Emily had in mind anyway. This module was designed for detecting portrait/landscape orientation and has very poor resolution: 6 bits, or values from 0 to 63, covering a range of +/- 1.5g. Moving through a 180-degree arc under normal gravity, we see only about 40 of those 64 values, which translates to one tick every ~4.5 degrees. A gently swinging pendulum like what Emily had in mind would only ever see four or five different values through its entire range of motion, not nearly enough to make interesting sounds.
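That one-tick-per-few-degrees figure can be sanity-checked with a few lines of arithmetic. This is my own back-of-envelope calculation; it ignores the sine nonlinearity of tilt sensing, so it gives an average tick size over the arc rather than an exact figure:

```python
# MMA7660 resolution estimate: 6-bit output means 64 counts
# spread across its full +/- 1.5g measurement range.
counts_per_g = 64 / 3.0                # ~21.3 counts per g
counts_over_arc = 2 * counts_per_g     # tilting from -1g to +1g spans ~43 counts
deg_per_count = 180 / counts_over_arc  # average degrees of tilt per count
print(round(deg_per_count, 1))         # prints 4.2
```

Roughly four degrees per tick on average, in the same ballpark as the ~4.5 degrees observed from the ~40 values actually seen.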

But that’s OK; I’m sure it’ll be useful for something else in the future. I’ve made my modified Mozzi example available on Github in case Emily, I, or anyone else decides to put the MMA7660 accelerometer in a Mozzi sketch. And anyone less interested in that specific module but curious about how to adapt from one I2C peripheral to another can look at this commit to see every line I changed.

Aborted Attempt At Arduino Accelerometer: Mozzi + MMA7660

Every tool has its use, and for quick prototype of electronic ideas, it’s hard to beat an Arduino. A huge community of users has generated an equally huge selection of libraries to get any hardware up and running quickly. The downside is that there’s no way to ensure they all work together. Playing with a single library is easy, but getting more than one to play nicely together is a hit-or-miss affair. Today’s story is a miss.

Mozzi is an Arduino library for sound synthesis. It allows Arduino creations to audibly react to external input, and is the brains behind the Rackety Raccoon project. That project’s inputs were potentiometers connected to the Arduino’s analog pins. To follow it up, Rackety Raccoon creator Emily wanted to use an accelerometer chip as input.

The device Emily had on hand was a small breakout board for the MMA7660FC. It communicates via I2C, and there’s an Arduino library available. But when it was added to a Mozzi sketch, verification failed with the following error:

libraries/Wire/utility/twi.c.o (symbol from plugin): In function `twi_init':
(.text+0x0): multiple definition of `__vector_24'
libraries/Mozzi/twi_nonblock.cpp.o (symbol from plugin):(.text+0x0): first defined here
collect2: error: ld returned 1 exit status
exit status 1

This error message is spectacularly unhelpful even though it is technically correct on all counts. There is indeed a collision in defining interrupt vector table entries, but Arduino’s target user demographic would have no idea what that means. At a higher level, the problem is that we have two chunks of code both trying to perform I2C communication: the MMA7660 sample code uses the standard Arduino Wire library, while Mozzi has its own I2C communication library. They both use the same hardware resources, hence the collision. Only one of them may exist in an Arduino sketch.

Why would Mozzi have its own I2C library? The hint comes from the name: “twi_nonblock”. Mozzi is an audio library, and it is critically important for Mozzi to run nonstop. Any interruption would become an audible glitch! This is a problem when receiving I2C data using the Wire library, because it would wait (“block”) for the I2C peripheral (the accelerometer in this case) to respond.

Mozzi can’t wait, hence the twi_nonblock I2C communication library as a replacement for Arduino’s standard Wire library. In order to use MMA7660 with Mozzi on an Arduino, the portions dependent on Wire library must be replaced with counterparts in Mozzi’s twi_nonblock. Mozzi includes sample code to communicate with another I2C accelerometer, the ADXL345.

Examining the sample, we see it is a straightforward line-by-line replacement when sending I2C data. The sample function acc_writeTo() includes comments explaining the Wire equivalent for each line.

// Writes val to address register on device
void acc_writeTo(byte address, byte val) {
  // Wire.beginTransmission(ADXL345_DEVICE); // start transmission to device
  twowire_beginTransmission(ADXL345_DEVICE); // start transmission to device
  // Wire.send(address); // send register address
  twowire_send( address );
  // Wire.send(val); // send value to write
  twowire_send( val );
  // Wire.endTransmission(); // end transmission
  twowire_endTransmission();
}

This serves as an excellent guide for rewriting MMA7660::write() from its current Wire-based form:

void MMA7660::write(uint8_t _register, uint8_t _data) {
  Wire.begin();
  Wire.beginTransmission(MMA7660_ADDR);
  Wire.write(_register);
  Wire.write(_data);
  Wire.endTransmission();
}

In contrast, receiving I2C data is not a trivial line-by-line replacement. The nature of twi_nonblock means a lot more code has to change in order to convert a simple synchronous blocking procedure into an asynchronous non-blocking process. If the MMA7660 module and its associated library were well executed, this would not be technically challenging, just time-consuming. And it might have been a good contribution back to the Arduino community.

MMA7660 demo code says it is bad and uses goto

Sadly it was not a shining example of excellence. A comment saying “it is bad” raised warning flags about weird behavior from the chip. Then a few lines later, we see a goto statement as an ugly hack around the chip’s behavior. This is when we decided to “Nope!” out of there and abort this particular investigation.

UPDATE: A week later I took another look, and the second try was successful.

Raspberry Pi GPIO Library Gracefully Degrades When Not On Pi

Our custom drive board for our vacuum fluorescent display (VFD) is built around a PIC which accepts commands via I2C. We tested this setup using a Raspberry Pi and we plan to continue doing so in the Death Clock project. An easy way to perform I2C communication on Raspberry Pi is the pigpio library which was what our test programs have used so far.

While Emily continues working on hardware work items, I wanted to get started on Death Clock software. That means working on software in the absence of the actual hardware. This isn’t a big problem, because the pigpio library degrades gracefully when not running on a Pi, so it’ll be easy for my program to switch between two modes: one when running on the Pi with the driver board, and one without.

The key is the pigpio library’s ability to communicate with a Pi remotely. The module that actually interfaces with Pi hardware is the pigpio daemon, which can be configured to accept commands from the local network, and those commands may or may not be generated by another Pi. This is why the pigpio client library can be installed and imported on a non-Pi machine like my desktop.

pigpiod fallback

For development purposes, then, my desktop can act as if it wants to communicate with a pigpio daemon over the network and, when it fails to connect, fall back to the mode without a Pi. In practice this means trying to open a Pi’s GPIO pins by calling pigpio.pi() and then checking its connected property. If it is true, a pigpio daemon answered and we have real hardware; if not, we take the fallback path.
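A minimal sketch of that logic looks like the following. The function name and mode strings are my own; the actual Death Clock code on Github differs in detail:

```python
# Graceful fallback when pigpio (or a pigpio daemon) is unavailable.
try:
    import pigpio  # pigpio client library; may not be installed off-Pi
except ImportError:
    pigpio = None

def connect_pi():
    """Return (handle, mode): a live pigpio handle if a daemon answers, else fallback."""
    if pigpio is not None:
        pi = pigpio.pi()   # contacts a pigpio daemon, local or networked
        if pi.connected:   # truthy only when a daemon actually answered
            return pi, "hardware"
    return None, "fallback"

pi, mode = connect_pi()
print("Death Clock running in", mode, "mode")
```

On my desktop with no daemon reachable, this drops into the fallback branch; on a Pi running pigpiod, the same code gets a live handle.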

This is a very straightforward and graceful fallback to make it simple for me to develop Death Clock code on my desktop machine without our actual VFD hardware. I can get the basic logic up and running, though I won’t know how it will actually look. I might be able to rig up something to visualize my results, but basic logic comes first.

The code discussed in this blog post is visible on Github.

Sawppy Roving With Wired Handheld Controller

I now have a basic Arduino sketch for driving Sawppy using a joystick, a handheld controller built from an Arduino Nano and a joystick, and an input jack on Sawppy for plugging that controller in. It’s time to put it all together for a system integration test.

Good news: Sawppy is rolling with the new wired controller! Now if there’s too much electromagnetic interference for Sawppy’s standard WiFi control system, we have a backup wired control option. This was the most important upgrade to get in place before attending Maker Faire Bay Area. As the flagship event, San Mateo will have plenty of wireless technology in close proximity, and I wanted this wired backup as an available option.

This successful test says we’re in good shape electrically and mechanically, at least in terms of working as expected. However, part of “working as expected” also includes very twitchy movement due to the super-sensitive joystick module used: a very small range of physical joystick movement maps to the entire range of electrical values. In practice this means it is very difficult to drive Sawppy gently with this controller.

At the very minimum, it doesn’t look very good for Sawppy to be seen as jittery and twitchy. Sharp motions also place stress on Sawppy’s mechanical structure. I’m not worried about suspension parts breaking, but I am worried about the servos. The steering servos are under a lot of stress, and their couplers may break. And it’s all too easy to command a max-throttle drag-racing start, whose sudden surge of current may blow the fuse.

I had wanted to keep the Arduino sketch simple, which meant directly mapping joystick movement to Sawppy motion. But translating the sensitive joystick’s motion directly into overeager Sawppy motion is not a good way to go. I need to add more code to smooth out movement for the sake of Sawppy’s health.