LRWave Audio Under Multichannel Oscilloscope

When I read through the user’s guide for my new 4-channel oscilloscope, one of the features that jumped out at me was “XY mode”. Normally signals are displayed with their voltage on the vertical axis and time on the horizontal axis. But XY mode allows us to plot one channel on the vertical axis against another channel on the horizontal axis. Aside from its more technical applications, people have used this to display vector art on their oscilloscope. And the simplest vector art is the Lissajous curve, which Emily Velasco introduced me to. We’ve had several projects involving Lissajous curves, including this old CRT projection TV tube.

Motivated by these Lissajous experiments, I created my software project LRWave to give us a basic function generator using our cell phones. Or really anything that has an audio output jack and a web browser. It’s not nearly as good as a real function generator instrument, but I didn’t really know how far away from “good” it is. Now that I have an oscilloscope, I can look closer.

Digging into my pile of discarded electronics, I found a set of stereo headphones. Cutting its cable, I pulled out three wires corresponding to left audio, right audio, and common ground reference.

These wire strands had an insulating coating that had to be removed. Hot solder seemed to work well for melting it off, and conveniently gave me a surface to attach a short segment of wire for oscilloscope probes to hook onto. Now I can see what LRWave output looks like under an oscilloscope.

It’s not very pretty! Each vertical grid on this graph is 20mV according to the legend on the top right. The waveform is far from crisp, smearing across a range of about 50mV. This is very bad when the maximum and minimum levels are only separated by roughly 120mV. The narrow range was because my phone was set at very low audio volume.

Cranking my phone volume up to maximum increased the amplitude to about 1.5V, so the maximum and minimum levels are separated by about 3V. (Each vertical grid is now 500mV.) With this range, the 50mV variation is a lot less critical and we have a usable sine wave. Not as good as a real function generator, but usable. Also, actual performance will vary depending on the audio hardware. Different cell phones/tablets/computers will output audio to varying levels of fidelity.

This is as far as I could have gone with my cheap DSO-138 single-channel oscilloscope, but now that I have more than one channel, I can connect both stereo audio channels to the oscilloscope and activate XY mode to plot them against each other and get some nice Lissajous curves on my oscilloscope screen.
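
For intuition, here’s a quick sketch (in JavaScript, since that’s the language LRWave itself lives in) of what XY mode is actually plotting: each point of a Lissajous figure is just the two channel voltages sampled at the same instant. The function and parameter names are my own, for illustration only.

```javascript
// One point of a Lissajous figure: X and Y are two sine waves sampled
// at the same instant t (seconds), with frequencies fx and fy in Hz.
function lissajousPoint(t, fx, fy, phase = Math.PI / 2, amp = 1) {
  return {
    x: amp * Math.sin(2 * Math.PI * fx * t + phase), // e.g. left channel
    y: amp * Math.sin(2 * Math.PI * fy * t),         // e.g. right channel
  };
}

// Equal frequencies with a 90 degree phase offset trace a circle;
// ratios like 3:2 weave the classic Lissajous patterns.
```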

Yeah, that’s what I’m talking about! I expect this line would be finer (thinner) if I used a real wave generation instrument instead of the headphone output jack of a cell phone, but it’s more than enough for a fun graph. Onwards to my next multichannel oscilloscope experiment.

Royal Purple Lissajous CRT

I brought my laser Lissajous curve machine (that degraded into a not-Lissajous curve machine) to our SGVTech meet, but it was not the only one there. [Emily] brought her salvaged CRT that she intends to turn into a Lissajous machine as well. Since I had brought my amplifier with me for my machine, it was easy to disconnect my contraption and connect to hers instead. Now, the stereo amp will be driving the CRT’s horizontal and vertical deflection coils instead of speakers. Unlike the last time we hooked this amp up to a CRT in this manner, we now have LRWave to control left and right audio channels independently.

Violet Lissajous back

This CRT was originally a black and white unit. To add some visual interest, [Emily] has coated it with Krylon Royal Purple Stained Glass Paint. I think the change is subtle but effective at communicating this is something special.

Another change for this experiment: we’ve switched platforms from Android to running LRWave on an old MacBook Pro. We’ve had recurring problems with random pops and crackles in the output waveform, and since the problem is shared with the native signal generation app across three different Android devices, I suspect the root cause is somewhere within the Android OS. Hence the switch to macOS as a change of pace, to see if LRWave exhibits the same cracks and pops there. (Or, in the case of a CRT, a sporadic scrambling of the curve seen on-screen.) The switch was a good move: we had no unexpected noises for the rest of the night.

Violet Lissajous front

We got some very pretty curves, far better than what I got out of my laser LED and speaker apparatus. And unlike the speakers, a CRT deflection coil does not degrade as we send different wave forms through it. We had smooth curves throughout the entire test session; the display never degraded into squiggly abstract modern art.

Violet Lissajous closeup

Observations from the night:

  • LRWave can generate a more consistent signal running on MacOS than on Android.
  • Laser + mirrors + speakers are indeed accessible at lower cost and lower voltage, but speakers suffer damage when forced to reproduce arbitrary wave forms. Further evolution of my idea would require finding a different actuator to replace speakers.
  • CRTs can produce beautiful Lissajous curves, far smoother than any pixel-based flat panel display can. Furthermore, their deflection coils seem to suffer no damage from arbitrary wave forms pushed through them.
  • When a salvaged raster CRT like this unit is run at low speeds, there is a visible gap in the line, which [Emily] attributes to the beam occasionally cutting out for the vertical blanking interval. The effect can be mitigated by running at high speeds, or used as an intentional visual effect at low speeds.

[Emily] plans to spray a matte coating to reduce distracting reflections. I look forward to future progress on this project.

Laser Lissajous at SGVTech

The main reason I wanted a less delicate and more portable form for my laser Lissajous project is that I wanted to bring it to show-and-tell at the SGVTech meetup. I got it printed and assembled half an hour before the meet. I verified it could make a basic Lissajous curve and then it was time to go.

It was an interesting piece of novelty for show-and-tell. The laser was difficult to see against the ceiling of the space, so I ended up tilting the contraption on its side so the laser was projected sideways onto a taped-up sheet of white paper serving as a screen.

I started the night with sine waves that produced decent Lissajous curves. Then we started playing around. LRWave allowed us to feed different wave forms in – triangular, sawtooth, and square waves. Each produced their own wild patterns, but after a while we noticed some changes in the system. Reverting to sine waves no longer brought back soothing sounds and smooth curves. Looking over the apparatus, we found that a speaker had rattled itself loose. Tightening fasteners back down did get us some of the curvature back, but could not eliminate all the extraneous vibrations affecting the curve. Something else was degrading: either the hot glue holding the mirrors to the speakers, or the coils inside the speakers.

By the end of the night, between all the potential variables of speaker degradation, glue adhesion failure, mirror flex, and extraneous vibration in the system, the laser curves stopped being Lissajous curves and started becoming wild abstract modern art. It had come full circle: I started this with a 3D-printed stand that was a tribute to Frank Gehry. By the end of the night, the laser projection started resembling Frank Gehry sketches.

Decayed Lissajous laser curve

3D-Printed Laser Lissajous Apparatus

Copper wires and helping hands are fine for my laser Lissajous rough draft, but it’s fragile and nearly impossible to take elsewhere. It’s time to design and 3D-print a more rigid and portable version of my cheap & cheery laser light show.

The fundamental task is not difficult: note the final arrangement of components in my rough draft, put them into Onshape, and create a housing for those parts with the help of Onshape in-context modeling. However, the geometry requirements of mirror and laser placement resulted in quite an awkward layout of components involved.

I looked at component layout in 3D space inside Onshape and spent a few hours trying to find a good way to package them all together. After a few fruitless hours trying to create something that I find pleasing to my existing sense of aesthetics, I decided to take this opportunity and go in a different direction instead: I’m going to channel my inner Frank Gehry for packaging these components.

Lissajous bracket 2 wireframe

This is my take on a wild asymmetric shape that still serves all the requirements of the project, much as Gehry architecture has many features that seem wild at first glance. Their jarring exteriors mask the fact that they still create the interior volume and structural support necessary for the building. For me, this was a fun way to experiment with contour and curvature tools that are available in Onshape but difficult to represent in OpenSCAD.

Lissajous bracket 2

When assembled, my apparatus can create fairly decent Lissajous curves. Not as nice as those [Emily] and I created on a CRT, but a lot more easily reproducible by anyone with access to a laser pointer, a pair of speakers, and some mirrors.

Helping Hands For Laser Lissajous Rough Draft

With small plastic mirrors now installed on salvaged laptop speakers, it’s time to put them to work. With their far smaller size, I could place the mirrors much closer together. This is important because if they were too far apart, the beam deflected off the first mirror would sweep an arc wider than the second mirror. This was a problem with the initial proof-of-concept rig, especially when I explored maximum deflection, which ended up burning up a speaker. Now, these compact speakers allow the mirrors to be only about 15mm apart.
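
A rough back-of-envelope of why the spacing matters (the ±5 degree deflection figure below is a made-up illustration, not a measurement): the width of the arc the deflected beam sweeps grows linearly with the distance to the second mirror.

```javascript
// Width of the arc swept at the second mirror by a beam deflected
// +/- maxDeflectionDeg at the first mirror, distanceMm away.
function sweepWidthMm(distanceMm, maxDeflectionDeg) {
  const theta = (maxDeflectionDeg * Math.PI) / 180;
  return 2 * distanceMm * Math.tan(theta);
}

// At 15mm spacing and a hypothetical +/-5 degree deflection, the sweep is
// about 2.6mm, easily within a small mirror. At 100mm it grows to ~17.5mm.
```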

These speakers were again powered by the same thrift-store stereo amp, but keeping in mind the lessons from the last round, I’m more careful about the amp’s output volume. The next challenge was to find ways to hold all the components involved as I experiment with direction, angle, and orientation. Since I only had two hands and there were three components (two speakers and a laser) I needed to summon help.

The laser was taped to a stiff strand of copper wire which I could bend at will to aim the laser. For the speakers, I had two sets of helping hands which typically help me solder electronics together, but today they served as mirror holders. Their alligator clips held nice and strong to the mounting brackets formerly used to secure these speakers inside a laptop chassis. Together they helped me determine an arrangement that would produce the results I sought.

Dell Inspiron speaker Lissajous first draft with helping hands

Laser+Speaker Lissajous Proof of Concept

With LRWave 1.0 complete, I could focus on the mechanical bits of a Lissajous machine driven by that web app. The goal is to build a more accessible Lissajous machine that does not have the risk presented by high voltages involved in driving a CRT. It will not look as good as a CRT, but that’s the tradeoff:

  • High voltage electron beam in a CRT replaced by far lower voltage LED laser diode.
  • CRT deflection yokes replaced by audio speakers.

The proof of concept rig is driven by the same thrift store amplifier used in the successful CRT Lissajous curve demo. This time it will be driving speakers, which is what it was designed for, instead of CRT deflection yokes. The speakers came from the same source as that CRT: a Sony KP-53S35 rear projection television we took apart for parts so we could embark on projects like this.

Hypothesis: if we attach a mirror to a speaker, then point a laser beam at that mirror, the reflected beam will be displaced by the movement of that speaker. By using two speakers and adjusting the beam path through them, we can steer a laser beam along two orthogonal axes, X and Y, via a stereo audio waveform generated by LRWave.

For the initial test, mirrors were taped directly onto the speaker cones and arranged so the laser beam was projected onto the ceiling. This produced a satisfactory Lissajous curve. Then the mirror configuration was changed to test another hypothesis: instead of attaching directly to the speaker cone, tape the mirror so it sits between the fixed speaker frame and the moving speaker cone. This was expected to provide greater beam deflection, which it did in the pictured test rig. However, the resulting Lissajous curves were distorted due to flex in the plastic mirrors and the not-very-secure tape.

RPTV Speakers and masking tape

Experimenting with maximum deflection range, I pushed the speakers too far and burned one up. For a brief few seconds the laser beams were visible, reflected by the smoke of an overloaded speaker coil.

  1. I could see the laser beams, cool!
  2. Um… why am I able to see the laser beams?
  3. [sniff sniff]
  4. Oh no, the magic smoke is escaping!

The Lissajous curve collapsed into a flat line as one deflection axis stopped deflecting, and that ended experimentation for the day.

LRWave 1.0 Complete

The (mostly cosmetic) fine tuning has been done and the LRWave web app is in a good enough state to declare version 1.0 complete. Now I can move onward to the hardware components of my laser Lissajous project.


Some tidbits worthy of note are:

  • MDC Web theme colors can be applied via CSS styles as per the documentation. What’s (currently) missing from the documentation is the requirement to also add @import "@material/theme/mdc-theme"; to my style sheet. In the absence of that import directive, the theme CSS styles have no effect, causing me to bang my forehead against a non-responsive brick wall. Now the following visual elements finally work as designed:
    • App has a dark background, because I’m building this for a laser Lissajous curve project and I expect to use this app mostly in the dark.
    • The column titles “Left” and “Right” now have theme colors.
    • The waveform drop-down box options are now visible and not white-on-white. (Change is not visible in screenshot.)
  • The Web Audio API implementation in Chrome would cause audible pops when pausing or resuming playback. This is true even if I write code to fade the volume (fade out to zero upon pause, fade in from zero upon resume). Since the fade code added complexity to my app while failing to eliminate the audible pops, I did not commit it.
  • The browser tab favicon now works… I am not sure why but next item might be related:
  • I’ve added an instance of the icon to my app’s background. Now there’s a little app icon at top center of the app background. Perhaps this change helped kick the browser favicon code into action?
  • Responsive layout is much improved – the app was always fine in portrait mode in a compact phone resolution, but now it no longer looks embarrassing on larger displays.
  • Added support for Safari browser. (Safari uses webkitAudioContext while other browsers use AudioContext.)
  • Wrote up a for the project.
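
The Safari support mentioned above amounts to a small compatibility shim. Something along these lines (a sketch of the common pattern, not LRWave’s exact code):

```javascript
// Safari exposed the prefixed webkitAudioContext at the time, while other
// browsers used the standard AudioContext name. Pick whichever exists.
function resolveAudioContext(globalLike) {
  return globalLike.AudioContext || globalLike.webkitAudioContext || null;
}

// In a browser: const audioCtx = new (resolveAudioContext(window))();
```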

The whole project was done via Cloud 9 IDE. My AWS cost for this experiment was a grand total of $0.43.

LRWave EC2 Cost

Github project:

Published live version:

LRWave Core Functions Complete

After a few hours of JavaScript coding, all the core functionality I set out to implement is running. At least, at a basic level. (This link will go to the live version of LRWave, which should continue to improve as I learn.)

  • Two-channel (left and right) function generator
  • Frequency adjustment
    • User can type in anything from 0 to 24 kHz
    • Buttons to adjust frequency up/down in 1Hz steps.
  • Waveform selection available in Web Audio API
    • Sine
    • Square
    • Sawtooth
    • Triangle
  • Gain adjustment
    • User can type in anything from 0% to 100%
  • Play/pause button.
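
The core of such a two-channel generator maps neatly onto Web Audio API nodes: one OscillatorNode and one GainNode per channel, merged into the left and right outputs. A minimal sketch of the idea (my own function names and structure, not LRWave’s actual source):

```javascript
// One oscillator + gain per channel, merged into stereo left/right.
function createChannel(ctx, merger, outputIndex, { freq, type, gain }) {
  const osc = ctx.createOscillator();
  osc.type = type;               // "sine" | "square" | "sawtooth" | "triangle"
  osc.frequency.value = freq;    // Hz
  const gainNode = ctx.createGain();
  gainNode.gain.value = gain;    // 0.0 to 1.0
  osc.connect(gainNode);
  gainNode.connect(merger, 0, outputIndex); // merger input 0 = left, 1 = right
  return { osc, gainNode };
}

function createStereoGenerator(ctx, leftSettings, rightSettings) {
  const merger = ctx.createChannelMerger(2);
  merger.connect(ctx.destination);
  const left = createChannel(ctx, merger, 0, leftSettings);
  const right = createChannel(ctx, merger, 1, rightSettings);
  return {
    start() { left.osc.start(); right.osc.start(); },
    stop() { left.osc.stop(); right.osc.stop(); },
  };
}

// In a browser (after a user gesture):
// const gen = createStereoGenerator(new AudioContext(),
//   { freq: 440, type: "sine", gain: 0.5 },
//   { freq: 660, type: "sine", gain: 0.5 });
// gen.start();
```

Writing it to accept any context-like object rather than constructing its own AudioContext also makes the node wiring easy to exercise outside a browser.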

LRWave on Nexus 5

Since I hate web pages that start playing some sound upon load without user interaction, I don’t start playing sound until the user has pushed the play button. It turns out I had accidentally followed the guidance from Google, which will not allow pages using the Web Audio API to play sound before user input. I don’t always agree with Google, but I’m glad we’re aligned here.

The JavaScript portion of this project has been relatively easy. I found the Web Audio API straightforward to use in my simple little function generation app. There are some refinements to be made: there are audible pops as settings are updated. Right now I perform those updates immediately, but they should ramp from the old value to the new in order to avoid abrupt changes. Everything else dealing with the user interface is standard event handler code.
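
The ramp I have in mind would use the Web Audio API’s parameter automation instead of assigning param.value directly. A sketch, with a guessed 30ms ramp length (the actual “short enough to feel instant” value would take experimentation):

```javascript
const RAMP_SECONDS = 0.03; // assumed ramp length, ~30ms

// Glide an AudioParam (e.g. gainNode.gain or osc.frequency) from its
// current value to a new target instead of jumping abruptly.
function rampTo(param, now, target) {
  param.setValueAtTime(param.value, now); // pin the starting point
  param.linearRampToValueAtTime(target, now + RAMP_SECONDS);
}

// In a browser: rampTo(gainNode.gain, audioCtx.currentTime, 0.5);
```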

What I found challenging is the aesthetics side of things. Proper HTML layout is still a struggle for me, but I know projects like this will eventually lead to mastery (or at least competence) with responsive web layout. In the meantime, my app looks fine on a phone in portrait orientation, but things start to get weird quickly as the browser window size grows.

Individual components in MDC Web have worked well so far, with the exception of the slider control. I tried to use it for gain but it didn’t behave as I expected, so after half an hour of scratching my head, I set it aside for later. Another difficulty was MDC theming: I wanted the background to be black, and I haven’t yet found my mistake in trying to do that within the MDC theming module.

On the web browser interaction side, I designed an icon and added a reference in the header. Google Chrome recognizes it enough to use the icon on the recently opened sites window, but it doesn’t use the icon on the browser tab and I don’t understand why.

Lots of learning so far, but more learning plus refinement ahead…

Building A MDC Web App With Only The Parts I Need

One criticism of using Materialize CSS is that we were pulling down the entire library and all of its resources for our project, whether we were using them or not. Aside from the obvious inefficient use of bandwidth, it also presented a challenge when we wanted SGVHAK Rover to be usable independently without an internet connection. This meant we had to make a copy of the entire library in our project to serve locally.

To avoid this problem, MDC Web (Material Design Components for Web) is designed so projects can pull in features piecemeal. Now each web app only has to download the parts it needs. The reason this isn’t typically done is another inefficiency: there is overhead per HTTP transaction, so we can’t have MDC Web components come down in tiny individual pieces – the overhead would easily wipe out any space savings.

To avoid that problem, MDC Web uses webpack to bundle all the individual pieces into a consolidated file that only requires a single HTTP download. So instead of an HTML file demanding tens to hundreds of individual CSS and JavaScript files, we have a single HTML file that loads a single bundled CSS file and a single bundled JavaScript file. These bundles are standalone pieces that can be served by any web server, independent of MDC Web’s webpack infrastructure. Maybe even copied to a rover.
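
For reference, the bundling setup from the Getting Started guide boils down to a webpack configuration along these lines (paraphrased from memory; the entry and output file names here are illustrative, not the guide’s exact text):

```javascript
// webpack.config.js -- bundle JavaScript plus MDC Web's Sass into single files.
module.exports = {
  entry: "./app.js",
  output: { filename: "bundle.js" },
  module: {
    rules: [
      {
        test: /\.scss$/,
        use: [
          { loader: "file-loader", options: { name: "bundle.css" } },
          { loader: "extract-loader" }, // pull CSS out of the JS module graph
          { loader: "css-loader" },
          { loader: "sass-loader" },    // compile MDC Web's .scss sources
        ],
      },
    ],
  },
};
```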

I was looking forward to this functionality and was happy to see it was covered in Section 5 of the Getting Started guide. After putting in the configuration required, it was a matter of typing npm run build to generate my compact representation. I copied the generated bundle.css and bundle.js files and my index.html to a separate directory for the next test.

When I served up my Getting Started project via the development server npm start, Chrome developer console showed that the simple button required downloading 462KB of support files. That’s roughly the same order of magnitude as what it would take to download Materialize and all supporting information, and I was eager to see improvement.

I then went to the separate directory with the built bundles. To ensure that I’m indeed free from npm and webpack, I served these bundle files using an entirely different infrastructure: Python 3’s development web server. The environment variables $PORT and $IP were already set for a Cloud 9 environment and were reused:

python3 -m http.server $PORT --bind $IP

In the browser developer console, I could see that these files – and only these files – were downloaded. They added up to 32KB which was larger than I had hoped for in such a simple example, but maybe there’s room for further optimization. In any case, that’s definitely a tremendous improvement over 462KB.

Using Cloud 9 To Explore Web Development

I’ve started going through the Getting Started guide for Google’s Material Design Components for Web (MDC Web). This is a prerequisite for Material tutorials and I’ve already run into one problem traced to an out-of-date installation of Node.js. Given that I’m learning a new piece of software, I’m sure I’ll run into more problems that require modifying my computer. As mistakes are likely due to my learning status, I’m wary of potentially messing up my main Ubuntu environment.

What I need right now is an enclosed sandbox for experimentation, and I’ve already set up virtual machines I could use for this purpose. But since this is a web-based project, I decided to go all-in on the web world and try doing it online with a web-based development environment: Cloud 9.

I’ve played with Cloud 9 before, back when it was a startup with big dreams. It has since been acquired by Amazon and folded into the wide portfolio of Amazon Web Services (AWS). As far as I knew, Cloud 9 has always run on AWS behind the scenes, but now a user is exposed to the underlying mechanisms. It means we now have better control over the virtual machines we’re running, which is mostly good. It also means users have to pay for those virtual machine resources, which is fairly inexpensive (I expect to pay less than $1 for this experiment) but isn’t as good as the “free” it used to be. The saddest part is that it’s no longer “point-click-go” simple to get started. Getting Cloud 9 properly set up means climbing the learning curve for managing AWS security and permissions, which can be substantial.

Access Permissions

AWS has an entire document focused on authorization and access control for Cloud 9. For someone like myself, who just wants to play with Cloud 9 but also wants to safely partition it off from the rest of my AWS account, the easiest thing to do is to create a new user account within my AWS dashboard. This account can be assigned a predefined access policy called AWSCloud9User, and that’ll be enough to get started. When logged in to this dedicated account, I can be confident mistakes won’t accidentally damage anything else I have in AWS.

Network Permissions

With the power of fine-grained virtual machine control comes the responsibility of configuring it to act the way we want. When I last used Cloud 9, launching a piece of web hosting software on my VM meant it was immediately accessible from the internet. That meant I could bring it up on another browser window at my desk to see how it looks. However, this is no longer the default behavior.

When running my VM in the form of an Amazon EC2 instance like now, it has its own network firewall settings to deal with. Not only that, the VM is in its own private network (Amazon VPC) which has its own network firewall settings. Both of these firewalls must be configured to allow external access if I’m to host web content (as when exploring MDC Web) and wish to see it on my own machine.

There’s a lot of documentation online for using Cloud 9. The specific configuration settings that need to be changed are found under “Previewing Running Applications”, in the section “Share a Running Application over the Internet”.

First Step In Material Design Adventure Foiled By Ubuntu’s Default Old NodeJS

With the decision to tackle a new web-based software project, the next decision is what web-based UI framework to build the app in. My last web-based software project was to build a UI for the SGVHAK rover at the beginning of the year. In the fast-paced world of front-end web development, that’s ancient history.

The rover UI project used the Materialize CSS library to create an interface that follows Google’s Material Design guidelines. At the time, Google offered web developers a library called “Material Design Lite”, but with the caveat that they were revamping the entire web development experience. There was little point in climbing the learning curve for a deprecated library like MDL. As Materialize CSS was close in usage to Bootstrap, a known quantity, it was the appropriate choice for the situation.

Now, at the end of the year, we have Google’s promised revamp in the form of Material Design Components for Web. I am still a fan of Material so my signal generator utility project LRWave will be my learning project to use Google’s new library.

Diving right into the Getting Started guide during a local coding meetup, I got as far as the end of “Step 1”, executing npm start, when I ran into my first error:

ERROR in ./app.scss
    Module build failed: SyntaxError: Unexpected token {
        at exports.runInThisContext (vm.js:53:16)
        at Module._compile (module.js:374:25)
        at Object.Module._extensions..js (module.js:417:10)
        at Module.load (module.js:344:32)
        at Function.Module._load (module.js:301:12)
        at Module.require (module.js:354:17)
        at require (internal/module.js:12:17)


webpack error

Since app.scss was only three lines for Step 1, it was trivial to verify there was no typo to account for an unexpected “{”. A web search for this error message implicated an out-of-date installation of NodeJS, which triggered a memory from when I first started experimenting with NodeJS on my installation of Ubuntu 16.04. When trying to run node at the command line, a fresh install of Ubuntu 16.04 would tell the user:

The program 'node' is currently not installed. You can install it by typing:
sudo apt install nodejs-legacy

That suffix “legacy” is a pretty good hint that this thing is old. Running node --version returned v4.2.6. Checking the Node.js website, today’s LTS is v10.14.2 and the latest is 11.4.0. So yes, 4.2.6 is super old! Fortunately there’s a pointer to a more up-to-date set of binaries maintained by NodeSource that plays well with Ubuntu’s built-in package management system. Following those directions automatically uninstalled the legacy binaries and replaced them with up-to-date versions.

Once I’m running on non-ancient binaries, I could continue following MDC Web’s Getting Started guide on my computer. But this experience motivated me to look into a different option…

New Project: LRWave

And now we switch gears to address something that came up earlier. During my successful collaboration session with [Emily] to draw Lissajous curves on an old CRT, one of the deflection coils was driven by a stereo amplifier based on signals from an app running on my phone. What I had wanted to try was to drive both deflection axes from the amplifier using different signals sent to the left and right stereo channels. Unfortunately, the simple tone generator app I had used doesn’t allow independent channel control. Well, somebody should write an app to do this, and the best way to make sure the app does what I want is to write it myself. Thus formed a convenient excuse for me to dive back into the software world for my next project.

In this day and age, the easiest way to distribute a piece of code is to make it a web app of some flavor. The basic way is to create a sound file (MP3 or similar) on the server with the desired wave form and download it to the browser for playback. But this is pretty clunky. It would be great if there were a way to generate and manipulate audio wave forms using JavaScript running in the browser, no server-side code required.

I started with the HTML5 <audio> tag, which would allow implementing the “basic way” above of playing a sound clip generated on the server, but not the preferred path. For the more advanced approach to work, there would need to be a browser-side API that is supported by modern browsers. Which implies a W3C standard of some sort. I started dreaming up technical terms that might let me find such an API, and after a few searches with fancy words like “frequency generation API” I found the much more simply named Web Audio API. Double-checking on the “Can I Use” reference site, we can see it’s widely implemented enough to be interesting.
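
As a first taste of the API, generating a tone takes only a few lines. A sketch (the Web Audio API calls are standard; the function wrapper around them is my own):

```javascript
// Start a continuous tone: create an oscillator, set its frequency,
// wire it to the audio output, and start it running.
function startTone(ctx, frequencyHz = 440) {
  const osc = ctx.createOscillator();
  osc.type = "sine";
  osc.frequency.value = frequencyHz;
  osc.connect(ctx.destination);
  osc.start();
  return osc; // keep a reference so it can be stopped later with osc.stop()
}

// In a browser (after a user gesture): startTone(new AudioContext());
```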

Since it’s a simple utility, it has a simple utilitarian name of “LRWave.” Just something to generate different wave forms to be sent out to left and right channels. Following the simplicity of the app, its logo is two instances of the Google Material speaker icon facing left and right. In between them are two different waves from a public domain SVG file signifying the intent that it sends different waves to each speaker.