LRWave Audio Under Multichannel Oscilloscope

When I read through the user’s guide for my new 4-channel oscilloscope, one of the features that jumped out at me was “XY mode”. Normally signals are displayed with their voltage on the vertical axis and time on the horizontal axis. But XY mode allows us to plot one channel on the vertical axis against another channel on the horizontal axis. Aside from its more technical applications, people have used this to display vector art on their oscilloscope. And some of the simplest vector art pieces are Lissajous curves, which Emily Velasco introduced me to. We’ve had several Lissajous curve projects, including one with an old CRT projection TV tube.

Motivated by these Lissajous experiments, I created my software project LRWave to give us a basic function generator using our cell phones. Or really anything that has an audio output jack and a web browser. It’s not nearly as good as a real function generator instrument, but I didn’t really know how far away from “good” it was. Now that I have an oscilloscope, I can look closer.

Digging into my pile of discarded electronics, I found a set of stereo headphones. Cutting its cable, I pulled out three wires corresponding to left audio, right audio, and common ground reference.

These wire strands had an insulating coating that had to be removed. Hot solder seemed to work well at melting it off, and it conveniently gave me a surface to attach a short segment of wire for oscilloscope probes to hook onto. Now I can see what LRWave output looks like under an oscilloscope.

It’s not very pretty! Each vertical grid on this graph is 20mV according to the legend on the top right. The waveform is far from crisp, smearing across a range of about 50mV. This is very bad when the maximum and minimum levels are only separated by roughly 120mV. The narrow range was because my phone was set at very low audio volume.

Cranking my phone volume up to maximum increased the amplitude to about 1.5V, so the maximum and minimum levels are separated by about 3V. (Each vertical grid is now 500mV.) With this range, the 50mV variation is a lot less critical and we have a usable sine wave. Not as good as a real function generator, but usable. Also, actual performance will vary depending on the audio hardware. Different cell phones/tablets/computers will output audio to varying levels of fidelity.

This is as far as I could have gone with my cheap DSO-138 single-channel oscilloscope. But now that I have more than one channel, I can connect both stereo audio channels to the oscilloscope and activate XY mode to plot them against each other, putting some nice Lissajous curves on my oscilloscope screen.

Yeah, that’s what I’m talking about! I expect this line would be finer (thinner) if I used a real wave generation instrument instead of the headphone output jack of a cell phone, but it’s more than enough for a fun graph. Onwards to my next multichannel oscilloscope experiment.

LRWave 1.0 Complete

The (mostly cosmetic) fine tuning has been done and the LRWave web app is in a good enough state to declare version 1.0 complete. Now I can move onward to the hardware components of my laser Lissajous project.

Some tidbits worthy of note are:

  • MDC Web theme colors can be applied via CSS styles as per documentation. What’s (currently) missing from the documentation is the requirement that I also add @import "@material/theme/mdc-theme"; to my style sheet. In the absence of that import directive, the theme CSS styles have no effect, causing me to bang my forehead against a non-responsive brick wall. Now the following visual elements finally work as designed:
    • App has a dark background, because I’m building this for a laser Lissajous curve project and I expect to use this app mostly in the dark.
    • The column titles “Left” and “Right” now have theme colors.
    • The waveform drop-down box options are now visible and not white-on-white. (Change is not visible in screenshot.)
  • Web Audio API implementation in Chrome would cause audible pops when pausing or resuming playback. This was true even when I wrote code to fade volume (fade out to zero upon pause, fade in from zero upon resume). Since this fade code added complexity to my app while failing to eliminate the audible pops, I did not commit it. A sketch of that approach follows this list.
  • The browser tab favicon now works… I am not sure why, but the next item might be related:
  • I’ve added an instance of the icon to my app’s background. Now there’s a little app icon at the top center of the app background. Perhaps this change helped kick the browser favicon code into action?
  • Responsive layout is much improved – the app was always fine in portrait mode in a compact phone resolution, but now it no longer looks embarrassing on larger displays.
  • Added support for Safari browser. (Safari uses webkitAudioContext while other browsers use AudioContext.)
  • Wrote up a README.md for the project.
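
For the record, here is roughly what that fade approach looked like, along with the AudioContext fallback from the Safari item above. This is a simplified sketch with made-up names, not the actual (uncommitted) code:

// Safari exposes webkitAudioContext instead of AudioContext.
const AudioCtx = window.AudioContext || window.webkitAudioContext;
const audioCtx = new AudioCtx();

// A master gain node between the oscillators and the speakers.
const masterGain = audioCtx.createGain();
masterGain.connect(audioCtx.destination);

const FADE_SECONDS = 0.05;

function pauseWithFade() {
  const now = audioCtx.currentTime;
  masterGain.gain.setValueAtTime(masterGain.gain.value, now);
  masterGain.gain.linearRampToValueAtTime(0, now + FADE_SECONDS);
  // Only suspend the context after the ramp has finished.
  setTimeout(() => audioCtx.suspend(), FADE_SECONDS * 1000);
}

function resumeWithFade(targetGain) {
  audioCtx.resume().then(() => {
    const now = audioCtx.currentTime;
    masterGain.gain.setValueAtTime(0, now);
    masterGain.gain.linearRampToValueAtTime(targetGain, now + FADE_SECONDS);
  });
}

Even with ramps like these, Chrome still produced audible pops for me, which is why none of it made it into the app.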

The whole project was done via Cloud 9 IDE. My AWS cost for this experiment was a grand total of $0.43.

GitHub project: https://github.com/Roger-random/lrwave

Published live version: https://roger-random.github.io/lrwave/

LRWave Core Functions Complete

After a few hours of JavaScript coding, all the core functionality I set out to implement is running, at least at a basic level. (This link will go to the live version of LRWave, which should continue to improve as I learn.) A rough sketch of the Web Audio plumbing behind these features follows the list below.

  • Two-channel (left and right) function generator
  • Frequency adjustment
    • User can type in anything from 0 to 24 kHz
    • Buttons to adjust frequency up/down in 1Hz steps.
  • Waveform selection available in Web Audio API
    • Sine
    • Square
    • Sawtooth
    • Triangle
  • Gain adjustment
    • User can type in anything from 0% to 100%
  • Play/pause button.
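
Under the hood, the Web Audio graph needed for that feature list boils down to one oscillator and gain node per channel, merged into a stereo output. Here is a rough sketch of the idea with illustrative names, not the actual LRWave source:

const audioCtx = new AudioContext();

// Input 0 of the merger is the left channel, input 1 is the right channel.
const merger = audioCtx.createChannelMerger(2);
merger.connect(audioCtx.destination);

function createChannel(channelIndex, frequencyHz, gainPercent, waveform) {
  const osc = audioCtx.createOscillator();
  osc.type = waveform;                 // 'sine', 'square', 'sawtooth', or 'triangle'
  osc.frequency.value = frequencyHz;   // anywhere from 0 to 24000
  const gain = audioCtx.createGain();
  gain.gain.value = gainPercent / 100; // 0% to 100%
  osc.connect(gain).connect(merger, 0, channelIndex);
  osc.start();
  return { osc, gain };
}

const left = createChannel(0, 440, 100, 'sine');
const right = createChannel(1, 443, 100, 'sine');

Frequency, waveform, and gain adjustments then become writes to osc.frequency, osc.type, and gain.gain on the appropriate channel.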

(LRWave running on a Nexus 5)

Since I hate web pages that start playing some sound upon load without user interaction, I don’t start playing sound until the user has pushed the play button. It turns out I had accidentally followed the guidance from Google, which will not allow pages using Web Audio API to play sound before user input. I don’t always agree with Google, but I’m glad we’re aligned here.
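
In practice this just means the audio graph only starts (and a suspended AudioContext is resumed) from inside the play button’s click handler. Something along these lines, where playButton and startAudioGraph are stand-in names for illustration:

playButton.addEventListener('click', () => {
  // An AudioContext created before any user gesture starts out 'suspended'
  // under Chrome's autoplay policy, so resume it from the gesture handler.
  if (audioCtx.state === 'suspended') {
    audioCtx.resume();
  }
  startAudioGraph(); // hypothetical helper that builds and starts the oscillators
});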

The JavaScript portion of this project has been relatively easy. I found Web Audio API to be straightforward to use in my simple little function generation app. There are some refinements to be made: there are audible pops as settings are updated. Right now I perform those updates immediately, but they should ramp from the old value to the new in order to avoid abrupt changes. Everything else dealing with the user interface is standard event handler code.
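
The fix will probably look something like the sketch below: pin the current value, then ramp to the new one over a couple dozen milliseconds instead of assigning it directly. (This isn’t in the app yet; the names are illustrative.)

function applySettings(osc, gainNode, newFrequency, newGain) {
  const now = audioCtx.currentTime;

  // Instead of the abrupt gainNode.gain.value = newGain ...
  gainNode.gain.setValueAtTime(gainNode.gain.value, now);
  gainNode.gain.linearRampToValueAtTime(newGain, now + 0.02);

  // ... and the same idea for frequency changes.
  osc.frequency.setValueAtTime(osc.frequency.value, now);
  osc.frequency.linearRampToValueAtTime(newFrequency, now + 0.02);
}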

What I found challenging is the aesthetics side of things. Proper HTML layout is still a struggle for me, but I know projects like this will eventually lead to mastery (or at least competence) with responsive web layout. In the meantime, my app looks fine on a phone in portrait orientation but things start to get weird quickly as browser window size grows.

Individual components in MDC Web have worked well so far, with the exception of the slider control. I tried to use it for gain but it didn’t behave as I expected so, after half an hour of scratching my head, I set it aside for later. Another difficulty was MDC theming: I wanted the background to be black, and I haven’t yet found my mistake when trying to do that within the MDC theming module.

On the web browser interaction side, I designed an icon and added a reference in the header. Google Chrome recognizes it enough to use the icon on the recently opened sites window, but it doesn’t use the icon on the browser tab and I don’t understand why.

Lots of learning so far, but more learning plus refinement ahead…

Building An MDC Web App With Only The Parts I Need

One criticism of using Materialize CSS is that we were pulling down the entire library and all of its resources for our project, whether we were using them or not. Aside from the obvious inefficient use of bandwidth, it also presented a challenge when we wanted SGVHAK Rover to be usable independently without an internet connection. This meant we had to make a copy of the entire library in our project to serve locally.

To avoid this problem, MDC Web (Material Design Components for Web) is designed so projects can pull in features piecemeal. Now each web app only has to download the parts it needs. The reason this isn’t typically done is another inefficiency: there is overhead per HTTP transaction, so we can’t have MDC Web components come down in tiny little individual pieces – the overhead would easily wipe out any space savings.

To avoid that problem, MDC Web uses webpack to bundle all the individual pieces into a consolidated file that only requires a single HTTP download transaction. So instead of an HTML file demanding tens to hundreds of individual CSS and JavaScript files, we have a single HTML file that loads a single bundled CSS file and a single bundled JavaScript file. These bundles are standalone pieces that can be served by any web server independent of MDC Web’s webpack infrastructure. Maybe even copied to a rover.

I was looking forward to this functionality and was happy to see it was covered in Section 5 of the Getting Started guide. After putting in the configuration required, it was a matter of typing npm run build to generate my compact representation. I copied the generated bundle.css and bundle.js files and my index.html to a separate directory for the next test.
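
The configuration itself boils down to a webpack.config.js along these lines. This is my rough reconstruction of the idea rather than a copy of the guide’s exact file, so treat the loader chain and file names as illustrative:

// webpack.config.js (rough sketch)
const path = require('path');

module.exports = {
  mode: 'production',
  entry: ['./app.scss', './app.js'],
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js',
  },
  module: {
    rules: [
      {
        test: /\.scss$/,
        use: [
          // Write the compiled CSS out as its own bundle.css file.
          { loader: 'file-loader', options: { name: 'bundle.css' } },
          { loader: 'extract-loader' },
          { loader: 'css-loader' },
          { loader: 'sass-loader' },
        ],
      },
    ],
  },
};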

When I served up my Getting Started project via the development server npm start, Chrome developer console showed that the simple button required downloading 462KB of support files. That’s roughly the same order of magnitude as what it would take to download Materialize and all supporting information, and I was eager to see improvement.

I then went to the separate directory with the built bundles. To ensure that I’m indeed free from npm and webpack, I served these bundle files using an entirely different infrastructure: Python 3’s development web server. The environment variables $PORT and $IP were already set for a Cloud 9 environment and were reused:

python3 -m http.server $PORT --bind $IP

In the browser developer console, I could see that these files – and only these files – were downloaded. They added up to 32KB which was larger than I had hoped for in such a simple example, but maybe there’s room for further optimization. In any case, that’s definitely a tremendous improvement over 462KB.

Using Cloud 9 To Explore Web Development

I’ve started going through the Getting Started guide for Google’s Material Design Components for Web (MDC Web). This is a prerequisite for Material tutorials, and I’ve already run into one problem traced to an out-of-date installation of Node.js. Given that I’m learning a new piece of software, I’m sure I’ll run into more problems that require modifying my computer. Since mistakes are likely while I’m learning, I’m wary of messing up my main Ubuntu environment.

What I need right now is an enclosed sandbox for experimentation, and I’ve already set up virtual machines I could use for this purpose. But since this is a web-based project, I decided to go all-in on the web world and try doing it online with a web-based development environment: Cloud 9.

I’ve played with Cloud 9 before, back when it was a startup with big dreams. It has since been acquired by Amazon and folded into the wide portfolio of Amazon Web Services (AWS). As far as I know, Cloud 9 has always run on AWS behind the scenes, but now a user is exposed to the underlying mechanisms. It means we now have better control over the virtual machines we’re running, which is mostly good. It also means users have to pay for those virtual machine resources, which is fairly inexpensive (I expect to pay less than $1 for this experiment) but isn’t as good as the “free” it used to be. The saddest part is that it’s no longer “point-click-go” simple to get started. Getting Cloud 9 properly set up means climbing the learning curve for managing AWS security and permissions, which can be substantial.

Access Permissions

AWS has an entire document focused on authorization and access control for Cloud 9. For someone like myself, who just wants to play with Cloud 9 but also wants to safely partition it off from the rest of my AWS account, the easiest thing to do is to create a new user account within my AWS dashboard. This account can be assigned a predefined access policy called AWSCloud9User, and that’ll be enough to get started. When logged in to this dedicated account, I can be confident mistakes won’t accidentally damage anything else I have in AWS.

Network Permissions

With the power of fine-grained virtual machine control comes the responsibility of configuring it to act the way we want. When I last used Cloud 9, launching a piece of web hosting software on my VM meant it was immediately accessible from the internet. That meant I could bring it up on another browser window at my desk to see how it looks. However, this is no longer the default behavior.

Now that my VM runs as an Amazon EC2 instance, it has its own network firewall settings to deal with. Not only that, the VM sits inside its own private network (an Amazon VPC) which has its own network firewall settings as well. Both of these firewalls must be configured to allow external access if I’m to host web content (as when exploring MDC Web) and wish to see it on my own machine.

There’s a lot of documentation online for using Cloud 9. The specific configuration settings that need to be changed are found under “Previewing Running Applications,” in the section “Share a Running Application over the Internet.”

First Step In Material Design Adventure Foiled By Ubuntu’s Default Old NodeJS

With the decision made to tackle a new web-based software project, the next question is which web-based UI framework to build the app in. My last web-based software project was building a UI for the SGVHAK rover at the beginning of the year. In the fast-paced world of front-end web development, that’s ancient history.

The rover UI project used the Materialize CSS library to create an interface that follows Google’s Material Design guidelines. At the time, Google offered web developers a library called “Material Design Lite”, but with the caveat that they were revamping the entire web development experience. There was little point in climbing the learning curve for a deprecated library like MDL. As working with Materialize CSS was close to working with Bootstrap, a known quantity, the choice was appropriate for the situation.

Now, at the end of the year, we have Google’s promised revamp in the form of Material Design Components for Web. I am still a fan of Material so my signal generator utility project LRWave will be my learning project to use Google’s new library.

Diving right into the Getting Started Guide during a local coding meetup, I got as far as executing npm start at the end of “Step 1” when I ran into my first error:

ERROR in ./app.scss
    Module build failed: SyntaxError: Unexpected token {
        at exports.runInThisContext (vm.js:53:16)
        at Module._compile (module.js:374:25)
        at Object.Module._extensions..js (module.js:417:10)
        at Module.load (module.js:344:32)
        at Function.Module._load (module.js:301:12)
        at Module.require (module.js:354:17)
        at require (internal/module.js:12:17)

[…]

Since app.scss was only three lines for Step 1, it was trivial to verify there was no typo to account for an unexpected “{”. A web search for this error message implicated an out-of-date installation of NodeJS, which triggered a memory from when I first started experimenting with NodeJS on my installation of Ubuntu 16.04. When trying to run node at the command line, a fresh install of Ubuntu 16.04 would tell the user:

The program 'node' is currently not installed. You can install it by typing:
sudo apt install nodejs-legacy

That suffix “legacy” is a pretty good hint this thing is old. Running node --version returned v4.2.6. Checking the NodeJS website, today’s LTS is v10.14.2 and the latest is v11.4.0. So yes, 4.2.6 is super old! Fortunately there’s a pointer to a more up-to-date set of binaries maintained by NodeSource that plays well with Ubuntu’s built-in package management system. Following those directions automatically uninstalled the legacy binaries and replaced them with up-to-date versions.

Once I was running on non-ancient binaries, I could continue following MDC Web’s Getting Started guide on my computer. But this experience motivated me to look into a different option…

New Project: LRWave

And now we switch gears to address something that came up earlier. During my successful collaboration session with [Emily] to draw Lissajous curves on an old CRT, one of the deflection coils was driven by a stereo amplifier based on signals from an app running on my phone. What I had wanted to try was to drive both deflection axes from the amplifier using different signals sent to the left and right stereo channels. Unfortunately, the simple tone generator app I had used doesn’t allow independent channel control. Well, somebody should write an app to do this, and the best way to make sure the app does what I want is to write it myself. Thus formed a convenient excuse for me to dive back into the software world for my next project.

In this day and age, the easiest way to distribute a piece of code is to make it a web app of some flavor. The basic way is to create a sound file (MP3 or similar) on the server with the desired waveform and download it to the browser for playback. But this is pretty clunky. It would be great if there were a way to generate and manipulate audio waveforms using JavaScript running in the browser, with no server-side code required.

I started with the HTML5 <audio> tag, which would allow implementing the “basic way” above of playing a sound clip generated on the server, but not the preferred path. For the more advanced approach to work, there would need to be a browser-side API that is supported by modern browsers, which implies a W3C standard of some sort. I started dreaming up technical terms that might let me find such an API, and after a few searches with fancy words like “frequency generation API” I found the much more simply named Web Audio API. Double-checking on the “Can I Use” reference site, we can see it’s widely implemented enough to be interesting.
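
As a taste of why that is appealing: a browser-side tone generator takes only a handful of lines of JavaScript with Web Audio API, no server involved. (A minimal sketch; the frequency is arbitrary.)

// Play a 440 Hz sine tone generated entirely in the browser.
const audioCtx = new AudioContext();
const osc = audioCtx.createOscillator();
osc.type = 'sine';
osc.frequency.value = 440;
osc.connect(audioCtx.destination);
osc.start();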

Since it’s a simple utility, it has a simple utilitarian name of “LRWave.” Just something to generate different waveforms to be sent out to left and right channels. Following the simplicity of the app, its logo is two instances of the Google Material speaker icon facing left and right. In between them are two different waves from a public domain SVG file, signifying the intent to send different waves to each speaker.