Window Shopping Arwes Framework

The reason I want to be able to read JavaScript code written by others, no matter what oddball syntax they want to use, is that I want to be able to leverage the huge selection of web development libraries available freely on the internet. I started learning software development in a very commercialized world where everything costs money. Frameworks, components, technical support, sometimes even documentation costs money. But in today’s open-source world, the biggest cost is the time spent getting up to speed. I want to have the skill to get up to speed quickly, but I’m definitely not there yet.

The latest motivation is a nifty-looking web app framework under development called Arwes. (Heard through Hackaday.) Arwes aims to make it easy to build computer interfaces that resemble fictional interfaces we see in science-fiction movies. This is, of course, much easier said than done. What shows up onscreen in a movie typically only needed to serve the purpose of a single scene, which means a single interaction path, with a single set of data, towards a single goal. It could easily be a PowerPoint slide deck (and sometimes that’s exactly what they are on set).

Real user interfaces have to handle a wide range of interactions, with a wide range of data, serving multiple possible tasks. Not to mention having to worry about things never seen onscreen like internationalization and accessibility. Trying to make sci-fi onscreen interfaces work in a real-world capacity usually ends up as a painful exercise. I’ve seen many efforts to create UI resembling Star Trek: The Next Generation‘s LCARS interface and they always end up delivering a poor user experience and inefficient use of screen real estate. And there’s the fact that copyright around LCARS prevents it from ever becoming a free open-source web framework.

I’m confident Arwes will evolve to tackle these and other similar issues. Reading the current state of documentation, I see there exists a set of “Vanilla” controls for use in any web framework of choice, and a sample implementation of such integration with the React.js framework. At the moment I don’t know enough to leverage the Vanilla controls directly, and I have yet to learn React. I have more learning ahead of me before I could look at a framework like Arwes and say: “Oh yeah, I know how to use that.” That is still my goal and I’ll get there one small step at a time.

JavaScript Spread Syntax and Other Un-Google-Able Shorthand

I’ve had the opportunity to look at a lot of sample JavaScript code snippets as part of learning web development. For the most part I could follow along, even if I lacked the skill to create something new on my own. Due to its rather haphazard evolution, though, JavaScript does have an annoying habit of having many different ways to do the same thing. Part of this is past-looking historical: as browsers tried to merge different implementations into one globally compatible whole, everyone’s slightly different approaches had to remain valid for backwards compatibility. Part of this is future-looking cultural: well-meaning people try solving old problems by inventing something new intended to do everything the old stuff does “but better”. When combined with the need for backwards compatibility, such efforts mean we end up with the situation in the legendary XKCD “Standards” comic.

Particularly annoying to me are JavaScript syntax additions that are just about impossible to put through a search engine. They’re usually scattered around, but I found a Medium post, JavaScript’s Shorthand Syntax That Every Developer Should Know, that was a pretty good roundup of every single one that had annoyed me. The author pitches these shorthands as enabling “futuristic, minimal, highly-readable, and clean source code!” As a beginner, I disagree. They are opaque and unreadable to those who don’t already know them and, due to their nature, it is very hard for newbies to figure out what they mean.

Take the first example: the spread syntax. When I first came across it, what I saw in the source code were three periods. That is not self-explanatory as to its function. This Medium post had a comparison example and touted spread syntax as much cleaner than Array.from(arguments), but at least I could search for “Array.from()” and “arguments” to learn what those do. Trying to search for “…” was a fruitless exercise in frustration that ended in tears, because search engines just ignore “…” as their input. I did not know what the spread syntax was (or even that that’s what it was called), so I was up a creek without a paddle.
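
Here is a small sketch of that comparison as I understand it. The functions are hypothetical, but they show both the old Array.from(arguments) approach and the three-period rest/spread syntax:

    // Old way: convert the array-like "arguments" object into a real array.
    function oldSum() {
      const numbers = Array.from(arguments);
      return numbers.reduce((total, n) => total + n, 0);
    }

    // Rest syntax: the three periods gather arguments into a real array for us.
    function newSum(...numbers) {
      return numbers.reduce((total, n) => total + n, 0);
    }

    // Spread syntax: the same three periods expand an array into individual arguments.
    const values = [1, 2, 3];
    console.log(oldSum(...values)); // 6, same as oldSum(1, 2, 3)
    console.log(newSum(...values)); // 6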

The rest of this Medium post covered:

  • Inline short-circuits (“||”) and nullish coalescing (“??”). Any search hits for these would be buried under information about the logical OR operation.
  • Exponential operator and assignment. These are “**” and “**=”, which usually get treated as accidental duplicate characters, leading to information about “*” and “*=”.
  • Optional chaining via “?.”, a series of punctuation marks that also gets ignored by search engines, just like “…”.
  • Non-decimal number representation is the least bad of this bunch; at least beginners have something to search with, like “What does 0x10 mean in JavaScript”.
  • Destructuring and multiple assignment are the worst. There is literally nothing to put into a search engine, not even an “…” or “?.” (which gets ignored anyway). There’s no way for a beginner to tell the syntax would extract selected member values from a JavaScript object. (A quick cheat sheet of these shorthands follows this list.)
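
For my own future reference, here is that cheat sheet. These are my own hypothetical snippets rather than the ones from the Medium post:

    const config = { retries: 3, nested: { timeout: 500 } };

    // Short-circuit "||" falls back on any falsy value; nullish coalescing "??" only on null/undefined.
    const retries = config.retries || 5;      // 3
    const label   = config.label ?? "none";   // "none"

    // Exponent operator and its assignment form.
    let square = 4 ** 2;   // 16
    square **= 2;          // 256

    // Optional chaining "?." returns undefined instead of throwing when a property is missing.
    const timeout = config.nested?.timeout;   // 500
    const missing = config.absent?.timeout;   // undefined

    // Non-decimal representation: 0x is hexadecimal, 0b is binary, 0o is octal.
    const sixteen = 0x10;   // 16

    // Destructuring extracts selected members; array destructuring does multiple assignment.
    const { retries: retryCount, nested } = config;
    const [first, second] = [10, 20];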

I can see the value of these JavaScript shorthands for creating terse code, but terse code is not the same as readable code. Even though I’m aware of these concepts now, every time I come across such shorthand I have to stop and think before I can understand the code. It becomes a big speed bump in my thought process, and I don’t like it. I certainly don’t feel it is more readable. However, I have to grudgingly agree the author’s title is true, just not in the way they meant it. They are JavaScript’s Shorthand Syntax That Every Developer Should Know because such code already exists, and every JavaScript developer needs to know them well enough to understand code they come across.

Angular Standalone Components for Future Projects

Reading through the Angular developer guide for standalone components filled in many of the gaps left after going through the “Getting Started with Angular Standalone Components” code lab. The two are complementary: the developer guide gave us reasons why standalone components exist, and the code lab gave us a taste of how to put them to use. Between framework infrastructure and library support, it becomes practical to make Angular components stand independently from Angular modules.

Which is great, but one important detail is missing from the documentation I’ve read. If it’s such a great idea to have components independent from NgModule, why did components need NgModule to begin with? I assume sometime in the history of Angular, having components live in NgModule was a better idea than having components stand alone. Not knowing those reasons is a blank spot in my Angular understanding.

I had expected to come across some information on when to use standalone components and when to package components in NgModule. Almost every software development design decision is a tradeoff between competing requirements, and I had expected to learn when using a NgModule is a better tradeoff than not having them. But I haven’t seen anything to that effect. It’s possible the past reasons for NgModule have gradually atrophied as Angular evolved with the rest of the web, leaving a husk we can leave behind with no reason to go back. I would still appreciate seeing words to that effect from the Angular team, though.

One purported benefit was to ease the Angular learning curve, making it so we only have to declare dependencies in the component we’re working on instead of having to do it both in the component and in its associated NgModule. As a beginner that reason sounds good to me, so I guess I should write future Angular projects with standalone components until I have a reason not to. It’s a fine plan, but I worry I might run into situations when using NgModule would be a better choice and I wouldn’t recognize “a reason not to” when it is staring me in the face.
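
Based on my reading of the developer guide, a standalone component sketch looks something like the following. The component name and contents are hypothetical:

    import { Component } from '@angular/core';
    import { CommonModule } from '@angular/common';

    // "standalone: true" means this component declares its own dependencies
    // via "imports" instead of relying on a surrounding NgModule.
    @Component({
      selector: 'app-hello',
      standalone: true,
      imports: [CommonModule],
      template: `<p>Hello, {{ name }}</p>`,
    })
    export class HelloComponent {
      name = 'standalone world';
    }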

On the topic of future projects, at some point I expect I’ll grow beyond serving static content via GitHub Pages. Fortunately, I think I have a few free/trial options to explore before committing money.

Trying Vite and Its IE11 Legacy Option

While looking over Vue.js’s Quick Start example, I noticed its default set of tools included Vite. I understand it plays a role analogous, but not identical, to webpack’s in Angular’s default tool set. I found webpack’s documentation quite opaque, so I thought I would try to absorb what I can from Vite’s documentation. I still don’t understand all the history and issues involved in JavaScript build tools, but I was glad to find Vite’s documentation more comprehensible.

The introductory “Why Vite?” page explained Vite takes advantage of modern browser features for JavaScript code modules. As a result, the client’s browser can handle some of the work that previously had to be done on the developer machine via webpack & friends. However, that still leaves a smaller set of things better done up front by the developer instead of later by the client, and Vite takes care of them.

In time I’ll learn enough about JavaScript to understand what all that meant, but one section caught my attention. Given Vite’s focus on leveraging modern browsers, I was surprised to see the “browser compatibility” section include an official plug-in @vitejs/plugin-legacy to support legacy browsers. Given my interest in writing web apps that run on my pile of old Windows Phone 8 devices, this could be very useful!

I opened up my NodeJS test apps repository and followed Vite’s “Getting Started” guide to create a new project using the “vanilla TypeScript” template preset. To verify I’ve got it working as expected, I built and successfully displayed the results on a current build of Google Chrome browser.

Then I added the legacy plugin and rebuilt. It bloated the distribution directory up to 80 kilobytes, which is a huge increase but still only about a third the size of a blank Angular app, and quite manageable even in space-constrained situations. And most importantly: yes, it runs on my old Nokia Lumia 920 phone with the Windows Phone 8 operating system. Nice! I’m definitely tucking this away in my toolbox for later use. But for right now, I should probably get back to learning Vue.
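
For my toolbox notes, the change amounts to something like this vite.config.ts. This is reconstructed from the plugin’s README, and the exact browser targets string is my assumption:

    // vite.config.ts
    import { defineConfig } from 'vite';
    import legacy from '@vitejs/plugin-legacy';

    export default defineConfig({
      plugins: [
        // Generates legacy bundles plus polyfills for browsers without native ES module support.
        legacy({
          targets: ['defaults', 'ie >= 11'],  // assumed browserslist query covering IE11
        }),
      ],
    });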

Notes on Vue.js Quick Start

After going through Codecademy’s “Learn Vue.js” course, I went to the Vue.js site and followed their quick start “Creating a Vue Application” procedure to see what a “Hello World” looks like. It was quite instructive and showed me many facets of Vue not covered by Codecademy’s course.

The first difference is that here we’re creating an application with Vue.js, which means firing up the command line tool npm init vue@latest to create an application scaffolding with selected features. Since I’m a fan of TypeScript and of maintaining code formatting, I said yes to the “TypeScript”, “ESLint” and “Prettier” options and no to the rest.

I then installed all the packages for that scaffolding with npm install and ran npm run build to look at the results in the /dist/ subdirectory. They added up to a little over 60 kilobytes, roughly one-third the built size of Angular’s scaffolding. This is even more impressive considering that several kilobytes are placeholders: about a half dozen markup files plus a few SVG files for vector graphics. The drastically smaller file sizes of Vue apps are great, but what have I given up in exchange? That’s something I’ll be looking for as I learn more about both platforms.

Poking around in the scaffolding app, I saw it demonstrated Vue componentization via its SFC (Single File Component) file format. A single *.vue file contains a component’s HTML, CSS, and TypeScript/JavaScript. Despite the fact they are all text-based formats designed to coexist, I’m not a fan of mixing three different syntaxes in a single file. I prefer Angular’s approach of keeping each type in its own file. To mitigate confusion, I expect Vue’s editor tool Volar will help keep the three types distinct.
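
To illustrate what I mean, a single-file component is structured roughly like this. This is a minimal hypothetical example rather than one of the actual scaffolding files:

    <!-- HelloCard.vue: one file holding script, markup, and styles -->
    <script setup lang="ts">
    import { ref } from 'vue'

    const count = ref(0)
    </script>

    <template>
      <button @click="count++">Clicked {{ count }} times</button>
    </template>

    <style scoped>
    button { font-weight: bold; }
    </style>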

Some Vue components in the example are tiny, like IconTooling.vue, which is literally a wrapper around a chunk of SVG to deliver a vector-graphic icon. Others are a little more substantial, like WelcomeItem, whose template has three slots for information: #icon, #heading, and everything else. This feels quite different from how Angular components get data from their parents. I look forward to learning more about this style of code organization.
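
As I understand Vue’s named slots, a parent fills WelcomeItem roughly like this. I’ve simplified from memory, so details may differ from the actual scaffolding:

    <!-- Parent template filling WelcomeItem's named slots -->
    <WelcomeItem>
      <template #icon>
        <DocumentationIcon />
      </template>
      <template #heading>Documentation</template>
      Everything else written here lands in the default slot.
    </WelcomeItem>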

While running npm run build I noticed this Vue app boilerplate build pipeline didn’t use webpack, it used something called Vite instead. Since I couldn’t make heads or tails of webpack on my first pass, I was encouraged that I could understand a lot more of Vite.

Next Study Topic: Vue.js

Having an old Windows Phone 8 die (followed by dissection) was a fresh reminder I haven’t put enough effort towards my desire to “do something interesting” with those obsolete devices. The mysterious decay of one device was a very final bell toll announcing its end, but the clock is ticking on the rest of them as well. Native app development for the platform was shut down years ago, leaving only the browser as an entry point. But even that browser, based on IE11, is getting left further and further behind every day by web evolution.

In one of my on-and-off trips into web development, I ran through the Angular framework tutorial and then added legacy project flags to make an IE11-compatible build I could run on a Windows Phone 8. That is no longer possible now that Angular has dropped IE11 support. One of the reasons I chose Angular was because it was an “everything included, plus the kitchen sink” type of deal. An empty Angular app created via its “ng new” command included all the tools already configured for their Angular defaults. I knew the concepts of tools like “bundler”, “minimizer”, etc. but I didn’t know enough to actually use them firsthand. Angular boilerplate helped me get started.

But the reason I chose to start with Angular is also the reason I won’t stay with it: the everything framework is too much framework. Angular targets projects far more complex and sophisticated than what I’m likely to tackle in the near future. Using Angular to create a compass web app was hilarious overkill, where the size of framework overhead far exceeded the size of actual app code.

In my search for something lighter-weight, I briefly looked into Polymer/Lit and decided I overshot too far into too little framework. Looking around for my Goldilocks, one name that has come up frequently in my web development learning is Vue.js. It’s supposed to be lighter and faster than Angular/React but still have some of the preconfigured hand-holding I admit I still need. Maybe it would offer a good middle ground and give me just enough framework for future projects.

One downside is that the current version, Vue 3, won’t run on IE11 either. However, the documentation claimed most Vue fundamental concepts haven’t changed from Vue 2, which does support IE11 and is on long-term support status until the end of 2023. Maybe I can get started on Vue 3 and write simple projects that would still run on Vue 2. Even if that doesn’t work, it should help orient me in a simpler setup that I could try to get running on Windows Phone 8.

I’m cautiously optimistic I can learn a lot here, because I saw lots of documentation on Vue project site. Though that is only a measure of quantity and not necessarily quality. It remains to be seen whether the material would go over my head as Lit’s site did. Or if it would introduce new strange concepts with a steep learning curve as RxJS did. I won’t know until I dive in.

Aesthetically, there’s at least one Material Design library to satisfy my preference for web app projects. I’ll have to find out if it would bloat an app as much as Angular Material did.

Codecademy offers one course for Vue.js, so I thought I’d start there.

Window Shopping Polymer and Lit

While poking around with the browser magnetometer API on Chrome for Android, one of my references was a “Sensor Info” app published by Intel. I was focused on the magnetometer API itself at first, but made a mental note to come back later to look at the rest of the web app. Now I’m returning for another look, because “Sensor Info” has the visual style of Google’s Material Design and it was far smaller than an Angular project with Angular Material. I wanted to know how it was done.

The easier part of the answer is Material Web, a collection of web components released by Google for web developers to bring Material Design into their applications. “Sensor Info” imported just Button and Icon, with unpacked sizes weighing in at several tens of kilobytes each. Reading the repository README was not terribly confidence-inspiring… technically Material Web has yet to reach version 1.0 maturity even though Material Design has moved on to its third iteration. Not sure what’s going on there.

Beyond visual glitz, the “Sensor Info” application was built with both Polymer and Lit. (sensors-app.js declares a SensorsApp class which derives from LitElement, and imports a lot of stuff from @polymer.) This confused me because I had thought Lit was a successor to Polymer. As I understand it, the Polymer team plans no further work after version 3 and has taken the lessons learned to start from scratch with Lit. Here’s somebody’s compare-and-contrast writeup I got via Reddit. Now I see “Sensor Info” has references to both projects and, not knowing either Polymer or Lit, I don’t think I’ll have much luck deciphering where one ends and another begins. Not a good place for a beginner to start.

I know both are built on the evolving (stabilizing?) web components standard, and both promise to be far simpler and lightweight than frameworks like Angular or React. I like that premise, but such lightweight “non-opinionated” design also means a beginner is left without guidance. “Do whatever you want” is a great freedom but not helpful when a beginner has no idea what they want yet.
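
As a sense of that lightweight scale, a minimal Lit component is only a few lines. This is my own sketch based on the Lit documentation, not code from “Sensor Info”:

    import { LitElement, html } from 'lit';

    // A custom element usable as <hello-tag> in plain HTML, no framework scaffolding required.
    class HelloTag extends LitElement {
      render() {
        return html`<p>Hello from a web component</p>`;
      }
    }
    customElements.define('hello-tag', HelloTag);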

One example is the process of taking the set of web components in use and packaging them together for web app publishing. They expect the developer to use a tool like webpack, but there is no affinity to webpack; a developer can choose to use any other tool. Great, but I hadn’t figured out webpack myself, nor any alternatives, so this particular freedom was not useful. I got briefly excited when I saw that there are “Starter Kits” already packaged with tooling that is not required (remember, non-opinionated!) but is convenient for starting out. Maybe there’s a sample webpack.config.js! Sadly, I looked over the TypeScript starter kit and found no mention of webpack or a similar tool. Darn. I guess I’ll have to revisit this topic sometime after I learn webpack.

Webpack First Look Did Not Go Well

I’ve used Three.js in two projects so far to handle 3D graphics, but I’ve been referencing it as a monolithic everything bundle. Three.js documentation told me there was a better way:

When installing from npm, you’ll almost always use some sort of bundling tool to combine all of the packages your project requires into a single JavaScript file. While any modern JavaScript bundler can be used with three.js, the most popular choice is webpack.

— Three.js Installation Guide

In my magnetometer test project, I tried to bring in webpack to optimize three.js but aborted after getting disoriented. Now I’m going to sit down and read through its documentation to get a better feel of what it’s all about. Here are my notes from a first look as a beginner.

I have a minor criticism about their home page. The first link is to their “Getting Started” guide, the second link is to their “Concepts” section. I followed the first link to “Getting Started”, which is the first page in their “Guides” section. I got to the end of that first page and saw the “Next” link is about Asset Management, the next guide page. Each guide page linked to the next. I quickly got into guide pages that used acronyms or terminology that had not yet been explained. Later I realized this was because the terminology was covered in the “Concepts” section. In hindsight I should not have clicked “Next” at the end of the “Getting Started” guide. I should have gone back to “Concepts” to learn the lingo before reading the rest of the guides.

Reading through the guides, I quickly understood that webpack is a very JavaScript-centric system built by people who think JavaScript first. I wanted to learn how to use webpack to integrate my JavaScript code and my HTML markup, but these guide pages didn’t seem to cover my use scenario. Starting right off the bat with the Getting Started guide, they used code to build their markup:

   import _ from 'lodash';  // the guide imports lodash to supply "_"

   const element = document.createElement('div');
   element.innerHTML = _.join(['Hello', 'webpack'], ' ');
   document.body.appendChild(element);

Wow. Was that really necessary? What about people who wanted to, you know, write HTML in HTML?

    <body><div>Hello webpack</div></body>

Is that considered quaint and old fashioned nowadays? I didn’t find anything in “Guides” or “Concepts” discussing such integration. I had thought the section on “HtmlWebpackPlugin” would have my answer, but it’s actually a chunk of JavaScript code that replaced my index.html (destroying my markup) with a bare-bones index.html that loads the generated JavaScript bundles and has no markup. How does my own markup figure in this picture? Perhaps the documentation authors felt it was too obvious to write down, but I certainly don’t understand how to do it. I feel like an old man yelling at a cloud “tell me how to use my HTML!”
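
A note for later investigation: my unverified guess is that HtmlWebpackPlugin’s template option is where my own markup is supposed to come into the picture, something like the sketch below. I have not confirmed it actually behaves the way I hope:

    // webpack.config.js (my unverified guess, not from the guide)
    const HtmlWebpackPlugin = require('html-webpack-plugin');

    module.exports = {
      entry: './src/index.js',
      plugins: [
        new HtmlWebpackPlugin({
          template: './src/index.html', // guess: use my own HTML as the template for the generated page
        }),
      ],
    };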

I had thought putting my HTML into the output directory was a basic task, but I failed. And I was dismayed to see “Concepts” and “Guides” pages got deeper and more esoteric from there. I understand webpack is very powerful and can solve many problems with modularizing JavaScript code, but it also requires a certain mindset that I have yet to acquire. Webpack’s bare bones generated index.html looks very similar to the generated index.html of an Angular app (which uses both HTML and webpack) so there must be a way to do this. I just don’t know it yet.

Until I learn what’s going on, for the near future I will use webpack only indirectly: it is already configured and running as part of the Angular web app framework’s command-line tool suite. I plan to do a few Angular projects to become more familiar with it. And now that I’ve read through webpack concepts, I should recognize pieces of the Angular workflow as “Aha, they used webpack to do that.”

Visualizing Magnetometer Data with Three.js

I’m happy that my simple exploratory web app was able to obtain data from my phone’s integrated magnetometer. I recognize there are some downsides to how easy it was, but right now I’m happy I can move forward. Ten times a second (10Hz is the maximum supported update rate) my callback receives x, y, and z values in addition to auxiliary data like a timestamp. That’s great, but the underlying meaning is not very intuitive for me to grasp. What I want next is a visualization of that three-axis magnetometer data.

I turned to the JavaScript 3D graphics library Three.js. The last time I used Three.js was to visualize the RGB332 color space in hopes of finding a good 2D projection, rendering data along three dimensions of color: a cylinder representing HSV color space and a rectangular solid representing RGB. Now I want to visualize a single vector in three-dimensional space representing the local magnetic field as reported by my phone’s magnetometer. I was a little intimidated by the math for calculating 3D transforms. I tried to make my RGB332 color app transition between HSV and RGB color space, but it never looked right because I didn’t understand the 3D transform math.

Fortunately, this time I didn’t have to do any of my own math at all. Three.js has a built-in function that accepts the x, y, and z components of a target coordinate and calculates the rotation required to have a 3D object look at that point. My responsibility is to create an object that will convey this information. I chose to follow the precedent of an analog compass, which is built around a small magnetic needle shaped like a narrow diamond, with one half painted red and the other half painted white. For this 3D visualization I created a shape out of two cones, one red and one white. When this shape looks at the magnetometer vector, it functions very similarly to the sliver of magnet inside a compass.
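
A simplified sketch of that needle construction and aiming, not my exact code:

    import * as THREE from 'three';

    // Two cones meeting base-to-base approximate a compass needle, red tip forward.
    const redHalf = new THREE.Mesh(
      new THREE.ConeGeometry(0.2, 1, 16),
      new THREE.MeshBasicMaterial({ color: 0xff0000 }));
    redHalf.rotation.x = Math.PI / 2;    // cone tip now points along +Z
    redHalf.position.z = 0.5;

    const whiteHalf = new THREE.Mesh(
      new THREE.ConeGeometry(0.2, 1, 16),
      new THREE.MeshBasicMaterial({ color: 0xffffff }));
    whiteHalf.rotation.x = -Math.PI / 2; // tip points along -Z
    whiteHalf.position.z = -0.5;

    const needle = new THREE.Group();
    needle.add(redHalf, whiteHalf);

    // lookAt() does the rotation math: aim the group's +Z axis (the red tip) at the magnetometer vector.
    function onReading(x, y, z) {
      needle.lookAt(new THREE.Vector3(x, y, z));
    }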

As a precaution, I added a check for WebGL before firing up Three.js code. I was pretty confident any Android Chrome that supported the magnetometer API would support WebGL as well, but it was good practice to confirm.
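
The check itself can be as simple as asking the browser for a WebGL context. This is a bare-bones sketch; three.js also ships a capability-check helper among its examples:

    // Minimal WebGL availability check before constructing the Three.js renderer.
    function webglAvailable() {
      try {
        const canvas = document.createElement('canvas');
        return !!(window.WebGLRenderingContext &&
          (canvas.getContext('webgl') || canvas.getContext('experimental-webgl')));
      } catch (e) {
        return false;
      }
    }

    if (!webglAvailable()) {
      document.body.textContent = 'WebGL is not available in this browser.';
    }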

One thing I’m not doing (but should) is to account for screen orientation. Chrome developers have added a feature to automatically adjust for screen orientation but right now I’m just going to deactivate auto-rotate on my phone (or… phones!)


Source code for my exploratory project is publicly available on GitHub

Magnetometer API Privacy Concerns

Many Android phones have an integrated magnetometer available to native apps. Chrome browser for Android also makes that capability available to web apps, but right now it is hidden by default as a feature preview. Once I enabled that feature, I was able to follow some sample code online and quickly obtain access to magnetometer data in my own web app. That was so easy! Why was it blocked by default?

Apparently, the answer (or at least a part of it) was that it was too easy. Making magnetometer and other hardware sensor data freely available to web apps would feed into hardware-based browser fingerprinting. Even though magnetometer data by itself might be innocuous, it could be combined with other seemingly-innocent data to uniquely identify users thereby piercing privacy protections. This is bad, and purportedly why Apple has so far declined to support sensor APIs.

That article was in 2020, though, and the web moves fast. When I read up on magnetometer API on MDN (Mozilla Developer Network) I encountered an entire section on obtaining user permission to read hardware sensor data. Since I didn’t have to do any of that for my own test app to obtain magnetometer data, I guess this requirement is only present in Mozilla’s own Firefox browser. Or perhaps it was merely a proposal hoping to satisfy Apple’s official objection to supporting sensor API.

I found no mention of Mozilla’s permission management framework in the official magnetometer API specification. There’s a “Security and Privacy Considerations” section but it’s pretty thin and I don’t see how it would address fingerprinting concerns. For what it’s worth, “limiting maximum sample frequency” was listed as a potential mitigation, and Chrome 111 only allows up to 10Hz.

Today users like myself have to explicitly activate this experimental feature. And at the top of the “chrome://flags” page where we do so, there’s an explicit warning that enabling experimental features could compromise privacy. In theory, people opting in to the magnetometer today are aware of the potential abuse, but that risk has to be addressed before it’s rolled out to everyone. In the meantime, I have opted in and I’m going to have some fun.

Magnetometer API in Android Chrome Browser

I became curious about magnetometers and was deep into shopping for a prototype breakout board when I remembered I already had some on hand. The bad news is that they’re buried inside mobile devices, the good news is that they’re already connected to all the supporting circuitry they need. Accessing them is then “merely” a software task.

Android app developers can access magnetometer values via position sensor APIs. It’s possible to query everything from raw uncalibrated values to device orientation information computed from calibrated magnetometer data fused with other sensor data. Apple iOS app developers have the Core Motion library from which they can obtain similar information.

But I prefer not to develop a native mobile app because of all the overhead involved. For Android, I would have to install Android Studio (a multi-gigabyte download) and put my device into Developer Mode which hampers its ability to run certain security-focused tasks. For iOS I would have to install Xcode, which is at least as big of a hassle, and I’m not sure what I’d have to do on an iOS device. (Installing TestFlight is part of the picture, but I don’t know the whole picture.)

Looking for an alternative: what if I could access sensor data from something with lower overhead, like a small web app? Checking on the ever-omniscient caniuse.com, I found that a magnetometer API does exist. However, it is not (yet?) standardized and is hidden behind an optional flag that the user has to explicitly enable. I typed chrome://flags into my address bar and switched the “Generic Sensor Extra Classes” option from “Default” to “Enabled”. After making this change, the associated caniuse.com magnetometer test turned from red to green.

One annoyance of working with the magnetometer on Android is that I have to work against an actual device. While Chrome developer tools have an area for injecting sensor data into web apps under test, they do not (yet?) include the ability to emulate magnetometer data. And looking over Android Studio documentation, I found settings to emulate sensors like an accelerometer but no mention of the magnetometer either.

Looking online for some sample code, I found small code snippets in the Google Chrome Developers blog about the Sensor API. Lots of useful reference material on that page, but I wanted a complete web app to look at. I found what I was looking for in a “Sensor Info” web app published from Intel’s GitHub account. (This was a surprising source, pretty far from Intel’s main moneymaker of CPUs. What is their interest in this field? Sadly, that corporate strategic answer is not to be found in this app. I choose to be happy that it exists.) Launching the app, clicking “+” allowed me to add the magnetometer sensor and start seeing data stream through. After looking through this Intel web app’s source code repository, I wrote my own minimalist test app streaming magnetometer data. Success! I’m glad it was this easy, but perhaps that was too easy?
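
The core of such a minimalist test boils down to just a few lines of the Generic Sensor API. This is a simplified sketch, not the exact code in my repository:

    // Requires Chrome for Android with "Generic Sensor Extra Classes" enabled.
    if ('Magnetometer' in window) {
      const sensor = new Magnetometer({ frequency: 10 }); // 10Hz, the most Chrome currently allows
      sensor.addEventListener('reading', () => {
        console.log(`x: ${sensor.x} y: ${sensor.y} z: ${sensor.z} µT`);
      });
      sensor.addEventListener('error', (event) => {
        console.error('Sensor error:', event.error.name);
      });
      sensor.start();
    }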


Source code for this exploratory project is publicly available on GitHub

Notes on “Using MongoDB with Node.js” from MongoDB University

Most instructional material (and experimentation) for MongoDB uses the MongoDB Shell (mongosh), which is “a fully functional JavaScript and Node.js 16.x REPL environment for interacting with MongoDB deployments” according to mongosh documentation. That makes mongosh the primary command line interface, useful for exploration, experimentation, and education like on Codecademy or MongoDB University.

Given the JavaScript focus of MongoDB, I was not surprised there is a set of first-party driver libraries to translate to/from various programming languages. But I was surprised to find that Node.js (JavaScript) was among the list of drivers. If this was all JavaScript anyway, why do we need a driver? The answer is that we don’t use JavaScript to talk to the underlying database. That is the job of BSON, the binary data representation used by MongoDB for internal storage. Compact and machine-friendly, it is also how data is transmitted over the network. Which is why we need a Node.js library to convert from JSON to BSON for data transmission. I started the MongoDB University course “Using MongoDB with Node.js” to learn more about using this library.

It was a short course, befitting the minimal translation required for this JavaScript-focused database. The first lesson covered how to connect to a MongoDB instance from our Node.js environment. I decided to do my exercises with a Node.js Docker container.

docker run -it --name node-mongo-lab -v C:\Users\roger\coding\MongoDB\node-mongo-lab:/node-mongo-lab node:lts /bin/sh

The exercise is “Hello World” level, connecting to a MongoDB instance and listing all available databases. Success means we’ve verified all libraries & their dependencies are installed correctly, that our MongoDB authentication is set up correctly, and that our networking path is clear. I thought that was a great starting point for more exercises, and was disappointed that we didn’t actually use our own Node.js environment any further in this course. The rest of the course used the Instruqt in-browser environment.
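
That “Hello World” boils down to something like the following. The connection string here is a placeholder for the real one from the Atlas dashboard:

    const { MongoClient } = require('mongodb');

    // Placeholder connection string; the real one comes from the MongoDB Atlas dashboard.
    const uri = 'mongodb+srv://user:password@cluster.example.mongodb.net';
    const client = new MongoClient(uri);

    async function listDatabases() {
      try {
        await client.connect();
        const result = await client.db().admin().listDatabases();
        result.databases.forEach((db) => console.log(db.name));
      } finally {
        await client.close();
      }
    }

    listDatabases();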

We had a lightning-fast review of MongoDB CRUD Operations and how we would do them with the Node.js driver library. All the commands and parameters are basically identical to what we’ve been doing in mongosh. The difference is that we need an instance of the client library as the starting point, from which we can obtain an object representing a database and a collection within it: client.db([database name]).collection([collection name]). Once we have that reference, everything else looks exactly as it did in mongosh, except now it is code executed by the Node.js runtime instead of commands typed into a shell. One effect of running code instead of typing commands is that it’s much easier to ensure transaction sessions complete within 60 seconds.
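
Roughly, with hypothetical database, collection, and field names (and assuming we’re inside an async function with a connected client):

    // Obtain a collection reference from the connected MongoClient.
    const accounts = client.db('bank').collection('accounts');

    // Same CRUD verbs as mongosh, now awaited as Node.js driver calls.
    await accounts.insertOne({ account_id: 'MDB001', balance: 100 });
    const doc = await accounts.findOne({ account_id: 'MDB001' });
    await accounts.updateOne({ account_id: 'MDB001' }, { $inc: { balance: 50 } });
    await accounts.deleteOne({ account_id: 'MDB001' });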

For me, a great side effect of this course is seeing JavaScript async/await in action doing things more sophisticated than the simple, straightforward calls I’d seen before. The best example came from this code snippet demonstrating MongoDB Aggregation:

    let result = await accountsCollection.aggregate(pipeline)
    for await (const doc of result) {
      console.log(doc)
    }

The first line is straightforward: we run our aggregation pipeline and await its result. That result is an instance of a MongoDB cursor, which is not the entire collection of results but merely access to a portion of that collection. Cursors allow us to start processing data without having to load everything, which saves memory, bandwidth, and processing overhead. And in order to access bits of that collection, we have this “for await” loop I’d never seen before. Good to know!

Notes on Codecademy “Learn Node-SQLite”

After my SQL refresher course, shortly after learning Node.js, I thought the natural progression was to put them together with Codecademy’s “Learn Node-SQLite” course. The name node-sqlite3 is not a mathematical subtraction but that of a specific JavaScript library bridging the worlds of JavaScript and SQL. This course was a frustrating disappointment. (Details below.) In hindsight, I think I would have been better off skipping this course and learning the library outside of Codecademy.

About the library: our database instructions such as queries must be valid SQL commands stored as strings in JavaScript source code. We have the option of putting some parameters into those strings in JavaScript fashion, but the SQL commands are mostly string literals. Results of queries are returned to the caller using Node’s error-first asynchronous callback convention, and query results are accessible as JavaScript objects. Most of the library’s functionality is concentrated in just a few methods, with details available from the API documentation.
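
In practice that looks roughly like this. The database file, table, and column names here are hypothetical:

    const sqlite3 = require('sqlite3');
    const db = new sqlite3.Database('./example.db');

    // SQL lives in a string; results arrive through an error-first callback as JavaScript objects.
    db.all('SELECT name, year FROM Bridge WHERE year > ?', [1950], (error, rows) => {
      if (error) {
        console.error(error);
        return;
      }
      rows.forEach((row) => console.log(`${row.name} (${row.year})`));
    });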

This Codecademy course is fairly straightforward, covering the basics of usage so we can get started and explore further on our own. I was amused that some of the examples were simple to the point of duplicating SQL functionality. Specifically, the example for db.each() shows how we can tally values from a query, which meant we ended up writing a lot of code just to duplicate SQL’s SUM() function. But it’s just an example, so understandable.

The course is succinct to the point of occasionally missing critical information. Specifically, the section about db.run() says “Add a function callback with a single argument and leave it empty for now. Make sure that this function is not an arrow.” but didn’t say why our callback function must not use arrow syntax. This minor omission became a bigger problem when we rolled into the after-class quiz, which asked why it must not use arrow syntax. Well, you didn’t tell me! A little independent research found the answer: arrow functions have different behavior around the “this” object than other function notations, and for db.run() our feedback is stored in properties like this.lastID, which would not be accessible in an arrow syntax function. Despite such little problems, the instruction portion of the course was mostly fine. Which brings us to the bad news…
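
Here is my own illustration of why the arrow restriction matters, using hypothetical table and values:

    // Must be a regular function: node-sqlite3 attaches results to "this",
    // and an arrow function would inherit "this" from the enclosing scope instead.
    db.run('INSERT INTO Bridge (name, year) VALUES (?, ?)', ['Example', 2023], function (error) {
      if (error) {
        console.error(error);
        return;
      }
      console.log(`Inserted row id: ${this.lastID}`); // unavailable inside an arrow function
    });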

The Code Challenge section is a disaster.

It suffers from the same problem I had with the Code Challenge section of the Learn Express course: lack of feedback on failures. Our code was executed using some behind-the-scenes mechanism, which meant we couldn’t see our console.log() output. And unlike the Learn Express course, I couldn’t work around this limitation by throwing exceptions. No console logs, no exceptions, we don’t even get to see syntax errors! The only feedback we receive is always the same “You did it wrong” message no matter the actual cause.

Hall of Shame Runner-Up: No JavaScript feedback. When I made a JavaScript syntax error, the syntax error message was not shown. Instead, I was told “Did you execute the correct SQL query?” so I wasted time looking at the wrong thing.

Hall of Shame Bronze Medal: No SQL feedback. When I made a SQL command error, I wanted to see the error message given to our callback function. But console.log(error) output is not shown, so I was stabbing in the dark. For Code Challenge #13, my mistake was querying from a “Bridges” table when the sample database table is actually the singular “Bridge”. If I could have logged the error, I would have seen “No such table Bridges”, which would have been much more helpful than the vague “Is your query correct?” feedback.

Hall of Shame Silver Medal: Incomplete Instructions. Challenge #14 asked us to build a query where “month is the current month”. I used “month=11” and got nothing. The database had months in words, so I actually needed to use “month='November'”. I wasted time trying to diagnose this problem because I couldn’t run a “SELECT * FROM Table” to see what the data looked like.

Hall of Shame Gold Medal Grand Prize Winner: Challenge #12 asks us to write a function. My function was not accepted because I did not declare it using the same JavaScript function syntax used in the solution. The instructions said nothing about which function syntax to use. After I clicked “View Solution” and saw what the problem was, I got so angry at the time it wasted, I had to step away for a few hours before I could resume. This was bullshit.


These Hall of Shame (dis)honorees almost turned me off of Codecademy entirely, but after a few days away to calm down, I returned to learn what Codecademy has to teach about PostgreSQL.

Replace node-static with serve-static for ESP32 Sawppy Development

One of the optional middleware modules maintained by the Expressjs team is express.static, which can handle serving static assets like HTML, CSS, and images. It was used in code examples for Codecademy’s Learn Express course, and I made a mental note to investigate further after class. I thought it might help me with a problem I already had on hand, and it did!

When I started writing code for a micro Sawppy rover running on an ESP32, I wanted to be able to iterate on client-side code without having to reflash an ESP32. So as an educational test run of Node.js, I wrote a JavaScript counterpart to the code I wrote (in C/C++) for running on ESP32. While they are two different codebases, I intended for the HTTP interface to be identical and indistinguishable by the HTML/CSS/JavaScript client code I wrote. Most of this server-side work was focused around websocket, but I also needed to serve some static files. I looked on nodejs.org and found “How to serve static files” in their knowledge base. That page gave an example using the node-static module, which I copied for my project.

Things were fine for a while, but then I started getting messages from GitHub Dependabot nagging me to fix a critical security flaw in my repository due to its use of a library called minimist. It was an indirect dependency I picked up by using node-static, so I figured it would be fixed after I picked up an update to node-static. But that update never came. As of this writing, the node-static package on NPM hadn’t been updated for four years. I see updates made on the GitHub repository, but for whatever reason NPM hasn’t picked that up and thus its registry remains outdated.

The good news is that my code isn’t deployed on an internet-facing server. This Node.js server is only for local development of my ESP32 Sawppy client-side browser code, which vastly minimizes the window of vulnerability. But still, I don’t like running known vulnerable code, even if it is only accessible from my own computer and only while I’m working on ESP32 Sawppy code. I want to get this fixed somehow.

After I got nginx set up as a local web server, I thought maybe I could switch to using nginx to serve these static files too. But there’s a hitch: a websocket connection starts as an HTTP request for an upgrade to websocket, so the HTTP server must interoperate with the websocket server for a smooth handover. It’s possible to set this up with nginx, but the instructions to do so are above my current nginx skill level. To keep this simple I need to stay within Node.js.

Which brought me back to Express and its express.static file server. I thought maybe I could fire up an Express app, use just this express.static middleware, and almost nothing else of Express. It’s overkill, but it’s not stupid if it works. I just had to figure out how it would hand over to my websocket code. Reading Express documentation for express.static, I saw it was built on top of an NPM module called serve-static, and was delighted to learn it can be used independently of Express! Their README included an example, “Serve files with vanilla node.js http server”, and this was exactly what I needed. By using the same Node.js http module, my websocket upgrade handover code will work in exactly the same way. In the end, switching from node-static to serve-static was nearly a direct replacement requiring minimal code edits. And after removing node-static from my package.json, GitHub Dependabot was happy and closed out my security issue. I will be free from nagging messages, at least until the next security concern. That might be serious if I had deployed to be internet accessible, but the odds of that just dropped.
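
Adapted from that README example, the replacement looks roughly like this, with my websocket handler details elided:

    const http = require('http');
    const finalhandler = require('finalhandler');
    const serveStatic = require('serve-static');

    const serve = serveStatic('public'); // directory of static HTML/CSS/JS assets

    const server = http.createServer((req, res) => {
      serve(req, res, finalhandler(req, res));
    });

    // Websocket handover works as before because we still own the underlying http.Server.
    server.on('upgrade', (request, socket, head) => {
      // ... hand the connection off to the websocket server here ...
    });

    server.listen(8080);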

Notes on Express “Getting Started” Guide

During the instruction of Codecademy’s “Learn Express” course, we see a few middleware modules that we can optionally use in our project as needed. Examples used in the course are morgan and body-parser, and one of the quizzes asked us to look up vhost. Course material even started using serve-static before we learned about middleware modules at all. These four middleware modules were among those popular enough to be adopted by the Expressjs team who now maintain them.

Since that meant I already had a browser tab open to the Express project site, I decided to poke around. Specifically, I wanted to see how their own Getting Started guide compared to the Codecademy course I just finished. My verdict: the official Express site provides a wider breadth of information but not nearly as much depth for educating a newcomer. If I hadn’t taken the Codecademy course and tried to get started with this site, I would have been able to get a simple Express application up and running but I would not have understood much of what was going on. Especially if I had created an app using the boilerplate application generator. Even after the Codecademy course I don’t know what most of these settings mean!

But the official site had wider breadth, as Codecademy didn’t even mention the boilerplate tool. It also has many lists of pointers to resources, like the aforementioned list of popular middleware modules. Another list I expect to be useful is a sample of options for database integration. Some minimal contextual information was provided with each listed link, but it’s up to us to follow those links and go from there. The only place where this site goes in depth is the Express API reference, which makes sense as the official site for Express should naturally serve as the authoritative source for such information!

I anticipate that I will use Express for at least a few learning/toy projects in the future, at which point I will need to return to this site for API reference and pointers to resources that might help me solve problems in the future. However, before I even get very far into Express, this site has already helped me solve an immediate problem: node-static is out of date.

Notes on Codecademy “Learn Express”

I may have my quibbles with Codecademy’s Learn Node.js course, but it at least gave me a better understanding to supplement what I had learned bumping around on my own. But the power of Node isn’t just the runtime, it’s the software ecosystem which has grown up around it. I have many many choices of what to learn from this point, and I decided to try the Learn Express course.

Before I started the course, I understood Express was one of the earlier Node.js frameworks for building the back end of websites in JavaScript. And while many others have come online since, with more features and capabilities, Express is still popular because it set out not to pack itself with features and capabilities. This meant if we wanted to prototype something slightly off the beaten path, Express would not get in our way. This sounded like a good tool to have in the toolbox.

After taking the course, I learned how Express accomplishes those goals. Express maps HTTP methods (GET/POST/PUT/DELETE) to JavaScript code via “Routes”, and for each route we can compose multiple JavaScript modules in the form of “Middleware”. With this mechanism we can assemble an arbitrary web API by chaining middleware modules like LEGO pieces to respond to HTTP methods. And… that’s basically the entirety of core Express. Everything else is optional, so we only need to pull in what we need (in the form of middleware modules) for a particular project.

When introducing Routes in Express, the little JavaScript handler functions we wrote for learning were actually fully qualified Middleware, but we didn’t know it yet. What I did notice is that they had the signature of three parameters: (request, response, next). The Routes section talked about reading request to build our response, but it never talked about next. Students who are curious about it and strike out to search on their own, as I did, would find information about “chaining”, but it wouldn’t make sense until we learned Middleware. I thought it would have been nice if the course said “we’ll learn about next later, when we learn about Middleware” or something to that effect.
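
Putting Routes and Middleware together looks roughly like this; the path and handlers are hypothetical:

    const express = require('express');
    const app = express();

    // Middleware: does its piece of work, then calls next() to pass control down the chain.
    const logTime = (request, response, next) => {
      console.log(`${request.method} ${request.path} at ${new Date().toISOString()}`);
      next();
    };

    // Route: maps an HTTP method + path to a chain of middleware ending in a response.
    app.get('/status', logTime, (request, response) => {
      response.json({ ok: true });
    });

    app.listen(3000);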

My gripe with this course is in its quiz sections. We are given a partial chunk of JavaScript and told to fill in certain things. When we click “Check Work” we trigger some validation code to see if we did it right. If we did it wrong, we might get an error message to help us on our way. But sometimes the only feedback we receive is that our answer is incorrect, with no further detail. Unlike earlier Node course exercises, we were not given a command prompt to run “node app.js” and see our output. This meant we could not see the test input, we could not see our program’s behavior, and we could not debug with console.log(). I tried to spin up my own Node.js Docker container to run the sample code, but we weren’t given entire programs to run and we weren’t given the test input, so that was a bust.

I eventually found a workaround: use exceptions. Instead of console.log('debug message') I could use throw Error('debug message') and that would show up on the Codecademy UI. This is far less than ideal.

Once I got past the Route section, I proceeded to Middleware. Most of this unit was focused on showing us how various Middleware mechanisms allow us to reduce code duplication. My gripe with this section is that the course made us do useless repetitive work before telling us to replace them with much more elegant Middleware modules. I understand this is how the course author chose to make their point, but I’m grumpy at useless make-work that I would delete a few minutes later.

By the end of the course, we know Express basics of Route and Middleware and got a little bit of practice building routes from freely available middleware modules. The course ends by telling us there are a lot of Express middleware out there. I decided to look into Express documentation for some starting points.

Notes on Codecademy “Learn Node.js”

I’ve taken most of Codecademy’s HTML/CSS course catalog for front-end web development, ending with a series of very educational exercises created outside of Codecademy’s learning environment. I think I’m pretty well set up to execute web browser client-side portions of my project ideas, but I also need to get some education on server-side coding before I can put all the pieces together. I’ve played with Node.js earlier, but I’m far from an expert. It should be helpful to get a more formalized introduction via Codecademy, starting with Learn Node.js.

This course recommends going through Introduction to JavaScript as a prerequisite, so the course assumes we already know those basics. The course does not place the same requirement on Intermediate JavaScript, so some of the relevant course material is pulled into this Node.js course. The section on Node modules was a rerun for me, but here it was augmented with additional details and a pointer to official documentation.

The good news for the overlap portions is that it meant I already had partial credit for Learn Node.js as soon as I started; the bad news is that Codecademy’s own back-end got a little confused. I clicked through “Next” for a quick review, and by doing so it skipped me over a few lessons that I had not yet seen. My first hint something was wrong was getting tossed into a progress-checking quiz and being baffled: “I don’t remember seeing this material before!” I went back to examine the course syllabus, where I saw the skipped portions. The quiz was much easier once I went through that material!

This course taught me about error-first callback functions, something that is apparently an old convention for asynchronous JavaScript (or just Node) code that I hadn’t been aware of. I think I stumbled across this in my earlier experiments and struggled to use them effectively. Here I learned they were the conceptual predecessor to promises, which led to async/await, which plays nice with promises. But what about even older error-first callback code? This is where util.promisify() comes into the picture, so that everyone can work together. Recognizing what error-first callbacks are, and knowing how to interoperate via util.promisify(), should be very useful.
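
A quick sketch of both conventions side by side; the file name is hypothetical:

    const fs = require('fs');
    const util = require('util');

    // Old convention: error-first callback, where any error arrives as the first parameter.
    fs.readFile('notes.txt', 'utf8', (err, data) => {
      if (err) {
        console.error(err);
        return;
      }
      console.log(data);
    });

    // util.promisify() wraps that convention so it plays nice with async/await.
    const readFile = util.promisify(fs.readFile);
    async function readNotes() {
      const data = await readFile('notes.txt', 'utf8');
      console.log(data);
    }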

The course instructs us on how to install Node.js locally on our development computers, but I’m going to stick with using Docker containers. Doing so would be inconvenient if I wanted to rely on globally installed libraries, but I want to avoid global installations as much as possible anyway. NPM is perfectly happy to work at project scope and that just takes mapping my project directory as a volume into the Docker container.

After all, I did that as a Docker & Node test run with ESP32 Sawppy’s web interface. But that brought in some NPM headaches: I was perpetually triggering GitHub Dependabot warnings about security vulnerabilities in NPM modules I hadn’t even realized I was using. Doing a straight “update to latest” did not resolve these issues; I eventually figured out it was because I had been using node-static to serve static pages in my projects. But the node-static package hadn’t been updated in years, so it certainly wouldn’t have picked up security fixes. Perhaps I could switch to another static server NPM module like http-server, or get rid of that altogether and keep using nginx as a sheer-overkill static web server.

Before I decide, though, this Learn Node.js course ended with a few exercises building our own HTTP server using Node libraries. These were a little more challenging than typical Codecademy in-course exercises. One factor is that the instructions told us to do a lot of things with no way to incrementally test them as we went. We didn’t fire up the server to listen for traffic (server.listen()) until the second-from-final step, and by then I had accumulated a lot of mistakes that took time to untangle from the rest of the code. The second factor is that the instructions were more vague than usual. Some Codecademy exercises tell us exactly what to type and on which line, and I think that doesn’t leave enough room for us to figure things out for ourselves and learn. This exercise would sometimes tell us to “fill in the request header” without details or even which Node.js API to use. We had to figure it all out ourselves. I realize this is a delicate balance when writing course material. I feel Codecademy is usually too much “do exactly this” for my taste, but the final project of Learn Node.js might have gone too far in the “left us flailing uselessly” direction.

In the meantime, I believe I have enough of a start to continue learning about server-side JavaScript. My next step is to learn Express.

Notes Of A Three.js Beginner: Color Picker with Raycaster

I was pleasantly surprised at how easy it was to use three.js to draw 256 cubes, each representing a different color from the 8-bit RGB332 palette available for use in my composite video out library. Arranged in a cylinder representing the HSV color model, it failed to give me special insight on how to flatten it into a two-dimensional color chart. But even though I didn’t get what I had originally hoped for, I thought it looked quite good. So I decided to get deeper into three.js to make this more useful. Towards the end of the three.js getting started guide is a list of Useful Links pointing to additional resources, and I thought the top link, Three.js Fundamentals, was as good a place to start as any. It gave me enough knowledge to navigate the rest of the three.js reference documentation.

After several hours of working with it, my impression is that three.js is a very powerful but not very beginner-friendly library. I think it’s reasonable for such a library to expect that developers already know some fundamentals of 3D graphics and JavaScript. From there it felt fairly straightforward to start using tools in the library. But, and this is a BIG BUT, there is a steep drop if we should go off the expected path. The library is focused on performance, and in exchange there’s less priority on fault tolerance, graceful recovery, or even helpful debugging messages for when things go wrong. There’s not much to prevent us from shooting ourselves in the foot and we’re on our own to figure out what went wrong.

The first exercise was to turn my pretty HSV cylinder into a color picker, making it an actually useful tool for choosing colors from the RGB332 color palette. I added pointer-down and pointer-up listeners, and if they both occurred on the same cube, I changed the background color to that color and displayed the two-digit hexadecimal code representing it. Changing the background allows instant comparison to every other color in the cylinder. This functionality requires the three.js Raycaster class, and the documentation example translated across to my application without much fuss, giving me confidence to tackle the next project: add the ability to switch between HSV color cylinder and RGB color cube, where I promptly fell on my face.
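
The Raycaster portion distills down to something like this, simplified from the documentation example rather than my exact code:

    import * as THREE from 'three';

    const raycaster = new THREE.Raycaster();
    const pointer = new THREE.Vector2();

    // Returns the cube under the pointer event, or null if nothing was hit.
    function cubeUnderPointer(event, camera, scene) {
      // Convert pointer position into normalized device coordinates (-1 to +1).
      pointer.x = (event.clientX / window.innerWidth) * 2 - 1;
      pointer.y = -(event.clientY / window.innerHeight) * 2 + 1;

      // Cast a ray from the camera through the pointer and take the nearest intersection.
      raycaster.setFromCamera(pointer, camera);
      const intersects = raycaster.intersectObjects(scene.children);
      return intersects.length > 0 ? intersects[0].object : null;
    }

    // Pointer down and pointer up must land on the same cube before it counts as a pick.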

[Code for this project is publicly available on GitHub]

Notes After Node.js Introduction

After I ran through the Docker Getting Started tutorial, I went back into the Docker container (the one we built as we progressed through the tutorial) and poked around some more. The tutorial template was an Express application, built on Node.js. Since I had parts of this infrastructure in hand, I thought I would just run with it and use Node.js to build a simple web server. The goal is to create a desktop placeholder for an ESP32 acting as a web server, letting me play and experiment quickly without constantly re-flashing my ESP32.

The other motivation was that I wanted to get better at JavaScript. Not necessarily because of the language itself (I’m not a huge fan) but because of the huge community that has sprung up around it, sharing reusable components. I was impressed by what I saw of Node-RED (built on Node.js) and even more impressed when I realized that was only a small subset of the overall Node.js community. More than a few times I’ve researched a topic and found that there was an available JavaScript tool for doing it. (Like building desktop apps.) I’m tired of looking at this treasure trove from the outside, I want this in my toolbox.

Node.js is available with a Windows installer, but given my recent container experience, I went instead to their officially published Docker images. Using that to run through the Node.js introduction required making a few on-the-fly adaptations for Node.js in a container, but I did not consider that a hindrance. I consider it great container practice! But this only barely scratches the surface of what I can find within the Node.js community. Reading the list of Node.js Frameworks and Tools, I realized not only had I not heard about many of these things, I didn’t even understand the words used to describe them! But there was a lot of great information in that introduction. On the infrastructure side, the live demo code was made possible by Glitch.com, another online development environment I’ll mentally file away alongside Cloud9 and StackBlitz.

Even though Google’s V8 JavaScript engine is at the heart of both the Chrome browser and the Node.js server, there are some obviously significant differences between running in a server environment versus running in a browser. But sometimes there are concepts from one world that can be useful in the other, and I was fascinated by people who try to bring these worlds closer together. One example is Browserify, which brings some server-side component management features to browser-side code. And Event Emitter is a project working in the other direction, bringing browser-style events to server-side code.

As far as I can tell, the JavaScript language itself was not explicitly designed with a focus on handling asynchronous operations. However, because it evolved in the web world where network latency is an ever-present factor, the ecosystem has developed in that direction out of necessity. The flexibility of the language permitted an evolution into asynchronous callbacks, but “possible” isn’t necessarily “elegant”, leading to scenarios like callback hell. To make asynchronous callbacks a little more explicit, JavaScript Promise was introduced to help improve code readability. There’s even a tool to convert old-school callbacks into Promises. But as nice as that was, people saw room for improvement, and they built on top of Promises to create the much easier-to-read async/await pattern for performing asynchronous operations in JavaScript. Anything that makes JavaScript easier to read is a win in my book.

With such an enormous ecosystem, it’s clear I can spend a lot more time learning Node.js. There are plenty of resources online like Node School to do so, but I wanted to maintain some semblance of focus on my little rover project. So I went back to the Docker getting started tutorial and researched how to adapt it to my needs. I started looking at a tool called webpack to distill everything into a few static files I can serve from an ESP32, but then I decided I should be able to keep my project simple enough that I wouldn’t need webpack. And for serving static files, I learned Express would be overkill. There’s a Node.js module node-static available for serving static files, so I’ll start by using that and build up as needed from there. This gives me enough of a starting point on the server side so I can start looking at the client side.

SGVHAK Rover Interface Avoids Confirmation Dialog

[Screenshot: Shutdown / Reboot menu]

Our SGVHAK Rover’s brain prefers to be shut down gracefully rather than having its power removed at an arbitrary time. If power cuts out in the middle of a system write operation, it risks corrupting our system storage drive. While our control software is written to be fairly platform-agnostic, we’re primarily running it on a Raspberry Pi, which lacks a hardware shutdown button. So we need to create a UI to initiate a system shutdown via software.

The easiest thing to do would be to add a “Shutdown” button. And since it is a rather drastic event, have an “Are you sure?” confirmation dialog. This historically common pattern is falling out of favor with user interface designers. Computer users today are constantly inundated with confirmation dialogs, making them less effective. If our user has built up a habit of dismissing confirmation dialogs without thinking, a confirmation dialog is no confirmation at all.

So how do we enforce confirmation of action without resorting to the overplayed confirmation dialog? We have to design our UI to prevent an accidental shutdown from a wayward finger press. To accomplish this goal, our shutdown procedure is designed so the user must make a deliberate series of actions, none of which is “Yes/No” on an ineffectual dialog box.

First – our user must enter a “System Power” menu. If they entered by mistake, any of the three active buttons will take them back to the main menu. There’s no way to accidentally shut down with a single mistaken tap.

Second – they must select “Shutdown” or “Reboot”, forcing a deliberate choice to be made before our UI activates the “OK” button. If an incorrect selection is made, it can be corrected without accidentally triggering the action, because both options are on the far left of the screen and “OK” is on the far right. Bottom line: even with two accidental presses, our system will not shut down or reboot.

Third – with the “OK” button now activated after two deliberate actions, the user can tap it to begin the shutdown (or reboot) process.