Window Shopping Marko JS

While I’m window-shopping open-source software like Godot Game Engine, I might as well jot down a few notes in another open-source package. Marko is a web app framework for building user interfaces. Many web development frameworks are free open source so, unlike Godot, that wasn’t why Marko got on my radar.

The starting point was this Mastodon post boosted by someone on my follow list, linking to an article “Why not React?” explaining how React (another client-side web development framework) was built to solve certain problems but solved them in a way that made it very hard to build a high performance React application.

The author asserted that React was built so teams at Facebook could work together to ship features, isolating each team from the implementation details of components created by other teams. This was an important feature for Facebook, because it also meant teams were isolated from each other’s organizational drama. However, such isolation meant many levels of runtime indirection that take time every time some little event happens.

There was also a brief mention of Angular, the web framework I’ve occasionally put time into learning. And here the author asserted that Angular was built so Google could build large-scale desktop applications that run in a web browser. Born into a world where code lives and runs in an always-open browser tab on a desktop computer, Angular grew up in an environment of high bandwidth and plentiful processing power. Then Google realized more of the world’s internet access is done from phones than from desktop computers, and Angular doesn’t work as well in such environments of limited bandwidth and processing power.

What’s the solution? The author is a proponent of streamed HTML, a feature so obscure that it doesn’t even have a consistent name. The underlying browser support has existed for most of the history of browsers, yet it hasn’t been commonly used. Perhaps (this is my guess) the fact it is so old made it hard to stand out in the “oh new shiny!” nature of web evolution and the ADHD behavior it forces on web developers. It also breaks from the familiar patterns of many popular web frameworks, a departure roughly analogous to Unity DOTS or entity component systems.

What’s the solution? Use a framework that supports streamed HTML at its core. Marko was originally developed by eBay to help make browsing auctions fast and responsive. While it has evolved from there (here’s a history of Marko up to the publishing date in 2017) it has maintained that focus on keeping things responsive for users. One of the more recent evolutions is allowing Progressive Web Apps (PWA) to be built with Marko.

It all sounds very good, but Marko is incompatible with one of my usage scenarios. Marko’s magic comes from server-side and client-side runtime components working together. They minimize the computational requirements on the client browser (friendlier to phones) and also minimize the amount of data transmitted (friendlier to data plans.) However, this also means Marko is not compatible with static web servers where there is no server-side computation at all. (Well, not impossible. But if Marko is built for static serving, it loses much of its advantage.) This means when I build a web app to be served from an ESP32, Marko might not be the best option. Marko grew up in an environment where the server-side resources (of eBay) are vastly greater than whatever is on the client. When I’m serving from an ESP32, the situation is reversed: the phone browser has more memory, storage, and computing power.

I’m going to keep Marko in mind if I ever build a web app that can benefit from its advantages, but from this cursory glance I already know it won’t be the right hammer for every nail. As opposed to today’s web browsers, which try to do it all and hence have grown to become the Swiss Army Knives of software, packed with capabilities like PDF document viewers.

Faux VFD Experiment on CodePen

Before I can become a hotshot web developer capable of utilizing any library I come across, I need to build up my skills from small projects. One of the ideas that occurred to me recently was motivated by an appreciation for vacuum fluorescent displays (VFD) as used in a Buick Reatta dashboard.

I’ve played with a small VFD salvaged from an old video cassette machine, but that’s nothing like the panorama we see on a Reatta dashboard. It is not something we’re likely to see ever again. The age of VFDs has long since passed, and it’s not practical for an average hobbyist to build their own. The circle of people who consider VFDs retro cool is probably not big enough to restart VFD manufacturing. That leaves us with building fakes out of technology we have on hand, and I thought I could quickly prototype my idea using web technologies. Here it is:

There are two parts to this faux VFD.

Less obvious is the fine hexagonal mesh emulating the control grid I saw when I put my salvaged VFD under a macro lens. I believe it is one of the two electrodes; whether anode or cathode, I do not know. It is a subtle part of a VFD that designers put effort into reducing and masking, but now it is part of the VFD charm and I wanted to replicate it.

Since it is a pattern built out of many small repeating sections, I thought the best way to recreate this mesh was with a bit of JavaScript code drawing to a <canvas>. Otherwise, I would have to create it in markup, and that means a lot of duplication in a massive markup file. The innermost section of the loop draws three lines 60 degrees apart, and the loop iterates across the entire display area. Using code to draw vector graphics (versus tiling a bitmap) makes this grid dynamically scalable to different resolutions.
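The heart of that loop looks something like the following sketch (not my exact CodePen code; spacing, stroke length, and color are arbitrary placeholders):

    // Draw three short strokes 60 degrees apart at every grid point,
    // iterating across the entire canvas.
    const canvas = document.getElementById('grid') as HTMLCanvasElement;
    const ctx = canvas.getContext('2d')!;
    const spacing = 8; // distance between grid points in pixels (arbitrary)
    const radius = 4;  // half-length of each stroke (arbitrary)

    ctx.strokeStyle = 'rgba(255, 255, 255, 0.3)';
    ctx.beginPath();
    for (let y = 0; y < canvas.height; y += spacing) {
      for (let x = 0; x < canvas.width; x += spacing) {
        for (let i = 0; i < 3; i++) {
          const angle = (i * Math.PI) / 3; // 0, 60, 120 degrees
          ctx.moveTo(x - radius * Math.cos(angle), y - radius * Math.sin(angle));
          ctx.lineTo(x + radius * Math.cos(angle), y + radius * Math.sin(angle));
        }
      }
    }
    ctx.stroke();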

Rendered behind the mesh is the “content” of the VFD, which is just a simple circle in this case. Since this is a large feature likely to be subject to direct editing, I decided to do this in SVG markup. My first attempt didn’t look right: modern screens are far too clear and crisp compared to a VFD. Such clarity is useful for rendering the fine details of my hexagonal control grid, but not for the VFD phosphors. I looked around online and found an SVG blur filter, which got the result a lot closer to how a VFD would look.

I know the result isn’t perfect, and I don’t know if I will end up applying this experiment to a future project, but I really liked the fact that I could whip up a quick prototype in less than an hour and, furthermore, share that prototype online via infrastructure like CodePen. Even if I don’t end up using it myself, putting it online makes it available for someone else to take this idea further.

Window Shopping Arwes Framework

The reason I want to be able to read JavaScript code written by others, no matter what oddball syntax they want to use, is because I want to be able to leverage the huge selection of web development libraries available freely on the internet. I started learning software development in a very commercialized world where everything costs money. Frameworks, components, technical support, sometimes even documentation cost money. But in today’s open-source world, the biggest cost is the time spent getting up to speed. I want to have the skill to get up to speed quickly, but I’m definitely not there yet.

The latest motivation is a nifty-looking web app framework under development called Arwes. (Heard through Hackaday.) Arwes aims to make it easy to build computer interfaces that resemble the fictional interfaces we see in science-fiction movies. This is, of course, much easier said than done. What shows up onscreen in a movie typically only needs to serve the purpose of a single scene, which means a single interaction path, with a single set of data, toward a single goal. It could easily be a PowerPoint slide deck (and sometimes that’s exactly what they are on set.)

Real user interfaces have to handle a wide range of interactions, with a wide range of data, while serving multiple possible tasks. Not to mention having to worry about things never seen onscreen like internationalization and accessibility. Trying to make sci-fi onscreen interfaces work in a real-world capacity usually ends up as a painful exercise. I’ve seen many efforts to create UI resembling Star Trek: The Next Generation‘s LCARS interface, and they always end up delivering a poor user experience and inefficient use of screen real estate. And then there’s the fact that copyright around LCARS prevents a truly free open-source implementation.

I’m confident Arwes will evolve to tackle these and other similar issues. Reading the current state of the documentation, I see there exists a set of “Vanilla” controls for use in any web framework of choice, and a sample implementation of such integration with the React.js framework. At the moment I don’t know enough to leverage the Vanilla controls directly, and I have yet to learn React. I have more learning ahead of me before I could look at a framework like Arwes and say: “Oh yeah, I know how to use that.” That is still my goal and I’ll get there one small step at a time.

JavaScript Spread Syntax and Other Un-Google-Able Shorthand

I’ve had the opportunity to look at a lot of sample JavaScript code snippets as part of learning web development. For the most part I could follow along, even if I lacked the skill to create something new on my own. Due to its rather haphazard evolution, though, JavaScript does have an annoying habit of having many different ways to do the same thing. Part of this is backward-looking and historical: as browsers tried to merge different implementations into one globally compatible whole, everyone’s slightly different approaches had to remain valid for backwards compatibility. Part of this is forward-looking and cultural: well-meaning people try to solve old problems by inventing something new intended to do everything the old stuff does “but better”. Combined with the need for backwards compatibility, such efforts mean we end up with the legendary XKCD “Standards”.

Particularly annoying to me are JavaScript syntax additions that are just about impossible to put through a search engine. They’re usually scattered around, but I found a Medium post, “JavaScript’s Shorthand Syntax That Every Developer Should Know”, that was a pretty good roundup of every single one that had annoyed me. The author pitches these shorthands as enabling “futuristic, minimal, highly-readable, and clean source code!” And as a beginner, I disagree. They are opaque and unreadable to those who don’t already know them and, due to their nature, it is very hard for newbies to figure out what they mean.

Take the first example: the spread syntax. When I first came across it, what I saw in the source code were three periods. That is not self-explanatory as to its function. The Medium post had a comparison example and touted spread syntax as much cleaner than Array.from(arguments), but at least I could search for “Array.from()” and “arguments” to learn what those do. Trying to search for “…” was a fruitless exercise in frustration that ended in tears, because search engines just ignore “…” as input. I did not know what the spread syntax was (or even that that’s what it was called) thus I was up a creek without a paddle.
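Now that I know what it is called, here is a quick sketch of what those three dots do (my own example, not the Medium post’s):

    // "Rest" collects arguments into a real array; "spread" expands an array in place.
    function logLargest(...values: number[]) {
      const copy = [...values];        // shallow copy via spread
      console.log(Math.max(...copy));  // spread as individual arguments
    }
    logLargest(3, 1, 4, 1, 5); // prints 5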

The rest of this Medium post covered:

  • Inline short-circuits and nullish coalescing. These use “||” and “??”, but any search hits would be buried under information about the logical OR operator.
  • Exponentiation operator and assignment. These are “**” and “**=”, which usually get treated as accidental duplicate characters, leading to information about “*” and “*=”.
  • Optional chaining via “?.”, another series of punctuation marks that gets ignored by search engines just like “…”.
  • Non-decimal number representation is the least bad of the bunch; at least beginners have something to search with, like “What does 0x10 mean in JavaScript”.
  • Destructuring and multiple assignment are the worst. There is literally nothing to put into a search engine, not even a “…” or “?.” (which would get ignored anyway.) There’s no way for a beginner to tell the syntax would extract selected member values from a JavaScript object.
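For my own future reference, here is roughly what each of those looks like in code (labels and names are mine):

    const config: { retries?: number; nested?: { value?: number } } = {};

    const a = config.retries || 3;       // logical OR: falls back on any falsy value (0, '', etc.)
    const b = config.retries ?? 3;       // nullish coalescing: falls back only on null/undefined
    const c = 2 ** 10;                   // exponentiation operator: 1024
    const d = config.nested?.value;      // optional chaining: undefined instead of throwing
    const e = 0x10;                      // hexadecimal literal for decimal 16
    const { nested, ...rest } = config;  // destructuring plus "rest" into a new object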

I can see the value of these JavaScript shorthands for creating terse code, but terse code is not the same as readable code. Even though I’m aware of these concepts now, every time I come across such shorthand I have to stop and think before I can understand the code. It becomes a big speed bump in my thought process, and I don’t like it. I certainly don’t feel it is more readable. However, I have to grudgingly agree the author’s title is true, just not in the way they meant it. They are “JavaScript’s Shorthand Syntax That Every Developer Should Know” because such code already exists, and every JavaScript developer needs to know them well enough to understand code they come across.

Angular Signals Code Lab CSS Requires “No-Quirks” Mode

The sample application for “Getting Started with Angular Signals” is designed to run on the StackBlitz cloud-based development environment. Getting it up and running properly on my local machine (after solving installation and compilation errors) took more effort than I had expected. I wouldn’t call the experience enjoyable, but it was certainly an educational trip through all the infrastructure underlying an Angular app. Now that I have all the functional components up and running, I’m turning my attention to the visual appearance of the app. The layout only seems to work at certain window sizes and not others. I saw a chance to practice my CSS layout debugging skills, but what it also taught me was HTML “quirks mode”.

The sample app was styled to resemble a small hand-held device in the vein of the original Nintendo Game Boy: a monochrome screen up top and a keyboard on the bottom. (On second thought, maybe it’s supposed to be a Blackberry.) This app didn’t have audio effects, but there’s a little fake speaker grill in the lower right for a visual finish. The green “enclosure” of this handheld device is the <body> tag of the page, styled with border-radius for its rounded corners and box-shadow to hint at 3D shape.

When things go wrong, the screen and keyboard spill over the right edge of the body. Each has CSS specifying a height of 47% and an almost-square aspect-ratio of 10/9. The width, then, would be a function of those two values. The fact that they become too wide and spill over the right edge means they have “too much” height for the specified aspect ratio.

Working my way up the component tree, I found the source of “too much” height was the body tag, which has CSS specifying a width (clamped within a range) and an aspect-ratio of 10/17. The height, then, should be a function of those two values. When things go wrong, the width seems to be clamped to the maximum of the specified range as expected, but the body is too tall. Something has taken precedence over aspect-ratio:10/17, but that’s where I got stuck: I couldn’t figure out what the CSS layout system had decided was more important than maintaining aspect ratio.

After failing to find an explanation on my own, I turned to the StackBlitz example, which worked correctly. Since I’ve learned the online StackBlitz example isn’t exactly the same as the GitHub repository, the first thing I did was compare CSS. They seemed to match, minus the syntax errors I had to fix locally, so that wasn’t it. I had a hypothesis that StackBlitz had something in its IDE page hierarchy and that’s why it worked in the preview pane. But clicking “Open in new tab” to run the app independent of the rest of the StackBlitz IDE HTML, it still looked fine. Inspecting the object tree and associated stylesheets side-by-side, I saw that my local copy seemed to have duplicated styles. But since that just meant one copy completely overrode the other identical copy, it couldn’t be the explanation.

The next difference I noticed between StackBlitz and local copy is the HTML document type declaration at the top of index.html.

<!DOCTYPE html>

This is absent from the project source code, but StackBlitz added it to the root when it opened the app in a new tab. I doubted it had anything to do with my problem because it isn’t a CSS declaration. But in the interest of eliminating differences between them, I added <!DOCTYPE html> to the top of my index.html.

I was amazed to find that was the key. CSS layout now respects aspect-ratio and constrains the height of the body, which keeps the screen and keyboard from spilling over. But… why does the HTML document type declaration affect CSS behavior? A web search eventually led me to the answer: backwards compatibility, or “quirks mode”. By default, browsers emulate the behavior of older browsers. What are those non-standards-compliant behaviors? That is a deep dark rabbit hole I intend to avoid as much as I can. But it’s clear one or more quirks affected aspect-ratio as used in this sample app. Having the HTML document type declaration at the top of my HTML activates “no-quirks” mode, which strictly adheres to modern HTML and CSS standards, and now the layout works as intended.
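As a side note, there is a quick way to check which mode a page ended up in from the developer console:

    // "BackCompat" means quirks mode; "CSS1Compat" means no-quirks (or limited-quirks) mode.
    console.log(document.compatMode);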

The moral of today’s story: remember to put <!DOCTYPE html> at the top of my index.html for every web app project. If things go wrong, at least the mistake is likely my own fault. Without the declaration, there is intentional weirdness because some old browser got things wrong years ago, and I don’t want that to mess me up. (Again.)

Right now, I have a hard enough time getting CSS to do my bidding for normal things. Long term, I want to become familiar enough with CSS to make it do not just functional but also fun decorative things.


My code changes are made in my fork of the code lab repository in branch signals-get-started.

Angular Standalone Components for Future Projects

Reading through the Angular developer guide for standalone components filled in many of the gaps left after going through the “Getting Started with Angular Standalone Components” code lab. The two are complementary: the developer guide gave us reasons why standalone components exist, and the code lab gave us a taste of how to put them to use. Between framework infrastructure and library support, it has become practical to make Angular components stand independently of Angular modules.

Which is great, but one important detail is missing from the documentation I’ve read. If it’s such a great idea to have components independent of NgModule, why did components need NgModule to begin with? I assume that at some point in Angular’s history, having components live in an NgModule was a better idea than having them stand alone. Not knowing those reasons is a blank spot in my Angular understanding.

I had expected to come across some information on when to use standalone components and when to package components in an NgModule. Almost every software development design decision is a tradeoff between competing requirements, and I had expected to learn when using an NgModule is a better tradeoff than not having one. But I haven’t seen anything to that effect. It’s possible the past reasons for NgModule have gradually atrophied as Angular evolved with the rest of the web, leaving a husk that we can leave behind with no reason to go back. I would still appreciate seeing words to that effect from the Angular team, though.

One purported benefit was to ease the Angular learning curve, making it so we only have to declare dependencies in the component we’re working on instead of having to do it both in the component and in its associated NgModule. As a beginner, that reason sounds good to me, so I guess I should write future Angular projects with standalone components until I have a reason not to. It’s a fine plan, but I worry I might run into situations where using an NgModule would be the better choice and I wouldn’t recognize “a reason not to” when it is staring me in the face.
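As I understand it so far, a standalone component declares its own dependencies right in the @Component decorator. A minimal sketch (component name and template are hypothetical):

    import { Component } from '@angular/core';
    import { CommonModule } from '@angular/common';

    @Component({
      selector: 'app-greeting',
      standalone: true,          // no NgModule required
      imports: [CommonModule],   // dependencies declared on the component itself
      template: `<p>Hello, {{ name }}!</p>`,
    })
    export class GreetingComponent {
      name = 'standalone world';
    }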

On the topic of future projects, at some point I expect I’ll grow beyond serving static content via GitHub Pages. Fortunately, I think I have a few free/trial options to explore before committing money.

Trying Vite and Its IE11 Legacy Option

While looking over Vue.js’s Quick Start example, I noticed its default set of tools included Vite. I understand it plays a role analogous but not identical to webpack in Angular’s default tool set. I found webpack’s documentation quite opaque, so I thought I would try to absorb what I can from Vite’s documentation. I still don’t understand all the history and issues involved in JavaScript build tools, but I was glad to find Vite documentation more comprehensible.

The introductory “Why Vite?” page explained that Vite takes advantage of modern browser features for JavaScript code modules. As a result, the client’s browser can handle some of the work that previously had to be done on the developer’s machine by webpack & friends. However, that still leaves a smaller set of things better done up front by the developer instead of later by the client, and Vite takes care of those.

In time I’ll learn enough about JavaScript to understand what all of that means, but one section caught my attention. Given Vite’s focus on leveraging modern browsers, I was surprised to see a “browser compatibility” section that includes an official plug-in, @vitejs/plugin-legacy, to support legacy browsers. Given my interest in writing web apps that run on my pile of old Windows Phone 8 devices, this could be very useful!

I opened up my NodeJS test apps repository and followed Vite’s “Getting Started” guide to create a new project using the “vanilla TypeScript” template preset. To verify I had it working as expected, I built it and successfully displayed the results in a current build of the Google Chrome browser.
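Adding the plugin is a small edit to vite.config.ts, roughly this sketch based on the plugin’s README (my exact browser targets and polyfill options may differ):

    // vite.config.ts
    import { defineConfig } from 'vite';
    import legacy from '@vitejs/plugin-legacy';

    export default defineConfig({
      plugins: [
        legacy({
          targets: ['defaults', 'ie >= 11'],
          additionalLegacyPolyfills: ['regenerator-runtime/runtime'],
        }),
      ],
    });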

Then I added the legacy plugin and rebuilt. That bloated the distribution directory up to 80 kilobytes, which is a huge increase but still only about a third of the size of a blank Angular app, and quite manageable even in space-constrained situations. And most importantly: yes, it runs on my old Nokia Lumia 920 phone with the Windows Phone 8 operating system. Nice! I’m definitely tucking this away in my toolbox for later use. But for right now, I should probably get back to learning Vue.

Notes on Vue.js Quick Start

After going through Codecademy’s “Learn Vue.js” course, I went to the Vue.js site and followed their quick start “Creating a Vue Application” procedure to see what a “Hello World” looks like. It was quite instructive and showed me many facets of Vue not covered by Codecademy’s course.

The first difference is that here we’re creating an application with Vue.js, which means firing up the command line tool npm init vue@latest to create application scaffolding with selected features. Since I’m a fan of TypeScript and of maintaining code formatting, I said yes to the “TypeScript”, “ESLint” and “Prettier” options and no to the rest.

I then installed all the packages for that scaffolding with npm install and ran npm run build to look at the results in the /dist/ subdirectory. They added up to a little over 60 kilobytes, roughly one-third the built size of Angular’s scaffolding. This is even more impressive considering that several of those kilobytes are placeholders: about a half dozen markup files plus a few SVG files for vector graphics. The drastically smaller file sizes of Vue apps are great, but what have I given up in exchange? That’s something I’ll be looking for as I learn more about both platforms.

Poking around in the scaffolding app, I saw it demonstrated Vue componentization via its SFC (Single File Component) file format. A single *.vue file contains a component’s HTML, CSS, and TypeScript/JavaScript. Even though they are all text-based formats designed to coexist, I’m not a fan of mixing three different syntaxes in a single file. I prefer Angular’s approach of keeping each type in its own file. To mitigate confusion, I expect Vue’s editor tooling, Volar, helps keep the three types distinct.

Some Vue components in the example are tiny, like IconTooling.vue, which is literally a wrapper around a chunk of SVG to deliver a vector-graphic icon. Others are a little more substantial, like WelcomeItem, whose template has three slots for information: #icon, #heading, and everything else. This feels quite different from how Angular components get data from their parents. I look forward to learning more about this style of code organization.

While running npm run build I noticed this Vue boilerplate’s build pipeline didn’t use webpack; it used something called Vite instead. Since I couldn’t make heads or tails of webpack on my first pass, I was encouraged to find I could understand a lot more of Vite.

Next Study Topic: Vue.js

Having an old Windows Phone 8 die (followed by dissection) was a fresh reminder I haven’t put enough effort towards my desire to “do something interesting” with those obsolete devices. The mysterious decay of one device was a very final bell toll announcing its end, but the clock is ticking on the rest of them as well. Native app development for the platform was shut down years ago, leaving only the browser as an entry point. But even that browser, based on IE11, is getting left further and further behind every day by web evolution.

In one of my on-and-off trips into web development, I ran through the Angular framework tutorial and then added legacy project flags to make an IE11-compatible build I could run on a Windows Phone 8. That is no longer possible now that Angular has dropped IE11 support. One of the reasons I chose Angular was that it was an “everything included, plus the kitchen sink” type of deal. An empty Angular app created via its “ng new” command includes all the tools already configured with their Angular defaults. I knew the concepts behind tools like “bundler” and “minimizer”, but I didn’t know enough to actually use them firsthand. Angular boilerplate helped me get started.

But the reason I chose to start with Angular is also the reason I won’t stay with it: the everything framework is too much framework. Angular targets projects far more complex and sophisticated than what I’m likely to tackle in the near future. Using Angular to create a compass web app was hilarious overkill, where the size of the framework overhead far exceeded the size of the actual app code.

In my search for something lighter-weight, I briefly looked into Polymer/Lit and decided I had overshot too far into too little framework. Looking around for my Goldilocks, one name that has come up frequently in my web development learning is Vue.js. It’s supposed to be lighter and faster than Angular/React while still having some of the preconfigured hand-holding I admit I still need. Maybe it would offer a good middle ground and give me just enough framework for future projects.

One downside is that the current version, Vue 3, won’t run on IE11 either. However, the documentation claims most Vue fundamentals haven’t changed from Vue 2, which does support IE11 and remains on long-term support status until the end of 2023. Maybe I can get started on Vue 3 and write simple projects that would still run on Vue 2. Even if that doesn’t work, it should help orient me in a simpler setup that I could try to get running on Windows Phone 8.

I’m cautiously optimistic I can learn a lot here, because I saw lots of documentation on Vue project site. Though that is only a measure of quantity and not necessarily quality. It remains to be seen whether the material would go over my head as Lit’s site did. Or if it would introduce new strange concepts with a steep learning curve as RxJS did. I won’t know until I dive in.

Aesthetically, there’s at least one Material Design library to satisfy my preference for web app projects. I’ll have to find out if it would bloat an app as much as Angular Material did.

Codecademy offers one course for Vue.js, so I thought I’d start there.

CSS Beginner Struggles: aspect-ratio and height

Reviewing CSS from web.dev’s “Learn CSS!” course provided a refresher on a lot of material and also introduced me to new material I hadn’t seen before. I had hoped for a bit of “aha” insight to help me with CSS struggles in my project, but that didn’t happen. The closest was a particular piece of information (Flexbox for laying out along one dimension, Grid for two dimensions) that told me I’m on the right track using Flexbox.

A recurring theme in my CSS frustration is the fact that height and width are not treated the same way in HTML layout. I like to think of them as peers, two equal and orthogonal dimensions, but that’s not how things work here. It traces back to HTML’s fundamentals of laying out text for reading in a left-to-right, top-to-bottom language like English. Like a typesetter, the layout is specified in terms of width: column width, margin width, etc. Those are the parameters that feed into layout. The height of a paragraph is then determined by the length of text that fits within the specified width. Thus, height is an output result, not an input parameter, of the layout process.

For my Compass web app, I had a few text elements I knew I wanted to lay out. Header, footer, sensor values, etc. After they have all been allocated screen real estate, I wanted my compass needle to be the largest square that could fit within the remaining space. That last part is the problem: while we have ways to denote “all remaining space” for width, there’s no such equivalent for height because height is a function of width and content. This results in unresolvable circular logic when my content (square compass) is a function of height, but the height is a function of my content.

I could get most of the way to my goal with liberal application of “height: 100%” in my CSS rules. It does not appear to inherit/cascade, so I have to specify “height: 100%” on every element down the DOM hierarchy. If I don’t, height of that element collapses to zero because my compass doesn’t have an inherent height of its own.

Once I get to my compass, I could declare it to be a square with aspect-ratio. But when I did so, I found that aspect-ratio does its magic by changing the element’s height to satisfy the specified aspect ratio. When my remaining space is wider than it is tall, aspect-ratio expands the height so it matches the width. This is consistent with how the rest of HTML layout treats width vs. height, and it accomplishes the specified aspect ratio. But now it is too tall to fit within the remaining space!

Trying to rein that in, I played with “height: 100%”, “max-height: 100%”, and various combinations of similar CSS rules. They could affect CSS-specified height values, but seemed to have no effect on the height change from aspect-ratio. Setting aspect-ratio means height is changed to fit the available width, and I found no way to declare the reverse in CSS: change width to fit within the available height.

From web.dev I saw that Codepen.io offers the ability to embed code snippets in a webpage, so here’s a test to see how it works on my own blog. I pulled the HTML, CSS, and minimal JavaScript representing a Three.js <canvas> into a pen so I could fiddle with this specific problem independent of the rest of the app. I think I’ve embedded it below, but here’s a link in case the embed doesn’t work.

After preserving a snapshot of my headache in Codepen, I returned to the Compass app, which still had a problem that needed solving. Unable to express my intent via CSS, I turned to code. I abandoned aspect-ratio and resized my Three.js canvas to a square whose size is calculated via:

Math.floor(Math.min(clientWidth, clientHeight));

That takes width or height, whichever is smaller, and rounds down. I have to round down to the nearest whole number, otherwise scroll bars pop up, and I don’t want scroll bars. I hate solving a layout problem with code, but it’ll have to do for now. Hopefully sometime in the future I will have a better grasp of CSS and can write a proper stylesheet to accomplish my goal. In the meantime, I’m looking for other ways to make layout more predictable, such as making my app full screen.
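Here is a sketch of how that plugs in, assuming a Three.js WebGLRenderer inside a container element (element and function names are mine, not the actual Compass app’s):

    import * as THREE from 'three';

    const container = document.getElementById('compass') as HTMLElement;
    const renderer = new THREE.WebGLRenderer();
    container.appendChild(renderer.domElement);

    function resizeToSquare() {
      // Whichever of width or height is smaller, rounded down so scroll bars don't appear.
      const size = Math.floor(Math.min(container.clientWidth, container.clientHeight));
      renderer.setSize(size, size);
    }

    window.addEventListener('resize', resizeToSquare);
    resizeToSquare();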


The source code for my project is publicly available on GitHub, though it no longer uses aspect-ratio as per the workaround described at the end of this post.

Window Shopping Polymer and Lit

While poking around with the browser magnetometer API on Chrome for Android, one of my references was a “Sensor Info” app published by Intel. I was focused on the magnetometer API itself at first, but I made a mental note to come back later and look at the rest of the web app. Now I’m returning for another look, because “Sensor Info” has the visual style of Google’s Material Design yet is far smaller than an Angular project with Angular Material. I wanted to know how that was done.

The easier part of the answer is Material Web, a collection of web components released by Google for web developers to bring Material Design into their applications. “Sensor Info” imported just Button and Icon, with unpacked sizes weighing in at several tens of kilobytes each. Reading the repository README was not terribly confidence-inspiring… technically, Material Web has yet to reach version 1.0 maturity even though Material Design has moved on to its third iteration. Not sure what’s going on there.

Beyond the visual glitz, the “Sensor Info” application was built with both Polymer and Lit. (sensors-app.js declares a SensorsApp class which derives from LitElement, and imports a lot of stuff from @polymer.) This confused me because I had thought Lit was the successor to Polymer. As I understand it, the Polymer team plans no further work after version 3 and has taken the lessons learned to start from scratch with Lit. Here’s somebody’s compare-and-contrast writeup I got via Reddit. Now I see “Sensor Info” has references to both projects and, not knowing either Polymer or Lit, I don’t think I’ll have much luck deciphering where one ends and the other begins. Not a good place for a beginner to start.
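For context on the Lit half alone, a bare-bones LitElement component is quite compact. Something like this sketch (Lit 2 syntax, names hypothetical):

    import { LitElement, html, css } from 'lit';

    // A minimal web component built on LitElement.
    class HelloSensor extends LitElement {
      static styles = css`p { color: teal; }`;

      render() {
        return html`<p>Hello from a web component</p>`;
      }
    }
    customElements.define('hello-sensor', HelloSensor);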

I know both are built on the evolving (stabilizing?) web components standard, and both promise to be far simpler and lightweight than frameworks like Angular or React. I like that premise, but such lightweight “non-opinionated” design also means a beginner is left without guidance. “Do whatever you want” is a great freedom but not helpful when a beginner has no idea what they want yet.

One example is the process of taking the set of web components in use and packaging them together for web app publishing. They expect the developer to use a tool like webpack, but there is no affinity for webpack; a developer can choose any other tool. Great, but I hadn’t figured out webpack myself, nor any alternatives, so this particular freedom was not useful to me. I got briefly excited when I saw that there are “Starter Kits” already packaged with tooling that is not required (remember, non-opinionated!) but is convenient for starting out. Maybe there’s a sample webpack.config.js! Sadly, I looked over the TypeScript starter kit and found no mention of webpack or any similar tool. Darn. I guess I’ll have to revisit this topic sometime after I learn webpack.

Webpack First Look Did Not Go Well

I’ve used Three.js in two projects so far to handle 3D graphics, but I’ve been referencing it as a monolithic everything bundle. Three.js documentation told me there was a better way:

When installing from npm, you’ll almost always use some sort of bundling tool to combine all of the packages your project requires into a single JavaScript file. While any modern JavaScript bundler can be used with three.js, the most popular choice is webpack.

— Three.js Installation Guide

In my magnetometer test project, I tried to bring in webpack to optimize three.js but aborted after getting disoriented. Now I’m going to sit down and read through its documentation to get a better feel of what it’s all about. Here are my notes from a first look as a beginner.

I have a minor criticism about their home page. The first link is to their “Getting Started” guide, the second link is to their “Concepts” section. I followed the first link to “Getting Started”, which is the first page in their “Guides” section. I got to the end of that first page and saw the “Next” link is about Asset Management, the next guide page. Each guide page linked to the next. I quickly got into guide pages that used acronyms or terminology that had not yet been explained. Later I realized this was because the terminology was covered in the “Concepts” section. In hindsight, I should not have clicked “Next” at the end of the “Getting Started” guide; I should have gone back to “Concepts” to learn the lingo before reading the rest of the guides.

Reading through the guides, I quickly understood that webpack is a very JavaScript-centric system built by people who think JavaScript first. I wanted to learn how to use webpack to integrate my JavaScript code and my HTML markup, but these guide pages didn’t seem to cover my use scenario. Starting right off the bat with the Getting Started guide, they used code to build their markup:

   import _ from 'lodash'; // Lodash, imported so webpack has a dependency to bundle
   const element = document.createElement('div');
   element.innerHTML = _.join(['Hello', 'webpack'], ' ');
   document.body.appendChild(element);

Wow. Was that really necessary? What about people who wanted to, you know, write HTML in HTML?

    <body><div>Hello webpack</div></body>

Is that considered quaint and old-fashioned nowadays? I didn’t find anything in “Guides” or “Concepts” discussing such integration. I had thought the section on “HtmlWebpackPlugin” would have my answer, but the configuration it describes replaced my index.html (destroying my markup) with a bare-bones index.html that loads the generated JavaScript bundles and contains no markup of its own. How does my own markup figure in this picture? Perhaps the documentation authors felt it was too obvious to write down, but I certainly don’t understand how to do it. I feel like an old man yelling at a cloud “tell me how to use my HTML!”

I had thought putting my HTML into the output directory was a basic task, but I failed. And I was dismayed to see “Concepts” and “Guides” pages got deeper and more esoteric from there. I understand webpack is very powerful and can solve many problems with modularizing JavaScript code, but it also requires a certain mindset that I have yet to acquire. Webpack’s bare bones generated index.html looks very similar to the generated index.html of an Angular app (which uses both HTML and webpack) so there must be a way to do this. I just don’t know it yet.

Until I learn what’s going on, for the near future I will use webpack only indirectly: it is already configured and running as part of Angular web app framework’s command line tools suite. I plan to do a few Angular projects to become more familiar with it. And now that I’ve read through webpack concepts, I should recognize pieces of Angular workflow as “Aha, they used webpack to do that.”

Visualizing Magnetometer Data with Three.js

I’m happy that my simple exploratory web app was able to obtain data from my phone’s integrated magnetometer. I recognize there are some downsides to how easy it was, but right now I’m happy I can move forward. Ten times a second (10Hz is the maximum supported update rate) my callback receives x, y, and z values in addition to auxiliary data like a timestamp. That’s great, but the underlying meaning is not very intuitive for me to grasp. What I want next is a visualization of that three-axis magnetometer data.

I turned to the JavaScript 3D graphics library Three.js. The last time I used Three.js was to visualize the RGB332 color space, using a 2D projection to help me make sense of data along three dimensions of color: a cylinder representing HSV color space and a rectangular solid representing RGB. Now I want to visualize a single vector in three-dimensional space representing the local magnetic field as reported by my phone’s magnetometer. I was a little intimidated by the math for calculating 3D transforms. I tried to make my RGB332 color app transition between HSV and RGB color space, but it never looked right because I didn’t understand the 3D transform math.

Fortunately, this time I didn’t have to do any of the math myself. Three.js has a built-in function that accepts the x, y, and z components of a target coordinate and calculates the rotation required to have a 3D object look at that point. My responsibility is to create an object to convey this information. I chose to follow the precedent of an analog compass, which is built around a small magnetic needle shaped like a narrow diamond, with one half painted red and the other half painted white. For this 3D visualization I created a shape out of two cones, one red and one white. When this shape looks at the magnetometer vector, it functions very similarly to the sliver of magnet inside a compass.
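A rough sketch of that needle and the Three.js lookAt() call (dimensions, colors, and names are placeholders, not my project’s exact values):

    import * as THREE from 'three';

    const needle = new THREE.Group();
    const redHalf = new THREE.Mesh(
      new THREE.ConeGeometry(0.2, 1, 16),
      new THREE.MeshBasicMaterial({ color: 0xff0000 })
    );
    const whiteHalf = new THREE.Mesh(
      new THREE.ConeGeometry(0.2, 1, 16),
      new THREE.MeshBasicMaterial({ color: 0xffffff })
    );
    // Cones point along +Y by default while lookAt() aims an object's +Z axis,
    // so give each half a quarter turn and place them base-to-base along Z.
    redHalf.rotation.x = Math.PI / 2;
    redHalf.position.z = 0.5;
    whiteHalf.rotation.x = -Math.PI / 2;
    whiteHalf.position.z = -0.5;
    needle.add(redHalf, whiteHalf);

    // On each magnetometer reading, point the red tip at the reported field vector.
    function onReading(x: number, y: number, z: number) {
      needle.lookAt(x, y, z);
    }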

As a precaution, I added a check for WebGL before firing up Three.js code. I was pretty confident any Android Chrome that supported the magnetometer API would support WebGL as well, but it was good practice to confirm.
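The check itself can be as simple as asking the browser for a WebGL context (a generic sketch, not Three.js’s bundled helper):

    // Returns true if the browser can create a WebGL context.
    function hasWebGL(): boolean {
      try {
        const canvas = document.createElement('canvas');
        return !!(canvas.getContext('webgl') || canvas.getContext('experimental-webgl'));
      } catch {
        return false;
      }
    }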

One thing I’m not doing (but should) is to account for screen orientation. Chrome developers have added a feature to automatically adjust for screen orientation but right now I’m just going to deactivate auto-rotate on my phone (or… phones!)


Source code for my exploratory project is publicly available on GitHub

Magnetometer API Privacy Concerns

Many Android phones have an integrated magnetometer available to native apps. Chrome browser for Android also makes that capability available to web apps, but right now it is hidden by default as a feature preview. Once I enabled that feature, I was able to follow some sample code online and quickly obtain access to magnetometer data in my own web app. That was so easy! Why was it blocked by default?

Apparently, the answer (or at least a part of it) was that it was too easy. Making magnetometer and other hardware sensor data freely available to web apps would feed into hardware-based browser fingerprinting. Even though magnetometer data by itself might be innocuous, it could be combined with other seemingly-innocent data to uniquely identify users thereby piercing privacy protections. This is bad, and purportedly why Apple has so far declined to support sensor APIs.

That article was in 2020, though, and the web moves fast. When I read up on magnetometer API on MDN (Mozilla Developer Network) I encountered an entire section on obtaining user permission to read hardware sensor data. Since I didn’t have to do any of that for my own test app to obtain magnetometer data, I guess this requirement is only present in Mozilla’s own Firefox browser. Or perhaps it was merely a proposal hoping to satisfy Apple’s official objection to supporting sensor API.

I found no mention of Mozilla’s permission management framework in the official magnetometer API specification. There’s a “Security and Privacy Considerations” section but it’s pretty thin and I don’t see how it would address fingerprinting concerns. For what it’s worth, “limiting maximum sample frequency” was listed as a potential mitigation, and Chrome 111 only allows up to 10Hz.

Today, users like myself have to explicitly activate this experimental feature. And at the top of the “chrome://flags” page where we do so, there’s an explicit warning that enabling experimental features could compromise privacy. In theory, people opting in to the magnetometer today are aware of the potential for abuse, but that risk has to be addressed before it’s rolled out to everyone. In the meantime, I have opted in and I’m going to have some fun.

Magnetometer API in Android Chrome Browser

I became curious about magnetometers and was deep into shopping for a prototype breakout board when I remembered I already had some on hand. The bad news is that they’re buried inside mobile devices, the good news is that they’re already connected to all the supporting circuitry they need. Accessing them is then “merely” a software task.

Android app developers can access magnetometer values via position sensor APIs. It’s possible to query everything from raw uncalibrated values to device orientation information computed from calibrated magnetometer data fused with other sensor data. Apple iOS app developers have the Core Motion library from which they can obtain similar information.

But I prefer not to develop a native mobile app because of all the overhead involved. For Android, I would have to install Android Studio (a multi-gigabyte download) and put my device into Developer Mode which hampers its ability to run certain security-focused tasks. For iOS I would have to install Xcode, which is at least as big of a hassle, and I’m not sure what I’d have to do on an iOS device. (Installing TestFlight is part of the picture, but I don’t know the whole picture.)

Looking for an alternative: what if I could access sensor data with something lower overhead, like a small web app? Checking the ever-omniscient caniuse.com, I found that a magnetometer API does exist. However, it is not (yet?) standardized and is hidden behind an optional flag that the user has to explicitly enable. I typed chrome://flags into my address bar and found the “Generic Sensor Extra Classes” option to switch from “Default” to “Enable”. After making this change, the associated caniuse.com magnetometer test turned from red to green.

One annoyance of working with the magnetometer on Android is that I have to work against an actual device. While the Chrome developer tools have an area for injecting sensor data into web apps under test, they do not (yet?) include the ability to emulate magnetometer data. And looking over Android Studio documentation, I found settings to emulate sensors like an accelerometer, but no mention of a magnetometer either.

Looking online for some sample code, I found small code snippets on the Google Chrome Developers blog post about the Sensor API. There is lots of useful reference on that page, but I wanted a complete web app to look at. I found what I was looking for in a “Sensor Info” web app published from Intel’s GitHub account. (This was a surprising source, pretty far from Intel’s main moneymaker of CPUs. What is their interest in this field? Sadly, that corporate strategic answer is not to be found in this app. I choose to be happy that it exists.) Launching the app and clicking “+” allowed me to add the magnetometer sensor and start seeing data stream through. After looking through this Intel web app’s source code repository, I wrote my own minimalist test app streaming magnetometer data. Success! I’m glad it was this easy, but perhaps that was too easy?
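My minimalist test boiled down to something like this sketch (TypeScript’s default DOM typings don’t declare Magnetometer, hence the cast):

    // Stream magnetometer readings at the maximum allowed 10Hz.
    const MagnetometerCtor = (window as any).Magnetometer;
    if (MagnetometerCtor) {
      const sensor = new MagnetometerCtor({ frequency: 10 });
      sensor.addEventListener('reading', () => {
        console.log(`x=${sensor.x} y=${sensor.y} z=${sensor.z} (microtesla)`);
      });
      sensor.addEventListener('error', (event: any) => console.error(event.error));
      sensor.start();
    } else {
      console.log('Magnetometer API not available; is the Chrome flag enabled?');
    }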


Source code for this exploratory project is publicly available on GitHub

Notes on “Using MongoDB with Node.js” from MongoDB University

Most instructional material (and experimentation) for MongoDB uses the MongoDB Shell (mongosh), which is “a fully functional JavaScript and Node.js 16.x REPL environment for interacting with MongoDB deployments” according to mongosh documentation. That makes mongosh the primary command line interface, useful for exploration, experimentation, and education like on Codecademy or MongoDB University.

Given the JavaScript focus of MongoDB, I was not surprised there is a set of first-party driver libraries to translate to/from various programming languages. But I was surprised to find that Node.js (JavaScript) was among the list of drivers. If this was all JavaScript anyway, why do we need a driver? The answer is that we don’t use JavaScript to talk to the underlying database. That is the job of BSON, the binary data representation used by MongoDB for internal storage. Compact and machine-friendly, it is also how data is transmitted over the network. Which is why we need a Node.js library to convert from JSON to BSON for data transmission. I started the MongoDB University course “Using MongoDB with Node.js” to learn more about using this library.

It was a short course, as befitting the minimal translation required of this JavaScript-focused database. The first lesson covered how to connect to a MongoDB instance from our Node.js environment. I decided to do my exercises with a Node.js Docker container.

docker run -it --name node-mongo-lab -v C:\Users\roger\coding\MongoDB\node-mongo-lab:/node-mongo-lab node:lts /bin/sh

The exercise is “Hello World” level: connecting to a MongoDB instance and listing all available databases. Success means we’ve verified that all libraries and their dependencies are installed correctly, that our MongoDB authentication is set up correctly, and that our networking path is clear. I thought that was a great starting point for more exercises, and was disappointed that we didn’t actually use our own Node.js environment any further in this course. The rest of the course used the Instruqt in-browser environment.
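For my notes, the gist of that exercise looks something like this sketch (the connection string is a placeholder):

    import { MongoClient } from 'mongodb';

    // Placeholder URI; the real one comes from my cluster settings.
    const uri = 'mongodb+srv://user:password@cluster.example.mongodb.net';
    const client = new MongoClient(uri);

    async function listDatabases() {
      try {
        await client.connect();
        const result = await client.db().admin().listDatabases();
        result.databases.forEach((db) => console.log(db.name));
      } finally {
        await client.close();
      }
    }
    listDatabases();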

We had a lightning-fast review of MongoDB CRUD operations and how we would do them with the Node.js driver library. All the commands and parameters are basically identical to what we’ve been doing in mongosh. The difference is that we need an instance of the client library as the starting point, from which we can obtain an object representing a database and, from that, a collection (client.db([database name]).collection([collection name])). Once we have that reference, everything else looks exactly as it did in mongosh, except now it is code executed by the Node.js runtime instead of commands typed into a shell. One effect of running code instead of typing commands is that it’s much easier to ensure transaction sessions complete within 60 seconds.
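In code form, the pattern looks roughly like this (database, collection, and field names are hypothetical):

    import { MongoClient } from 'mongodb';

    // Same CRUD verbs as mongosh, reached through a collection handle.
    async function demoCrud(client: MongoClient) {
      const accounts = client.db('bank').collection('accounts');
      await accounts.insertOne({ account_holder: 'Roger', balance: 100 });
      const doc = await accounts.findOne({ account_holder: 'Roger' });
      console.log(doc);
    }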

For me, a great side effect of this course was seeing JavaScript async/await in action doing more sophisticated things than the simple, straightforward usage I had seen so far. The best example came from this code snippet demonstrating MongoDB Aggregation:

    let result = await accountsCollection.aggregate(pipeline)
    for await (const doc of result) {
      console.log(doc)
    }

The first line is straightforward: we run our aggregation pipeline and await its result. That result is an instance of a MongoDB cursor, which is not the entire collection of results but merely access to a portion of that collection. Cursors allow us to start processing data without having to load everything, which saves memory, bandwidth, and processing overhead. And in order to access bits of that collection, we have this “for await” loop I had never seen before. Good to know!

Notes on Codecademy “Learn Node-SQLite”

After my SQL refresher course, shortly after learning Node.js, I thought the natural progression was to put them together with Codecademy’s “Learn Node-SQLite” course. The name node-sqlite3 is not mathematical subtraction but that of a specific JavaScript library bridging the worlds of JavaScript and SQL. This course was a frustrating disappointment. (Details below.) In hindsight, I think I would have been better off skipping this course and learning the library outside of Codecademy.

About the library: our database instructions, such as queries, must be valid SQL commands stored as strings in JavaScript source code. We have the option of inserting some parameters into those strings in JavaScript fashion, but the SQL commands are mostly string literals. Results of queries are returned to the caller using Node’s error-first asynchronous callback convention, and query results are accessible as JavaScript objects. Most of the library’s functionality is concentrated in just a few methods, with details available from the API documentation.

This Codecademy course is fairly straightforward, covering the basics of usage so we can get started and explore further on our own. I was amused that some of the examples were simple to the point of duplicating SQL functionality. Specifically, the example for db.each() shows how we can tally values from a query, which meant we ended up writing a lot of code just to duplicate SQL’s SUM() function. But it’s just an example, so understandable.

The course is succinct to the point of occasionally missing critical information. Specifically, the section about db.run() says “Add a function callback with a single argument and leave it empty for now. Make sure that this function is not an arrow.” but doesn’t say why our callback function must not use arrow syntax. This minor omission became a bigger problem when we rolled into the after-class quiz, which asked why it must not use arrow syntax. Well, you didn’t tell me! A little independent research found the answer: arrow functions have different behavior around the “this” object than other function notations. And for db.run(), our feedback is stored in properties like this.lastID, which would not be accessible in an arrow function. Despite such little problems, the instruction portion of the course was mostly fine. Which brings us to the bad news…
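Here is a sketch showing why the arrow-versus-regular distinction matters to db.run() (table and column names are hypothetical):

    import sqlite3 from 'sqlite3';

    const db = new sqlite3.Database(':memory:');

    db.serialize(() => {
      db.run('CREATE TABLE Visit (name TEXT)');

      // Regular function: sqlite3 binds the statement context to `this`,
      // so this.lastID holds the row id of the inserted record.
      db.run('INSERT INTO Visit (name) VALUES (?)', ['Roger'], function (err) {
        if (err) return console.error(err);
        console.log(this.lastID);
      });

      // Arrow function: `this` is the enclosing scope instead,
      // so there is no lastID to read here.
      db.run('INSERT INTO Visit (name) VALUES (?)', ['Roger'], (err) => {
        if (err) console.error(err);
      });
    });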

The Code Challenge section is a disaster.

It suffers from the same problem I had with the Code Challenge section of the “Learn Express” course: lack of feedback on failures. Our code was executed using some behind-the-scenes mechanism, which meant we couldn’t see our console.log() output. And unlike the “Learn Express” course, I couldn’t work around this limitation by throwing exceptions. No console logs, no exceptions, we don’t even get to see syntax errors! The only feedback we receive is always the same “You did it wrong” message, no matter the actual cause.

Hall of Shame Runner-Up: no JavaScript feedback. When I made a JavaScript syntax error, the syntax error message was not shown. Instead, I was told “Did you execute the correct SQL query?” so I wasted time looking at the wrong thing.

Hall of Shame Bronze Medal: no SQL feedback. When I made a SQL error, I wanted to see the error message given to our callback function. But console.log(error) output is not shown, so I was stabbing in the dark. For Code Challenge #13, my mistake was querying from a “Bridges” table when the sample database table is actually the singular “Bridge”. If I could have logged the error, I would have seen “No such table Bridges”, which would have been much more helpful than the vague “Is your query correct?” feedback.

Hall of Shame Silver Medal: incomplete instructions. Challenge #14 asked us to build a query where “month is the current month”. I used “month=11” and got nothing. The database stored months as words, so I actually needed “month='November'”. I wasted time trying to diagnose this problem because I couldn’t run a “SELECT * FROM Table” to see what the data looked like.

Hall of Shame Gold Medal Grand Prize Winner: Challenge #12 asked us to write a function. My function was not accepted because I did not declare it using the same JavaScript function syntax used in the solution. The instructions said nothing about which function syntax to use. After I clicked “View Solution” and saw what the problem was (image above), I got so angry at the time it wasted that I had to step away for a few hours before I could resume. This was bullshit.


These Hall of Shame (dis)honorees almost turned me off of Codecademy entirely, but after a few days away to calm down, I returned to learn what Codecademy has to teach about PostgreSQL.

Replace node-static with serve-static for ESP32 Sawppy Development

One of the optional middleware modules maintained by the Expressjs team is express.static, which can handle serving static assets like HTML, CSS, and images. It was used in code examples for Codecademy’s Learn Express course, and I made a mental note to investigate further after class. I thought it might help me with a problem I already had on hand, and it did!

When I started writing code for a micro Sawppy rover running on an ESP32, I wanted to be able to iterate on client-side code without having to reflash an ESP32. So as an educational test run of Node.js, I wrote a JavaScript counterpart to the code I wrote (in C/C++) for running on ESP32. While they are two different codebases, I intended for the HTTP interface to be identical and indistinguishable by the HTML/CSS/JavaScript client code I wrote. Most of this server-side work was focused around websocket, but I also needed to serve some static files. I looked on nodejs.org and found “How to serve static files” in their knowledge base. That page gave an example using the node-static module, which I copied for my project.

Things were fine for a while, but then I started getting messages from GitHub Dependabot nagging me to fix a critical security flaw in my repository due to its use of a library called minimist. It was an indirect dependency I picked up by using node-static, so I figured it would be fixed after I picked up an update to node-static. But that update never came. As of this writing, the node-static package on NPM hasn’t been updated in four years. I see updates made on the GitHub repository, but for whatever reason NPM hasn’t picked them up, and thus its registry remains outdated.

The good news is that my code isn't deployed on an internet-facing server. This Node.js server is only for local development of my ESP32 Sawppy client-side browser code, which greatly reduces the window of vulnerability. But still, I don't like running code with known vulnerabilities, even if it is only accessible from my own computer and only while I'm working on ESP32 Sawppy code. I wanted to get this fixed somehow.

After I got nginx set up as a local web server, I thought maybe I could use nginx to serve these static files too. But there's a hitch: a websocket connection starts as an HTTP request for an upgrade to websocket, so the HTTP server must cooperate with the websocket server for a smooth handover. It's possible to set this up with nginx, but the instructions for doing so are above my current nginx skill level. To keep things simple, I needed to stay within Node.js.

Which brought me back to Express and its express.static file server. I thought maybe I could fire up an Express app, use just the express.static middleware, and almost nothing else of Express. It's overkill, but it's not stupid if it works. I just had to figure out how it would hand over to my websocket code. Reading the Express documentation for express.static, I saw it was built on top of an NPM module called serve-static, and I was delighted to learn it can be used independently of Express! Their README included an example, "Serve files with vanilla node.js http server", and this was exactly what I needed. By using the same Node.js http module, my websocket upgrade handover code would work in exactly the same way. In the end, switching from node-static to serve-static was nearly a drop-in replacement requiring minimal code edits. And after removing node-static from my package.json, GitHub Dependabot was happy and closed out my security issue. I will be free from nagging messages, at least until the next security concern. That would be more serious if I had deployed somewhere internet-accessible, but the odds of that just dropped.
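For my own future reference, the resulting arrangement looks something like the sketch below. The ws package stands in for my actual websocket code here and the paths are placeholders, but the shape is the same: serve-static answers ordinary HTTP requests, while the same http server hands upgrade requests over to the websocket handler.

const http = require('http');
const serveStatic = require('serve-static');
const finalhandler = require('finalhandler');
const { WebSocketServer } = require('ws');

// serve-static handles ordinary HTTP requests for files under ./public
const serve = serveStatic('./public');
const server = http.createServer((req, res) => {
  serve(req, res, finalhandler(req, res));
});

// The same http server hands websocket upgrade requests to ws
const wss = new WebSocketServer({ noServer: true });
server.on('upgrade', (req, socket, head) => {
  wss.handleUpgrade(req, socket, head, (ws) => {
    wss.emit('connection', ws, req);
  });
});

server.listen(8080);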

Notes on Express “Getting Started” Guide

During Codecademy's "Learn Express" course, we saw a few middleware modules that we can optionally use in our projects as needed. Examples used in the course were morgan and body-parser, and one of the quizzes asked us to look up vhost. The course material even started using serve-static before we learned about middleware modules at all. These four middleware modules were among those popular enough to be adopted by the Expressjs team, who now maintain them.
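As a reminder to myself of how these optional modules plug in, here's a small sketch (my own, not from the course):

const express = require('express');
const morgan = require('morgan');
const bodyParser = require('body-parser');

const app = express();

app.use(morgan('dev'));      // log each incoming request
app.use(bodyParser.json());  // parse JSON request bodies into req.body

// Hypothetical route that just echoes the parsed body back.
app.post('/echo', (req, res) => {
  res.json(req.body);
});

app.listen(3000);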

Since that meant I already had a browser tab open to the Express project site, I decided to poke around. Specifically, I wanted to see how their own Getting Started guide compared to the Codecademy course I had just finished. My verdict: the official Express site provides a wider breadth of information, but not nearly as much depth for educating a newcomer. If I hadn't taken the Codecademy course and had tried to get started with this site, I would have been able to get a simple Express application up and running, but I would not have understood much of what was going on. Especially if I had created an app using the boilerplate application generator: even after the Codecademy course, I don't know what most of its settings mean!

But the official site does have wider breadth, as Codecademy didn't even mention the boilerplate tool. It also has many lists of pointers to resources, like the aforementioned list of popular middleware modules. Another list I expect to be useful is its sample of options for database integration. Some minimal context is provided with each listed link, but it's up to us to follow those links and go from there. The only place where the site goes in depth is the Express API reference, which makes sense: the official site for Express should naturally serve as the authoritative source for that information!

I anticipate that I will use Express for at least a few learning/toy projects in the future, at which point I will return to this site for the API reference and for pointers to resources that might help me solve problems. However, before I even get very far into Express, this site has already helped me solve an immediate problem: node-static is out of date.

Notes on Codecademy “Learn Express”

I may have my quibbles with Codecademy's Learn Node.js course, but it at least gave me a better understanding to supplement what I had learned bumping around on my own. But the power of Node isn't just the runtime, it's the software ecosystem that has grown up around it. I have many, many choices of what to learn next, and I decided to try the Learn Express course.

Before I started the course, I understood Express to be one of the earlier Node.js frameworks for building the back end of websites in JavaScript. And while many others with more features and capabilities have come online since, Express is still popular precisely because it set out not to pack itself with them. This means that if we want to prototype something slightly off the beaten path, Express will not get in our way. That sounded like a good tool to have in the toolbox.

After taking the course, I learned how Express accomplishes those goals. Express "Routes" map HTTP methods (GET/POST/PUT/DELETE) plus URL paths to JavaScript code, and for each route we can compose multiple JavaScript modules in the form of "Middleware". With this mechanism we can assemble an arbitrary web API by chaining middleware modules like LEGO pieces to respond to HTTP methods. And… that's basically the entirety of core Express. Everything else is optional, so we only need to pull in what we need (in the form of middleware modules) for a particular project.
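In code, the core of Express really is about this small. A minimal sketch of my own (the routes are made up):

const express = require('express');
const app = express();

// Middleware: runs for every request, then passes control along.
app.use((req, res, next) => {
  console.log(`${req.method} ${req.path}`);
  next();
});

// Routes: map HTTP methods plus paths to handler functions.
app.get('/rovers', (req, res) => res.send('list of rovers'));
app.post('/rovers', (req, res) => res.status(201).send('rover created'));
app.delete('/rovers/:id', (req, res) => res.send(`deleted ${req.params.id}`));

app.listen(3000);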

When the course introduced Routes, our little learning JavaScript handler functions were actually fully qualified Middleware, but we didn't know it yet. What I did notice was their signature of three parameters: (request, response, next). The Routes unit talked about reading the request to build our response, but it never talked about next. Students curious enough to strike out and search on their own, as I did, would find information about "chaining", but it wouldn't make sense until we learned about Middleware. It would have been nice if the course had said "we'll learn about next later, when we learn about Middleware" or something to that effect.
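Once Middleware is explained, next finally makes sense: it's how one function hands control to the next one in a route's chain. A small sketch with a hypothetical gatekeeper of my own invention:

const express = require('express');
const app = express();

// Hypothetical gatekeeper middleware: check a query parameter before continuing.
const requireKey = (req, res, next) => {
  if (req.query.key !== 'open-sesame') {
    return res.status(401).send('missing or wrong key');
  }
  next(); // hand control to the next function in this route's chain
};

// The route composes two functions: requireKey runs first, then the handler.
app.get('/admin', requireKey, (req, res) => {
  res.send('welcome, admin');
});

app.listen(3000);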

My gripe with this course is with its quiz sections. We are given a partial chunk of JavaScript and told to fill in certain things. When we click "Check Work", we trigger some validation code to see if we did it right. If we did it wrong, we might get an error message to help us on our way, but sometimes the only feedback we receive is that our answer is incorrect, with no further detail. Unlike the earlier Node course exercises, we were not given a command prompt to run "node app.js" and see our output. This meant we could not see the test input, we could not see our program's behavior, and we could not debug with console.log(). I tried to spin up my own Node.js Docker container to run the sample code, but we weren't given entire programs to run and we weren't given the test input, so that was a bust.

I eventually found a workaround: use exceptions. Instead of console.log('debug message') I could use throw Error('debug message'), and that would show up in the Codecademy UI. This is far less than ideal.
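Concretely, the substitution looked something like this (the value being inspected is hypothetical):

const suspectValue = { route: '/expressions', method: 'GET' }; // hypothetical value to inspect

// console.log(suspectValue); // output is never shown in the Codecademy UI

// Throwing it as an exception makes it appear in the error display instead.
throw Error(`debug: ${JSON.stringify(suspectValue)}`);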

Once I got past the Routes section, I proceeded to Middleware. Most of this unit focused on showing us how various Middleware mechanisms let us reduce code duplication. My gripe with this section is that the course made us do repetitive busywork before telling us to replace it with much more elegant Middleware modules. I understand this is how the course author chose to make their point, but I'm still grumpy at make-work I would delete a few minutes later.

By the end of the course, we knew the Express basics of Routes and Middleware, and we had gotten a little practice building routes from freely available middleware modules. The course ends by telling us there are a lot of Express middleware modules out there. I decided to look into the Express documentation for some starting points.