Angular Signals Code Lab CSS Requires “No-Quirks” Mode

The sample application for “Getting Started with Angular Signals” is designed to run on the StackBlitz cloud-based development environment. Getting it up and running properly on my local machine (after solving installation and compilation errors) took more effort than I had expected. I wouldn’t call the experience enjoyable, but it was certainly an educational trip through all the infrastructure underlying an Angular app. Now that I have all the functional components up and running, I turn my attention to the visual appearance of the app. The layout only seems to work for certain window sizes and not others. I saw a chance to practice my CSS layout debugging skills, but it also taught me about “HTML quirks mode”.

The sample app was styled to resemble a small hand-held device in the vein of the original Nintendo Game Boy: a monochrome screen up top and a keyboard on the bottom. (On second thought, maybe it’s supposed to be a Blackberry.) This app didn’t have audio effects, but there’s a little fake speaker grille in the lower right as a visual finishing touch. The green “enclosure” of this handheld device is the <body> tag of the page, styled with border-radius for its rounded corners and box-shadow to hint at 3D shape.

When things go wrong, the screen and keyboard spill over the right edge of the body. Each of them has CSS specifying a height of 47% and an almost-square aspect-ratio of 10/9; their width, then, is a function of those two values. The fact that they become too wide and spill over the right edge means they have “too much” height for the specified aspect ratio.

Working my way up the component tree, I found the source of “too much” height was the body tag, which has CSS specifying a width (clamped within a range) and an aspect-ratio of 10/17. The height, then, should be a function of those two values. When things go wrong, the width is clamped to the maximum of the specified range as expected, but the body is too tall. Something has taken precedence over aspect-ratio: 10/17, and that’s where I got stuck: I couldn’t figure out what the CSS layout system had decided was more important than maintaining aspect ratio.
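To illustrate the relationships involved, the rules boil down to something like this (a reconstruction from the behavior described above, with illustrative values, not the code lab’s exact stylesheet):

    body {
      width: clamp(300px, 60vw, 540px);  /* illustrative clamp; the real stylesheet differs */
      aspect-ratio: 10 / 17;             /* height should follow from the clamped width */
    }

    .screen,
    .keyboard {
      height: 47%;
      aspect-ratio: 10 / 9;              /* width follows from height, so excess height means excess width */
    }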

After failing to find an explanation on my own, I turned to the StackBlitz example, which worked correctly. Since I’ve learned the online StackBlitz example isn’t exactly the same as the GitHub repository, the first thing I did was to compare CSS. They seemed to match, minus the syntax errors I had to fix locally, so that wasn’t it. I had a hypothesis that something in StackBlitz’s IDE page hierarchy was responsible for it working in the preview pane, but clicking “Open in new tab” to run the app independent of the rest of the StackBlitz IDE HTML still looked fine. Inspecting the object tree and associated stylesheets side by side, I saw that my local copy seemed to have duplicated styles. But since that just meant one copy completely overrode the other identical copy, it wouldn’t be the explanation.

The next difference I noticed between StackBlitz and local copy is the HTML document type declaration at the top of index.html.

<!DOCTYPE html>

This declaration is absent from the project source code, but StackBlitz added it to the root when it opened the app in a new tab. I doubted it had anything to do with my problem because it isn’t a CSS declaration. But in the interest of eliminating differences between the two, I added <!DOCTYPE html> to the top of my index.html.

I was amazed to find that was the key. CSS layout now respects aspect-ratio and constrains the height of the body, which keeps the screen and keyboard from spilling over. But… why does the HTML document type declaration affect CSS behavior? A web search eventually led me to the answer: backwards compatibility, or “quirks mode”. When the declaration is missing, browsers emulate the behavior of older browsers. What are those non-standards-compliant behaviors? That is a deep dark rabbit hole I intend to avoid as much as I can, but it’s clear one or more quirks affected the aspect-ratio behavior used in this sample app. Having the HTML document type declaration at the top of my HTML activates “no-quirks” mode, which strictly adheres to modern HTML and CSS standards, and now the layout works as intended.

The moral of today’s story: Remember to put <!DOCTYPE html> at the top of index.html for every web app project. If things go wrong afterwards, at least the mistake is likely my own fault. Without the tag, there is intentional weirdness because some old browser got things wrong years ago, and I don’t want that to mess me up. (Again.)

Right now, I have a hard enough time getting CSS to do my bidding for normal things. Long term, I want to become familiar enough with CSS to make it do not just functional but also fun decorative things.


My code changes are made in my fork of the code lab repository in branch signals-get-started.

Angular Standalone Components for Future Projects

Reading through the Angular developer guide for standalone components filled in many of the gaps left after going through the “Getting Started with Angular Standalone Components” code lab. The two are complementary: the developer guide gave us reasons why standalone components exist, and the code lab gave us a taste of how to put them to use. Between framework infrastructure and library support, it has become practical to make Angular components stand independently from Angular modules.

Which is great, but one important detail is missing from the documentation I’ve read. If it’s such a great idea to have components independent of NgModule, why did components need NgModule to begin with? I assume that at some point in the history of Angular, having components live in an NgModule was a better idea than having components stand alone. Not knowing those reasons is a blank spot in my Angular understanding.

I had expected to come across some information on when to use standalone components and when to package components in an NgModule. Almost every software development design decision is a tradeoff between competing requirements, and I had expected to learn when using an NgModule is the better tradeoff. But I haven’t seen anything to that effect. It’s possible the past reasons for NgModule have gradually atrophied as Angular evolved with the rest of the web, leaving a husk that we can leave behind with no reason to go back. I would still appreciate seeing words to that effect from the Angular team, though.

One purported benefit was to ease the Angular learning curve, making it so we only have to declare dependencies in the component we’re working on instead of having to do it both in the component and in its associated NgModule. As a beginner, that reason sounds good to me, so I guess I should write future Angular projects with standalone components until I have a reason not to. It’s a fine plan, but I worry I might run into situations where using NgModule would be the better choice and I wouldn’t recognize “a reason not to” when it is staring me in the face.
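For my own notes, a standalone component declares its dependencies directly on the component (a minimal sketch following Angular’s documented syntax; the component itself is hypothetical):

    import { Component } from '@angular/core';
    import { CommonModule } from '@angular/common';

    @Component({
      selector: 'app-hello',       // hypothetical component for illustration
      standalone: true,            // no NgModule required
      imports: [CommonModule],     // dependencies declared here, not in an NgModule
      template: `<p>Hello, standalone component!</p>`,
    })
    export class HelloComponent {}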

On the topic of future projects, at some point I expect I’ll grow beyond serving static content via GitHub Pages. Fortunately, I think I have a few free/trial options to explore before committing money.

Trying Vite and Its IE11 Legacy Option

While looking over Vue.js’s Quick Start example, I noticed its default set of tools included Vite. I understand it plays a role analogous but not identical to webpack in Angular’s default tool set. I found webpack’s documentation quite opaque, so I thought I would try to absorb what I can from Vite’s documentation. I still don’t understand all the history and issues involved in JavaScript build tools, but I was glad to find Vite documentation more comprehensible.

The introductory “Why Vite?” page explained that Vite takes advantage of modern browser support for JavaScript modules. As a result, the client’s browser can handle some of the work that previously had to be done on the developer machine via webpack & friends. However, that still leaves a smaller set of things better done up front by the developer instead of later by the client, and Vite takes care of them.

In time I’ll learn enough about JavaScript to understand what all that means, but one section caught my attention. Given Vite’s focus on leveraging modern browsers, I was surprised to see the “Browser Compatibility” section include an official plugin, @vitejs/plugin-legacy, to support legacy browsers. Given my interest in writing web apps that run on my pile of old Windows Phone 8 devices, this could be very useful!
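For later reference, wiring that plugin in looks roughly like this (a sketch based on the plugin’s documented options; I have not verified these exact settings against Windows Phone):

    // vite.config.js
    import { defineConfig } from 'vite';
    import legacy from '@vitejs/plugin-legacy';

    export default defineConfig({
      plugins: [
        legacy({
          targets: ['defaults', 'ie >= 11'],  // browserslist query for the legacy build
        }),
      ],
    });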

I opened up my NodeJS test apps repository and followed Vite’s “Getting Started” guide to create a new project using the “vanilla TypeScript” template preset. To verify I had it working as expected, I built the project and successfully displayed the results in a current build of the Google Chrome browser.

Then I added the legacy plugin and rebuilt. That bloated the distribution directory up to 80 kilobytes, a huge relative increase but still only around a third of the size of a blank Angular app, and quite manageable even in space-constrained situations. And most importantly: yes, it runs on my old Nokia Lumia 920 phone with its Windows Phone 8 operating system. Nice! I’m definitely tucking this away in my toolbox for later use. But for right now, I should probably get back to learning Vue.

Notes on Vue.js Quick Start

After going through Codecademy’s “Learn Vue.js” course, I went to the Vue.js site and followed their quick start “Creating a Vue Application” procedure to see what a “Hello World” looks like. It was quite instructive and showed me many facets of Vue not covered by Codecademy’s course.

The first difference is that here we’re creating an application with Vue.js, which means firing up the command line tool npm init vue@latest to create an application scaffolding with selected features. Since I’m a fan of TypeScript and of maintaining code formatting, I said yes to the “TypeScript”, “ESLint” and “Prettier” options and no to the rest.

I then installed all the packages for that scaffolding with npm install and ran npm run build to look at the results in the /dist/ subdirectory. They added up to a little over 60 kilobytes, roughly one-third the built size of Angular’s scaffolding. This is even more impressive considering that several of those kilobytes are placeholders: about a half dozen markup files plus a few SVG files for vector graphics. The drastically smaller file sizes of Vue apps are great, but what have I given up in exchange? That’s something I’ll be looking for as I learn more about both platforms.

Poking around in the scaffolding app, I saw it demonstrated Vue componentization via its SFC (Single File Component) file format. A single *.vue file contains a component’s HTML, CSS, and TypeScript/JavaScript. Even though they are all text-based formats designed to coexist, I’m not a fan of mixing three different syntaxes in a single file; I prefer Angular’s approach of keeping each type in its own file. To mitigate confusion, I expect Vue’s editor tooling Volar would help keep the three types distinct.
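For reference, the SFC format looks roughly like this (a minimal sketch, not one of the scaffolding’s actual components):

    <script setup lang="ts">
    // Component logic lives in the script block.
    const greeting: string = 'Hello from a single-file component';
    </script>

    <template>
      <p class="greeting">{{ greeting }}</p>
    </template>

    <style scoped>
    .greeting {
      color: green;
    }
    </style>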

Some Vue components in the example are tiny, like IconTooling.vue, which is literally a wrapper around a chunk of SVG to deliver a vector-graphic icon. Others are a little more substantial, like WelcomeItem, whose template has three slots for information: #icon, #heading, and everything else. This feels quite different from how Angular components get data from their parents. I look forward to learning more about this style of code organization.
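From what I can tell, a parent fills those slots roughly like this (a sketch of the pattern, not the scaffolding’s exact markup):

    <WelcomeItem>
      <template #icon>
        <DocumentationIcon />
      </template>
      <template #heading>Documentation</template>
      Everything else placed here lands in the default slot.
    </WelcomeItem>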

While running npm run build, I noticed this Vue app boilerplate build pipeline didn’t use webpack; it used something called Vite instead. Since I couldn’t make heads or tails of webpack on my first pass, I was encouraged to find that I could understand a lot more of Vite.

Next Study Topic: Vue.js

Having an old Windows Phone 8 die (followed by dissection) was a fresh reminder I haven’t put enough effort towards my desire to “do something interesting” with those obsolete devices. The mysterious decay of one device was a very final bell toll announcing its end, but the clock is ticking on the rest of them as well. Native app development for the platform was shut down years ago, leaving only the browser as an entry point. But even that browser, based on IE11, is getting left further and further behind every day by web evolution.

In one of my on-and-off trips into web development, I ran through the Angular framework tutorial and then added legacy project flags to make an IE11-compatible build I could run on a Windows Phone 8. That is no longer possible now that Angular has dropped IE11 support. One of the reasons I chose Angular was because it was an “everything included, plus the kitchen sink” type of deal. An empty Angular app created via its “ng new” command included all the tools already configured with their Angular defaults. I knew the concepts behind tools like “bundler”, “minimizer”, etc., but I didn’t know enough to actually use them firsthand. Angular boilerplate helped me get started.

But the reason I chose to start with Angular is also the reason I won’t stay with it: the everything framework is too much framework. Angular targets projects far more complex and sophisticated than what I’m likely to tackle in the near future. Using Angular to create a compass web app was hilarious overkill, where the size of framework overhead far exceeded the size of actual app code.

In my search for something lighter-weight, I briefly looked into Polymer/Lit and decided I had overshot too far, into too little framework. Looking around for my Goldilocks fit, one name that has come up frequently in my web development learning is Vue.js. It’s supposed to be lighter and faster than Angular/React but still have some of the preconfigured hand-holding I admit I still need. Maybe it would offer a good middle ground and give me just enough framework for future projects.

One downside is that the current version, Vue 3, won’t run on IE11 either. However, the documentation claims most Vue fundamental concepts haven’t changed from Vue 2, which does support IE11 and remains on long-term support status until the end of 2023. Maybe I can get started on Vue 3 and write simple projects that would still run on Vue 2. Even if that doesn’t work, it should help orient me in a simpler setup that I could try to get running on Windows Phone 8.

I’m cautiously optimistic I can learn a lot here, because I saw lots of documentation on the Vue project site. Though that is only a measure of quantity and not necessarily quality. It remains to be seen whether the material would go over my head as Lit’s site did, or introduce new strange concepts with a steep learning curve as RxJS did. I won’t know until I dive in.

Aesthetically, there’s at least one Material Design library to satisfy my preference for web app projects. I’ll have to find out if it would bloat an app as much as Angular Material did.

Codecademy offers one course for Vue.js, so I thought I’d start there.

CSS Beginner Struggles: aspect-ratio and height

Reviewing CSS from web.dev’s “Learn CSS!” course provided a refresher on a lot of material and also introduced me to new material I hadn’t seen before. I had hoped for a bit of “aha” insight to help me with CSS struggles in my project, but that didn’t happen. The closest was a particular piece of information (Flexbox for laying out along one dimension, Grid for two dimensions) that told me I’m on the right track using Flexbox.

A recurring theme in my CSS frustration is the fact that height and width are not treated the same way in HTML layout. I like to think of them as peers, two equal and orthogonal dimensions, but that’s not how things work here. It traces back to HTML’s fundamentals of laying out text for reading in left-to-right, top-to-bottom languages like English. Like a typesetter, the layout is specified in terms of width: column width, margin width, etc. Those are the parameters that feed into layout. The height of a paragraph is then determined by the length of text that fits within the specified width. Thus, height is an output result, not an input parameter, of the layout process.

For my Compass web app, I had a few text elements I knew I wanted to lay out. Header, footer, sensor values, etc. After they have all been allocated screen real estate, I wanted my compass needle to be the largest square that could fit within the remaining space. That last part is the problem: while we have ways to denote “all remaining space” for width, there’s no such equivalent for height because height is a function of width and content. This results in unresolvable circular logic when my content (square compass) is a function of height, but the height is a function of my content.

I could get most of the way to my goal with liberal application of “height: 100%” in my CSS rules. It does not appear to inherit/cascade, so I have to specify “height: 100%” on every element down the DOM hierarchy. If I don’t, the height of that element collapses to zero because my compass doesn’t have an inherent height of its own.
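The result is a stylesheet full of rules along these lines (an illustrative sketch with made-up selectors, not my actual stylesheet):

    html,
    body,
    app-root,             /* hypothetical Angular root element */
    .compass-container {  /* hypothetical wrapper around the needle */
      height: 100%;       /* every level needs this, or the chain collapses to zero height */
    }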

Once I got to my compass, I could declare it to be a square with aspect-ratio. But when I did so, I found that aspect-ratio does its magic by changing element height to satisfy the specified aspect ratio. When my remaining space is wider than it is tall, aspect-ratio expands height to match width. This is consistent with how the rest of HTML layout treats width vs. height, and it accomplishes the specified aspect ratio. But now the element is too tall to fit within the remaining space!
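In other words, something like the following (an illustrative sketch) gives me a square, just not a square that respects its parent’s height:

    .compass {
      width: 100%;          /* fill the remaining width... */
      aspect-ratio: 1 / 1;  /* ...and height is computed to match, even if it overflows the parent */
    }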

Trying to rein that in, I played with “height: 100%”, “max-height: 100%”, and various combinations of similar CSS rules. They could affect CSS-specified height values but seemed to have no effect on the height change from aspect-ratio. Setting aspect-ratio means height is changed to fit the available width, and I found no way to declare the reverse in CSS: change width to fit within the available height.

From web.dev I saw that Codepen.io offers the ability to embed code snippets in a webpage, so here’s a test to see how it works on my own blog. I pulled the HTML, CSS, and minimal JavaScript representing a Three.js <canvas> into a pen so I could fiddle with this specific problem independent of the rest of the app. I think I’ve embedded it below, but here’s a link in case the embed doesn’t work.

After preserving a snapshot of my headache in Codepen, I returned to the Compass app, which still had a problem that needed solving. Unable to express my intent via CSS, I turned to code. I abandoned using aspect-ratio and resized my Three.js canvas to a square whose size is calculated via:

Math.floor(Math.min(clientWidth, clientHeight));

That takes width or height, whichever is smaller, and rounds down. I have to round down to the nearest whole number, otherwise scroll bars pop up, and I don’t want scroll bars. I hate solving a layout problem with code, but it’ll have to do for now. Hopefully sometime in the future I will have a better grasp of CSS and can write the proper stylesheet to accomplish my goal. In the meantime, I’m looking for other ways to make layout more predictable, such as making my app full screen.
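Applied to the Three.js renderer, the workaround looks roughly like this (a sketch; the variable names are illustrative, not my actual code):

    // "container" is the hypothetical element whose remaining space the compass should fill.
    const size = Math.floor(Math.min(container.clientWidth, container.clientHeight));
    renderer.setSize(size, size);  // force the <canvas> into a square that fits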


The source code for my project is publicly available on GitHub, though it no longer uses aspect-ratio as per the workaround described at the end of this post.

Window Shopping Polymer and Lit

While poking around with browser magnetometer API on Chrome for Android, one of my references was a “Sensor Info” app published by Intel. I was focused on the magnetometer API itself at first, but I mentally noted to come back later to look at the rest of the web app. Now I’m returning for another look, because “Sensor Info” has the visual style of Google’s Material Design and it was far smaller than an Angular project with Angular Material. I wanted to know how it was done.

The easier part of the answer is Material Web, a collection of web components released by Google for web developers to bring Material Design into their applications. “Sensor Info” imported just Button and Icon, each with an unpacked size weighing in at several tens of kilobytes. Reading the repository README was not terribly confidence-inspiring… technically Material Web has yet to reach version 1.0 maturity even though Material Design has moved on to its third iteration. Not sure what’s going on there.

Beyond visual glitz, the “Sensor Info” application was built with both Polymer and Lit. (sensors-app.js declares a SensorsApp class which derives from LitElement, and imports a lot of stuff from @polymer.) This confused me because I had thought Lit was a successor to Polymer. As I understand it, the Polymer team plans no further work after version 3 and has taken the lessons learned to start from scratch with Lit. Here’s somebody’s compare-and-contrast writeup I found via Reddit. Now I see “Sensor Info” has references to both projects and, not knowing either Polymer or Lit, I don’t think I’ll have much luck deciphering where one ends and the other begins. Not a good place for a beginner to start.
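To get a rough feel for what a Lit component looks like, here is a minimal sketch based on Lit’s documented API (unrelated to Sensor Info’s actual code):

    import { LitElement, html, css } from 'lit';

    class HelloSensor extends LitElement {
      static styles = css`p { color: green; }`;

      render() {
        return html`<p>Hello from a web component</p>`;
      }
    }

    // Register the custom element so <hello-sensor> can be used in markup.
    customElements.define('hello-sensor', HelloSensor);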

I know both are built on the evolving (stabilizing?) web components standard, and both promise to be far simpler and more lightweight than frameworks like Angular or React. I like that premise, but such a lightweight “non-opinionated” design also means a beginner is left without guidance. “Do whatever you want” is a great freedom, but it is not helpful when a beginner has no idea what they want yet.

One example is the process of taking the set of web components in use and packaging them together for web app publishing. They expect the developer to use a tool like webpack, but there is no affinity for webpack; a developer can choose any other tool. Great, but I hadn’t figured out webpack myself, nor any alternatives, so this particular freedom was not useful to me. I got briefly excited when I saw that there are “Starter Kits” already packaged with tooling that is not required (remember, non-opinionated!) but is convenient for starting out. Maybe there’s a sample webpack.config.js! Sadly, I looked over the TypeScript starter kit and found no mention of webpack or any similar tool. Darn. I guess I’ll have to revisit this topic sometime after I learn webpack.

Webpack First Look Did Not Go Well

I’ve used Three.js in two projects so far to handle 3D graphics, but I’ve been referencing it as a monolithic everything bundle. Three.js documentation told me there was a better way:

When installing from npm, you’ll almost always use some sort of bundling tool to combine all of the packages your project requires into a single JavaScript file. While any modern JavaScript bundler can be used with three.js, the most popular choice is webpack.

— Three.js Installation Guide

In my magnetometer test project, I tried to bring in webpack to optimize three.js but aborted after getting disoriented. Now I’m going to sit down and read through its documentation to get a better feel of what it’s all about. Here are my notes from a first look as a beginner.

I have a minor criticism about their home page. The first link is to their “Getting Started” guide, the second link is to their “Concepts” section. I followed the first link to “Getting Started”, which is the first page in their “Guides” section. I got to the end of that first page and saw the “Next” link was about Asset Management, the next guide page. Each guide page linked to the next, and I quickly got into guide pages that used acronyms or terminology that had not yet been explained. Later I realized this was because the terminology was covered in the “Concepts” section. In hindsight I should not have clicked “Next” at the end of the “Getting Started” guide; I should have gone back to “Concepts” to learn the lingo before reading the rest of the guides.

Reading through the guides, I quickly understood that webpack is a very JavaScript-centric system built by people who think JavaScript first. I wanted to learn how to use webpack to integrate my JavaScript code and my HTML markup, but these guide pages didn’t seem to cover my use scenario. Starting right off the bat with the Getting Started guide, they used code to build their markup:

   const element = document.createElement('div');
   element.innerHTML = _.join(['Hello', 'webpack'], ' ');
   document.body.appendChild(element);

Wow. Was that really necessary? What about people who wanted to, you know, write HTML in HTML?

    <body><div>Hello webpack</div></body>

Is that considered quaint and old fashioned nowadays? I didn’t find anything in “Guides” or “Concepts” discussing such integration. I had thought the section on “HtmlWebpackPlugin” would have my answer, but it’s actually a chunk of JavaScript code that replaced my index.html (destroying my markup) with a bare-bones index.html that loads the generated JavaScript bundles and has no markup of my own. How does my own markup figure in this picture? Perhaps the documentation authors felt it was too obvious to write down, but I certainly don’t understand how to do it. I feel like an old man yelling at a cloud “tell me how to use my HTML!”
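For context, the guide’s configuration is roughly this (a sketch of what the Output Management guide shows; it generates its own index.html rather than consuming mine):

    // webpack.config.js
    const path = require('path');
    const HtmlWebpackPlugin = require('html-webpack-plugin');

    module.exports = {
      entry: './src/index.js',
      output: {
        filename: '[name].bundle.js',
        path: path.resolve(__dirname, 'dist'),
        clean: true,  // wipes the output directory, including any hand-written index.html
      },
      plugins: [
        // Generates a minimal index.html that does nothing but load the bundles.
        new HtmlWebpackPlugin({ title: 'Output Management' }),
      ],
    };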

I had thought putting my HTML into the output directory was a basic task, but I failed. And I was dismayed to see “Concepts” and “Guides” pages got deeper and more esoteric from there. I understand webpack is very powerful and can solve many problems with modularizing JavaScript code, but it also requires a certain mindset that I have yet to acquire. Webpack’s bare bones generated index.html looks very similar to the generated index.html of an Angular app (which uses both HTML and webpack) so there must be a way to do this. I just don’t know it yet.

Until I learn what’s going on, for the near future I will use webpack only indirectly: it is already configured and running as part of the Angular web app framework’s command-line tool suite. I plan to do a few Angular projects to become more familiar with it. And now that I’ve read through webpack concepts, I should recognize pieces of the Angular workflow as “Aha, they used webpack to do that.”

Visualizing Magnetometer Data with Three.js

I’m happy that my simple exploratory web app was able to obtain data from my phone’s integrated magnetometer. I recognize there are some downsides to how easy it was, but right now I’m happy I can move forward. Ten times a second (10Hz is the maximum supported update rate) my callback receives x, y, and z values in addition to auxiliary data like a timestamp. That’s great, but the underlying meaning is not very intuitive for me to grasp. What I want next is a visualization of that three-axis magnetometer data.

I turned to the JavaScript 3D graphics library Three.js. The last time I used Three.js was to visualize the RGB332 color space, using a 2D projection to help me make sense of data along three dimensions of color: a cylinder representing HSV color space and a rectangular solid representing RGB. Now I want to visualize a single vector in three-dimensional space representing the local magnetic field as reported by my phone’s magnetometer. I was a little intimidated by the math for calculating 3D transforms. I had tried to make my RGB332 color app transition between HSV and RGB color space, but it never looked right because I didn’t understand the 3D transform math.

Fortunately, this time I didn’t have to do any of the math myself. Three.js has a built-in function that accepts the x, y, and z components of a target coordinate and calculates the rotation required to have a 3D object look at that point. My responsibility is to create an object that will convey this information. I chose to follow the precedent of an analog compass, which is built around a small magnetic needle shaped like a narrow diamond, one half painted red and the other half painted white. For this 3D visualization I created a similar shape out of two cones, one red and one white. When this shape looks at the magnetometer vector, it functions very much like the sliver of magnet inside a compass.
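A sketch of that idea (with illustrative dimensions and names, not my project’s exact code):

    import * as THREE from 'three';

    const scene = new THREE.Scene();

    // Build a compass needle from two cones joined at their bases.
    const needle = new THREE.Group();

    const redHalf = new THREE.Mesh(
      new THREE.ConeGeometry(0.2, 1, 8),
      new THREE.MeshBasicMaterial({ color: 0xff0000 })
    );
    redHalf.rotation.x = Math.PI / 2;  // tip points along +Z, the axis lookAt() aims
    redHalf.position.z = 0.5;

    const whiteHalf = new THREE.Mesh(
      new THREE.ConeGeometry(0.2, 1, 8),
      new THREE.MeshBasicMaterial({ color: 0xffffff })
    );
    whiteHalf.rotation.x = -Math.PI / 2;  // opposite tip points along -Z
    whiteHalf.position.z = -0.5;

    needle.add(redHalf, whiteHalf);
    scene.add(needle);

    // On each sensor reading, aim the needle's +Z axis at the magnetometer vector.
    function onReading(x, y, z) {
      needle.lookAt(x, y, z);
    }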

As a precaution, I added a check for WebGL before firing up Three.js code. I was pretty confident any Android Chrome that supported the magnetometer API would support WebGL as well, but it was good practice to confirm.
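Three.js ships a small helper for that check; the documented pattern looks roughly like this (the import path varies with Three.js version and setup):

    import WebGL from 'three/addons/capabilities/WebGL.js';

    if (WebGL.isWebGLAvailable()) {
      startVisualization();  // hypothetical entry point for the Three.js code above
    } else {
      // Show Three.js' stock "WebGL not available" message instead of a blank page.
      document.body.appendChild(WebGL.getWebGLErrorMessage());
    }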

One thing I’m not doing (but should) is to account for screen orientation. Chrome developers have added a feature to automatically adjust for screen orientation but right now I’m just going to deactivate auto-rotate on my phone (or… phones!)


Source code for my exploratory project is publicly available on GitHub

Magnetometer API Privacy Concerns

Many Android phones have an integrated magnetometer available to native apps. Chrome browser for Android also makes that capability available to web apps, but right now it is hidden by default as a feature preview. Once I enabled that feature, I was able to follow some sample code online and quickly obtain access to magnetometer data in my own web app. That was so easy! Why was it blocked by default?

Apparently, the answer (or at least a part of it) was that it was too easy. Making magnetometer and other hardware sensor data freely available to web apps would feed into hardware-based browser fingerprinting. Even though magnetometer data by itself might be innocuous, it could be combined with other seemingly-innocent data to uniquely identify users thereby piercing privacy protections. This is bad, and purportedly why Apple has so far declined to support sensor APIs.

That article was from 2020, though, and the web moves fast. When I read up on the magnetometer API on MDN (Mozilla Developer Network), I encountered an entire section on obtaining user permission to read hardware sensor data. Since I didn’t have to do any of that for my own test app to obtain magnetometer data, I guess this requirement is only present in Mozilla’s own Firefox browser. Or perhaps it was merely a proposal hoping to satisfy Apple’s official objection to supporting the sensor API.
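For reference, the permission-checking pattern MDN describes looks roughly like this (a sketch; Chrome let my test app read data without it):

    navigator.permissions.query({ name: 'magnetometer' }).then((result) => {
      if (result.state === 'denied') {
        console.log('Permission to use the magnetometer sensor is denied.');
        return;
      }
      // Otherwise it is OK to construct a Magnetometer sensor and start reading.
    });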

I found no mention of Mozilla’s permission management framework in the official magnetometer API specification. There’s a “Security and Privacy Considerations” section but it’s pretty thin and I don’t see how it would address fingerprinting concerns. For what it’s worth, “limiting maximum sample frequency” was listed as a potential mitigation, and Chrome 111 only allows up to 10Hz.

Today users like myself have to explicitly activate this experimental feature. And at the top of the “chrome://flags” page where we do so, there’s an explicit warning that enabling experimental features could compromise privacy. In theory, people opting in to the magnetometer today are aware of the potential for abuse, but that risk has to be addressed before it’s rolled out to everyone. In the meantime, I have opted in and I’m going to have some fun.

Magnetometer API in Android Chrome Browser

I became curious about magnetometers and was deep into shopping for a prototype breakout board when I remembered I already had some on hand. The bad news is that they’re buried inside mobile devices, the good news is that they’re already connected to all the supporting circuitry they need. Accessing them is then “merely” a software task.

Android app developers can access magnetometer values via position sensor APIs. It’s possible to query everything from raw uncalibrated values to device orientation information computed from calibrated magnetometer data fused with other sensor data. Apple iOS app developers have the Core Motion library from which they can obtain similar information.

But I prefer not to develop a native mobile app because of all the overhead involved. For Android, I would have to install Android Studio (a multi-gigabyte download) and put my device into Developer Mode which hampers its ability to run certain security-focused tasks. For iOS I would have to install Xcode, which is at least as big of a hassle, and I’m not sure what I’d have to do on an iOS device. (Installing TestFlight is part of the picture, but I don’t know the whole picture.)

Looking for an alternative: what if I could access sensor data from something with lower overhead, like a small web app? Checking on the ever-omniscient caniuse.com, I found that a magnetometer API does exist. However, it is not (yet?) standardized and is hidden behind an optional flag that the user has to explicitly enable. I typed chrome://flags into my address bar, found the “Generic Sensor Extra Classes” option, and switched it from “Default” to “Enabled”. After making this change, the associated caniuse.com magnetometer test turned from red to green.

One annoyance of working with the magnetometer on Android is that I have to work against an actual device. While Chrome developer tools have an area for injecting sensor data into web apps under test, they do not (yet?) include the ability to emulate magnetometer data. And looking over Android Studio documentation, I found settings to emulate sensors like an accelerometer, but no mention of a magnetometer either.

Looking online for some sample code, I found small code snippets in the Google Chrome Developers blog about the Sensor API. There is lots of useful reference material on that page, but I wanted a complete web app to look at. I found what I was looking for in a “Sensor Info” web app published from Intel’s GitHub account. (This was a surprising source, pretty far from Intel’s main moneymaker of CPUs. What is their interest in this field? Sadly, that corporate strategic answer is not to be found in this app. I choose to be happy that it exists.) Launching the app, clicking “+” allowed me to add the magnetometer sensor and start seeing data stream through. After looking through this Intel web app’s source code repository, I wrote my own minimalist test app streaming magnetometer data. Success! I’m glad it was this easy, but perhaps that was too easy?
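The core of such a test app is only a few lines (a sketch of the Generic Sensor API pattern, not my exact code):

    if ('Magnetometer' in window) {
      const sensor = new Magnetometer({ frequency: 10 });  // 10Hz is the most Chrome currently allows

      sensor.addEventListener('reading', () => {
        console.log(`x: ${sensor.x}  y: ${sensor.y}  z: ${sensor.z} (µT)`);
      });
      sensor.addEventListener('error', (event) => {
        console.error(event.error.name, event.error.message);
      });

      sensor.start();
    }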


Source code for this exploratory project is publicly available on GitHub

Notes on “Using MongoDB with Node.js” from MongoDB University

Most instructional material (and experimentation) for MongoDB uses the MongoDB Shell (mongosh), which is “a fully functional JavaScript and Node.js 16.x REPL environment for interacting with MongoDB deployments” according to mongosh documentation. That makes mongosh the primary command line interface, useful for exploration, experimentation, and education like on Codecademy or MongoDB University.

Given the JavaScript focus of MongoDB, I was not surprised there is a set of first-party driver libraries to translate to/from various programming languages. But I was surprised to find Node.js (JavaScript) among the list of drivers. If this is all JavaScript anyway, why do we need a driver? The answer is that JavaScript is not what talks to the underlying database. That is the job of BSON, the binary data representation MongoDB uses for internal storage. Compact and machine-friendly, it is also how data is transmitted over the network, which is why we need a Node.js library to convert from JSON to BSON for data transmission. I started the MongoDB University course “Using MongoDB with Node.js” to learn more about using this library.

It was a short course, as befits the minimal translation required for this JavaScript-focused database. The first lesson covered how to connect to a MongoDB instance from our Node.js environment. I decided to do my exercises with a Node.js Docker container.

docker run -it --name node-mongo-lab -v C:\Users\roger\coding\MongoDB\node-mongo-lab:/node-mongo-lab node:lts /bin/sh

The exercise is “Hello World” level: connecting to a MongoDB instance and listing all available databases. Success means we’ve verified all libraries and their dependencies are installed correctly, that our MongoDB authentication is set up correctly, and that our networking path is clear. I thought that was a great starting point for more exercises, and was disappointed that we didn’t actually use our own Node.js environment any further in this course. The rest of the course used the Instruqt in-browser environment.
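That first exercise boils down to roughly the following (a sketch using the driver’s documented API, not the course’s exact code):

    const { MongoClient } = require('mongodb');

    // The connection string would come from Atlas or a local deployment.
    const client = new MongoClient(process.env.MONGODB_URI);

    async function listDatabases() {
      try {
        await client.connect();
        const result = await client.db().admin().listDatabases();
        result.databases.forEach((db) => console.log(db.name));
      } finally {
        await client.close();
      }
    }

    listDatabases().catch(console.error);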

We had a lightning-fast review of MongoDB CRUD operations and how we would do them with the Node.js driver library. All the commands and parameters are basically identical to what we’ve been doing in mongosh. The difference is that we need an instance of the client library as the starting point, from which we can obtain an object representing a database and then a collection within it (client.db([database name]).collection([collection name])). Once we have that reference, everything else looks exactly as it did in mongosh, except now it is code executed by the Node.js runtime instead of commands typed into a shell. One effect of running code instead of typing commands is that it’s much easier to ensure transaction sessions complete within 60 seconds.

For me, a great side effect of this course was seeing JavaScript async/await in action doing something more sophisticated than the simple, straightforward cases I had seen before. The best example came from this code snippet demonstrating MongoDB aggregation:

    let result = await accountsCollection.aggregate(pipeline)
    for await (const doc of result) {
      console.log(doc)
    }

The first line is straightforward: we run our aggregation pipeline and await its result. That result is an instance of a MongoDB cursor, which is not the entire collection of results but merely access to a portion of that collection. Cursors allow us to start processing data without having to load everything, saving memory, bandwidth, and processing overhead. And in order to access bits of that collection, we have this “for await” loop I’d never seen before. Good to know!

Notes on Codecademy “Learn Node-SQLite”

After my SQL refresher course, shortly after learning Node.js, I thought the natural progression was to put them together with Codecademy’s “Learn Node-SQLite” course. The name node-sqlite3 is not a mathematical subtraction but that of a specific JavaScript library bridging the worlds of JavaScript and SQL. This course was a frustrating disappointment (details below). In hindsight, I think I would have been better off skipping this course and learning the library outside of Codecademy.

About the library: our database instructions, such as queries, must be valid SQL commands stored as strings in JavaScript source code. We have the option of interpolating some parameters into those strings in JavaScript fashion, but the SQL commands are mostly string literals. Results of queries are returned to the caller using Node’s error-first asynchronous callback convention, and query results are accessible as JavaScript objects. Most of the library’s functionality is concentrated in just a few methods, with details available from its API documentation.

This Codecademy course is fairly straightforward, covering the basics of usage so we can get started and explore further on our own. I was amused that some of the examples were simple to the point of duplicating SQL functionality. Specifically, the example for db.each() shows how we can tally values from a query, which meant we ended up writing a lot of code just to duplicate SQL’s SUM() function. But it’s just an example, so that’s understandable.

The course is succinct to the point of occasionally missing critical information. Specifically, the section about db.run() says “Add a function callback with a single argument and leave it empty for now. Make sure that this function is not an arrow.” but doesn’t say why our callback function must not use arrow syntax. This minor omission became a bigger problem when we rolled into the after-class quiz, which asked why it must not use arrow syntax. Well, you didn’t tell me! A little independent research found the answer: arrow functions handle the “this” object differently than other function notations. For db.run(), our feedback is stored in properties like this.lastID, which would not be accessible in an arrow-syntax function. Despite such little problems, the instruction portion of the course was mostly fine. Which brings us to the bad news…
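Here’s a small sketch of the behavior in question (my own example, not the course’s):

    const sqlite3 = require('sqlite3');
    const db = new sqlite3.Database(':memory:');

    db.serialize(() => {
      db.run('CREATE TABLE Dog (name TEXT)');

      // Must be a regular function: node-sqlite3 binds "this" to the statement context,
      // which carries this.lastID and this.changes. An arrow function would inherit the
      // surrounding "this" and lose access to them.
      db.run('INSERT INTO Dog (name) VALUES (?)', ['Rover'], function (err) {
        if (err) return console.error(err);
        console.log(`Inserted row id ${this.lastID}`);
      });
    });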

The Code Challenge section is a disaster.

It suffers from the same problem I had with the Code Challenge section of the Learn Express course: lack of feedback on failures. Our code was executed using some behind-the-scenes mechanism, which meant we couldn’t see our console.log() output. And unlike the Learn Express course, I couldn’t work around this limitation by throwing exceptions. No console logs, no exceptions, we don’t even get to see syntax errors! The only feedback we receive is always the same “You did it wrong” message no matter the actual cause.

Hall of Shame Runner-Up: No JavaScript feedback. When I made a JavaScript syntax error, the syntax error message was not shown. Instead, I was told “Did you execute the correct SQL query?”, so I wasted time looking at the wrong thing.

Hall of Shame Bronze Medal: No SQL feedback. When I made a SQL command error, I wanted to see the error message given to our callback function. But console.log(error) output is not shown, so I was stabbing in the dark. For Code Challenge #13, my mistake was querying from a “Bridges” table when the sample database table is actually the singular “Bridge”. If I could have logged the error, I would have seen “No such table Bridges”, which would have been much more helpful than the vague “Is your query correct?” feedback.

Hall of Shame Silver Medal: Incomplete Instructions. Challenge #14 asked us to build a query where “month is the current month”. I used “month=11” and got nothing. The database had months in words, so I actually needed to use “month='November'”. I wasted time trying to diagnose this problem because I couldn’t run a “SELECT * FROM Table” to see what the data looked like.

Hall of Shame Gold Medal Grand Prize Winner: Challenge #12 asks us to write a function. My function was not accepted because I did not declare it using the same JavaScript function syntax used in the solution. Instructions said nothing about which function syntax to use. After I clicked “View Solution” and saw what the problem was (image above) I got so angry at the time it wasted, I had to step away for a few hours before I could resume. This was bullshit.


These Hall of Shame (dis)honorees almost turned me off of Codecademy entirely, but after a few days away to calm down, I returned to learn what Codecademy has to teach about PostgreSQL.

Replace node-static with serve-static for ESP32 Sawppy Development

One of the optional middleware modules maintained by the Expressjs team is express.static, which can handle serving static assets like HTML, CSS, and images. It was used in code examples for Codecademy’s Learn Express course, and I made a mental note to investigate further after class. I thought it might help me with a problem I already had on hand, and it did!

When I started writing code for a micro Sawppy rover running on an ESP32, I wanted to be able to iterate on client-side code without having to reflash the ESP32. So as an educational test run of Node.js, I wrote a JavaScript counterpart to the code I had written (in C/C++) to run on the ESP32. While they are two different codebases, I intended for the HTTP interfaces to be identical and indistinguishable to the HTML/CSS/JavaScript client code I wrote. Most of this server-side work was focused around websocket, but I also needed to serve some static files. I looked on nodejs.org and found “How to serve static files” in their knowledge base. That page gave an example using the node-static module, which I copied for my project.

Things were fine for a while, but then I started getting messages from GitHub Dependabot nagging me to fix a critical security flaw in my repository due to its use of a library called minimist. It was an indirect dependency I picked up by using node-static, so I figured it would be fixed after I picked up an update to node-static. But that update never came. As of this writing, the node-static package on NPM hasn’t been updated for four years. I see updates made on the GitHub repository, but for whatever reason NPM hasn’t picked them up and thus its registry remains outdated.

The good news is that my code isn’t deployed on an internet-facing server. This Node.js server is only for local development of my ESP32 Sawppy client-side browser code, which vastly minimizes the window of vulnerability. But still, I don’t like running known vulnerable code, even if it is only accessible from my own computer and only while I’m working on ESP32 Sawppy code. I want to get this fixed somehow.

After I got nginx set up as a local web server, I thought maybe I could switch to using nginx to serve these static files too. But there’s a hitch: a websocket connection starts as an HTTP request for an upgrade to websocket, so the HTTP server must interoperate with the websocket server for a smooth handover. It’s possible to set this up with nginx, but the instructions to do so are above my current nginx skill level. To keep this simple I need to stay within Node.js.

Which brought me back to Express and its express.static file server. I thought maybe I could fire up an Express app, use just this express.static middleware, and almost nothing else of Express. It’s overkill, but it’s not stupid if it works. I just had to figure out how it would hand over to my websocket code. Reading Express documentation for express.static, I saw it was built on top of an NPM module called serve-static, and was delighted to learn it can be used independently of Express! Their README included an example, “Serve files with vanilla node.js http server”, and this was exactly what I needed. By using the same Node.js http module, my websocket upgrade handover code works in exactly the same way. In the end, switching from node-static to serve-static was nearly a direct replacement requiring minimal code edits. And after removing node-static from my package.json, GitHub Dependabot was happy and closed out my security issue. I will be free from nagging messages, at least until the next security concern. That might be serious if I had deployed to be internet accessible, but the odds of that just dropped.
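The README example I adapted looks roughly like this (a sketch; my actual server adds the websocket handling on top):

    const http = require('http');
    const finalhandler = require('finalhandler');
    const serveStatic = require('serve-static');

    // Serve files out of ./public, falling back to finalhandler for errors and 404s.
    const serve = serveStatic('public', { index: ['index.html'] });

    const server = http.createServer((req, res) => {
      serve(req, res, finalhandler(req, res));
    });

    // Because this is still a plain Node.js http.Server, the websocket
    // 'upgrade' event handover works exactly as it did with node-static.
    server.listen(3000);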

Notes on Express “Getting Started” Guide

During Codecademy’s “Learn Express” course, we see a few middleware modules that we can optionally use in our projects as needed. Examples used in the course are morgan and body-parser, and one of the quizzes asked us to look up vhost. Course material even started using serve-static before we learned about middleware modules at all. These four middleware modules were among those popular enough to be adopted by the Expressjs team, who now maintain them.

Since that meant I already had a browser tab open to the Express project site, I decided to poke around. Specifically, I wanted to see how their own “Getting Started” guide compared to the Codecademy course I had just finished. My verdict: the official Express site provides a wider breadth of information but not nearly as much depth for educating a newcomer. If I hadn’t taken the Codecademy course and had tried to get started with this site, I would have been able to get a simple Express application up and running, but I would not have understood much of what was going on. Especially if I had created an app using the boilerplate application generator; even after the Codecademy course I don’t know what most of its settings mean!
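To be fair, “up and running” doesn’t take much; the guide’s hello-world is roughly this:

    const express = require('express');
    const app = express();
    const port = 3000;

    app.get('/', (req, res) => {
      res.send('Hello World!');
    });

    app.listen(port, () => {
      console.log(`Example app listening on port ${port}`);
    });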

But the official site had wider breadth, as Codecademy didn’t even mention the boilerplate tool. It also has many lists of pointers to resources, like the aforementioned list of popular middleware modules. Another list I expect to be useful is a sample of options for database integration. Some minimal contextual information was provided with each listed link, but it’s up to us to follow those links and go from there. The only place where this site goes in depth is the Express API reference, which makes sense as the official site for Express should naturally serve as the authoritative source for such information!

I anticipate that I will use Express for at least a few learning/toy projects in the future, at which point I will need to return to this site for API reference and pointers to resources that might help me solve problems in the future. However, before I even get very far into Express, this site has already helped me solve an immediate problem: node-static is out of date.

Notes on Codecademy “Learn Express”

I may have my quibbles with Codecademy’s Learn Node.js course, but it at least gave me a better understanding to supplement what I had learned bumping around on my own. But the power of Node isn’t just the runtime, it’s the software ecosystem which has grown up around it. I have many many choices of what to learn from this point, and I decided to try the Learn Express course.

Before I started the course, I understood Express to be one of the earlier Node.js frameworks for building the back end of websites in JavaScript. And while many others have come online since, with more features and capabilities, Express is still popular because it set out not to pack itself with features and capabilities. This means if we want to prototype something slightly off the beaten path, Express would not get in our way. This sounded like a good tool to have in the toolbox.

After taking the course, I learned how Express accomplishes those goals. Express helps us map HTTP methods (GET/POST/PUT/DELETE) to JavaScript code via “Routes”, and for each route we can compose multiple JavaScript modules in the form of “Middleware”. With this mechanism we can assemble an arbitrary web API by chaining middleware modules like LEGO pieces to respond to HTTP methods. And… that’s basically the entirety of core Express. Everything else is optional, so we only need to pull in what we need (in the form of middleware modules) for a particular project.

When introducing Routes, our little JavaScript handler functions were actually fully qualified Middleware, but we didn’t know it yet. What I did notice is that they had the signature of three parameters: (request, response, next). The Routes section talked about reading the request to build our response, but it never talked about next. Students who are curious about it and strike out to search on their own, as I did, will find information about “chaining”, but it won’t make sense until they learn Middleware. I think it would have been nice if the course had said “we’ll learn about next later, when we learn about Middleware” or something to that effect.
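That chaining turns out to be simple once you see it (a small sketch of my own, not course material):

    // A route handler is already middleware: it has the same (req, res, next) signature.
    app.get('/status', (req, res, next) => {
      console.log('logging the request first');
      next();  // hand control to the next matching handler in the chain
    });

    app.get('/status', (req, res) => {
      res.send('OK');  // runs because the previous handler called next()
    });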

My gripe with this course is its quiz sections. We are given a partial chunk of JavaScript and told to fill in certain things. When we click “Check Work” we trigger some validation code to see if we did it right. If we did it wrong, we might get an error message to help us on our way. But sometimes the only feedback we receive is that our answer is incorrect, with no further detail. Unlike earlier Node course exercises, we were not given a command prompt to run “node app.js” and see our output. This meant we could not see the test input, we could not see our program’s behavior, and we could not debug with console.log(). I tried to spin up my own Node.js Docker container to try running the sample code, but we weren’t given entire programs to run, and we weren’t given the test input, so that was a bust.

I eventually found a workaround: use exceptions. Instead of console.log('debug message') I could use throw Error('debug message') and that would show up on the Codecademy UI. This is far less than ideal.

Once I got past the Routes section, I proceeded to Middleware. Most of this unit was focused on showing us how various Middleware mechanisms allow us to reduce code duplication. My gripe with this section is that the course made us do useless repetitive work before telling us to replace it with much more elegant Middleware modules. I understand this is how the course author chose to make their point, but I’m grumpy about useless make-work that I would delete a few minutes later.

By the end of the course, we know Express basics of Route and Middleware and got a little bit of practice building routes from freely available middleware modules. The course ends by telling us there are a lot of Express middleware out there. I decided to look into Express documentation for some starting points.

Notes on Codecademy “Learn Node.js”

I’ve taken most of Codecademy’s HTML/CSS course catalog for front-end web development, ending with a series of very educational exercises created outside of Codecademy’s learning environment. I think I’m pretty well set up to execute web browser client-side portions of my project ideas, but I also need to get some education on server-side coding before I can put all the pieces together. I’ve played with Node.js earlier, but I’m far from an expert. It should be helpful to get a more formalized introduction via Codecademy, starting with Learn Node.js.

This course recommends going through Introduction to JavaScript as a prerequisite, so the course assumes we already know those basics. The course does not place the same requirement on Intermediate JavaScript, so some of that relevant course material is pulled into this Node.js course. The section on Node modules was a rerun for me, but here it is augmented with additional details and a pointer to official documentation.

The good news for the overlap portions is that it meant I already had partial credit for Learn Node.js as soon as I started; the bad news is that Codecademy’s own back-end got a little confused. I clicked through “Next” for a quick review, and by doing so it skipped me over a few lessons that I had not yet seen. My first hint something was wrong was getting tossed into a progress-checking quiz and being baffled: “I don’t remember seeing this material before!” I went back to examine the course syllabus, where I saw the skipped portions. The quiz was much easier once I went through that material!

This course taught me about error-first callback functions, an old convention for asynchronous JavaScript (or just Node) code that I hadn’t been aware of. I think I stumbled across them in my earlier experiments and struggled to use them effectively. Here I learned they were the conceptual predecessor to promises, which led to async/await, which plays nice with promises. But what about even older error-first callback code? This is where util.promisify() comes into the picture, so that everyone can work together. Recognizing what error-first callbacks are, and knowing how to interoperate via util.promisify(), should be very useful.
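A small sketch of both sides of that convention (my example, not the course’s):

    const fs = require('fs');
    const util = require('util');

    // Error-first callback convention: the callback's first argument is an error (or null).
    fs.readFile('notes.txt', 'utf8', (err, data) => {
      if (err) return console.error(err);
      console.log(data);
    });

    // util.promisify() wraps the same API so it works with promises and async/await.
    const readFile = util.promisify(fs.readFile);

    readFile('notes.txt', 'utf8')
      .then((data) => console.log(data))
      .catch((err) => console.error(err));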

The course instructs us on how to install Node.js locally on our development computers, but I’m going to stick with using Docker containers. Doing so would be inconvenient if I wanted to rely on globally installed libraries, but I want to avoid global installations as much as possible anyway. NPM is perfectly happy to work at project scope and that just takes mapping my project directory as a volume into the Docker container.

After all, I did that as a Docker & Node test run with ESP32 Sawppy’s web interface. But that brought in some NPM headaches: I was perpetually triggering GitHub dependabot warnings about security vulnerabilities in NPM modules I hadn’t even realized I was using. Doing a straight “update to latest” did not resolve these issues, I eventually figured out it was because I had been using node-static to serve static pages in my projects. But the node-static package hadn’t been updated in years and so it certainly wouldn’t have picked up security fixes. Perhaps I could switch it to another static server NPM module like http-server, or get rid of that altogether and keep using nginx as sheer overkill static web server.

Before I decide, though, this Learn Node.js course ended with a few exercises building our own HTTP server using Node libraries. These were a little more challenging than typical Codecademy in-course exercises. One factor is that the instructions told us to do a lot of things with no way to incrementally test them as we went. We didn’t fire up the server to listen for traffic (server.listen()) until the second-from-final step, and by then I had accumulated a lot of mistakes that took time to untangle from the rest of the code. The second factor is that the instructions were more vague than usual. Some Codecademy exercises tell us exactly what to type and on which line, and I think that doesn’t leave enough room for us to figure things out for ourselves and learn. This exercise would sometimes tell us to “fill in the request header” without details or even which Node.js API to use; we had to figure it all out ourselves. I realize this is a delicate balance when writing course material. I feel Codecademy is usually too far toward “do exactly this” for my taste, but the final project of Learn Node.js might have gone too far in the “left us flailing uselessly” direction.

In the meantime, I believe I have enough of a start to continue learning about server-side JavaScript. My next step is to learn Express.

Angular on Windows Phone 8.1

One of the reasons learning CSS was on my to-do list was that I didn’t know enough to bring an earlier investigation to a conclusion. Two years ago, I ran through tutorials for the Angular web application framework. The experience taught me I needed to learn more about JavaScript before using Angular, which uses TypeScript, a superset of JavaScript. I also needed to learn more about CSS in order to productively use the Material Design component libraries that I had wanted to use.

One side experiment in my Angular adventure was to test the framework’s backwards-compatibility claims. By default, Angular does not build support for older browsers, but it can be configured to do so. Looking through the referenced Browserslist project, I saw an option of “ie_mob” for Microsoft Internet Explorer on “Other Mobile” devices, a.k.a. the stock web browser on Windows Phone.

I added ie_mob 11 to the list of browser targets in an Angular project. This backwards-compatibility mode is not handled by the Angular development server (ng serve), so I had to run a full build (ng build) and spin up an nginx container to serve the resulting /dist project subdirectory.
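
For reference, the browser targets live in the project’s Browserslist configuration file. The surrounding defaults vary with Angular version, so treat this as a rough sketch of what the change looked like:

# .browserslistrc (named just "browserslist" in older Angular projects)
last 1 Chrome version
last 1 Firefox version
ie_mob 11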

Well now, it appeared to work! Or at least, more of this test app showed up on screen than if I hadn’t included ie_mob among the browser targets.

However, scrolling down revealed some problems: elements below the “Next Steps” section did not get rendered. Examining the generated HTML, I didn’t see anything very different from the rest of the page, but these elements did use CSS rules not used elsewhere on the page.

Hypothesis: the HTML is fine and the TypeScript has been transpiled to a Windows Phone-friendly dialect, but the page uses CSS rules that are not supported by Windows Phone. Lacking CSS knowledge, I had to stop my investigation there. Microsoft has long since removed its debugging tools for Windows Phone, so I couldn’t diagnose it further except by code review or trial and error.

Another interesting observation on this backwards-compatible build is vendor-es5.js. This vendor bundle, transpiled down to ES5 for compatibility magic, is over 2.5 MB all by itself (2,679,414 bytes), and it has to sit alongside the newer and slightly smaller vendor-es2015.js (2,202,719 bytes). While a few megabytes are fairly trivial for modern computers, the combination of the two would not fit in the 4 MB flash on an ESP32.


Initial Chunk Files | Names                |      Size
vendor-es5.js       | vendor               |   2.56 MB
vendor-es2015.js    | vendor               |   2.10 MB
polyfills-es5.js    | polyfills-es5        | 632.14 kB
polyfills-es2015.js | polyfills            | 128.75 kB
main-es5.js         | main                 |  57.17 kB
main-es2015.js      | main                 |  53.70 kB
runtime-es2015.js   | runtime              |   6.16 kB
runtime-es5.js      | runtime              |   6.16 kB
styles.css          | styles               | 116 bytes

                    | Initial ES5 Total    |   3.23 MB
                    | Initial ES2015 Total |   2.28 MB

For such space-constrained scenarios, we would have to run the production build. After doing so (ng build --prod) we see much smaller file sizes:

node ➜ /workspaces/pie11/pie11test (master ✗) $ ng build --prod
✔ Browser application bundle generation complete.
✔ ES5 bundle generation complete.
✔ Copying assets complete.
✔ Index html generation complete.

Initial Chunk Files                      | Names                |      Size
main-es5.19cb3571e14c54f33bbf.js         | main                 | 152.89 kB
main-es2015.19cb3571e14c54f33bbf.js      | main                 | 134.28 kB
polyfills-es5.ea28eaaa5a4162f498ba.js    | polyfills-es5        | 131.97 kB
polyfills-es2015.1ca0a42e128600892efa.js | polyfills            |  36.11 kB
runtime-es2015.a4dadbc03350107420a4.js   | runtime              |   1.45 kB
runtime-es5.a4dadbc03350107420a4.js      | runtime              |   1.45 kB
styles.163db93c04f59a1ed41f.css          | styles               |   0 bytes

                                         | Initial ES5 Total    | 286.31 kB
                                         | Initial ES2015 Total | 171.84 kB

ESP8266 MicroPython Exception Handling Helps Robustness

I had to solve a few problems while publishing data to MQTT from ESP8266 MicroPython, running into an MQTTException raised by the library. On the upside, dealing with MQTTException reminded me that I don’t usually have the luxury of exception handling on microcontrollers.

Exception handling in Python is my favorite part so far of using MicroPython on a microcontroller. I’m no stranger to calling APIs and checking error codes in typical C programming style and I can certainly work in that environment, but I do enjoy using a language like Python with exception handling mechanisms because it allows me to structure code in a way I find much more readable. This is important, especially for small projects where I don’t expect to look at the code on a regular basis. By the time I need to come back and modify the code months or years later, I’m looking at it with essentially fresh eyes. Comments are critical, but a good structure is very helpful too!

If I didn’t have any exception handlers, an error would stop execution of my program and drop into the REPL awaiting diagnosis and repair. That’s great while I’m developing the code, but not what I want once the device is running on its own. At runtime, I expect errors to fall into one of three categories:

  1. Failing to connect to WiFi. This could happen if my WiFi router is in the middle of a firmware update, and for such harmless scenarios the best thing is to go to sleep and try again later.
  2. Failing to connect to MQTT broker. This could happen if I took down my Mosquitto docker container, again probably for an update.
  3. Failing to publish ADC data. This could happen if the WiFi router or Mosquitto went down between connecting and publishing.

For all of these cases, the best thing to do is to try again later, which for this project is exactly the same thing I want to do when everything succeeds: go to sleep for a minute and repeat everything upon waking.

My first implementation caught all exceptions and proceeded to deep sleep for a retry in one minute, but that created a problem: if I encounter an error outside the expected ones, or if I want to break into the REPL for any other reason (such as updating the program with a new feature), I have only a very narrow window of time to do so. In fact, the window was too narrow for me to ever catch the device awake!

So on error I actually want to do something different: keep the ESP8266 awake for 30 seconds or so, long enough for me to connect a serial terminal and hit Control-C to break into the REPL. I can trigger this path on demand by taking down my Mosquitto Docker container, which causes scenario #2 above.
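
Here is a rough sketch of the structure I ended up with, simplified and with hypothetical connect_wifi() and publish_adc() helpers standing in for my actual WiFi and MQTT code:

import time
import machine

SLEEP_MS = 60 * 1000   # normal cycle: deep sleep one minute between reports
ERROR_WAIT_S = 30      # on error: stay awake long enough to break into REPL

def deep_sleep(ms):
    # Configure the RTC alarm to wake the ESP8266, then enter deep sleep.
    rtc = machine.RTC()
    rtc.irq(trigger=rtc.ALARM0, wake=machine.DEEPSLEEP)
    rtc.alarm(rtc.ALARM0, ms)
    machine.deepsleep()

try:
    connect_wifi()    # hypothetical helper; may raise OSError if WiFi is down
    publish_adc()     # hypothetical helper; may raise MQTTException if broker is down
except Exception as e:
    # Expected or not, report the error and linger so I can Control-C into the REPL.
    print('Error:', e)
    time.sleep(ERROR_WAIT_S)

# Success or failure, the next step is the same: sleep and try again later.
deep_sleep(SLEEP_MS)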

This is an improvement over my first implementation, but I couldn’t upload my improved code: the ESP8266 would wake up, try to report ADC data, and immediately go back to deep sleep no matter what happened. After some time tearing my hair out trying to break into that narrow time window, I resorted to reflashing the ESP8266 with fresh MicroPython. After that, I could actually get into the REPL and upload the new code. It’s a good thing I keep these little code projects publicly accessible on GitHub, where I can grab a copy for my own use whenever I have to erase a device.

I really like what I’ve seen of MicroPython so far, and it will definitely be a consideration for future projects. But for this project I’m changing course, through no fault of MicroPython.

Second ESP8266 Voltage Monitor is Directly Wired to Buck Converter

Once I got my MicroPython ESP8266 connected to my home network, I expected to continue working with it over the network instead of a USB cable. That meant it was time to take this development board and wire it to a DC voltage buck converter as I did earlier. This time, however, I skipped the perforated prototype circuit board and went for direct wiring (sometimes called deadbug style due to the folded pins and wires).

But without the prototype board, I had to handle my own spacing. I cut up an expired credit card and placed the sheet of plastic between the Wemos D1 Mini clone (*) and its MP1584EN DC buck converter (*). Wires looped around the outside of this sheet to carry the 3.3V and GND power lines, as well as the pair of 1-megaohm resistors in series to the ADC input pin for measuring voltage.

Relative to the previous iteration, I added one more wire: connecting ESP8266 GPIO16 (labeled D0 on a Wemos D1 Mini board) to the reset (RST) pin. This is required for an ESP8266 to wake from deep sleep, and this requirement is the very first sentence of the MicroPython documentation section on ESP8266 deep sleep. I’m going to guess it is front and center because enough people forgot this critical step and found their ESP8266 wouldn’t wake from sleep.
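
With the GPIO16-to-RST jumper in place, the board can wake itself. On the software side, the documented way to check whether the current boot was a deep-sleep wake looks something like this:

import machine

# Distinguish a deep-sleep wake from a cold boot or manual reset.
if machine.reset_cause() == machine.DEEPSLEEP_RESET:
    print('woke from a deep sleep')
else:
    print('power on or hard reset')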

Once this package was verified to work over MicroPython WebREPL, I wrapped the whole thing up in clear heat-shrink tube (*) (not pictured in the title image) for a nice compact package. I could now query the ADC value representing input voltage over WebREPL, but that isn’t very useful until I can report that value via MQTT.
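
Querying the raw reading from the WebREPL prompt takes just a couple of lines; converting that raw number into an actual input voltage depends on the divider ratio, so calibration against a multimeter is assumed here:

from machine import ADC

adc = ADC(0)        # the ESP8266 exposes a single ADC channel, pin A0
raw = adc.read()    # integer between 0 and 1023
print(raw)          # scale by the measured divider ratio to get input volts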


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.