ESP8266 MicroPython Exception Handling Helps Robustness

I had to solve a few problems publishing data to MQTT from ESP8266 MicroPython, including an MQTTException raised by the library. On the upside, dealing with MQTTException reminded me that I don’t usually have the luxury of exception handling on microcontrollers.

Exception handling in Python is my favorite part so far of using MicroPython on a microcontroller. I’m no stranger to calling APIs and checking error codes in typical C programming style and I can certainly work in that environment, but I do enjoy using a language like Python with exception handling mechanisms because it allows me to structure code in a way I find much more readable. This is important, especially for small projects where I don’t expect to look at the code on a regular basis. By the time I need to come back and modify the code months or years later, I’m looking at it with essentially fresh eyes. Comments are critical, but a good structure is very helpful too!

If I don’t have any exception handlers, an error stops execution of my program and drops into REPL awaiting diagnosis and repair. This is great while I’m developing the code, but I won’t want that later. At runtime I expect errors to fall into one of three categories:

  1. Failing to connect to WiFi. This could happen if my WiFi router is in the middle of a firmware update, and for such harmless scenarios the best thing is to go to sleep and try again later.
  2. Failing to connect to MQTT broker. This could happen if I took down my Mosquitto docker container, again probably for an update.
  3. Failing to publish ADC data. This could happen if the WiFi router or Mosquitto went down in between connection and data publishing.

For all of these cases, the best thing to do is to try again later, which for this project is exactly what I want to do even when everything succeeds: go to sleep for a minute and repeat everything upon wake.

My first implementation caught all exceptions and proceeded to deep sleep for a retry in one minute, but that created a problem: if I encounter an error outside of the expected ones, or if I want to break into REPL for any other reason like updating the program with a new feature, I have only a very narrow window of time to do so. In fact, it was too fast for me to catch the board awake!

So in case of error I actually want to do something different: keep the ESP8266 awake for 30 seconds or so, long enough for me to connect a serial terminal and hit Control-C to break into REPL. I could trigger this path for testing by taking down my Mosquitto docker container, causing scenario #2 above.
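Something like this sketch captures the structure I’m describing. The broker hostname, topic, and client name are placeholders, and the details are simplified, but it shows the expected-error path keeping the board awake before the unconditional deep sleep:

import machine
import network
import time
from umqtt.simple import MQTTClient, MQTTException

def report_adc():
    # Wait for the station interface (already configured) to come up.
    sta = network.WLAN(network.STA_IF)
    for _ in range(20):
        if sta.isconnected():
            break
        time.sleep(1)
    else:
        raise OSError("WiFi connect timed out")
    # Publish the raw ADC reading. Broker name and topic are examples.
    client = MQTTClient("voltage_monitor", "mqtt.example.local")
    client.connect()
    client.publish(b"voltage/raw", str(machine.ADC(0).read()).encode())
    client.disconnect()

try:
    report_adc()
except (OSError, MQTTException):
    # Expected failures: WiFi down, broker down, or publish interrupted.
    # Stay awake ~30 seconds so I can Control-C into REPL if I want to.
    time.sleep(30)

# Success or failure, deep sleep for a minute and repeat upon wake.
rtc = machine.RTC()
rtc.irq(trigger=rtc.ALARM0, wake=machine.DEEPSLEEP)
rtc.alarm(rtc.ALARM0, 60 * 1000)
machine.deepsleep()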

This was an improvement over my first implementation, but I couldn’t upload my improved code. The ESP8266 wakes up, tries to report ADC data, and immediately goes back to deep sleep no matter what happens. After some time tearing my hair out trying to break into this narrow time window, I resorted to reflashing the ESP8266 with fresh MicroPython. Only then could I get into REPL and upload the new code. It’s a good thing I keep these little code projects publicly accessible on GitHub, where I could get a copy for my own use if I had to erase it.

I really like what I’ve seen of MicroPython so far, and it’ll definitely be a consideration for future projects. But for this project I’m changing course, through no fault of MicroPython.

Second ESP8266 Voltage Monitor is Directly Wired to Buck Converter

Once I got my MicroPython ESP8266 connected to my home network, I expected to continue working with it over the network instead of a USB cable. Which meant it was time for me to take this development board and wire it to a DC voltage buck converter as I did earlier. However, this time I’m going to skip the perforated prototype circuit board and go for direct wiring. (Sometimes called deadbug style due to folded pins and wires.)

But without the prototype board, I have to handle my own spacing. I cut up an expired credit card and placed the sheet of plastic in between the Wemos D1 Mini clone (*) and its MP1584EN DC buck converter (*). Wires looped around the outside of this sheet carry the 3.3V and GND power lines, as well as connect the pair of 1 megaohm resistors in series to the ADC input pin for measuring voltage.

And relative to the previous iteration, I added one more wire: connecting ESP8266 GPIO16 (labeled D0 on a Wemos D1 Mini board) to the reset (RST) pin. This is required for an ESP8266 to wake from deep sleep, and the requirement is the very first sentence in the MicroPython documentation section on ESP8266 deep sleep. I’m going to guess that it is front and center because enough people forgot this critical step and their ESP8266 wouldn’t wake from sleep.

Once this package was tested to function over MicroPython WebREPL, I wrapped the whole thing up in clear heat shrink tube (*) (not pictured in title image) for a nice compact package. I can now query the ADC value representing input voltage over WebREPL, but that’s not useful until I can report that value via MQTT.
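The WebREPL query itself is only a couple of lines, something like this sketch. The scale factor is a placeholder that would need calibration against a multimeter, since it depends on both my external resistor pair and the divider already on the D1 Mini board:

from machine import ADC

adc = ADC(0)
raw = adc.read()               # raw reading from the ESP8266's single ADC channel
volts = raw * (13.2 / 1024)    # placeholder scale factor, needs calibration
print(raw, volts)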


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

ESP8266 MicroPython Automatically Remembers WiFi

There were a few speed bumps on my way to a MicroPython interactive prompt, also known as REPL: the read, evaluate, print loop. But once I got there, I was pretty impressed. It was much friendlier to iterative experimentation than Arduino on an ESP8266, because I don’t have to reflash and reboot every time. And since the ESP8266 has WiFi capabilities, getting REPL over the network (WebREPL) is even cooler. Now I can experiment while it runs on another power source, completely independent of USB for either power or data.

Before I got there, though, I needed to get this ESP8266 on my home WiFi network. By default, MicroPython sets up an access point for its own network so I need to turn “AP mode” off. Then I turn on “station mode” which allows connection to my WiFi router given its SSID and password.

import network

ap = network.WLAN(network.AP_IF)
ap.active(False)                           # turn off the default access point

sta = network.WLAN(network.STA_IF)
sta.active(True)                           # turn on station mode
sta.config(dhcp_hostname='my hostname')    # optional: name shown to the router
sta.connect('my wifi ssid','my wifi password')

I added one optional element: the dhcp_hostname parameter. This is the name shown to my router and probably other devices on my home network. If I don’t set this, the default name is “ESP-” followed by six hexadecimal digits of the ESP8266’s MAC address. That’s not a particularly memorable name so I wanted something I could remember and recognize.

And then, to my surprise, MicroPython remembered the network settings upon restart. I had written a piece of Python code to perform this routine, intending to run it whenever I rebooted the board. But when I set out to test it by rebooting the board, it automatically reconnected to WiFi. This tells me a successful WiFi connection causes a write to flash memory, which implies I should not run my WiFi connection code upon every startup. I expect to make this board go to deep sleep frequently and, if it writes WiFi information to flash every time it wakes up, I will quickly wear out the flash.

But that is just a hypothesis. As MicroPython is an open source project, it should be possible for me to dig into the code and figure out exactly when MicroPython writes WiFi connection information to flash. Perhaps it isn’t as bad as I feared it would be. Until then, however, I will hold off running my WiFi connection script.
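If I do run the script again, a guarded version like this sketch might avoid redundant connect() calls (and, I hope, redundant flash writes). The SSID, password, and hostname remain placeholders:

import network

sta = network.WLAN(network.STA_IF)
sta.active(True)
if not sta.isconnected():
    # Only configure and connect if the remembered network didn't come up on its own.
    sta.config(dhcp_hostname='my hostname')
    sta.connect('my wifi ssid', 'my wifi password')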

A downside of not running my script is the DHCP hostname, which is not remembered upon reboot, so this board reverted to the default ESP-prefixed name. But I can live with that for now; the next step is to set up my hardware for playing with deep sleep under battery power.

A Few Speed Bumps on the Road to ESP8266 MicroPython

I decided to play with MicroPython on an ESP8266 and started with the MicroPython documentation page appropriately titled Quick reference for the ESP8266. It was almost (but not entirely) smooth sailing with the inexpensive Wemos D1 Mini clone(*) I had on hand.

I had recently switched desktop computers, with a fresh installation of Windows, so everything had to be reinstalled. Starting with Python, since I need it to run esptool, the utility for flashing Espressif devices. It got its own Python virtual environment via venv, and then I could start working with the ESP8266.

I verified that the flash size matched the 4MB claimed in the Amazon product listing with: esptool.py --port COM4 flash_id

Then came the first step in the MicroPython directions, erasing whatever might be in flash: esptool.py --port COM4 erase_flash

That was followed by flashing the board with MicroPython; version 1.17 was the latest as of this writing: esptool.py --port COM4 --baud 460800 write_flash --flash_size=detect 0 esp8266-20210902-v1.17.bin

esptool.py v3.1
Serial port COM4
Connecting....
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
Uploading stub...
Running stub...
Stub running...
Changing baud rate to 460800
Changed.
Configuring flash size...
Auto-detected Flash size: 4MB
Flash will be erased from 0x00000000 to 0x0009afff...
Flash params set to 0x0040
Compressed 633688 bytes to 416262...
Wrote 633688 bytes (416262 compressed) at 0x00000000 in 9.4 seconds (effective 537.1 kbit/s)...
Hash of data verified.

Leaving...
Hard resetting via RTS pin...

That looked good! But I thought I’d verify anyway: esptool.py --port COM4 --baud 460800 verify_flash --flash_size=detect 0 esp8266-20210902-v1.17.bin

esptool.py v3.1
Serial port COM4
Connecting....
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
Uploading stub...
Running stub...
Stub running...
Changing baud rate to 460800
Changed.
Configuring flash size...
Auto-detected Flash size: 4MB
Flash params set to 0x0040
Verifying 0x9ab58 (633688) bytes @ 0x00000000 in flash against esp8266-20210902-v1.17.bin...
-- verify OK (digest matched)
Hard resetting via RTS pin...

This all looked good, but during this process I found communication with my board was unreliable. Occasionally I would fail to connect:

esptool.py v3.1
Serial port COM4
Connecting........_____....._____....._____....._____....._____....._____....._____

A fatal error occurred: Failed to connect to Espressif device: Invalid head of packet (0x08)

The frustrating part is that I don’t know what causes this; all I could do was retry until it worked. I didn’t notice anything I did differently between the times that worked and the times that failed. Is it the ESP8266? Is it the CH340 serial port bridge? Is it my USB cable? I can’t tell. The good news with MicroPython is that, once it is flashed, I could work via serial port without further headaches with esptool.

I remembered that PlatformIO in Visual Studio Code had a serial port monitor, and it was indeed able to connect. But as the name states, it is only a monitor: while I could see a MicroPython prompt, I couldn’t type any commands back. Looking around the Visual Studio Code extension marketplace I found a serial terminal extension published by Nordic Semiconductor. This allowed me to type commands into the MicroPython prompt and verify it worked, but frustratingly I could not copy/paste in this terminal. So much for a modern integrated environment! I returned to trusty old PuTTY for my MicroPython serial terminal needs and got to work.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Notes on Codecademy “Build Deep Learning Models with TensorFlow”

Once I upgraded to a Codecademy Pro membership, I started taking courses from its Python catalog with the goal of building a foundation to understand deep learning neural networks. Aside from a few scenic detours, most of my course choices were intended to build upon each other to fulfill what I consider prerequisites for a Codecademy “Skill Path”: Build Deep Learning Models with TensorFlow

This was the first “Skill Path” I took, and I wasn’t quite sure what to expect as Codecademy implied they are different from the courses I took before. But once I got into this “skill path”… it feels pretty much like another course, just a longer one with more sessions. It picked up where the “Learn the Basics of Machine Learning” course left off with neural perceptrons, and dove deeper into neural networks.

In contrast to earlier courses that taught various concepts by using them to solve regression problems, this course spent more time on classification problems. We are still using scikit-learn a lot, but as promised by the title we’re also using TensorFlow. Note the course work mostly stayed in the Keras subset of TensorFlow 2 API. Keras used to be a separate library for making it easier to work with TensorFlow version 1, but it has since been merged into TensorFlow version 2 as part of the big revamp between versions.

I want to call attention to an item linked as “additional resources” for the skill path: a book titled “Deep Learning with Python” by François Chollet. (Author, or at least one of the primary people, behind Keras.) Following various links associated with the title, I found that there’s since been a second edition and the first chapter of the book is available to read online for free! I loved reading this chapter, which managed to condense a lot of background on deep learning into a concise history of the field. If the rest of the book is as good as the first chapter, I will learn a lot. The only reason I haven’t bought the book (yet) is that, based on the index, the book doesn’t get into unsupervised reinforcement learning like the type I want to put into my robot projects.

Back to the Codecademy course… err, skill path: we get a lot of hands-on exercises using Keras to build TensorFlow models and train them on data for various types of problems. This is great, but I felt there was a significant gap in the material. I appreciated learning that different loss functions and optimizers are used for regression versus classification problems, and we put them to work in their respective domains. But we were merely told which function to use for each exercise; the course doesn’t go into why they were chosen for the problem. I had hoped that the Keras documentation Optimizers Overview page would describe relative strengths and weaknesses of each optimizer, but it was merely a list of optimizers by name. I feel like such a comparison chart must exist somewhere, but it’s not here.
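For my own notes, the regression-versus-classification split boils down to something like this sketch. The layer sizes and input shape are arbitrary placeholders; the point is the different loss function passed to compile():

from tensorflow import keras
from tensorflow.keras import layers

def build(output_units, output_activation):
    # Same scaffolding for both problem types; only the output layer differs.
    return keras.Sequential([
        layers.Dense(64, activation="relu", input_shape=(10,)),
        layers.Dense(output_units, activation=output_activation),
    ])

# Regression: linear output, mean squared error loss.
regressor = build(1, None)
regressor.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Classification: softmax output, cross-entropy loss.
classifier = build(3, "softmax")
classifier.compile(optimizer="adam", loss="categorical_crossentropy",
                   metrics=["accuracy"])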

I didn’t quite finish this skill path. I lost motivation to finish the “Portfolio Project” portion, where we are directed to create a forest cover classification model. My motivation for deep learning lies in reinforcement learning, not classification or regression problems, so my attention has wandered elsewhere. At this point I believe I’ve exhausted all the immediately applicable resources on Codecademy, as there is no further deep learning material nor anything on reinforcement learning. So I bid a grateful farewell to Codecademy for teaching me many important basics over the past few months and started looking elsewhere.

Notes on Codecademy Intermediate Python Courses

I thought Codecademy’s course “Getting Started Off Platform for Data Science” really deserved more focus than I gave it when I initially browsed the catalog, and I regret that I only saw it at the end of my perusal of beginner-friendly Python courses. But life moves on. I started going through some intermediate courses with an eye on future studies in machine learning. Here are some notes:

  • Learn Recursion with Python I took purely for fun and curiosity with no expectation of applicability to modern machine learning. In school I learned recursion with Lisp, a language ideally suited for the task. Python wasn’t as good of a fit for the subject, but it was alright. Lisp was also the darling of artificial intelligence research for a while, but I guess the focus has since evolved.
  • Learn Data Visualization with Python gave me more depth on two popular Python graphing libraries: Matplotlib and Seaborn. These are both libraries with lots of functionality so “more depth” is still only a brief overview. Still, I anticipate skills here to be useful in the future and not just in machine learning adventures.
  • Learn Statistics with NumPy I expected to be a direct follow-up to the beginner-friendly Statistics with Python course, but it was not a direct sequel, and there’s more overlap than I thought there’d be. This course is shorter, with less coverage of statistics but more about NumPy. After taking the course, I think I had parsed the title as “(Learn Statistics) with NumPy” but it’s more accurate to think of it as “Learn (Statistics with NumPy)”.
  • Linear Regression in Python is a small but important step up the foothills on the way to climbing the mountain of machine learning. Finding the best line to fit a set of data teaches important concepts like loss functions. And doing it on a 2D plot of points gives us an intuitive grasp of what the process looks like before we start adding variables and increasing the number of dimensions involved. Many concepts are described, and we get exercises using the scikit-learn library, which implements those algorithms. (A tiny example of that workflow follows after this list.)
  • Learn the Basics of Machine Learning was the obvious follow-up, diving deeper into machine learning fundamentals. All of my old friends are here: Pandas, NumPy, scikit-learn, and more. It’s a huge party of Python libraries! I see this course as a survey of major themes in machine learning, of which neural networks was only a part. It describes a broader context which I believe is a good thing to have in the back of my head. I hope it helps me avoid the trap of trying to use neural nets to solve everything a.k.a. “When I get a shiny new hammer everything looks like a nail”.
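Here is roughly what those scikit-learn exercises boil down to; the data points are made up for illustration:

import numpy as np
from sklearn.linear_model import LinearRegression

x = np.array([[1], [2], [3], [4], [5]])    # single feature column
y = np.array([2.1, 4.2, 5.9, 8.1, 9.8])    # noisy samples of y = 2x

model = LinearRegression()
model.fit(x, y)
print(model.coef_[0], model.intercept_)     # slope near 2, intercept near 0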

Several months after I started reorienting myself with Python 3, I felt like I had the foundation I needed to start digging into the current state of the art of deep learning research. I have no illusions about being able to contribute anything, I’m just trying to learn enough to apply what I can read in papers. My next step is to learn to build a deep learning model.

Notes on Codecademy “Getting Started Off Platform for Data Science”

I like Codecademy’s format of presenting a bit of information followed immediately by an opportunity to try it myself. I like learn-by-doing as a beginner, even if the teaching/learning environment can be limited at times. But one thing I didn’t like was that, if I wanted to put my Python knowledge to use, I would have to venture outside of the learning environment, and Codecademy didn’t used to provide much information on how.

The Learn Python 3 course made an effort to help students work outside of the Codecademy environment with its “Off-Platform Projects”. These came in the form of Jupyter notebooks that I could download, and a page with some instructions on how to use them: a link to Codecademy’s command line course, a link to instructions for installing Python on my own computer, and a link about installing Jupyter notebooks. It’s a bit scattered.

What I didn’t know at the time was that Codecademy had already assembled an entire course covering these points. Getting Started Off Platform for Data Science is an orientation for everyone as we eventually venture off Codecademy’s learning platform. It starts with an introduction to the command line, then Python development tools like Jupyter Notebooks and other IDEs, wrapping up with an introduction to Github. This is great! Why didn’t they put more emphasis on this earlier? I think it would have been super helpful to beginners.

Though admittedly, I didn’t follow those installation instructions anyway. Python isn’t very good about library version management and the community has sidestepped the issue by using virtual environments to keep Python libraries separated in different per-project worlds. I’ve used venv and Anaconda to do this, and recently I’ve also started playing with Docker containers. For my own trip through Codecademy’s off-platform projects using Jupyter notebooks, I ran Jupyter Lab using their jupyter/datascience-notebook Docker image. That turned out to be sheer overkill and I probably could have just used the much lighter-weight jupyter/base-notebook image.

In hindsight I think it would have been useful for me to review Getting Started Off Platform for Data Science before I started reorienting myself with Python. I wouldn’t have followed it to the letter, but it had information that would have been useful beforehand. But as fate had it, it became the final course I took in the beginner-friendly section before I started trying intermediate courses.

Codecademy Beginner Friendly Python Fields

Once Codecademy got me reoriented with the Python programming language, I looked at some of their other beginner-friendly courses under the Python umbrella. I wanted to get some practice using Python, but I didn’t want to go through exercises for the sake of exercises. I wanted to make some effort at keeping things focused on my ultimate goal of learning about modern advances in machine learning.

  1. Learn Data Analysis with Pandas was my first choice, because I recognized “Pandas” as the name of a popular Python library for preparing data for machine learning, making it relevant to the direction I am aiming for. The course title has “Data Analysis” and not “Machine Learning” but that was fine because it was only an introduction to the library. Not enough to get into field-specific knowledge, but more than enough to teach me Pandas vocabulary so I could navigate Pandas references and find my own answers in the future.
  2. How to Clean Data with Python followed up with more examples of Pandas in action. Again the course is nominally focused on data analytics, but all the same concepts apply to cleaning data before feeding it into machine learning algorithms.
  3. Exploratory Data Analysis in Python is a longer course with more ways to apply Pandas, including a machine learning specific section. Relative to other courses, this one is heavy on reading and light on hands-on practice, a consequence of the more general nature of the topic. And finally, this course let me dip my toes in another popular Python library I wanted to learn: NumPy.
  4. Learn Statistics with Python was how I dove into NumPy waters. After barely skating by some statistics and number crunching in the previous course, I wanted a refresher in basic statistics. Alongside the refresher I also learned how to calculate common statistics using the NumPy library. And after the statistics calculations are done, we want to visualize them! Enter yet another popular Python library: matplotlib.
  5. Probability is the natural course to follow a refresher in basic statistics. It covers only the most basic and common applications of statistics and probability for data analysis; we’re on our own to explore in further depth outside of the class. I expect probability to play a role in machine learning, as some answers are going to be vague with room for interpretation. I foresee that a poor (or misleadingly confident) grasp of probability would lead me astray.
  6. Differential Calculus was a course I poked my head into. I remembered it was quite a complex subject in school and was surprised Codecademy claimed anyone could learn it in two hours. It turns out the course would be more accurately titled “an introduction to numpy.gradient()“. Which… yes, is a numerical application of differential calculus, but it is definitely not the entirety of differential calculus. I guess it follows the trend of these courses: overly simplified titles that skim the basics of a few things, teaching just enough for us to learn more on our own later. (A tiny numpy.gradient() example follows after this list.)
  7. Linear Algebra starts to get into Python code that has direct relevance to machine learning. I know linear regression is a starting point and I knew I needed an introduction to linear algebra before I could grasp how linear regression algorithms work.
  8. Learn How to Get Started with Natural Language Processing was a disappointment to me, but it was not the fault of the course. It’s just that the machine learning systems in this field aren’t usually reinforcement learning systems. Which was the subfield of machine learning that most interested me. At least the course was short, and taught me enough so I know to skip other Codecademy natural language courses for myself.
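For reference, what those numpy.gradient() exercises boil down to is a numerical derivative by finite differences; the sample data here is made up:

import numpy as np

x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x)
dy_dx = np.gradient(y, x)    # numerical derivative, approximately cos(x)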

The final Codecademy “Beginner friendly” Python course I took was titled “Getting Started Off Platform for Data Science.” I don’t think Codecademy put enough emphasis on this one.

Getting Reacquainted with Python via Codecademy

A few years ago I started learning Python and applied that knowledge to write control software for SGVHAK rover. I haven’t done very much with Python since, and my skills have become rusty. Since Python is very popular in modern machine learning research, a field that I am interested in exploring, I knew I had to get back to studying Python eventually.

I remember that I enjoyed learning Python from Codecademy, so I returned to see what courses had been added since my visit years ago. The Codecademy Python catalog has definitely grown, and I was not surprised to see most of it is only accessible to the paid Pro tier. If I want to make a serious run at this, I’ll have to pay up. Fortunately, like a lot of digital content on the internet, it’s not terribly difficult to find discounts for Codecademy Pro. Armed with one of these discounts, I upgraded to the Pro tier and got to work. Here are some notes on a few introductory courses:

  • Learn Python 2 was where I started before, because SGVHAK rover used RoboClaw motor controllers and their Python library at the time was not compatible with Python 3. I couldn’t finish the course earlier because it was a mix of free and Pro content, and I wasn’t a Codecademy Pro subscriber at the time. I’m not terribly interested in finishing this course now. Python 2 was officially history as of January 1st, 2020. The only reason I might revisit this course is if I tackle the challenge of working in an old Python 2 codebase.
  • Right now I’m more interested in the future, so for my refresher course I started with Learn Python 3. This course has no prerequisites and starts at the very beginning with printing Hello World to the console and building up from there. I found the progression reasonable with one glaring exception: At the end of the course there were some coding challenges, and the one regarding Python classes required students to create base classes and derived classes. Problem: class inheritance was never covered in course instructions! I don’t think they expected students to learn this on their own. It feels like an instruction chapter got moved to the intermediate course, but its corresponding exercise was left in place. Other than that, the class was pretty good.
  • Inheritance and other related concepts weren’t covered until the “Object-Oriented Programming” section of Learn Intermediate Python 3, which didn’t have as smooth or logical a progression. It felt more like a grab-bag of intermediate concepts that they decided to cut out of the beginner course. This class was not terrible, but it did diminish the advantage of learning through Codecademy versus reading bits and pieces on my own. Still, I learned a lot of useful bits about Python that I hadn’t known before. I’m glad I spent time here.

With some Python basics down — some I knew from before and some that were new to me — I poked around other beginner-friendly Codecademy Python courses.

Notes Of A Three.js Beginner: QuaternionKeyframeTrack Struggles

When I started researching how to programmatically animate object rotations in three.js, I was warned that quaternions are hard to work with and can easily bamboozle beginners. I gave it a shot anyway, and I can confirm caution is indeed warranted. Aside from my confusion between Euler angles and quaternions, I was also ignorant of how three.js keyframe track objects process their data arrays. Constructors of keyframe track objects like QuaternionKeyframeTrack accept (as their third parameter) an array of key values. I thought it would obviously be an array of quaternions like [quaternion1, quaternion2], but when I did that, my CPU utilization shot to 100% and the browser stopped responding. Using the browser debugger, I saw it was stuck in this for() loop:

class QuaternionLinearInterpolant extends Interpolant {
  constructor(parameterPositions, sampleValues, sampleSize, resultBuffer) {
    super(parameterPositions, sampleValues, sampleSize, resultBuffer);
  }
  interpolate_(i1, t0, t, t1) {
    const result = this.resultBuffer, values = this.sampleValues, stride = this.valueSize, alpha = (t - t0) / (t1 - t0);
    let offset = i1 * stride;
    for (let end = offset + stride; offset !== end; offset += 4) {
      Quaternion.slerpFlat(result, 0, values, offset - stride, values, offset, alpha);
    }
    return result;
  }
}

I only have two quaternions in my key frame values, but the loop is stepping through in increments of 4. So this for() loop immediately shot past end and kept looping. The fact it was stepping by four instead of by one was the key clue. This class doesn’t want an array of quaternions, it wants an array of quaternion numerical fields flattened out.

  • Wrong: [quaternion1, quaternion2]
  • Right: [quaternion1.x, quaternion1.y, quaternion1.z, quaternion1.w, quaternion2.x, quaternion2.y, quaternion2.z, quaternion2.w]

The latter can also be created via quaternion1.toArray().concat(quaternion2.toArray()).

Once I got past that hurdle, I had an animation on screen. But only half of the colors animated in the way I wanted. The other half of the colors went in the opposite direction while swerving wildly on screen. In an HSV cylinder, hues are spread across the full range of 360 degrees. When I told them all to go to zero in the transition to a cube, the angles greater than 180 degrees went one direction and the angles less than 180 degrees went the opposite direction.

Having this understanding of the behavior, however, wasn’t much help in trying to get things working the way I had it in my head. I’m sure there are some amateur hour mistakes causing me grief but after several hours of ever-crazier animations, I surrendered and settled for hideous hacks. Half of the colors still behaved differently from the other half, but at least they don’t fly wildly across the screen. It is unsatisfactory but will have to suffice for now. I obviously don’t understand quaternions and need to study up before I can make this thing work the way I intended. But that’s for later, because this was originally supposed to be a side quest to the main objective: the Arduino color composite video out library I’ve released with known problems I should fix.

[Code for this project is publicly available on GitHub]

Notes Of A Three.js Beginner: Euler Angles vs. Quaternions

I was pretty happy with how quickly I was able to get a static 3D visualization on screen with the three.js library. My first project, turning the static display into an interactive color picker, also went smoothly, giving me a great deal of self-confidence for proceeding to the next challenge: adding an animation. And this was where three.js put me in my place, reminding me I’m still only a beginner in both 3D graphics and JavaScript.

Before we get to details on how I fell flat on my face, to be fair the three.js animation system is optimized for controlling animations created using content creation tools such as Blender. In this respect, it is much like Unity 3D. In both of these tools, programmatically generated animations are not the priority. In fact there weren’t any examples for me to follow in the manual. I hunted around online and found DISCOVER three.js, which proclaims itself “The Missing Manual for three.js”. The final chapter (so far) of this online book talks about animations, and it had an ominous note on animating rotations:

As we mentioned back in the chapter on transformations, quaternions are a bit harder to work with than Euler angles, so, to avoid becoming bamboozled, we’ll ignore rotations and focus on position and scale for now.

This is worrisome, because my goal is to animate the 256 colors between two color model layouts: from the current layout of an HSV cylinder to an RGB cube. This requires dealing with rotations, and just as the warning predicted, that’s what kicked my butt.

The first source of confusion is between Euler angles and quaternions when dealing with three.js 3D object properties. Object3D.rotation is an object representing Euler angles, so trying to use QuaternionKeyframeTrack to animate object rotation resulted in a lot of runtime errors because the data types didn’t match. This problem I blame on JavaScript in general and not three.js specifically. In a strongly typed language like C there would be a compile-time error indicating I’ve confused my types. In JavaScript I only see errors at runtime, in this case one of these two:

  1. When the debug console complains “NaN error” it probably means I’ve accidentally used Euler angles when quaternions are expected. Both of those data types have fields called x, y, and z. Quaternions have a fourth numeric field named w, while Euler angles have a string indicating order. Trying to use an Euler angle as a quaternion results in the order string trying to fit in w, which is not a number, hence the NaN error.
  2. When the debug console complains “THREE.Quaternion: .setFromEuler() encountered an unknown order:” it means I’ve done the reverse and accidentally used Quaternion when Euler angles are expected. This one is fortunately a bit more obvious: numeric value w is not a string and does not dictate an order.

Getting this sorted out was annoying, but this headache was nothing compared to my next problem: using QuaternionKeyframeTrack to animate object rotations.

[Code for this project is publicly available on GitHub]

Notes Of A Three.js Beginner: Color Picker with Raycaster

I was pleasantly surprised at how easy it was to use three.js to draw 256 cubes, each representing a different color from the 8-bit RGB332 palette available for use in my composite video out library. Arranged in a cylinder representing the HSV color model, it failed to give me special insight on how to flatten it into a two-dimensional color chart. But even though I didn’t get what I had originally hoped for, I thought it looked quite good. So I decided to get deeper into three.js to make this more useful. Towards the end of the three.js getting started guide is a list of Useful Links pointing to additional resources, and I thought the top link, Three.js Fundamentals, was as good a place to start as any. It gave me enough knowledge to navigate the rest of the three.js reference documentation.

After several hours of working with it, my impression is that three.js is a very powerful but not very beginner-friendly library. I think it’s reasonable for such a library to expect that developers already know some fundamentals of 3D graphics and JavaScript. From there it felt fairly straightforward to start using tools in the library. But, and this is a BIG BUT, there is a steep drop-off if we stray from the expected path. The library is focused on performance, and in exchange there’s less priority on fault tolerance, graceful recovery, or even helpful debugging messages for when things go wrong. There’s not much to prevent us from shooting ourselves in the foot, and we’re on our own to figure out what went wrong.

The first exercise was to turn my pretty HSV cylinder into a color picker, making it an actually useful tool for choosing colors from the RGB332 color palette. I added pointer down and pointer up listeners, and if they both occurred on the same cube, I changed the background color to that cube’s color and displayed the two-digit hexadecimal code representing it. Changing the background allows instant comparison to every other color in the cylinder. This functionality requires the three.js Raycaster class, and the documentation example translated across to my application without much fuss, giving me confidence to tackle the next project: adding the ability to switch between the HSV color cylinder and an RGB color cube, where I promptly fell on my face.

[Code for this project is publicly available on GitHub]

HSV Color Wheel of 256 RGB332 Colors

I have a rectangular spread of all 256 colors of the 8-bit RGB332 color cube. This satisfies the technical requirement to present all the colors possible in my composite video out library, built by bolting the Adafruit GFX library on top of the video signal generation code of rossumur’s ESP_8_BIT project for ESP32. But even though it meets the technical requirements, it is vaguely unsatisfying, because making a slice for each of the four blue channel values necessarily meant separating some similar colors from each other. While Emily went to Photoshop to play with creative arrangements, I went into code.

I thought I’d look into arranging these colors in the HSV color space, which I was first exposed to via the Pixelblaze I used in my Glow Flow project. HSV is good for keeping similar colors together and is typically depicted as a wheel of colors, with the angle around the circle corresponding to the H (hue) axis. However, that still leaves two more dimensions of values: saturation and value. We still have the general problem of three variables but only two dimensions to represent them, but again I hoped the limited set of 256 colors could be put to advantage. I tried working through the layout on paper, then a spreadsheet, but eventually decided I needed to see the HSV color space plotted out as a cylinder in three-dimensional space.

I briefly considered putting something together in Unity3D, since I have a bit of familiarity with it via my Bouncy Bouncy Lights project. But I thought Unity would be too heavyweight and overkill, specifically because I didn’t need a built-in physics engine for this project. Building a Unity 3D project takes a good chunk of time and imposes downtime that breaks my train of thought. Ideally I can try ideas and see them instantly by pressing F5 like a web page.

Which led me to three.js, a JavaScript library for 3D graphics in a browser. The Getting Started guide walked me through creating a scene with a single cube, and I got the fast F5 refresh that I wanted. In addition to rendering, I wanted a way to look around an HSV space. I found OrbitControls in the three.js examples library, letting us manipulate the camera view using a pointer device (mouse, touchpad, etc.), and that was enough for me to get down to business.

I wrote some JavaScript to convert each of the 256 RGB values into their HSV equivalents, and from there to an HSV coordinate in three dimensions. When the color cylinder popped up on screen, I was quite disappointed to see no obvious path to flatten it to two dimensions. But even though it didn’t give me the flash of insight I sought, the layout is still interesting. I see a pattern, but it is not consistent across the whole cylinder. There is something going on, but I couldn’t immediately articulate what it is.

Independent of those curiosities, I decided the cylinder looks cool all on its own, so I’ll keep working with it to make it useful.

[Code for this project is publicly available on GitHub]

Notes After Node.js Introduction

After I ran through the Docker Getting Started tutorial, I went back into the docker container (the one we built as we progressed through the tutorial) and poked around some more. The tutorial template was an Express application, built on Node.js. Since I had parts of this infrastructure in hand, I thought I would just run with it and use Node.js to build a simple web server. The goal is to create a desktop placeholder for an ESP32 acting as a web server, letting me play and experiment quickly without constantly re-flashing my ESP32.

The other motivation was that I wanted to get better at JavaScript. Not necessarily because of the language itself (I’m not a huge fan) but because of the huge community that has sprung up around it, sharing reusable components. I was impressed by what I saw of Node-RED (built on Node.js) and even more impressed when I realized that was only a small subset of the overall Node.js community. More than a few times I’ve researched a topic and found that there was an available JavaScript tool for doing it. (Like building desktop apps.) I’m tired of looking at this treasure trove from the outside, I want this in my toolbox.

Node.js is available with a Windows installer, but given my recent Docker experience, I went instead to their officially published Docker images. Using that to run through the Node.js introduction required making a few on-the-fly adaptations for Node.js in a container, but I did not consider that a hindrance. I consider it great container practice! But this only barely scratches the surface of what I can find within the Node.js community. Reading the list of Node.js Frameworks and Tools I realized not only have I not heard about many of these things, I don’t even understand the words used to describe them! But there was a lot of great information in that introduction. On the infrastructure side, the live demo code was made possible by Glitch.com, another online development environment I’ll mentally file away alongside Cloud9 and StackBlitz.

Even though Google’s V8 JavaScript engine is at the heart of both the Chrome browser and the Node.js server, there are some obviously significant differences between running in a server environment versus running in a browser. But sometimes there are concepts from one world that can be useful in the other, and I was fascinated by people who try to bring these worlds closer together. One example is Browserify, which brings some server-side component management features to browser-side code. And Event Emitter is a project working in the other direction, bringing browser-style events to server-side code.

As far as I can tell, the JavaScript language itself was not explicitly designed with a focus on handling asynchronous operations. However, because it evolved in the web world where network latency is an ever-present factor, the ecosystem has developed in that direction out of necessity. The flexibility of the language permitted an evolution into asynchronous callbacks, but “possible” isn’t necessarily “elegant”, leading to scenarios like callback hell. To make asynchronous callbacks a little more explicit, the JavaScript Promise was introduced to help improve code readability. There’s even a tool to convert old-school callbacks into Promises. But as nice as that was, people saw room for improvement, and they built on top of Promises to create the much easier-to-read async/await pattern for performing asynchronous operations in JavaScript. Anything that makes JavaScript easier to read is a win in my book.

With such an enormous ecosystem, it’s clear I can spend a lot more time learning Node.js. There are plenty of resources online like Node School to do so, but I wanted to maintain some semblance of focus on my little rover project. So I went back to the Docker getting started tutorial and researched how to adapt it to my needs. I started looking at a tool called webpack to distill everything into a few static files I can serve from an ESP32, but then I decided I should be able to keep my project simple enough that I wouldn’t need webpack. And for serving static files, I learned Express would be overkill. There’s a Node.js module node-static available for serving static files, so I’ll start by using that and build up as needed from there. This gives me enough of a starting point on the server side so I can start looking at the client side.

I Started Learning Jamstack Without Realizing It

My recent forays into learning about static-site generators, and the earlier foray into Angular framework for single-page applications, had a clearly observable influence on my web search results. Especially visible are changes in the “relevant to your interests” sidebars. “Jamstack” specifically started popping up more and more frequently as a suggestion.

Web frameworks have been evolving very rapidly. This is both a blessing, when bug fixes and new features are added at a breakneck pace, and a curse, because knowledge quickly becomes outdated. There are so many web stacks I can’t even begin to keep track of what’s what. With Hugo and Angular on my “devise a project for practice” list, I had no interest in adding yet another concept to my to-do list.

But with the increasing frequency of Jamstack being pushed on my search results list, it was a matter of time before an unintentional click took me to Jamstack.org. I read the title claim in the time it took for me to move my mouse cursor towards the “Back” button on my browser.

The modern way to build [websites & apps] that delivers better performance

Yes, of course, they would all say that. No framework would advertise that it is the old way, or that it delivers worse performance. So none of that claim is the least bit interesting, but before I clicked “Back” I noticed something else: the list of logos scrolling by included Angular, Hugo, and Netlify. All things that I have indeed recently looked at. What’s going on?

So instead of clicking “Back”, I continued reading and learned proponents of Jamstack are not promoting a specific software tool like I had ignorantly assumed. They are actually proponents of an approach to building web applications. JAM stands for (J)avaScript, web (A)PIs, and (M)arkup. Tools like Hugo and Angular (and others on that scrolling list) are all under that umbrella. An application developer might have to choose between Angular and its peers like React and Vue, but no matter the decision, the result is still JAM.

Thanks to my click mistake, I now know I’ve started my journey down the path of Jamstack philosophy without even realizing it. Now I have another keyword I can use in my future queries.

Notes on Angular Architecture Guide

After completing two beginner tutorials, I returned for another pass through the Angular Architecture Guide. These words now make a lot more sense when backed by the hands-on experience of the two tutorials. There are still a few open questions, though; the long-standing item at the top of my list is Angular Modules. I think each of the tutorials is contained in a single module? That would explain why I haven’t seen them in action yet. It doesn’t help that JavaScript also has a “Module” concept, which makes things confusing to a beginner like myself. And as if that’s not confusing enough, there are also “Libraries”, which are different from modules… how? I expect these concepts won’t become concrete until I tackle larger projects that incorporate code from multiple different sources.

In contrast, Angular Components have become very concrete and these pages even use excerpts from the “Tour of Heroes” tutorial to illustrate its concepts. I feel I have a basic grasp of components now, but I’m also aware I still need to read up on a few things:

  • Component metadata so far has been a copy-and-paste affair with limited explanation. It’ll be a challenge to strike out on my own and get the metadata correct, not just because I don’t know what I should do, but also because I haven’t seen what the error messages are like.
  • Data binding with the chart of four types of binding looks handy, I hope it’s repeated and expanded in more detail elsewhere in documentation so I can get some questions answered. One example: I read “Angular processes all data bindings once for each JavaScript event cycle” but I have to ask… what’s a JavaScript event cycle? The answer lies elsewhere.
  • Pipes feel like something super useful and powerful, looking forward to playing more with the concept and maybe even create a few of my own.
  • Directives seem to be a generic concept whose umbrella is fuzzy, but I’ve been using task-specific directives in the tutorial. *ngFor and *ngIf are structural directives. Attribute directive was used in two-way data binding. Both types have their own specific guide pages.

Reading the broad definition of services I started thinking “Feels like my UWP Logger, maybe I can implement a logger service as an exercise.” Only to find out they’re way ahead of me – the example is a logger (to console) service! I had hoped this section would help clear up the concept of service providers, but it is still fuzzy in my head. I understand the default is root, which is a single instance for everything, and this is what was used in the tutorials. I will need to go find examples of providers at other granularities, which apparently can be as specific as per component, where each instance of a component gets its own instance of that service. When would that be useful? I have yet to learn the answer. But at least I’ve written these questions down so I’m less likely to forget.

Fixing Warnings of TypeScript Strict Mode Violation After “Tour of Heroes” Tutorial

Once I reached the end of the Angular “Tour of Heroes” tutorial, I went back to address something from the beginning. When I created my Angular project, I added the --strict flag for the optional “Strict Mode”. My motivation was the belief that, since I’m starting from scratch, I might as well learn under enforcement of best practices. I soon realized this was also adding an extra burden on myself while following the tutorial, because the example code does not always follow strict mode. This means when something goes wrong, I might have copied the tutorial code incorrectly, but it’s also possible I copied correctly and the example just doesn’t conform to strict mode.

As a beginner, I really didn’t need this extra work, since I had enough problems with basics on my own. But I decided to forge onward and figure it out as I went. During the tutorial, I fixed strict mode errors in two categories:

  1. Something so trivial that I could fix quickly and move on.
  2. Something so serious that compilation failed and I had to fix it before I could move on.

In between those two extremes were errors that were not trivial, but only resulted in compilation warnings that I could ignore and address later. They are visible in this GitHub commit.

I first had to understand what the code was doing, which was why I imported MessageService to gain access to logging. The compiler warnings were both about uninitialized variables, but the correct solution was different for the two cases.

For the hero input field, the tutorial code logic treats undefined as a valid case. It is in fact dependent on undefined hero to know when not to display the detail panel. Once I understood this behavior, I felt declaring the variable type as a Hero OR undefined was the correct path forward.

For the Observable collection of Heroes, the correct answer is different. The code never counts on undefined being a valid value, so I did not need to do the if/else checks to handle it as an “OR undefined” value. What I needed instead was an initial value of a degenerate Observable that does nothing until we can get a real Observable. Combing through RxJS documentation I saw this was recognized as a need and was actually done in a few different ways that have since been deprecated. The current recommendation is to use the constant defined as EMPTY, which is what I used to resolve the strict mode errors.

Notes on “Tour of Heroes” Tutorial: Other Web Server Interactions

The Angular “Tour of Heroes” tutorial section 6, “Get Data From Server”, covered the standard interactions: Create, Read, Update, and Delete, commonly referred to as CRUD. But it didn’t stop there and covered a few other server operations. I was intrigued that the next section was titled “Search by name”, because I was curious how a client-side single page application could search data on a static web server.

It turns out the application does not actually perform the search; the “search by name” example is built around sending a GET URL with a “name=” query parameter and processing the results. So the example here isn’t actually specific to searching, it’s just a convenient example to demonstrate a general server query/response outside of the simplified CRUD model. It can be argued that this fits under the umbrella of the R of CRUD, but this is as far as I’ll pick that nit.

Lucky for us, the search query parameter appears to be part of the feature set of the in-memory web API we’re using to stand in for a web server. Like the rest of the tutorial CRUD operations, none of this will actually work on a static web server. But that’s fine, we get to see the magic of Observable used in a few different ways. Like a new convention of ending an observable name with the $ character and asynchronously piping into a *ngFor directive.

For me, the most interesting new concept introduced in this section is the rate-limiting functionality of the debounceTime() filter from RxJS. Aside from being a piece of commonly needed (and thus commonly re-implemented) functionality, it shows the power of using RxJS for asynchronous operations, where we can stick these kinds of filters in the pipeline to accomplish goals. I don’t fully understand how it works or how I might reuse it in other contexts. I think I have to learn more about Subject<T> first? But anyway, what we’ve seen here is pretty cool and worth following up with more reading later.

Notes on “Tour of Heroes” Tutorial: C and D of CRUD

This Angular tutorial on server interactions started with (R)ead and (U)pdate operations, then we moved to the C(reate) of CRUD by adding a hero to the Tour of Heroes. I found it interesting that content creation was not the first thing to be covered, even though it is the obvious first step in the life cycle of a piece of information, which I suppose is why it is the first letter of CRUD. The authors of this tutorial gave us the luxury of a stub set of data so we can explore other concepts before we worry about data creation.

Even with that background, creation was still confusing to me. For example, our input element has a hash prefix: #heroName. I assume the hash prefix tells some piece of Angular infrastructure to do something… but what? That was completely unexplained, and I have no idea how to use it myself later. Even worse, they didn’t even give me a keyword to search for, so I’ll start with the input element documentation and hunt from there.

Another piece of auto-magic is the generation of the hero ID. I felt that was a mistake, because the identifier will be the first piece of information we’ll need to understand in any debugging task. The tutorial authors may not think these details are important, but I do, so I’ll have to chase down details later.

And finally we have D(elete) of CRUD. The mind-boggling part for me was learning that RxJS only cares about delivering information to subscribers. If there are no subscribers, RxJS will decide it is not important and won’t do the thing. This had to be called out because in this context it meant we must hang a subscriber on a delete operation, even if we don’t do anything with the response, or else RxJS will not perform the operation. On the one hand, emphasizing that an Observable does nothing unless there are subscribers is a very valuable point to bring up. On the other hand, this feels like an inelegant hack.

I can accept this as an oddity of a system that we just have to learn to live with in the world of Angular development. Even though I can see it biting me in the future if I ever forget, I can see how it is a worthwhile tradeoff to get everything else RxJS offers to make server interaction easier.

Notes on “Tour of Heroes” Tutorial: R and U of CRUD

First up in the server interaction portion of this Angular tutorial is the R(ead) of CRUD, and we see how to retrieve data using RxJS as an intermediary for making HTTP GET calls. There were unfortunately a few pieces of unexplained magic, such as the presence of a method getHeroNo404() that was explicitly called out in a sidebar yet whose purpose is, strangely, not explained in that sidebar. We can infer it has something to do with error handling, since HTTP 404 is a common error and the tutorial started covering error handling. That is a very important part of writing any web app, since HTTP fails all the time for reasons out of our control. Still, it would have been nice to know more about how getHeroNo404() fits into the picture.

Next up is the U(pdate) of CRUD, and while I’m happy to see some compile-time verification, there’s still some sadness that a lot is still left to runtime. When I made a mistake in binding the button (click)="save()" without defining a save() function in the TypeScript file, it did not generate a compile-time error even though at first glance all the pieces to detect this at compile time are present. In reality, it’s not until runtime and a click on the button that we get the error. And again we need to learn to interpret the error: while the missing save() is the real problem, I never declared anything named ctx_r3, so I had to work backwards and guess what the Angular framework had created on my behalf.

ERROR TypeError: ctx_r3.save is not a function
at HeroDetailComponent_div_0_Template_button_click_12_listener (hero-detail.component.html:10)
at executeListenerWithErrorHandling (core.js:14317)
at wrapListenerIn_markDirtyAndPreventDefault (core.js:14352) at HTMLButtonElement.<anonymous> (platform-browser.js:582)
at ZoneDelegate.invokeTask (zone-evergreen.js:399)
at Object.onInvokeTask (core.js:27428)
at ZoneDelegate.invokeTask (zone-evergreen.js:398)
at Zone.runTask (zone-evergreen.js:167)
at ZoneTask.invokeTask [as invoke] (zone-evergreen.js:480)
at invokeTask (zone-evergreen.js:1621) defaultErrorLogger @ core.js:4197 handleError @ core.js:4245 handleError @ core.js:8627 executeListenerWithErrorHandling @ core.js:14320 wrapListenerIn_markDirtyAndPreventDefault @ core.js:14352 (anonymous) @ platform-browser.js:582 invokeTask @ zone-evergreen.js:399 onInvokeTask @ core.js:27428 invokeTask @ zone-evergreen.js:398 runTask @ zone-evergreen.js:167 invokeTask @ zone-evergreen.js:480 invokeTask @ zone-evergreen.js:1621 globalZoneAwareCallback @ zone-evergreen.js:1647

Thankfully this was towards the end of the tutorial and chances are good anyone following along would have seen enough to know how to diagnose these problems. Because it certainly doesn’t get any easier as the tutorial continues to the remainder of CRUD.