Notes on Codecademy “Learn Intermediate TypeScript” (And npm “--”)

A TypeScript project incurs some overhead versus a plain JavaScript project. This is an unavoidable consequence of TypeScript compiling to JavaScript. Investing in this overhead can pay off for larger projects, but how large is large enough to benefit? I expect I won’t have a good grasp of that payoff point until I get more experience, but I think I have a better idea after going through Codecademy’s “Learn Intermediate TypeScript” course.

Codecademy is very proud of their browser-based integrated learning environment where students can learn and put what they’ve learned into practice immediately. But this particular course is focused on how we can put TypeScript to use in our own projects outside of Codecademy’s environment. The hands-on portion of the course consists of downloading ZIP files of partially complete TypeScript projects to our own computers, then following instructions to see what happens.

Under this system, we get experience running the TypeScript compiler tsc at the command line directly, and inside the context of a project managed via npm and its associated package.json file. This is also where we can keep our TypeScript configuration in a tsconfig.json file. This feels like a good way to use TypeScript. If we’re already incurring the overhead of setting up such a project, it doesn’t take that much additional effort to fold TypeScript into the works.

Amusingly, the most important lesson I learned from this course had nothing to do with TypeScript but was about setting up JavaScript projects with npm. Early on, we were instructed to run “npm run tsc -- --watch” and “npm start -- --watch” but without an explanation of what those two commands meant. I took a detour to try to figure out exactly what went on with those two lines.

The first part is easy: we’re running something via the npm command line tool. That something is listed inside the “scripts” section of package.json. It can be a direct mapping: “tsc” mapped to the TypeScript compiler executable. Or it can be something more complex, as “start” mapped to “nodemon build/index.js”. I’m still uncertain about the “run”, which seems to be optional and/or the default action, an inference from the fact it was present in one command but not the other.

The next item was challenging: “--”. What does the double-dash mean? Unfortunately, I found it impossible to perform a web search on “--” and get relevant results. “--” isn’t a word we can search for, and it is used in many different contexts (C programmers know it as a value decrement), so I wasn’t sure which context to search within. It wasn’t until another page later in this course that I learned “--” tells npm to stop interpreting command-line parameters and pass along everything afterwards to the script. So for example, “npm run tsc -- --init” means: use “npm” to launch the command “tsc” described in package.json. The “--” means there are no more parameters for npm, and “--init” should be given to “tsc”. Result: use the current Node.js environment to run the command “tsc --init”.
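
To make that concrete, here is a hypothetical “scripts” section along the lines of what the course projects used, written as a JavaScript object so I can annotate it with comments:

// Hypothetical package.json "scripts" section, shown as a JS object for the sake of comments:
const scripts = {
  "tsc": "tsc",                       // "npm run tsc" launches the TypeScript compiler
  "start": "nodemon build/index.js",  // "npm start" is shorthand for "npm run start"
};

// "npm run tsc -- --watch"  ends up running:  tsc --watch
// "npm start -- --watch"    ends up running:  nodemon build/index.js --watch
// Everything after the bare "--" is passed through to the script instead of being consumed by npm.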

This all made sense once I figured it out, but I certainly couldn’t have guessed it from just looking at “--”. There will be more unknowns as I dive back into Angular framework documentation with “Understanding Angular”.

Web Dev Alphabet Soup: CORS and CSRF

After a helpful comment pointed me to documentation on the no-longer-mysterious AS7341 SMUX (sensor multiplexor) I went to learn more about another mystery I stumbled across as a beginner web developer: CORS (cross-origin resource sharing.) Why does CORS policy exist? After a bit of poking around, I believe the answer is to mitigate a type of attack under the umbrella of CSRF (cross-site request forgery.)

When developing my AS7341 web app, I had the AS7341 accessible via an HTTP GET on my ESP32 and thought I could develop the HTML interface on my desktop machine. But when my desktop-served JavaScript tried to query my ESP32, I was blocked by browser CORS policy. By default, JavaScript served from one server (my desktop) is not allowed to query resources on another (my ESP32.)

Reading various resources online, I learned I could set my ESP32’s HTTP response header “Access-Control-Allow-Origin” to a wildcard “*” to opt out of CORS protection. But that’s merely a “make the error go away” kind of recommendation. I know CORS is security related, but I don’t understand the motivation. What security problem does CORS prevent? Without knowing the motivation, I don’t know what I am opening up by setting “Access-Control-Allow-Origin : *” In my web app, I started out cautiously by only setting that header when I’m developing the HTML UI, serving from my desktop to query my ESP32. In “production”, my ESP32 will serve the HTML and would not need “Access-Control-Allow-Origin : *” in the header to query itself, so that header is absent.
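
For concreteness, the cross-origin call in question looks roughly like this during development (the IP address and endpoint path are placeholders, not my actual sketch):

// The HTML and this JavaScript are served from my desktop, so querying the ESP32
// directly makes this a cross-origin request.
fetch('http://192.168.1.123/as7341')
  .then((response) => response.json())
  .then((data) => console.log(data))
  .catch((error) => console.error(error));
// Unless the ESP32's response includes "Access-Control-Allow-Origin: *" (or the desktop's
// specific origin), the browser blocks the response and this fails with a CORS error.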

Is that the right thing to do, or is that being overly cautious? I set out to learn more. Curiously, reading MDN and other resources gives me information about HOW CORS works, but not a lot about WHY CORS exists. I guess CORS documentation assumes the reader already knows! Based on that fact, I know I am looking for a relatively common website security issue that is now considered basic knowledge by network professionals.

Another data point is the fact that CORS is only applicable to HTTP queries from JavaScript running in the browser. From a command line on my desktop, I can use the “curl” tool to query my ESP32 and CORS does nothing to block that. My browser on my desktop can query the endpoint directly and that is not blocked by CORS policy, either.

Things didn’t make much sense until I found a key piece of information: an HTTP request sent from a browser’s JavaScript runtime doesn’t just carry the URL and its parameters; the browser also attaches all cookies set by that host. These cookies may contain user authentication (the “Keep me logged in” checkbox) and it makes sense such capability shouldn’t be available to just any piece of JavaScript served by random hosts. Knowing this fact and knowing the kind of abuse such code can cause eventually led me to a category of security attacks known as CSRF (cross-site request forgery.)

Once I understood CORS is here to mitigate a subset of CSRF attacks, I could look at my ESP32 AS7341 access endpoint and decide CSRF is not a problem here. Setting “Access-Control-Allow-Origin : *” does not open me up to security nastiness, so my ESP32 sketch sets that header all the time now, not just during development. This is a handy bit of knowledge, but it merely scratched the surface of web security. Another item I found to be big and intimidating is OAuth.


Code for this project is publicly available on GitHub

Chart.js For Visualizing AS7341 Data

There’s no shortage of web frameworks that help us put pretty things on screen. I’ve been eyeing A-Frame, Three.js, and D3.js for use in the right project but all would be overkill for my AS7341 interface: I just need to plot eight data points and there’s no need for interactive drill-down. Would the web development ecosystem have something that fits the bill? The answer is definitely “Yes” because this is the same ecosystem that gave us “leftpad” and the debacle it caused. Yeah, I could spend a few hours and write my own, but I know I don’t have to.

I went on NPM to search for charting modules and as soon as I typed “chart” I got the suggestion to look at Chart.js. A brief read of the documentation told me this fits my needs: simple, lightweight, and minimal interactivity capabilities that I plan to turn off anyway. With no need for the fancy graphics of WebGL or the DOM interactivity of SVG, Chart.js draws onscreen using HTML Canvas. Canvas was the API I used for my Micro Sawppy browser interface, so I have a rough idea of what Canvas could and could not do.

With my limited needs, I don’t expect to use most of Chart.js’ capabilities. But I’m happy to incorporate those that are convenient and require minimal/no effort on my part. One good bit of visual polish is its ability to animate updates to chart data, smoothly growing or shrinking bars in my bar chart based on updated AS7341 sensor data. Another bit of convenience was the ability to specify the color used for each bar. I could draw the bar for one AS7341 sensor with the color that corresponds to its wavelength, which helps give me an intuitive grasp of the spectrum seen by the AS7341. A quick web search found Academo’s interactive wavelength to color converter and I used that to determine the colors of bars F1-F8.

What about the other sensors? I’m completely ignoring the flicker detector right now, and I decided not to draw the clear channel. From my experiments, the clear channel typically has the highest value (which makes sense as it’s the sensor without any color filters blocking input) so I used its value as the Y-axis maximum. I also plotted the near infrared channel, but since it’s invisible I plotted it using an arbitrarily chosen dark red color. This seemed to work when I first wrote the code late at night under artificial light. The next morning, I played under natural sunlight and that was an entirely different beast.
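
For reference, a minimal sketch of the kind of bar chart configuration described above. The canvas element id, the hex colors, and the placeholder values are illustrative assumptions rather than my exact code:

// Approximate colors for AS7341 channels F1-F8; the real values came from Academo's converter.
const spectralColors = [
  '#7600ed', '#0028ff', '#00d5ff', '#1fff00',
  '#b3ff00', '#ffdf00', '#ff4f00', '#ff0000',
];

let clearChannelValue = 65535; // placeholder until the first real sensor reading arrives

const chart = new Chart(document.getElementById('spectrum'), {
  type: 'bar',
  data: {
    labels: ['F1', 'F2', 'F3', 'F4', 'F5', 'F6', 'F7', 'F8'],
    datasets: [{
      data: new Array(8).fill(0),      // replaced by live AS7341 readings
      backgroundColor: spectralColors, // one color per bar, roughly matching wavelength
    }],
  },
  options: {
    scales: { y: { min: 0, max: clearChannelValue } }, // clear channel as Y-axis maximum
    plugins: { legend: { display: false } },
  },
});

// On each new reading, update the data and let Chart.js animate the transition:
// chart.data.datasets[0].data = latestReadings;
// chart.options.scales.y.max = latestClearValue;
// chart.update();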


Code for this project is publicly available on GitHub

Overkill Options: A-Frame, Three.js and D3.js

After getting input controls sorted out on my AS7341 interface project, it’s time for the fun part: visualizing the output! Over the past few years of reading about web technologies online, I’ve collected a list of things I wanted to play with. My AS7341 project is not the right fit for these tools, so this list awaits a project with the right fit.

At this point I’ve taken most of Codecademy’s current roster of courses under their HTML/CSS umbrella. One of the exceptions is “Learn A-Frame (VR)“. I’m intrigued by the possibilities of VR but putting that in a browser definitely feels like something ahead of its time. “VR in a browser” has been ahead of its time since 1997’s VRML, and people have kept working to make it happen ever since. A brief look at A-Frame documentation made my head spin: I need to get more proficient with web technologies and have a suitable project before I dive in.

If I have a project idea that doesn’t involve full-blown VR immersion (AS7341 project does not) but could use 3D graphics capability (still does not) I can still access 3D graphics hardware from the browser via WebGL, which by now is widely supported across browsers. In the likely case working directly with the WebGL API is too nuts-and-bolts for my taste, there are multiple frameworks that help take care of low-level details. One of them is Three.js, which has been the foundation for a lot of cool-looking work. In fact, A-Frame is built on top of Three.js. I dipped my toes into Three.js when I used it to build my RGB332 color picker.

Dropping a dimension to the land of 2D, several projects I’ve admired were built using D3.js. This framework for building “Data-Driven Documents” seems like a great way to interactively explore and drill into sets of data. On a similar front, I’ve also learned of Tableau, which is commercial software covering many scenarios for data visualization and exploration. I find D3.js more interesting for two reasons. First, I like the idea of building a custom-tailored solution. And second, Tableau was acquired by Salesforce in 2019. Historically speaking, acquisitions don’t end well for hobbyists on a limited budget.

All of the above frameworks are overkill for what I need right now for an AS7341 project: there are only a maximum of 11 different sensor channels. (Spectral F1-F8 + Clear + Near IR + Flicker.) And I’m focusing on just the color spectra F1-F8. A simple bar chart of eight bars would suffice here, so I went looking for something simpler and found Chart.js.

Learning Plan for Angular Round 2

Reviewing the TypeScript Handbook was very educational, even if I didn’t understand all of it. It was enough to make me feel confident I have what I need to get more out of revisiting the Angular web framework. When I tried to learn Angular the first time, I only had a basic grasp of HTML, CSS, and JavaScript. With such a weak foundation, I didn’t really know enough to put Angular to work. I just ran through the tutorial and didn’t do much with it. Over the past few weeks, I’ve been patching up holes in my knowledge of web development, and I hope to have better results when I visit Angular again. It’s no guarantee of success, and there’s a good chance I’d only learn enough to realize I need to revisit other topics like CSS and JavaScript again. But even in that case I’d learn more than I know today, and that is itself a win.

So given what I’ve learned recently, here is how I intend to tackle my second round of learning Angular:

  1. Read through Angular introduction again.
  2. Just skim instructions for the StackBlitz-based shopping cart demo without repeating hands-on activity. I like the idea of StackBlitz but its web-based development environment was different enough from a local development environment that I’ve decided I prefer to skip it in favor of practicing local development.
  3. Hands-on follow through the “Tour of Heroes” tutorial for the second time.

After finishing “Tour of Heroes” again, put my recent learning to work enhancing it:

  1. The “Tour of Heroes” tutorial was focused on Angular application framework mechanics, so its HTML and CSS are very plain. Put my recent HTML and CSS learning to work and spiff up that site, including a mobile-friendly layout via media queries.
  2. The “Tour of Heroes” tutorial used a small class as a local proxy substituting for a server-side database backend, storing its data in memory using JavaScript collection classes. Remove that proxy and migrate it to run on a Node.js server.
  3. Upgrade backend interface code to a more robust web API implemented using Express.
  4. Upgrade backend store to a MongoDB instance instead of in-memory JavaScript objects.

If I get this far, I would have practiced the entire MEAN stack. However, the MongoDB side would be quite lightweight given the limited demands of “Tour of Heroes”. Fortunately, in the MongoDB University course, we were given several practice databases of nontrivial size. I could build an Angular web app on top of one of those databases.

And if I’m successful with that, I would then have enough skill to tackle a MEAN stack project from scratch.

That’s quite a plan with many steps! I’ll likely deviate from this plan as I hit various roadblocks and work to resolve them, and it’ll take at least several weeks. But it feels exciting to have a longer-term plan. First, though, a look at the Angular framework to see how it has changed since my first visit.

Notes on TypeScript Handbook

I liked the idea of TypeScript, a tool that makes JavaScript more predictable and manageable. An introduction from Codecademy’s Learn TypeScript course was quite instructive, but I knew that wasn’t the whole picture. To learn more, I decided to read through the TypeScript Handbook. It is a document meant to be shorter and easier to read than the formal TypeScript language definition, at a tradeoff of less precision and skipping over details on edge cases.

The majority of value in TypeScript is in keeping track of code intent by assigning data types to variables. At first glance I thought it would be helpful but not a huge deal, but I was wrong. TypeScript can infer a lot from these type assignments. My favorite example was named “Exhaustiveness checking“, where TypeScript can infer when a switch() statement is missing a case. This is a class of problem I frequently try to make visible in my own code. Unfortunately, all I could do before was log an error and/or throw an exception upon falling into the default case, which is a runtime solution where my code would come to a screeching halt long after I wrote it. But with TypeScript exhaustiveness checking combined with the TypeScript ‘never‘ type, I can turn such mistakes into the compile-time error “is not assignable to type 'never'“. This is huge and this capability alone is enough to turn me into a TypeScript fan.
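
A minimal sketch of the pattern, along the lines of the handbook’s example (the Shape type here is just an illustration):

type Shape =
  | { kind: 'circle'; radius: number }
  | { kind: 'square'; side: number };

function area(shape: Shape): number {
  switch (shape.kind) {
    case 'circle':
      return Math.PI * shape.radius ** 2;
    case 'square':
      return shape.side ** 2;
    default: {
      // If a new kind is ever added to Shape but not handled above, this
      // assignment fails to compile: "... is not assignable to type 'never'".
      const exhaustiveCheck: never = shape;
      return exhaustiveCheck;
    }
  }
}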

However, TypeScript can’t solve all JavaScript ills, because TypeScript always maintains full runtime compatibility with JavaScript. This means TypeScript labels like “readonly” only communicate intent for compile-time checking and cannot guarantee the value will remain immutable at runtime. It couldn’t solve the fact JavaScript evolution ended up with a very convoluted model of what “this” means. Sometimes TypeScript tries to solve a problem, only for JavaScript to end up with a different solution to the same problem. Such as TypeScript labeling class members as private (an intent-only compile-time check) versus JavaScript’s “#” private marker (actual runtime enforcement.) And if JavaScript changes how inherited fields are initialized, TypeScript must follow suit while doing its best to provide a mitigation for existing code.
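
A small sketch of that contrast, as I understand it (the class and member names are made up for illustration):

class Counter {
  readonly label: string; // compile-time intent only; erased from the emitted JavaScript
  private count = 0;      // TypeScript-only "private": checked at compile time, not enforced at runtime
  #secret = 42;           // JavaScript private field: actually enforced at runtime

  constructor(label: string) {
    this.label = label;
  }

  increment(): number {
    this.count += 1;
    return this.count + this.#secret; // both are accessible inside the class
  }
}

const counter = new Counter('demo');
console.log(counter.increment()); // 43
// counter.label = 'other'; // compile-time error, but nothing enforces it in the emitted JavaScript
// counter.count;           // compile-time error, yet (counter as any).count still reads the value
// counter.#secret;         // a "#" field is genuinely inaccessible from outside the class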

Another area where TypeScript had no choice but to follow suit with JavaScript is adopting all the different ways a function can be declared. My eyes started watering as I read through the function section of the TypeScript Handbook; I had never even seen most of these forms! Designing TypeScript so it could annotate type information on all of these function declaration styles must have been a huge headache. But this was just a taste: there’s a whole “Type Manipulation” section of the handbook describing how to carry type information through all kinds of different convolutions. The author seems proud of this capability, claiming “we can express complex operations and values in a succinct, maintainable way” but I got lost. This is information I could not absorb on the first pass and would have to return to frequently as a reference.

Since the “easy to read” handbook already lost me on several points, I don’t think I’m quite ready to jump into the full language specification just yet. But there was one more item I wanted to look up: class decorators are used in Angular to create every component, and I didn’t quite understand how that worked. The TypeScript reference page for decorators calls them an experimental feature of TypeScript that is also a proposal to JavaScript, which means things might change in the future if JavaScript officially adopts decorators in a way incompatible with what TypeScript does today. Oh well, if that happens, I’m confident the TypeScript people are well practiced at dealing with it. In the meantime, I have to devise a plan for my own TypeScript practice.

Notes on Codecademy “Learn TypeScript”

I want to go back and take another look at the Angular web application development framework, because I’ve learned a lot about web development since my earlier attempt. But before I do, I want to review TypeScript, which is what Angular uses to tame some of JavaScript’s wild nature. Conveniently, Codecademy has a “Learn TypeScript” course for me to do exactly that.

As its name implies, TypeScript brings some of the benefits of data type enforcement to JavaScript’s default type system, where anything can go anywhere. The downside of JavaScript’s flexibility is that it allows many bugs to hide until they emerge at very inconvenient times. While it’s always possible to move to an entirely different language if type safety is desirable, the beauty of TypeScript is that it maintains full compatibility with JavaScript so we don’t have to leave that ecosystem (JS runtimes, libraries, tools, etc.) to gain the benefits of compile-time type checks. TypeScript accomplishes this magic by performing its checks on TypeScript source files. Once everything has been verified to be satisfactory, that source code is translated to standard JavaScript for execution.

But before that compiler runs, TypeScript syntax gives us a lot of tools to organize our code to catch bugs at “compile” time. (More accurately, TypeScript-to-JavaScript transpiling time.) There are static code analysis features to find problems, starting with simple ones: a mistyped variable name gets caught because it has no declared type. We also get (illusions of) features like enum so we can constrain values within a defined valid set.

However, in order to stay compatible with JavaScript, TypeScript couldn’t offer all the rigor of a strictly typed language. There are various middle-ground features sprinkled throughout so we don’t have to take all-or-nothing. The type guard of “[property] in [object]” aligns with “duck typing” patterns: it checks for the member we care about instead of the exact object type or interface. It was also amusing to see support for generics, a feature I associate with strictly typed languages, now available in TypeScript as well, compiling down to “do whatever you want” JavaScript. That leads to things like Index Signatures with no real counterpart in strictly typed languages, and I credit their existence to JavaScript for being weird. I wouldn’t blame everything on JavaScript, though. I sometimes stumble across TypeScript union types, which let us support a limited set of types. In one practice exercise, we have an array that can be an array of one of two types. I typed TypeA[] | TypeB[] which made sense to my brain, but that was not acceptable. I had to use (TypeA | TypeB)[] and I still don’t understand why.
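
For my own future reference, a minimal sketch contrasting the two annotations (the type names are made up, and this is my current reading rather than a definitive explanation):

type TypeA = { a: number };
type TypeB = { b: number };

// TypeA[] | TypeB[] describes an array that is entirely TypeA or entirely TypeB:
const homogeneous: TypeA[] | TypeB[] = [{ a: 1 }, { a: 2 }];

// (TypeA | TypeB)[] describes an array whose individual elements may be either,
// so mixing the two types in one array is allowed:
const mixed: (TypeA | TypeB)[] = [{ a: 1 }, { b: 2 }];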

This course also exposed me to certain JavaScript things that were not specific to TypeScript. There was a brief mention of documentation comments: /** */ blocks that include markup like @param and @returns. I vaguely recall seeing this before but I no longer remember what it is called, and the course didn’t give me a pointer. This course was also the first time I saw JavaScript rest parameters and their counterpart, spread syntax. And finally, this is where I learned of number.toFixed() which I will definitely use in the future.
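
A quick sketch of those last three, just so I remember how they fit together (the numbers are arbitrary):

// Rest parameters collect any number of arguments into an array:
function sum(...numbers: number[]): number {
  return numbers.reduce((total, n) => total + n, 0);
}

// Spread syntax expands an array back out into individual arguments:
const readings = [1.1, 2.2];
const total = sum(...readings); // 3.3000000000000003, thanks to floating point
console.log(total.toFixed(2));  // "3.30" -- rounded to a fixed number of decimals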

Like all Codecademy courses, this one gives us enough context to navigate the product documentation on our own. In this case, the official reference is The TypeScript Handbook. Explanations in the handbook make a lot more sense to me after this Codecademy course than they did before, so I’d say “Learn TypeScript” was worth my time investment.

Replace node-static with serve-static for ESP32 Sawppy Development

One of the optional middleware modules maintained by the Expressjs team is express.static, which can handle serving static assets like HTML, CSS, and images. It was used in code examples for Codecademy’s Learn Express course, and I made a mental note to investigate further after class. I thought it might help me with a problem I already had on hand, and it did!

When I started writing code for a micro Sawppy rover running on an ESP32, I wanted to be able to iterate on client-side code without having to reflash the ESP32. So as an educational test run of Node.js, I wrote a JavaScript counterpart to the code I wrote (in C/C++) for running on the ESP32. While they are two different codebases, I intended for the HTTP interface to be identical and indistinguishable by the HTML/CSS/JavaScript client code I wrote. Most of this server-side work was focused around websockets, but I also needed to serve some static files. I looked on nodejs.org and found “How to serve static files” in their knowledge base. That page gave an example using the node-static module, which I copied for my project.

Things were fine for a while, but then I started getting messages from GitHub Dependabot nagging me to fix a critical security flaw in my repository due to its use of a library called minimist. It was an indirect dependency I picked up by using node-static, so I figured it would be fixed after I picked up an update to node-static. But that update never came. As of this writing, the node-static package on NPM hadn’t been updated for four years. I see updates made on the GitHub repository, but for whatever reason NPM hasn’t picked that up and thus its registry remains outdated.

The good news is that my code isn’t deployed on an internet-facing server. This Node.js server is only for local development of my ESP32 Sawppy client-side browser code, which vastly minimizes the window of vulnerability. But still, I don’t like running known vulnerable code, even if it is only accessible from my own computer and only while I’m working on ESP32 Sawppy code. I want to get this fixed somehow.

After I got nginx set up as a local web server, I thought maybe I could switch to using nginx to serve these static files too. But there’s a hitch: a websocket connection starts as an HTTP request for an upgrade to websocket. So the HTTP server must interoperate with the websocket server for a smooth handover. It’s possible to set this up with nginx, but the instructions to do so are above my current nginx skill level. To keep this simple I need to stay within Node.js.

Which brought me back to Express and its express.static file server. I thought maybe I could fire up an Express app, use just the express.static middleware, and almost nothing else of Express. It’s overkill but it’s not stupid if it works. I just had to figure out how it would hand over to my websocket code. Reading Express documentation for express.static, I saw it was built on top of an NPM module called serve-static, and was delighted to learn it can be used independent of Express! Their README includes an example, “Serve files with vanilla node.js http server”, and this was exactly what I needed. By using the same Node.js http module, my websocket upgrade handover code works in exactly the same way. In the end, switching from node-static to serve-static was nearly a direct replacement requiring only minimal code edits. And after removing node-static from my package.json, GitHub Dependabot was happy and closed out my security issue. I will be free from nagging messages, at least until the next security concern. That might be serious if I had deployed to be internet accessible, but the odds of that just dropped.
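
A rough sketch of the resulting pattern, assuming the ws package for the websocket side and a ./public directory of client files (the details of my actual rover server differ):

// Serve static files with the plain Node.js http module, then hand websocket
// upgrade requests over to the websocket library on the same server.
const http = require('http');
const serveStatic = require('serve-static');
const finalhandler = require('finalhandler');
const { WebSocketServer } = require('ws');

const serve = serveStatic('public');

const server = http.createServer((req, res) => {
  serve(req, res, finalhandler(req, res));
});

const wss = new WebSocketServer({ noServer: true });
server.on('upgrade', (req, socket, head) => {
  wss.handleUpgrade(req, socket, head, (ws) => {
    wss.emit('connection', ws, req);
  });
});

server.listen(8080);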

Notes on Express “Getting Started” Guide

During Codecademy’s “Learn Express” course, we saw a few middleware modules that we can optionally use in our projects as needed. Examples used in the course are morgan and body-parser, and one of the quizzes asked us to look up vhost. Course material even started using serve-static before we learned about middleware modules at all. These four middleware modules were among those popular enough to be adopted by the Expressjs team, who now maintain them.

Since that meant I already had a browser tab open to the Express project site, I decided to poke around. Specifically, I wanted to see how their own Getting Started guide compared to the Codecademy course I just finished. My verdict: the official Express site provides a wider breadth of information but not nearly as much depth for educating a newcomer. If I hadn’t taken the Codecademy course and tried to get started with this site, I would have been able to get a simple Express application up and running but I would not have understood much of what was going on. Especially if I had created an app using the boilerplate application generator. Even after the Codecademy course I don’t know what most of these settings mean!

But the official site had wider breadth, as Codecademy didn’t even mention the boilerplate tool. It also has many lists of pointers to resources, like the aforementioned list of popular middleware modules. Another list I expect to be useful is a sample of options for database integration. Some minimal contextual information was provided with each listed link, but it’s up to us to follow those links and go from there. The only place where this site goes in depth is the Express API reference, which makes sense as the official site for Express should naturally serve as the authoritative source for such information!

I anticipate that I will use Express for at least a few learning/toy projects in the future, at which point I will need to return to this site for API reference and pointers to resources that might help me solve problems in the future. However, before I even get very far into Express, this site has already helped me solve an immediate problem: node-static is out of date.

Notes on Codecademy “Learn Express”

I may have my quibbles with Codecademy’s Learn Node.js course, but it at least gave me a better understanding to supplement what I had learned bumping around on my own. But the power of Node isn’t just the runtime, it’s the software ecosystem which has grown up around it. I have many many choices of what to learn from this point, and I decided to try the Learn Express course.

Before I started the course, I understood Express was one of the earlier Node.js frameworks for building the back end of websites in JavaScript. And while many others have come online since, with more features and capabilities, Express is still popular because it set out not to pack itself with features and capabilities. This meant if we wanted to prototype something slightly off the beaten path, Express would not get in our way. This sounded like a good tool to have in the toolbox.

After taking the course, I learned how Express accomplishes those goals. Express helps us map HTTP methods (GET/POST/PUT/DELETE) to JavaScript code via “Routes”, and for each route we can compose multiple JavaScript modules in the form of “Middleware”. With this mechanism we can assemble an arbitrary web API by chaining middleware modules like LEGO pieces to respond to HTTP methods. And… that’s basically the entirety of core Express. Everything else is optional, so we only need to pull in what we need (in the form of middleware modules) for a particular project.

When the course introduced Routes, our little learning handler functions were actually fully qualified Middleware, but we didn’t know it yet. What I did notice is that they had a signature of three parameters: (request, response, next). The Routes section talked about reading request to build our response, but it never talked about next. Students who are curious and strike out to search on their own, as I did, would find information about “chaining”, but it wouldn’t make sense until we learned Middleware. I thought it would have been nice if the course would say “we’ll learn about next later, when we learn about Middleware” or something to that effect.
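
A small sketch of that Route-plus-Middleware shape (the route path and logging are placeholders, not course code):

const express = require('express');
const app = express();

// A middleware function has the (request, response, next) signature.
// Calling next() passes control to the next function in the chain.
const logRequest = (req, res, next) => {
  console.log(`${req.method} ${req.path}`);
  next();
};

app.use(logRequest); // runs for every route

// A route maps an HTTP method and path to one or more handlers:
app.get('/example', (req, res) => {
  res.json({ message: 'hello' });
});

app.listen(4001);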

My gripe with this course is in its quiz sections. We are given a partial chunk of JavaScript and told to fill in certain things. When we click “Check Work” we trigger some validation code to see if we did it right. If we did it wrong, we might get an error message to help us on our way. But sometimes the only feedback we receive is that our answer is incorrect, with no further detail. Unlike earlier Node course exercises, we were not given a command prompt to run “node app.js” and see our output. This meant we could not see the test input, we could not see our program’s behavior, and we could not debug with console.log(). I tried to spin up my own Node.js Docker container to try running the sample code, but we weren’t given entire programs to run and we weren’t given the test input, so that was a bust.

I eventually found a workaround: use exceptions. Instead of console.log('debug message') I could use throw Error('debug message') and that would show up on the Codecademy UI. This is far less than ideal.

Once I got past the Route section, I proceeded to Middleware. Most of this unit was focused on showing us how various Middleware mechanisms allow us to reduce code duplication. My gripe with this section is that the course made us do useless repetitive work before telling us to replace it with much more elegant Middleware modules. I understand this is how the course author chose to make their point, but I’m grumpy at useless make-work that I would delete a few minutes later.

By the end of the course, we know Express basics of Route and Middleware and got a little bit of practice building routes from freely available middleware modules. The course ends by telling us there are a lot of Express middleware out there. I decided to look into Express documentation for some starting points.

Notes on Codecademy “Learn Node.js”

I’ve taken most of Codecademy’s HTML/CSS course catalog for front-end web development, ending with a series of very educational exercises created outside of Codecademy’s learning environment. I think I’m pretty well set up to execute web browser client-side portions of my project ideas, but I also need to get some education on server-side coding before I can put all the pieces together. I’ve played with Node.js earlier, but I’m far from an expert. It should be helpful to get a more formalized introduction via Codecademy, starting with Learn Node.js.

This course recommends going through Introduction to JavaScript as a prerequisite, so the course assumes we already know those basics. The course does not place the same requirement on Intermediate JavaScript, so some of the relevant course material is pulled into this Node.js course. The section on Node modules was a rerun for me, but here it’s augmented with additional details and a pointer to official documentation.

The good news for the overlap portions is that it meant I already had partial credit for Learn Node.js as soon as I started; the bad news is that Codecademy’s own back-end got a little confused. I clicked through “Next” for a quick review, and by doing so it skipped me over a few lessons that I had not yet seen. My first hint something was wrong was getting tossed into a progress-checking quiz and being baffled: “I don’t remember seeing this material before!” I went back to examine the course syllabus, where I saw the skipped portions. The quiz was much easier once I went through that material!

This course taught me about error-first callback functions, something that is apparently an old convention for asynchronous JavaScript (or just Node) code that I hadn’t been aware of. I think I stumbled across this in my earlier experiments and struggled to use them effectively. Here I learned they were the conceptual predecessor to promises, which led to async/await, which plays nicely with promises. But what about even older error-first callback code? This is where util.promisify() comes into the picture, so that everyone can work together. Recognizing what error-first callbacks are, and knowing how to interoperate via util.promisify(), should be very useful.
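
A small sketch of the two styles side by side, using fs.readFile as the stand-in example (the filename is a placeholder):

const fs = require('fs');
const util = require('util');

// Old convention: the callback's first parameter is reserved for an error.
fs.readFile('notes.txt', 'utf8', (err, text) => {
  if (err) {
    console.error(err);
    return;
  }
  console.log(text);
});

// util.promisify() wraps an error-first function so it returns a promise,
// which then plays nicely with .then()/.catch() or async/await.
const readFile = util.promisify(fs.readFile);
readFile('notes.txt', 'utf8')
  .then((text) => console.log(text))
  .catch((err) => console.error(err));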

The course instructs us on how to install Node.js locally on our development computers, but I’m going to stick with using Docker containers. Doing so would be inconvenient if I wanted to rely on globally installed libraries, but I want to avoid global installations as much as possible anyway. NPM is perfectly happy to work at project scope and that just takes mapping my project directory as a volume into the Docker container.

After all, I did that as a Docker & Node test run with ESP32 Sawppy’s web interface. But that brought in some NPM headaches: I was perpetually triggering GitHub Dependabot warnings about security vulnerabilities in NPM modules I hadn’t even realized I was using. Doing a straight “update to latest” did not resolve these issues; I eventually figured out it was because I had been using node-static to serve static pages in my projects. But the node-static package hadn’t been updated in years and so it certainly wouldn’t have picked up security fixes. Perhaps I could switch to another static server NPM module like http-server, or get rid of that altogether and keep using nginx as a sheer-overkill static web server.

Before I decide, though, this Learn Node.js course ended with a few exercises building our own HTTP server using Node libraries. These were a little more challenging than typical Codecademy in-course exercises. One factor is that the instructions told us to do a lot of things with no way to incrementally test them as we go. We didn’t fire up the server to listen for traffic (server.listen()) until the second-to-last step, and by then I had accumulated a lot of mistakes that took time to untangle from the rest of the code. The second factor is that the instructions were more vague than usual. Some Codecademy exercises tell us exactly what to type and on which line, and I think that doesn’t leave enough room for us to figure things out for ourselves and learn. This exercise would sometimes tell us to “fill in the request header” without details or even which Node.js API to use. We had to figure it all out ourselves. I realize this is a delicate balance when writing course material. I feel Codecademy is usually too much “do exactly this” for my taste, but the final project of Learn Node.js might have gone too far in the “left us flailing uselessly” direction.

In the meantime, I believe I have enough of a start to continue learning about server-side JavaScript. My next step is to learn Express.

Notes on Codecademy “Learn Intermediate JavaScript”

After going through Codecademy’s “Learn JavaScript” course, the obvious follow-up was their “Learn Intermediate JavaScript” course, so I did and I liked it. It had a lot of very useful information for actually using JavaScript for projects. The first course covered the fundamentals of JavaScript but such knowledge sort of floated in an abstract space. It wasn’t really applicable until this intermediate course covered the two most common JavaScript runtime environments: browser on the client end, and Node.js for the server end. Which, of course, added their own ways of doing things and their own problems.

Before we got into that, though, we expanded the first class’s information on objects in general to classes in particular. Now I’m getting into an object-oriented world that’s more familiar to my brain. This was helped by the absence of multiple different shorthands for doing the same thing. I don’t know if the course just didn’t cover them (possible) or the language has matured enough that we no longer have people dogpiling ridiculous ideas just to save a few characters (I hope so.)

Not that we’re completely freed from inscrutable shorthand, though. After the excitement of seeing how JavaScript can be deployed in actual settings, I got angry at the language again when I learned of ES6 “Default Exports and Imports”.

// This will work...
import resources from 'module.js'
const { valueA, valueB } = resources;
 
// This will not work...
import { valueA, valueB } from 'module.js'
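
For this to hold, module.js would have to use a default export; a minimal sketch of what it might contain (my assumption, not code from the course):

// module.js
const valueA = 1;
const valueB = 2;
export default { valueA, valueB }; // a default export only -- no named exports to import by name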

This is so stupid. It makes sense in hindsight after they explained the shorthand and why it breaks down, but just looking at this example makes me grumpy. JavaScript modules are so messed up that this course didn’t try to cover everything, instead pointing us to Mozilla.org documentation to sort it out on our own.

After modules, we covered asynchronous programming, another very valuable and useful aspect of actually using JavaScript on the web. Starting with JavaScript Promises, then async/await, which is ES8 syntax for writing more readable code that still uses Promises under the hood. My criticism here is JavaScript’s lack of strong typing, which makes it easy to make mistakes that won’t fall apart until runtime. This is so bad we even have an “Avoiding Common Mistakes” section in this course, which seems like it would be a good idea in every lesson but apparently was only deemed important enough here.

Once async/await had been covered, we finally had enough background to build browser apps that interact with web APIs using the browser’s fetch() API. The example project “Film Finder” felt a lot more relevant and realistic than every other Codecademy class project I’ve seen to date. It also introduced me to The Movie Database project, which at first glance looks like a great alternative to IMDB, which has become overly commercialized.
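
The core fetch() plus async/await pattern the course builds toward looks roughly like this (the URL and response shape are hypothetical placeholders):

const getJson = async (url) => {
  try {
    const response = await fetch(url);
    if (!response.ok) {
      throw new Error(`Request failed with status ${response.status}`);
    }
    return await response.json();
  } catch (error) {
    console.error(error);
  }
};

getJson('https://api.example.com/movies').then((movies) => console.log(movies));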

After the Film Finder project, this course goes into errors and error handling mechanisms, along with debugging JavaScript. I can see why it’s placed here: none of this would make sense unless we knew something about JavaScript code, but a lot of these lessons would have been very helpful for people struggling with earlier exercises. I’m sad to think there might be people who would benefit from this information, but never got this far because they were stuck in an earlier section and needed help debugging.

The best part of this final section is a walkthrough of browser developer tools to profile memory and CPU usage. There are a lot of knobs and levers in these tools that would easily overwhelm a beginner. It is very useful to have a walkthrough focused on just a few very common problems, and how to find them. Once we know a few places to start, we have a starting point for exploring the rest of the developer tools. This was fantastic; my only regret is that it only applies to browser-side JavaScript. We’d have to learn an entirely different set of tools for server-side Node.js code.

But that’s enough JavaScript fun for now, onward to the third pillar of web development: CSS.

Notes on Codecademy “Introduction to Javascript”

After reviewing HTML on Codecademy, I proceeded to review JavaScript with their Introduction to Javascript course (also titled Learn Javascript in some places, I’m using their URL as the definitive name.) I personally never cared for JavaScript but it is indisputably ubiquitous in the modern world. I must have a certain level of competency in order to execute many project ideas.

The history of JavaScript is far messier than that of other programming languages. It evolved organically, addressing the immediate need of the next new browser version from whoever believed they had some innovation to offer. It was the wild west until all major players agreed to standardize JavaScript in the form of ECMAScript 6 (ES6). While the language has continued to evolve, ES6 is the starting point for this course.

A standard is nice, but not as nice as it might look at first glance. In the interest of acceptance, it was not practical for ES6 to break from all the history that preceded it. This, I felt, was the foundation of all of my dissatisfaction with JavaScript. Because it had to maintain compatibility, it had to accept all the different ways somebody thought to do something. I’m sure they thought they were being clever, but I see it as unnecessary confusion. Several instances came up in this course:

  • Somebody thought it was a good idea for comparison to perform automatic type conversion before performing the comparison. It probably solved somebody’s immediate problem, but the long-term effect is that the “==” operator became unpredictable (a few examples follow this list). The situation is so broken that this course teaches beginners to use the “===” operator and never mentions “==”.
  • The whole concept of “truthy” and “falsy” evaluations makes code hard to understand except for those who have memorized all of the rules involved. I don’t object to a beginner course covering such material in the spirit of “this is a bad idea but it’s out there so you need to know.” However, this course makes it sound like a good thing (“Truthy and falsy evaluations open a world of short-hand possibilities!”) and I strongly object to this approach. Don’t teach beginners to write hard-to-understand code!
  • JavaScript didn’t start with all these ways to declare a function, but the concept was so useful that people kept hacking new syntaxes onto the language. Which means we now have the function declaration (function say(text) {}), the function expression (const say = function(text) {}), the arrow function (const say = (text) => {}), and the concise body arrow function (const say = text => {}). I consider the latter inscrutable, sacrificing readability for the sake of saving a few characters. A curse inflicted upon everyone who had to learn JavaScript since. (An anger made worse when I learned arrow functions don’t bind their own “this” and silently inherit it from the enclosing scope. Gah!)
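
To illustrate the “==” unpredictability mentioned above, a few standard comparisons that trip people up:

// Automatic type conversion makes "==" results hard to predict:
console.log(0 == '');           // true: '' converts to 0
console.log('0' == 0);          // true: '0' converts to 0
console.log('' == '0');         // false, even though both compare equal to 0 above
console.log(null == undefined); // true
console.log(0 === '');          // false: "===" never converts types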

These were just the three that I thought worth ranting about. Did I mention I didn’t care for JavaScript? But it isn’t all bad. JavaScript did give the web a very useful tool in the form of JavaScript Object Notation (JSON), which became a de facto standard for transmitting structured data in a much less verbose way than XML, which originally promised to do exactly that.

JSON has the advantage that it was the native way for JavaScript to represent objects, so it was easy to go from working with a JavaScript object to transmitting it over the network and back to a working object. In fact, I originally thought JSON was the data transmission serialization format for JavaScript. It took a while for me to understand that no, JSON is not the serialization format, JSON is THE format. JSON looks like a data structure for key-value pairs because JavaScript objects are a structure for key-value pairs.

Once “every object is a collection of key-value pairs” got through my head, many other JavaScript traits made sense. It was wild to me that I could attach arbitrary properties to a JavaScript function, but once I understood functions to be objects in their own right + objects are key-value pairs = it made sense that I could add a property (key) and its value to a function. Weird, but it made sense in its own way. Object properties are reduced to a list of key-value pairs, and object methods are special cases where the value is a function object. Deleting an entry from the list of key-value pairs deletes properties and methods, and accessing them via brackets seems no weirder than accessing a hash table entry. It also made sense why we don’t strictly need a class constructor for objects. Any code (“factory function”) that returns a chunk of key-value pairs has constructed an object. That’s not too weird, but the property value shorthand made me grumpy at JavaScript again. As did destructuring assignment: at first glance I hated it; after reading some examples of how it can be useful, I merely dislike it.
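
A compact sketch tying those pieces together (the names are invented for illustration):

// A factory function returning an object literal -- no class constructor required:
const makePoint = (x, y) => {
  return { x, y };                   // property value shorthand for { x: x, y: y }
};

const point = makePoint(3, 4);
point.label = 'sample';              // arbitrary properties can be attached afterwards
delete point.label;                  // ...and deleted, since it is all key-value pairs
const { x, y } = point;              // destructuring assignment pulls the values back out
console.log(x, y, Object.keys(point)); // 3 4 [ 'x', 'y' ]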

In an attempt to end this on a positive note, I’ll say I look forward to exploring some of the built-in utilities for JavaScript classes. This Codecademy course introduced us to:

  • Array methods .push() .pop() .shift() .unshift() .slice() .splice() etc.
  • Iterators are not just limited to .forEach(). We have .map() .filter() .reduce() .every() and more.
  • And of course stuff on objects. Unlike other languages that need an entire feature area for runtime type information, JavaScript’s approach means listing everything on an object is as easy as Object.keys().

Following this introduction, we can proceed to Intermediate JavaScript.

Notes Of A Three.js Beginner: QuaternionKeyframeTrack Struggles

When I started researching how to programmatically animate object rotations in three.js, I was warned that quaternions are hard to work with and can easily bamboozle beginners. I gave it a shot anyway, and I can confirm caution is indeed warranted. Aside from my confusion between Euler angles and quaternions, I was also ignorant of how three.js keyframe track objects process their data arrays. Constructors of keyframe track objects like QuaternionKeyframeTrack accept (as their third parameter) an array of key values. I thought it would obviously be an array of quaternions like [quaternion1, quaternion2], but when I did that, my CPU utilization shot to 100% and the browser stopped responding. Using the browser debugger, I saw it was stuck in this for() loop:

class QuaternionLinearInterpolant extends Interpolant {
  constructor(parameterPositions, sampleValues, sampleSize, resultBuffer) {
    super(parameterPositions, sampleValues, sampleSize, resultBuffer);
  }
  interpolate_(i1, t0, t, t1) {
    const result = this.resultBuffer, values = this.sampleValues, stride = this.valueSize, alpha = (t - t0) / (t1 - t0);
    let offset = i1 * stride;
    for (let end = offset + stride; offset !== end; offset += 4) {
      Quaternion.slerpFlat(result, 0, values, offset - stride, values, offset, alpha);
    }
    return result;
  }
}

I only have two quaternions in my key frame values, but it is stepping through in increments of 4. So this for() loop immediately shot past end and kept looping. The fact it was stepping by four instead of by one was the key clue. This class doesn’t want an array of quaternions, it wants an array of quaternion numerical fields flattened out.

  • Wrong: [quaternion1, quaternion2]
  • Right: [quaternion1.x, quaternion1.y, quaternion1.z, quaternion1.w, quaternion2.x, quaternion2.y, quaternion2.z, quaternion2.w]

The latter can also be created via quaternion1.toArray().concat(quaternion2.toArray()).
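
Putting it together, the track construction ends up looking something like this (the '.quaternion' property path and the 0-to-1 second timing are placeholders for my setup):

// Times and flattened quaternion components for a two-keyframe rotation track.
const times = [0, 1];
const values = quaternion1.toArray().concat(quaternion2.toArray());
const track = new THREE.QuaternionKeyframeTrack('.quaternion', times, values);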

Once I got past that hurdle, I had an animation on screen. But only half of the colors animated in the way I wanted. The other half of the colors went in the opposite direction while swerving wildly on screen. In an HSV cylinder, colors are spread across the full range of 360 degrees. When I told them all to go to zero in the transition to a cube, the angles greater than 180 went one direction and the angles less than 180 went the opposite direction.

Having this understanding of the behavior, however, wasn’t much help in trying to get things working the way I had it in my head. I’m sure there are some amateur hour mistakes causing me grief but after several hours of ever-crazier animations, I surrendered and settled for hideous hacks. Half of the colors still behaved differently from the other half, but at least they don’t fly wildly across the screen. It is unsatisfactory but will have to suffice for now. I obviously don’t understand quaternions and need to study up before I can make this thing work the way I intended. But that’s for later, because this was originally supposed to be a side quest to the main objective: the Arduino color composite video out library I’ve released with known problems I should fix.

[Code for this project is publicly available on GitHub]

Notes Of A Three.js Beginner: Euler Angles vs. Quaternions

I was pretty happy with how quickly I was able to get a static 3D visualization on screen with the three.js library. My first project to turn the static display into an interactive color picker also went smoothly, giving me a great deal of self-confidence for proceeding to the next challenge: adding an animation. And this was where three.js put me in my place, reminding me I’m still only a beginner in both 3D graphics and JavaScript.

Before we get to details on how I fell flat on my face, to be fair, the three.js animation system is optimized for controlling animations created using content creation tools such as Blender. In this respect, it is much like Unity 3D. In both of these tools, programmatically generated animations are not the priority. In fact, there weren’t any examples for me to follow in the manual. I hunted around online and found DISCOVER three.js, which proclaimed itself as “The Missing Manual for three.js”. The final chapter (so far) of this online book talks about animations. This chapter had an ominous note on animating rotations:

As we mentioned back in the chapter on transformations, quaternions are a bit harder to work with than Euler angles, so, to avoid becoming bamboozled, we’ll ignore rotations and focus on position and scale for now.

This is worrisome, because my goal is to animate the 256 colors between two color model layouts: from the current layout of an HSV cylinder to an RGB cube. This required dealing with rotations, and just as the warning predicted, that’s what kicked my butt.

The first source of confusion was between Euler angles and quaternions when dealing with three.js 3D object properties. Object3D.rotation is an object representing Euler angles, so trying to use QuaternionKeyframeTrack to animate object rotation resulted in a lot of runtime errors because the data types didn’t match. This problem I blame on JavaScript in general and not three.js specifically. In a strongly typed language like C there would be a compile-time error indicating I’ve confused my types. In JavaScript I only see errors at runtime, in this case one of these two:

  1. When the debug console complains “NaN error” it probably means I’ve accidentally used Euler angles when quaternions are expected. Both of those data types have fields called x, y, and z. Quaternions have a fourth numeric field named w, while Euler angles have a string indicating order. Trying to use an Euler angle as a quaternion would result in the order string trying to fit in w, which is not a number, hence the NaN error.
  2. When the debug console complains “THREE.Quaternion: .setFromEuler() encountered an unknown order:” it means I’ve done the reverse and accidentally used a Quaternion when Euler angles are expected. This one is fortunately a bit more obvious: the numeric value w is not a string and does not dictate an order.
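
The way out of that mismatch is to convert explicitly before building keyframe tracks. A small sketch, where mesh stands in for any Object3D in my scene:

// Object3D keeps both views of its orientation: .rotation (Euler) and .quaternion (Quaternion).
const startQuaternion = mesh.quaternion.clone();          // current orientation, already a Quaternion
const endQuaternion = new THREE.Quaternion().setFromEuler(
  new THREE.Euler(0, Math.PI / 2, 0)                      // target orientation given as Euler angles
);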

Getting this sorted out was annoying, but this headache was nothing compared to my next problem: using QuaternionKeyframeTrack to animate object rotations.

[Code for this project is publicly available on GitHub]

Notes Of A Three.js Beginner: Color Picker with Raycaster

I was pleasantly surprised at how easy it was to use three.js to draw 256 cubes, each representing a different color from the 8-bit RGB332 palette available for use in my composite video out library. Arranged in a cylinder representing the HSV color model, it failed to give me special insight on how to flatten it into a two-dimensional color chart. But even though I didn’t get what I had originally hoped for, I thought it looked quite good. So I decided to get deeper into three.js to make this more useful. Towards the end of the three.js getting started guide is a list of Useful Links pointing to additional resources, and I thought the top link, Three.js Fundamentals, was as good a place to start as any. It gave me enough knowledge to navigate the rest of the three.js reference documentation.

After several hours of working with it, my impression is that three.js is a very powerful but not very beginner-friendly library. I think it’s reasonable for such a library to expect that developers already know some fundamentals of 3D graphics and JavaScript. From there it felt fairly straightforward to start using tools in the library. But, and this is a BIG BUT, there is a steep drop-off if we go off the expected path. The library is focused on performance, and in exchange there’s less priority on fault tolerance, graceful recovery, or even helpful debugging messages for when things go wrong. There’s not much to prevent us from shooting ourselves in the foot and we’re on our own to figure out what went wrong.

The first exercise was to turn my pretty HSV cylinder into a color picker, making it an actually useful tool for choosing colors from the RGB332 color palette. I added pointer down + pointer up listeners, and if they both occurred on the same cube, I changed the background color to that color and displayed the two-digit hexadecimal code representing it. Changing the background allows instant comparison to every other color in the cylinder. This functionality requires the three.js Raycaster class, and the documentation example translated across to my application without much fuss, giving me confidence to tackle the next project: adding the ability to switch between the HSV color cylinder and an RGB color cube, where I promptly fell on my face.
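
The heart of it follows the Raycaster pattern from the three.js documentation. A sketch, where camera and scene are the usual setup objects from the getting started guide:

const raycaster = new THREE.Raycaster();
const pointer = new THREE.Vector2();

function onPointerUp(event) {
  // Convert pointer position to normalized device coordinates (-1 to +1 on both axes).
  pointer.x = (event.clientX / window.innerWidth) * 2 - 1;
  pointer.y = -(event.clientY / window.innerHeight) * 2 + 1;

  raycaster.setFromCamera(pointer, camera);
  const intersects = raycaster.intersectObjects(scene.children);
  if (intersects.length > 0) {
    const picked = intersects[0].object; // the cube under the pointer
    // use its color for the background and display its RGB332 hex code
  }
}

window.addEventListener('pointerup', onPointerUp);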

[Code for this project is publicly available on GitHub]

HSV Color Wheel of 256 RGB332 Colors

I have a rectangular spread of all 256 colors of the 8-bit RGB332 color cube. This satisfies the technical requirement to present all the colors possible in my composite video out library, built by bolting the Adafruit GFX library on top of the video signal generation code of rossumur’s ESP_8_BIT project for the ESP32. But even though it satisfies the technical requirements, it is vaguely unsatisfying because making a slice for each of the four blue channel values necessarily meant separating some similar colors from each other. While Emily went to Photoshop to play with creative arrangements, I went into code.

I thought I’d look into arranging these colors in the HSV color space, which I was first exposed to via the Pixelblaze I used in my Glow Flow project. HSV is good for keeping similar colors together and is typically depicted as a wheel of colors, with the angle around the circle corresponding to the H or hue axis. However, that still leaves two more dimensions of values: saturation and value. We still have the general problem of three variables but only two dimensions to represent them, but again I hoped the limited set of 256 colors could be put to advantage. I tried working through the layout on paper, then a spreadsheet, but eventually decided I needed to see the HSV color space plotted out as a cylinder in three-dimensional space.

I briefly considered putting something together in Unity3D, since I have a bit of familiarity with it via my Bouncy Bouncy Lights project. But I thought Unity would be too heavyweight and overkill for this project, specifically because I didn’t need a built-in physics engine. Building a Unity 3D project takes a good chunk of time and imposes downtime that breaks my train of thought. Ideally I can try ideas and see the results instantly by pressing F5, like refreshing a web page.

Which led me to three.js, a JavaScript library for 3D graphics in a browser. The Getting Started guide walked me through creating a scene with a single cube, and I got the fast F5 refresh that I wanted. In addition to rendering, I wanted a way to look around a HSV space. I found OrbitControls in the three.js examples library, letting us manipulate the camera view using a pointer device (mouse, touchpad, etc.) and that was enough for me to get down to business.

I wrote some JavaScript to convert each of the 256 RGB values into their HSV equivalents, and from there to an HSV coordinate in three dimensions. When the color cylinder popped up on screen, I was quite disappointed to see no obvious path to flatten it to two dimensions. But even though it didn’t give me the flash of insight I sought, the layout is still interesting. I see a pattern, but it is not consistent across the whole cylinder. There’s something going on but I couldn’t immediately articulate what it was.
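
One way to do that conversion and placement, roughly along the lines of what I wrote (not my exact code):

// Convert RGB components in the 0-1 range to HSV, then map HSV onto a cylinder:
// hue becomes the angle, saturation the radius, and value the height.
function rgbToHsv(r, g, b) {
  const max = Math.max(r, g, b);
  const min = Math.min(r, g, b);
  const delta = max - min;
  let h = 0;
  if (delta > 0) {
    if (max === r) h = ((g - b) / delta) % 6;
    else if (max === g) h = (b - r) / delta + 2;
    else h = (r - g) / delta + 4;
    h *= 60;
    if (h < 0) h += 360;
  }
  const s = max === 0 ? 0 : delta / max;
  return { h, s, v: max };
}

function hsvToCylinder(h, s, v) {
  const angle = (h * Math.PI) / 180;
  return { x: s * Math.cos(angle), y: v, z: s * Math.sin(angle) };
}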

Independent of those curiosities, I decided the cylinder looks cool all on its own, so I’ll keep working with it to make it useful.

[Code for this project is publicly available on GitHub]