Notes on Codecademy “Learn CSS: Browser Compatibility”

Web developers must keep in mind that not everyone can see and interact with a site in the same way. This is not just a fundamental of web accessibility; there are also many different web browsers out there, running on a huge range of hardware platforms. They won’t all have the same capabilities, and with that fact in mind we proceed to “Learn CSS: Browser Compatibility”.

The first and most obvious topic is CanIUse.com, our index of feature support across various browsers and an invaluable resource. After that, the course talks about how different browsers have different default style sheets. There are reference resources to see what they are, and there exist tools to mitigate differences with a “normalized” or “reset” stylesheet. But if we simply can’t (or won’t) stay within the standardized subset, we can use “prerelease” features via vendor prefixes, either manually or via automated tools like Autoprefixer. This is the first of many tools within a whole category called “polyfill” that implements newer features in a backwards-compatible way, making the user experience more consistent across older/less supported browsers. I knew there were polyfill tools for transpiling new JavaScript features to run on browsers with older JavaScript engines, but I hadn’t known there were CSS polyfill tools as well. Perhaps I needed to pull in one of them to run Angular on Windows Phone? Codecademy pointed to Modernizr and Polyfill.io as starting points to explore the world of polyfills.
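
To make this concrete, here is a sketch of my own (not from the course) of what manual vendor prefixing looks like; a tool like Autoprefixer can generate the prefixed lines from the final standard declaration automatically:

.card {
  -webkit-user-select: none; /* WebKit/Blink prefix */
  -moz-user-select: none;    /* older Firefox prefix */
  -ms-user-select: none;     /* Internet Explorer/Edge prefix */
  user-select: none;         /* standard property */
}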

Polyfill tools run at development time to decide what to do. But what if we could adjust CSS at runtime? This is the intent of CSS Feature Query, whose syntax looks a lot like CSS Media Query but asks about browser CSS feature support instead. Using this feature, however, is a bit of a Catch-22. Not all browsers support CSS Feature Query, and obviously those browsers can’t tell us so via the very mechanism we’d use to ask for such information. This affects how we’d write CSS using feature queries and is a likely place for bugs. Finally: we’d need some ability to debug feature query issues via browser developer tools, but a quick web search came up empty.
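
For reference, a feature query is written with @supports. This is my own minimal sketch, not an example from the course:

/* Fallback rules for all browsers */
.gallery { display: block; }

/* Only applied when the browser reports grid support */
@supports (display: grid) {
  .gallery { display: grid; }
}

Note the Catch-22 mentioned above: a browser that doesn’t understand @supports at all will skip the entire block, so the fallback rules have to live outside the query.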


Completion of this course also gave me Codecademy completion credit for their “Learn Intermediate CSS” course, which is a packaged compilation of the following set of courses I took individually:

  1. Learn CSS: Flexbox and Grid
  2. Learn CSS: Transitions and Animations
  3. Learn CSS: Responsive Design
  4. Learn CSS: Variables and Functions
  5. Learn CSS: Accessibility
  6. Learn CSS: Browser Compatibility

But Codecademy had even more web styling classes beyond this package! I proceeded onward to learn color design.

Notes on Codecademy “Learn CSS: Accessibility”

During Codecademy’s “Learn CSS: Variables and Functions” course, I was mildly disappointed to learn that CSS custom properties and its fixed set of calculation functions were far less powerful than I had mistakenly assumed from the course title. No matter: they are sufficient for their intended purpose, and they are valuable tools for building pages that are inclusive to all users. Which is why the next class in Codecademy’s CSS curriculum is “Learn CSS: Accessibility”.

First up is the size of text, the most significant factor in typographical design. This is a deep field with centuries of research from the print world; Codecademy can’t cover it all and correctly doesn’t try. We do get a bit of coverage on font-size, line-height, and letter-spacing. One oddity: a conversation on paragraph width introduces the CSS unit “ch”, with 1ch being the width of the zero character (“0”). This allows specifying lengths in a way that scales with the font, but that sounds like what “em” does as well. A quick search didn’t find why we would use “ch” vs. “em”.
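
For illustration, here is a hedged sketch of my own (not from the course) of limiting paragraph width with ch so the measure scales with the font:

p {
  font-size: 1rem;
  line-height: 1.5;
  max-width: 65ch; /* roughly 65 "0" characters wide, a common readable line length */
}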

Another topic was color and contrast. The expected color wheel discussion talks about selecting different colors with high contrast, but it didn’t talk about color blindness. The course linked to a contrast checker tool by WebAIM, but that didn’t mention color blindness, either. A quick search on the WebAIM site found a page about color blindness, but it is incomplete. The page ends with an example: a train map where routes are represented by colors. The page tells us that the different routes would be difficult to tell apart for the color-blind, but it doesn’t give us a solution to the problem! My guess? Maybe we could use different line styles?

This section also taught me there is a semantic HTML tag for denoting abbreviations.

<abbr title='Cascading Style Sheets'>CSS</abbr>

I don’t think this should be classified as an accessibility measure, because it would be helpful for all audiences. This information is certainly available to screen readers, but there is also a visual indication in a normal rendering. My browser renders it as dotted underlined text by default (per the “user agent stylesheet”).

abbr[title] {
    text-decoration: underline dotted;
}

Another topic I didn’t think should be classified as accessibility is a discussion on changing rendering for print media. This should have been part of the media queries course, but at least the concepts make sense. When the user is printing out a web page, we can do things like hiding a navigation bar that wouldn’t do anything on a sheet of paper. This course also talks about printing the URL of a link, because you can’t click on a sheet of paper.

@media print {
  a[href^='http']:after {
    content: ' (' attr(href) ')';
  }
}

However, this would only work for short URLs that can be reasonably typed in by hand. Many links of the modern web have long URLs that are impossible to accurately type in by hand. But in this vein, I think it would be worthwhile to print out the content of <abbr> tags as well.
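
Following that thought, here is a hedged sketch of printing <abbr> expansions the same way (my own extension of the course example, untested):

@media print {
  abbr[title]:after {
    content: ' (' attr(title) ')';
  }
}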

Anyway, back to accessibility and screen readers. A section asserts that there are frequently items that are not useful to a screen reader and should be hidden. I think I understand the concept, but I thought the example was poor: in the case of duplicate text, one copy could be hidden from screen readers with the aria-hidden='true' attribute. But it seems like the real fix is to remove duplicate text for all audiences! Another problem is that aria-hidden is introduced without saying anything more about it. A quick web search found this is just the tip of an iceberg called the Accessible Rich Internet Applications specification.
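
For reference, the attribute goes directly on the element to be skipped by assistive technology. This is my own minimal sketch, not the course’s example:

<p>Welcome to the rover control page.</p>
<!-- duplicate copy hidden from screen readers but still rendered visually -->
<p aria-hidden="true">Welcome to the rover control page.</p>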

After the lecture, we are encouraged to read “Start Testing for Accessibility”, a short overview of several accessibility testing tools: Lighthouse, WAVE, Axe, and Accessibility Insights.

I know I have a lot to learn about web accessibility, not the least of which is learning about other ARIA properties. But there’s another side to the “not everyone sees the same web page as you do” problem: browser compatibility.

Notes on Codecademy “Learn CSS: Variables and Functions”

Following responsive design in Codecademy’s CSS curriculum is “Learn CSS: Variables and Functions“. I was intrigued by the title of this course, as “variables” and “functions” were features I did not expect in a visual style description syntax. I associate them with programming languages! Does their presence mean CSS has evolved to become a programming language in its own right? After this class, I now know the answer is a definite “no”.

First of all, CSS “variables” are far less capable than their programming language counterparts. The name isn’t even really accurate, as a value does not vary (hence not variable) after it is set. A value declared by one CSS rule can be overridden by a more specific CSS rule declaring something with the same name, but that’s more analogous to variable scope than actually varying its value. This mechanism is also known as CSS custom properties, and I think that is a much more accurate name than “variable” which oversells its capabilities.

That said, CSS custom properties have their uses. This course showed several cases where custom properties can be overridden by CSS rules to good effect. The first example I found very powerful is combining custom properties and media queries. It makes for a far cleaner and more maintainable way to adjust a stylesheet based on screen size changes. The second significant example is in the exercise project, where we could create multiple themes on a web page (Light Mode/Dark Mode/Custom Mode) using custom properties.
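
Here is a sketch of my own (not from the exercise project) of the pattern that impressed me: declare a custom property once, override it in a media query, and let every rule that reads it follow along:

:root {
  --content-padding: 8px;    /* default for small screens */
}

@media (min-width: 768px) {
  :root {
    --content-padding: 24px; /* roomier padding on larger screens */
  }
}

.content {
  padding: var(--content-padding);
}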

Moving on to “CSS functions”, I was disappointed to learn that we’re talking about a fixed set of data processing/calculation functions implemented by the runtime environment. We can’t actually declare our own functions in CSS. CSS functions are nowhere near as powerful as C++ functions. They’re not even as powerful as Microsoft Excel functions, as CSS style sheets are not spreadsheets.

Like almost every standard associated with HTML, we see a specification that tried to rationalize a lot of ad-hoc evolution and fails to hide all of the seams. The one inconsistency that annoys me is the fact that some functions require parameters to be separated by commas, while other functions require parameters to be separated by spaces. It is especially jarring when the two styles are used together, as in this example used in the course:

filter: drop-shadow(10px 5px 0.2rem rgba(236, 217, 203, 0.7));

The drop-shadow() parameters are separated by spaces, while the rgba() parameters are separated by commas. What a mess! How can we be expected to keep this straight? Still, these functions should suffice to cover most of the intelligence we’d want to put in a style sheet. I was most impressed that calc() tries to handle so much, including the capability to mix and match CSS units. I know the intent is to allow things like subtracting a pixel count from a percentage of width, but I see a debugging nightmare. The course covered only a subset of the available functions, but of those covered my favorite is clamp(). I expect to use that a lot.
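
As a quick illustration (my own sketch, not from the course), calc() can mix units and clamp() keeps a value within bounds:

.sidebar {
  width: calc(100% - 50px);            /* percentage minus a pixel count */
}

h1 {
  font-size: clamp(1.5rem, 4vw, 3rem); /* scales with viewport, never below 1.5rem or above 3rem */
}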

One unanswered mystery is that the class recommends we set global custom properties using the :root pseudo-class as the selector. The instruction said this is usually the <html> tag, without explaining exceptions to that rule. When is :root not <html>? The MDN documentation unfortunately doesn’t answer that question either. I’ll have to leave it as a mystery as I proceed to learn about web accessibility.

Notes on Codecademy “Learn CSS: Responsive Design”

Following a quick tour through CSS transition animations, the next course on the Codecademy CSS curriculum is “Learn CSS: Responsive Design“. Up until this point, the Codecademy CSS curriculum had primarily demonstrated concepts using pixel measurements. This course brings us into the real world, where pages have to adjust themselves to work well on devices ranging from 4.5″ cell phone screens to 32″ desktop monitors.

The first focus is on typography, since most pages are concerned with delivering text for reading. Two measurements work based on font size: em is relative to the font size at a particular point in the HTML tree, varying with its hierarchy, while rem ignores the hierarchy and goes directly to the root <html>.
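
A small sketch of my own to illustrate the difference:

html { font-size: 16px; }

article       { font-size: 1.25em; } /* 20px: relative to the parent font size */
article small { font-size: 0.8em; }  /* 16px: 0.8 × the article's 20px */
article aside { font-size: 0.8rem; } /* 12.8px: 0.8 × the root 16px, ignoring hierarchy */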

Next, we move on to percentages, which are calculated based on the parent width of an element. There were a few paragraphs dedicated to how height calculations are based on parent width instead of parent height, but I didn’t understand why. There was an example of how parent height could be affected by child height, so using percentages would cause a circular reference problem, which I understood. But wouldn’t the same problem occur on the width dimension as well? We can specify child width as dependent on parent width, and parent width as dependent on contents (child width). Why wouldn’t this cause the very same circular dependency? I don’t understand that part yet.

When I first started trying to write an HTML-based UI for controlling SGVHAK and Sawppy rovers, I learned of a chunk of HTML I needed to include to make a page phone-friendly. I didn’t understand what it meant, though; it was just “copy and paste this into your HTML”.

<meta name="viewport" content="width=device-width, initial-scale=1">

With this class I finally got this item checked off the “need to go and learn later” list. Codecademy even included a pointer to a reference for viewport control. This viewport control reference mentioned a few accessibility concerns, but I don’t really understand them yet. I expect to learn more about HTML accessibility soon, as it is listed later in the Codecademy CSS curriculum.

Viewport led into media queries, another essential tool for pages to dynamically adjust between desktop, tablet, and phone usage. Most of the examples reacted to viewport width, which I understand to be the case most of the time in the real world. But this barely scratched the surface of media features we can query for. The one most applicable to my rover UI project is the “orientation” property, because that will let me reposition the virtual joystick when the phone is held in landscape mode.
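
Here is a hedged sketch (my own, not the course’s) of the orientation query I have in mind for the rover UI; the class names are made up:

/* Default: portrait phone, joystick spans the full width below the video feed */
.joystick { width: 100%; }

@media (orientation: landscape) {
  /* Landscape phone: move the joystick beside the video feed */
  .joystick { width: 40%; float: right; }
}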

Most of this course consisted of making changes in CSS then manually resizing our browser window to change viewport width and observe the effects. But again, width is but one of many media features that can be queried, and it’s not obvious how we can explore most of the rest. I wish this course included information on how to test and debug media query issues using browser developer tools. Such a walkthrough was quite informative in the JavaScript course, and its absence here is sorely missed. A quick web search found the Media Query Inspector, which looked promising at first glance; I’ll have to dig into that later. For now, it’s onwards to Codecademy’s CSS Variables and Functions.

Notes on Codecademy “Learn CSS: Transitions and Animations”

After an interesting and occasionally challenging time with Codecademy’s “Learn CSS: Flexbox and Grid” course, I followed their recommended CSS curriculum onward to “Learn CSS: Transitions and Animations“. It was an interesting and worthwhile unit, and thankfully shorter than the previous course on flexbox + grid. This is in part helped by a narrower scope: the CSS animation system does not attempt to cover all possible animations anyone may want to put on screen. It only animates a specified subset of CSS properties, but doing so covers the majority of animation effects useful for adding visual interest and improving the user experience. As opposed to the early HTML <BLINK>, a design misstep which definitely did not improve user experience.

Every animation system shares some fundamentals: what to change, where+when to start, where+when to stop, and how to move between start and stop points. That’s quite a few parameters to change, and managing them quickly becomes cumbersome as the number of adjustable knobs increases. But CSS animation has a narrow scope, limiting the complexity somewhat. It also has a benefit: the start and stop states are specified by existing CSS mechanisms, leaving the transition system to deal with the animation-specific parameters.

This actually plays into an advantage of using CSS shorthand, which usually annoys me for presenting multiple ways to describe the same thing. Here multiple transition parameters can be collapsed into a single transition declaration. It has all the annoyances of shorthands: it is another way to describe the same thing, and it requires memorizing an arbitrary order of parameters. But here transition has an advantage: we have the option to list multiple transitions in a comma-separated list, and they will all execute simultaneously when triggered. The course called this “chaining” animations. I don’t know if the word “chain” was part of the CSS specification or just a word choice by the author of this Codecademy course, but I found it misleading because “chain” implies animations that run consecutively (one after the other) instead of all animations running simultaneously in parallel.
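
To illustrate (my own sketch, not from the course), here are the longhand properties and the equivalent shorthand, plus a comma-separated list where both transitions run at the same time:

/* Longhand */
.button {
  transition-property: background-color;
  transition-duration: 0.3s;
  transition-timing-function: ease-out;
  transition-delay: 0s;
}

/* Equivalent shorthand (property duration timing-function delay),
   with a second transition in a comma-separated list;
   both run simultaneously when triggered */
.button-fancy {
  transition: background-color 0.3s ease-out 0s,
              transform 0.5s ease-in;
}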

Another way to specify multiple animations that run simultaneously is to specify “all” as the animated property (the “what to change” parameter). This applies the same animation duration, easing function, and timing delay to all transitions that apply to an element, which is possible because the start & stop states were specified elsewhere in CSS. I see this as a very powerful mechanism, but powerful in an “easy to shoot myself in the foot” kind of way. By specifying “all” we commit to animating all applicable property changes now and in the future, which practically guarantees a future surprise when we add another property change and see it animate unexpectedly. Followed by a debugging session that ends with “Oh, this transition applied to ‘all’.”
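
A minimal sketch of my own showing the “all” form, foot-gun included:

.panel {
  /* Every animatable property change on .panel will now animate, including ones added later */
  transition: all 0.4s ease-in-out;
}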

One thing I did not see is a way to create animated transitions between different layouts, for example when triggered by mechanisms covered in the CSS Responsive Design course. Perhaps that is not a scenario the CSS people deemed important enough?

Notes on Codecademy “Learn CSS: Flexbox and Grid”

After I took Codecademy’s “Learn CSS” course, I figured out it packaged together four shorter courses. Those short lessons were all material I had learned before and served as review, so I flew through them quickly enough that putting them together in a single “Learn CSS” made sense to me. This situation changed dramatically for the next item on the list: “Learn Intermediate CSS” is again a packaging of several shorter courses, except now the material is new to me, starting with CSS flexbox. That was a new and challenging concept for me to grasp, so now I’m in favor of going through the shorter courses instead of the (now quite hefty) package.

“Learn CSS: Flexbox and Grid” covers the first two lessons of “Learn Intermediate CSS”. As soon as we started with CSS flexbox, I could tell it was a powerful tool. However, this particular course fell short of the normal Codecademy standard. For each function of flexbox, we run through simple illustrative examples, which is good. Unfortunately, these small examples were too simplified for me to understand how they would be applied. In other words, we see a lot of small pieces, but the course never pulled them together into a bigger picture.

My personal attempt at summarizing flexbox: a CSS construct that attempts to capture rules for laying out child elements in a way that is flexible to changes in screen size and aspect ratio. Since there’s no way to capture every possible way to lay things out, the subset supported by flexbox becomes a de facto prescription of best-practice layout from the people who designed this CSS feature. Nevertheless, there are more than enough (possibly too many) knobs we can adjust to build a layout.

While going through the simple examples, I did see certain values that appealed to my personal style. They were, however, usually not the default value. For example, when covering justify-content I thought space-between (evenly distribute available space between elements, spread across the available width, leaving no wasted space at the left and right edges) looked best, but the default value is actually flex-start (pack elements to the start of the flow (left), leaving empty space at the end (right)). I understand choosing a default value is challenging, and I’m ignorant of many of the concerns that went into the final decision. Sometimes the defaults that individually “make sense” are inconsistent with each other: for example, the default flex-shrink value is 1, but its counterpart flex-grow has a default value of 0.
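
For illustration (my own sketch, not from the course):

.toolbar {
  display: flex;
  justify-content: space-between; /* my preference: distribute leftover space between items */
  /* the default is justify-content: flex-start, packing items to the start and leaving space at the end */
}

.toolbar > * {
  /* the defaults written out explicitly: items may shrink below their basis, but will not grow */
  flex-grow: 0;
  flex-shrink: 1;
}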

After the chaos of JavaScript shorthands, I thought I was freed from that thinking in CSS. No such luck. To specify dynamic scaling behavior in a flexbox, somebody thought it would be useful to combine the flex-grow, flex-shrink, and flex-basis parameters into a single flex shorthand. As a beginner, I am not a fan of memorizing an arbitrary order of numbers, but maybe I’ll grow to love it.
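
For my own future reference, the order is grow, then shrink, then basis (a sketch, not from the course):

.sidebar {
  /* Longhand */
  flex-grow: 1;
  flex-shrink: 0;
  flex-basis: 200px;
}

.sidebar-short {
  /* Equivalent shorthand: flex: <grow> <shrink> <basis> */
  flex: 1 0 200px;
}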

The feature of flexbox that most surprised me is the ability to flip its two axes with flex-direction. Not only would this let us build right-to-left layouts (for Hebrew, Arabic, etc.), it even hints at the ability to build a traditional Chinese layout. (Words top-to-bottom in a column, then columns laid out right-to-left.)

The Codecademy project for flexbox, “Tea Cozy”, is simultaneously really great and really awful. They gave us a set of images, most of which are pictures to use in a layout, except one image which is actually the specification for laying out a page. So we are given a picture of what the result should look like, including some key dimensions, and told to go make it happen. I understand this is highly representative of the workflow of a web developer in charge of laying out a page according to a design specification, so that is really great. But given this course has been short on the “why” of things (each lesson is “do X and Y” instead of the much more useful “do X and Y to accomplish Z”), it was difficult to figure out which of the tools we just learned were applicable. I faced a wide knowledge gap between course material and building “Tea Cozy”. Still, I enjoyed the challenge of this exercise. And for better or worse, I did come out of “Tea Cozy” with a better understanding of flexbox.


Outside of Codecademy, I was told about Flexbox Froggy, an interactive game for teaching flexbox. It was fun and the frogs were cute. I doubt anyone could learn the full power of flexbox just from playing this game, but seeing how some of the concepts were applied was a good review after my course.


The next lesson introduces CSS grid, for layouts where regular spacing makes more sense than the dynamic rules of flexbox. Laying out elements in a grid was very familiar to me; it reminded me of the early days of HTML when I would hack a layout using <TABLE>. The bad news for me is that shorthand-itis is even worse for grid. We started with grid-template-rows and grid-template-columns, then we learn the two can be shortened to grid-template. Similarly, row-gap and column-gap become gap.
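
A small sketch of my own showing the longhand and shorthand side by side:

.todo-grid {
  display: grid;
  grid-template-rows: 50px 1fr;
  grid-template-columns: 200px 1fr;
  row-gap: 10px;
  column-gap: 20px;
}

.todo-grid-short {
  display: grid;
  /* grid-template: <rows> / <columns> */
  grid-template: 50px 1fr / 200px 1fr;
  /* gap: <row-gap> <column-gap> */
  gap: 10px 20px;
}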

Aside: I was annoyed that we don’t have consistency on whether column/row is used as a prefix or a suffix in these property names.

Back to the shorthand infestation: not only do we have shorthand, now we have multiple levels of shorthand. We start with grid-row-start and grid-row-end, which are shortened into grid-row. But that’s not all! There are the expected counterparts grid-column-start and grid-column-end, shortened to grid-column. Then grid-row and grid-column, already shorthand themselves, are shortened into grid-area. Gah!
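
Writing out the levels of shorthand for my own sanity (a sketch, not from the course):

.item {
  /* Level 0: the longhand */
  grid-row-start: 1;
  grid-row-end: 3;
  grid-column-start: 2;
  grid-column-end: 4;
}

.item-shorter {
  /* Level 1: grid-row and grid-column, each written as start / end */
  grid-row: 1 / 3;
  grid-column: 2 / 4;
}

.item-shortest {
  /* Level 2: grid-area is row-start / column-start / row-end / column-end */
  grid-area: 1 / 2 / 3 / 4;
}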

But CSS grid has more to offer over <TABLE> than just annoying shorthand; it also has the ability to control justification and alignment of elements within the grid. But the names of these control knobs are pretty unintuitive and confusing. While I can see how justify-items controls justification of items inside a grid, I don’t understand why justify-content moves the entire grid. Why is the entire grid considered ‘content’? Adding to this confusion is justify-self, which is how an item inside the grid can override justification settings from the container grid. I look at all of this and think “here be dragons.”
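
To keep the three straight, a sketch of my own:

.board {
  display: grid;
  justify-items: start;    /* how each item sits inside its own grid cell */
  justify-content: center; /* how the whole grid of tracks sits inside the container's extra space */
}

.board .special {
  justify-self: end;       /* one item overriding the container's justify-items setting */
}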

I had hoped the project associated with grid would be as hands-on as “Tea Cozy” but it was “merely” laying out the grid in a to-do list, far more straightforward than “Tea Cozy”. Ah well.


Outside of Codecademy: the people who made Flexbox Froggy also made Grid Garden.


After this instructive though occasionally challenging course, CSS Transitions and Animations felt a lot easier.

Monoprice Mini Deluxe SLA Arrives

When I was given a FormLabs Form 1+ resin 3D printer, I had dreams of high detail resin printing. Alas, I eventually found that printer to be broken beyond the point of practical repair. It was free and worth every penny as I enjoyed taking it apart. But it put the thought of resin printing in my mind. So, when Monoprice cleared out one of their resin printers (item number 30994) I grabbed one. I could never resist a clearance on something I wished for, like what I did earlier for a graphic drawing display.

As far as support is concerned, I understand this device to be a rebranded Wanhao Duplicator 7 Plus. There are a few changes, such as the lack of orange acrylic windows into the print volume. For the price, I can probably deal with it.

The product was shipped directly in its brown product box. At least it didn’t have fancy graphics, which reduced its attraction to thieves.

At the top of the box is the manual (good), FEP film (eh?) and a “please don’t make us restock” flyer.

Under them is a handle. Should I lift it? Looks very tempting.

I decided not to lift the handle and pulled out the top foam block instead.

As it turned out, lifting the handle probably would have been fine. Because now the handle is all I have to pull on.

It appears to be the print volume lid, with a red strip around its base. Later I determined the red strip was not packing material; it is intended to seal against light.

Once that lid was lifted, I had no obvious handles to pull on.

I carefully set the box on its side and slid out whatever was in the bag.

The bag contained everything else, as there was only a block of packing foam left in the box once the bag was removed.

Everything else is packed within what will eventually be the print volume.

They were held by these very beefy zip ties.

At the top of the stack is the build plate. Looks like there is a bit of manufacturing residue that I have to clean off before I use it.

A plastic bag held a few large accessories.

A cardboard box held some of the remaining accessories.

Contents of the box: I see a small bottle of starter resin, a container (for leftover resin?), a few gloves, fasteners, and a USB flash drive presumably holding software.

Under that cardboard box is the resin build tank.

A sheet of FEP film was already installed in the frame. The bag at the top of the box is apparently an extra sheet.

Initially I was not sure if the tape around the screen was intended to be peeled off. I left it on, and that was a good thing. Later research found that it holds the screen in place and is a factory-applied version of what some early Duplicator 7 owners did manually to resolve a design flaw.

The manual hints that the box used to include additional accessories. Playing with image contrast, I can read them to be “HDMI Cable” (why would there be one?), “USB Cable” (I have plenty), and “Print removal tool” (but there is a spatula?).

The icon for the slicing operation is a slice of cake, looking almost exactly like the cake icon in Portal. I hope this is not a lie.

Here’s everything in the box laid out on the floor.

Here’s a closeup of the base of the Z-axis. It uses an ACME leadscrew and optical endstop as the Form 1+ did, but here we see a shaft coupler between the motor and leadscrew. The Form 1+ had a motor whose output shaft was the leadscrew, eliminating issues introduced by a shaft coupler at higher manufacturing cost. This printer also uses precision ground shafts for guidance instead of the linear bearing carriage used by the Form 1+, another tradeoff of precision versus cost.

Plugged it in and turned it on: It lives!

Here is the machine status screen. Looks like the printer itself is up and running, but this is just the beginning. A resin printer needs more supporting equipment before I can start printing (good) parts.

Angular on Windows Phone 8.1

One of the reasons learning CSS was on my to-do list was that I didn’t know enough to bring an earlier investigation to a conclusion. Two years ago, I ran through tutorials for the Angular web application framework. The experience taught me I needed to learn more about JavaScript before using Angular, which uses TypeScript, a derivative(?) of JavaScript. I also needed to learn more about CSS in order to productively use the Material style component libraries I had wanted.

One side experiment of my Angular adventure was to test the backwards compatibility claims for the framework. By default, Angular does not build support for older browsers, but it can be configured to do so. Looking through the referenced Browserslist project, I see an option of “ie_mob” for Microsoft Internet Explorer on “Other Mobile” devices, a.k.a. the stock web browser for Windows Phone.

I added ie_mob 11 to the list of browser targets in an Angular project. This backwards compatibility mode is not handled by the Angular development server (ng serve) so I had to run a full build (ng build) and spin up an nginx container to serve the entire /dist project subdirectory.
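
For reference, the browser targets live in the project’s Browserslist configuration (a .browserslistrc file or a “browserslist” file/section, depending on Angular version). The line I added was along these lines; this is a hedged sketch from memory, not the exact file, and the other entries are placeholders:

# hypothetical Browserslist configuration adding IE Mobile 11 for Windows Phone 8.1
last 2 Chrome versions
last 2 Firefox versions
ie_mob 11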

Well now, it appeared to work! Or at least, more of this test app showed up on screen than if I hadn’t listed ie_mob on the list of browser targets.

However, scrolling down unveiled some problems: elements below the “Next Steps” section did not get rendered. Examining the generated HTML, it didn’t look very different from the rest of the page. However, these elements did use CSS rules not used by the rest of the page.

Hypothesis: the HTML is fine and the TypeScript has been transpiled to a Windows Phone-friendly dialect, but the page used CSS rules that were not supported by Windows Phone. Lacking CSS knowledge, that’s where my investigation had to stop. Microsoft has long since removed debugging tools for Windows Phone, so I couldn’t diagnose it further except by code review or trial and error.

Another interesting observation on this backwards-compatible build is vendor-es5.js. This polyfill performing JavaScript compatibility magic is over 2.5 MB all by itself (2,679,414 bytes) and it has to sit parallel with the newer and slightly smaller vendor-es2015.js (2,202,719 bytes). While a few megabytes are fairly trivial for modern computers, this combination of the two would not fit in the 4MB flash on an ESP32.


Initial Chunk Files | Names                |      Size
vendor-es5.js       | vendor               |   2.56 MB
vendor-es2015.js    | vendor               |   2.10 MB
polyfills-es5.js    | polyfills-es5        | 632.14 kB
polyfills-es2015.js | polyfills            | 128.75 kB
main-es5.js         | main                 |  57.17 kB
main-es2015.js      | main                 |  53.70 kB
runtime-es2015.js   | runtime              |   6.16 kB
runtime-es5.js      | runtime              |   6.16 kB
styles.css          | styles               | 116 bytes

                    | Initial ES5 Total    |   3.23 MB
                    | Initial ES2015 Total |   2.28 MB

For such constrained scenarios, we would have to run the production build. After doing so (ng build --prod), we see much smaller file sizes:

node ➜ /workspaces/pie11/pie11test (master ✗) $ ng build --prod
✔ Browser application bundle generation complete.
✔ ES5 bundle generation complete.
✔ Copying assets complete.
✔ Index html generation complete.

Initial Chunk Files                      | Names                |      Size
main-es5.19cb3571e14c54f33bbf.js         | main                 | 152.89 kB
main-es2015.19cb3571e14c54f33bbf.js      | main                 | 134.28 kB
polyfills-es5.ea28eaaa5a4162f498ba.js    | polyfills-es5        | 131.97 kB
polyfills-es2015.1ca0a42e128600892efa.js | polyfills            |  36.11 kB
runtime-es2015.a4dadbc03350107420a4.js   | runtime              |   1.45 kB
runtime-es5.a4dadbc03350107420a4.js      | runtime              |   1.45 kB
styles.163db93c04f59a1ed41f.css          | styles               |   0 bytes

                                         | Initial ES5 Total    | 286.31 kB
                                         | Initial ES2015 Total | 171.84 kB

Notes on Codecademy “Learn CSS”

After going through Codecademy courses on HTML and JavaScript, the next obvious step for me was to visit the third pillar of the modern web: CSS. There’s a bit of weirdness with the class listing, though. It seems Codecademy is reorganizing offerings in this area. I took the “Learn CSS” class listed at the top of their HTML/CSS section and afterwards, I discovered it was actually a repackaging of several smaller CSS courses.

Either that, or the reverse: the Learn CSS course is being split up into multiple smaller courses. I’m not sure which direction it is going, but I expect this is a temporary “pardon our dust while we reorganize” situation. The good news is that their backend tracks our progress through this material, so we get credit for completing it no matter which packaging was taken.

Anyway, onward to the material. CSS determines how things look and lets us make our pages look better even if we don’t have professional design skills. My first nitpick had nothing to do with appearance at all: in the very first introductory example, they presented a chunk of HTML with DIV tags for header, content, and footer. My first thought: Hey, they didn’t use semantic HTML and they should!

Most of the introductory material was vaguely familiar from my earlier attempts to learn CSS. I hadn’t known (or had forgotten) that attribute selectors had such a wide range of capabilities, almost verging on regular expressions. The fancier pattern-matching ones seem like they could become a runtime performance concern. Selector chaining (h1.destination covers all H1 tags with class=”destination”) was new to me, as was the descendant combinator (.description h5 covers all H5 tags under something (not the H5 itself) with class=”description”). I foresee needing lots of practice to keep them straight alongside multiple selectors (.description, h5 covers H5 tags and tags with class=”description” equally, without requiring any relation between them).
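
Spelling the three out for myself (my own sketch, not the course’s example):

/* Chaining: H1 elements that also have class="destination" */
h1.destination { color: teal; }

/* Descendant combinator: H5 elements anywhere inside an element with class="description" */
.description h5 { color: teal; }

/* Multiple selectors: H5 elements, and elements with class="description", unrelated to each other */
.description, h5 { color: teal; }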

In the realm of box model and layout, I have never found the default HTML content box model intuitive. I always thought of the border and padding as part of the element, so I expect I’ll have better luck using box-sizing: border-box; for my own layouts. What I don’t expect to have as much luck with is positioning. I’ve played with this before, and relative/absolute/fixed/sticky always felt like voodoo magic I didn’t fully understand. It’ll take a lot more practice. And a lot more examining of behavior in browser developer tools. And if that fails, there’s always the fallback layout debug tool of putting a border around everything:

* { border: 1px solid red !important; }

And finally, I need more practice to get an intuitive grasp of how to specify values in CSS. Most of this course used pixels, but it would be nice to design style sheets that dynamically adjust to content using other units like em or rem. (Unrelated: this URL is in the MDN “Guides” section, which has other helpful-looking resources.) I also want to make layouts dynamic to the viewport. We have a few tools to obtain such values: vw is viewport width, vh is viewport height, vmin is the smaller of the two, vmax is the larger of the two. On paper vmin+vmax can tell me if the screen is portrait or landscape, but I haven’t figured out the logic for a portrait layout vs. a landscape layout. Perhaps that is yet to come in later Codecademy CSS courses.

Digital Ink and the Far Side Afterlife

A few weeks ago I picked up a graphical drawing display to play with. I am confident in my skills with software and knowledge of electronics, but I was also fully aware none of that would help me actually draw. That will take dedication and practice, which I am still working on. Very different from myself are those who come at this from the other side: they have the artistic skills, but maybe not in the context of digital art. Earlier I mentioned The Line King documentary (*) showed Al Hirschfeld playing with a digital tablet, climbing the rather steep learning curve to transfer his decades of art skills to digital tools. I just learned of another example: Gary Larson.

Like Al Hirschfeld, Gary Larson is an artist I admired but in an entirely different context. Larson is the author of The Far Side, a comic that was published in newspapers via syndication. If you don’t already know The Far Side it can be hard to explain, but words like strange, weird, bizarre, and surreal would be appropriate. I’ve bought several Far Side compilations, my favorite being The PreHistory of The Far Side (*), which included behind-the-scenes stories from Gary Larson to go with selected work.

With that background, I was obviously delighted to find that the official Far Side website has a “New Stuff” section, headlined by a story from Larson about new digital tools. After retirement, Larson would still drag out his old tools every year to draw a Christmas card, a routine that had apparently become an ordeal of dealing with dried ink in infrequently used pens. One year, instead of struggling with cleaning a clogged pen, Larson bought a digital drawing tablet and rediscovered the joy of artistic creation. I loved hearing that story, and even though only a few comics have been published under that “New Stuff” section, I’m very happy that an artist I admired has found joy in art again.

As for myself, I’m having fun with my graphical drawing display. The novelty has not yet worn off, but neither have I produced any masterpieces. The future of my path is still unknown.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Notes on Codecademy “Learn Intermediate JavaScript”

After going through Codecademy’s “Learn JavaScript” course, the obvious follow-up was their “Learn Intermediate JavaScript” course, so I did, and I liked it. It had a lot of very useful information for actually using JavaScript for projects. The first course covered the fundamentals of JavaScript, but such knowledge sort of floated in an abstract space. It wasn’t really applicable until this intermediate course covered the two most common JavaScript runtime environments: the browser on the client end, and Node.js on the server end. Each, of course, adds its own ways of doing things and its own problems.

Before we got into that, though, we expanded the first class’s information on objects in general to classes in particular. Now I’m getting into an object-oriented world that’s more familiar to my brain. This was helped by the absence of multiple different shorthands for doing the same thing. I don’t know if the course just didn’t cover them (possible) or the language has matured enough that we no longer have people dogpiling ridiculous ideas just to save a few characters (I hope so).

Not that we’re completely freed from inscrutable shorthand, though. After the excitement of seeing how JavaScript can be deployed in actual settings, I got angry at the language again when I learned of ES6 “Default Exports and Imports”.

// This will work...
import resources from 'module.js'
const { valueA, valueB } = resources;
 
// This will not work...
import { valueA, valueB } from 'module.js'

This is so stupid. It makes sense in hindsight after they explained the shorthand and why it breaks down, but just looking at this example makes me grumpy. JavaScript modules are so messed up that this course didn’t try to cover everything, just pointing us to Mozilla.org documentation to sort it out on our own.

After modules, we covered asynchronous programming, another very valuable and useful aspect of actually using JavaScript on the web. We started with JavaScript Promises, then async/await, an ES8 syntax for writing more readable code that still uses Promises under the hood. My criticism here is JavaScript’s lack of strong typing, making it easy to make mistakes that wouldn’t fall apart until runtime. This is so bad we even have an “Avoiding Common Mistakes” section in this course, which seems like it would be a good idea in every lesson but apparently was only important enough here.
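
As a quick illustration of the syntax (my own sketch, not the course’s Film Finder code; the URL is made up):

// async/await is syntax over Promises: await pauses until the Promise settles.
async function getJson(url) {
  const response = await fetch(url); // fetch() returns a Promise<Response>
  if (!response.ok) {
    throw new Error(`HTTP ${response.status}`);
  }
  return response.json();            // also returns a Promise
}

// Callers still deal with a Promise, so errors surface in catch().
getJson('https://example.com/api/films') // hypothetical endpoint
  .then((films) => console.log(films))
  .catch((err) => console.error(err));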

Once async/await had been covered, we finally had enough background to build browser apps that interact with web APIs using browser’s fetch() API. The example project “Film Finder” felt a lot more relevant and realistic than every other Codecademy class project I’ve seen to date. It also introduces me to The Movie Database project, which at first glance looks like a great alternative to IMDB which has become overly commercialized.

After the Film Finder project, this course goes into errors and error handling mechanisms, along with debugging JavaScript. I can see why it’s placed here: none of this would make sense unless we knew something about JavaScript code, but a lot of these lessons would have been very helpful for people struggling with earlier exercises. I’m sad to think there might exist people who would benefit from this information but never got this far, because they were stuck in an earlier section and needed help debugging.

The best part of this final section is a walkthrough of browser developer tools to profile memory and CPU usage. There are a lot of knobs and levers in these tools that would easily overwhelm a beginner. It is very useful to have a walkthrough focused on just a few very common problems and how to find them; once we know a few places to start, we can explore the rest of the developer tools from there. This was fantastic. My only regret is that it only applies to browser-side JavaScript; we’d have to learn an entirely different set of tools for server-side Node.js code.

But that’s enough JavaScript fun for now, onward to the third pillar of web development: CSS.

Notes on Codecademy “Introduction to Javascript”

After reviewing HTML on Codecademy, I proceeded to review JavaScript with their Introduction to Javascript course (also titled Learn Javascript in some places; I’m using their URL as the definitive name). I personally never cared for JavaScript, but it is indisputably ubiquitous in the modern world, and I must have a certain level of competency in order to execute many project ideas.

The history of JavaScript is far messier than that of other programming languages. It evolved organically, addressing the immediate needs of the next new browser version from whoever believed they had some innovation to offer. It was the wild west until all major players agreed to standardize JavaScript in the form of ECMAScript 6 (ES6). While the language has continued to evolve, ES6 is the starting point for this course.

A standard is nice, but not as nice as it might look at first glance. In the interest of acceptance, it was not practical for ES6 to break from all the history that preceded it. This, I felt, was the foundation of all of my dissatisfaction with JavaScript. Because it had to maintain compatibility, it had to accept all the different ways somebody thought to do something. I’m sure they thought they were being clever, but I see it as unnecessary confusion. Several instances came up in this course:

  • Somebody thought it was a good idea for comparisons to perform automatic type conversion before comparing. It probably solved somebody’s immediate problem, but the long-term effect is that the “==” operator became unpredictable. The situation is so broken that this course teaches beginners to use the “===” operator and never mentions “==”. (See the short sketch after this list.)
  • The whole concept of “truthy” and “falsy” evaluations makes code hard to understand except for those who have memorized all of the rules involved. I don’t object to a beginner course covering such material as “this is a bad idea, but it’s out there so you need to know.” However, this course makes it sound like a good thing (“Truthy and falsy evaluations open a world of short-hand possibilities!”) and I strongly object to that approach. Don’t teach beginners to write hard-to-understand code!
  • JavaScript didn’t start with functions, but the concept was so useful that different people each hacked their own ways to declare functions. Which means we now have the function declaration (function say(text) {}), the function expression (const say = function(text) {}), the arrow function (const say = (text) => {}), and the concise body arrow function (const say = text => {}). I consider the latter inscrutable, sacrificing readability for the sake of saving a few characters. A curse inflicted upon everyone who had to learn JavaScript since. (An anger made worse when I learned arrow functions implicitly bind to the global context. Gah!)
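
A quick illustration of the first two complaints (my own sketch, not from the course):

// Automatic type conversion makes == unpredictable; === compares without conversion.
0 == '0'   // true: the string is converted to a number first
0 === '0'  // false: different types, no conversion

// Truthy/falsy shorthand relies on memorized rules: 0, '', null, undefined, and NaN are falsy.
const name = '';
if (name) {
  // skipped, because the empty string is falsy
}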

These were just the three that I thought worth ranting about. Did I mention I didn’t care for JavaScript? But it isn’t all bad. JavaScript did give the web a very useful tool in the form of JavaScript Object Notation (JSON), which became a de facto standard for transmitting structured data in a much less verbose way than XML, which originally promised to do exactly that.

JSON has the advantage that it was the native way for JavaScript to represent objects, so it was easy to go from working with a JavaScript object to transmitting it over the network and back to a working object. In fact, I originally thought JSON was the data transmission serialization format for JavaScript. It took a while for me to understand that no, JSON is not a separate serialization format, JSON is THE format. JSON looks like a data structure for key-value pairs because JavaScript objects are a structure for key-value pairs.

Once “every object is a collection of key-value pairs” got through my head, many other JavaScript traits made sense. It was wild to me that I could attach arbitrary properties to a JavaScript function, but once I understood functions to be objects in their own right + objects are key-value pairs = it made sense that I could add a property (key) and its value to a function. Weird, but it made sense in its own way. Object properties reduce to a list of key-value pairs, and object methods are the special case where the value is a function object. Deleting an entry from the list of key-value pairs deletes a property or method, and accessing them via brackets seems no weirder than accessing a hash table entry. It also made sense why we don’t strictly need a class constructor for objects: any code (a “factory function”) that returns a chunk of JSON has constructed an object. That’s not too weird, but the property value shorthand made me grumpy at JavaScript again. As did destructuring assignment: at first glance I hated it; after reading some examples of how it can be useful, I merely dislike it.
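
A sketch of my own tying these together (not from the course; the names are made up):

// A "factory function" constructs an object without a class constructor.
function makeRover(name, wheels) {
  return { name, wheels };        // property value shorthand for name: name, wheels: wheels
}

const rover = makeRover('Sawppy', 6);
rover.greet = () => console.log(`Hello from ${rover.name}`); // attaching a new property to the object
console.log(rover['wheels']);     // bracket access, like a hash table lookup
delete rover.greet;               // removing a key removes the method

const { name, wheels } = rover;   // destructuring assignment pulls keys into variables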

In an attempt to end this on a positive note, I’ll say I look forward to exploring some of the built-in utilities for JavaScript classes. This Codecademy course introduced us to:

  • Array methods .push() .pop() .shift() .unshift() .slice() .splice() etc.
  • Iterators are not just limited to .forEach(). We have .map() .filter() .reduce() .every() and more.
  • And of course, utilities on objects themselves. Unlike other languages that need an entire feature area for runtime type information, JavaScript’s approach means listing everything on an object is as easy as Object.keys().

Following this introduction, we can proceed to Intermediate JavaScript.

Notes on Codecademy “Learn HTML”

Almost seven years ago, I went through several Codecademy courses on web-related topics, including their Learn HTML & CSS course, long since retired. The knowledge I learned back then was enough for me to build rudimentary web UI for projects including Sawppy Rover, but I was always aware they were very crude and basic, and my skill level was not enough to pull off many other project ideas I’ve had since. Web development being what it is, seven years is long enough for several generations of technologies to rise in prominence then fade into obscurity. Now I want to take another pass, reviewing what I still remember and learning something new. And the most obvious place to start is their current Learn HTML course.

As the name makes clear, this course focuses on HTML. Coverage of CSS has been split off into its own separate course, which I plan to take later, but first things first. I’m glad to see that the basics of HTML haven’t changed very much. Basic HTML elements and how to structure them are still fundamental. The course then moves on to tables, which I had learned for their original purpose and also as a way to hack page layout in HTML. Thankfully, there are now better ways to perform page layout with CSS, so <table> can revert to its intended purpose of showing tabulated data. Forms are another beneficiary of such evolution. I had learned them for their original purpose and also as a way to hack client/server communication. (Sawppy rover web UI is actually a form that repeatedly and rapidly submits information to the server.) And again, technologies like web sockets now exist for client/server communication, so <form> can go back to just being a form for user-entered data.

The final section, “Semantic HTML”, had no counterpart in the old course that I could remember. HTML tags like <article> and <figure> are new to me. They add semantic meaning to content on the page, which is helpful for machine parsing of data and especially useful for web accessibility. This course covers a few elements; the full list can be found at other resources like W3Schools. I’m not sure my own projects would benefit much from semantic HTML, but it’s something I want to make a natural habit. Learning about semantic HTML was a fun new addition to my review of HTML basics. I had originally planned to proceed to a review of CSS, but I put that on hold in favor of reviewing JavaScript.
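
For my own notes, a minimal sketch of the two tags that were new to me (the file name and text are made up):

<article>
  <h2>Rover Update</h2>
  <figure>
    <img src="sawppy.jpg" alt="Sawppy rover on a rocky trail">
    <figcaption>Sawppy out for a test drive.</figcaption>
  </figure>
  <p>Status report goes here.</p>
</article>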

Hobbyist Level CNC Tool Change Support (M6)

In our experiments so far, the project CNC machine used Bart Dring’s ESP32 port of Grbl to translate G-code into stepper motor step+direction control pulses. It offers a lot of neat upgrades over standard Grbl running on an Arduino, and both are fantastically affordable ways to get into CNC. The main issue with Grbl running on microcontrollers is that they are always limited by the number of input/output pins available. Some of Bart Dring’s ESP32 enhancements were only possible because the ESP32 has more pins than an ATmega328.

But like all tinkerers, we crave more. Grbl (and its derivatives) understandably lack support for features that are absent from the majority of hobbyist-grade CNC machines. The wish list items in the local maker group mostly center around the capability to use multiple tools in a single program.

Tool change is the most obvious one. Grbl recognizes just enough to support a manual tool change operation: stop the spindle, move to a preset tool change position, and wait before proceeding. Automated tool changing is out of scope.

Which explains the next gap in functionality: tool length offset. Not all tools are of the same length, and the controller needs to know each tool’s length to interpret G-code correctly. Grbl doesn’t seem to have a tool length table to track this information. It is a critically important feature for making automated tool change useful, but the lack of the latter means the lack of the former is not surprising.

And following the cascade of features, we’d also love to have cutter radius compensation for individual tools. Typically used in industrial machinery to gradually adjust for tool wear, it usually doesn’t matter at the tolerances involved in hobbyist machines. But it is useful and nice to have if multiple tools come into the picture, each with their own individual idiosyncrasies.

These capabilities used to be in the domain of industrial controllers well beyond a hobbyist budget, but that is changing. People are experimenting with hardware builds to implement their own automatic tool changing solutions. And on the software side, Grbl derivatives like GrblHAL have added support for the M6 (automatic tool change) code, allowing multiple tools in a single CNC program. Is it a practical short-term goal for my project? Heck no! I can’t even cut anything reliably yet. But it’s nice to know the ecosystem is coming together to make hobbyist-level tool-changing CNC practical.
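
For context, here is a hedged sketch of what the relevant codes look like in a multi-tool program on a controller that supports them (such as GrblHAL); details vary by controller, and this is not something my machine can run today:

T1 M6   ; select tool 1 and perform the tool change
G43 H1  ; apply the tool length offset stored for tool 1
; ... cutting moves with tool 1 ...
T2 M6   ; change to tool 2
G43 H2  ; apply tool 2's length offset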

MageGee Wireless Keyboard (TS92)

In the interest of improving ergonomics, I’ve been experimenting with different keyboard placements. I have some ideas about attaching a keyboard to my chair instead of my desk, and a wireless keyboard would eliminate concerns about routing wires, especially wires that could get pinched or rolled over when I move my chair. Since this is just a starting point for experimentation, I wanted something I could feel free to modify as ideas strike. I looked for the cheapest and smallest wireless keyboard and found the MageGee TS92 Wireless Keyboard (Pink). (*)

This is a “60% keyboard”, a phrase I’ve seen used two different ways. The first refers to the physical size of individual keys, as if they were smaller than those on a standard keyboard. The second refers to the overall keyboard having fewer keys than a standard keyboard, while individual keys remain the same size as those on a standard keyboard. This is the latter: eliminating the numeric keypad, arrow keys, etc. means this keyboard only has 61 keys, roughly 60% of a standard keyboard’s typical 101 keys. But each key is still the normal size.

The lettering on these keys is… sufficient. Edges are blurry and not very crisp, and consistency varies. But the labels are readable, so it’s fine. The length of travel on these keys is pretty good, much longer than a typical laptop keyboard, but the tactile feedback is poor. Consistent with cheap membrane keyboards, which of course this is.

Back side of the keyboard shows a nice touch: a slot to store the wireless USB dongle so it doesn’t get lost. There is also an on/off switch and, next to it, a USB Type-C port (not visible in picture, facing away from camera) for charging the onboard battery.

Looks pretty simple and straightforward, let’s open it up to see what’s inside.

I peeled off everything held with adhesives expecting some fasteners to be hidden underneath. I was surprised to find nothing. Is this thing glued together? Or held with clips?

I found my answer when I discovered that this thing had RGB LEDs. I did not intend to buy a light-up keyboard, but I have one now. The illumination showed screws hiding under keys.

There are six Phillips-head self-tapping plastic screws hidden under keys distributed around the keyboard.

Once they were removed, the key assembly easily lifted away to expose the membrane underneath.

Underneath the membrane is the light-up subassembly: a row of LEDs across the top that shine onto a clear plastic sheet, which diffuses and directs their light.

I count five LEDs, and the bumps molded into clear plastic sheet worked well to direct light where the keys are.

I had expected to see a single data wire consistent with NeoPixel a.k.a. WS2812 style individually addressable RGB LEDs. But the SCL and SDA labels imply this LED strip is controlled via I2C. If it were a larger array I would be interested in digging deeper with a logic analyzer, but a strip of just five LEDs isn’t interesting enough to me, so I moved on.

Underneath the LED we see the battery, connected to a power control board (which has both the on/off switch and the Type-C charging port) feeding power to the mainboard.

Single cell lithium-polymer battery with claimed 2000mAh capacity.

The power control board is fascinating, because somebody managed to lay everything out on a single layer. Of course, they’re helped by the fact that this particular Type-C connector doesn’t break out all of the pins. Probably just a simple voltage divider requesting 5V, or maybe not even that! I hope that little chip at U1 labeled B5TE (or 85TE) is a real lithium-ion battery management system (BMS), because I don’t see any other candidates and I don’t want a fiery battery.

The main board has fewer components but more traces, most of which lead to the keyboard membrane. There appear to be two chips under blobs of epoxy, and a PCB antenna similar to others I’ve seen designed to work at 2.4GHz.

With easy disassembly and modular construction, I think it’ll be easy to modify this keyboard if ideas should strike. Or if I decide I don’t need a keyboard after all, that power subsystem would be easy (and useful!) for other projects.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Google AIY Vision Bonnet & Accessories

The key component of a Google AIY Vision kit is the “Vision Bonnet”, a small circuit board that sits atop the Raspberry Pi Zero WH bundled in the kit. In addition to all the data interfaces available via the standard Raspberry Pi GPIO pins, this peripheral also gets “first dibs” on raw camera data. The camera itself is a standard Raspberry Pi Camera V2.1, but instead of connecting directly to the Raspberry Pi Zero, the camera cable connects to the Vision Bonnet. A second cable then connects the Vision Bonnet to the Raspberry Pi camera connector, so the bonnet can forward camera data to the Pi after it is done processing. This architecture ensures the Vision Bonnet will never be constrained by data interface limitations onboard the Pi. It can get the raw camera feed and do its magic before camera data even gets into the Pi.

The vision coprocessor on this Vision Bonnet circuit board is a Movidius Myriad MA2450, launched in 2016 and discontinued in 2020. Based on its application here, I infer the chip can accelerate inference operations for vision-based convolutional neural networks that fit within constraints outlined in the AIY Vision documentation. I don’t know enough about the field of machine vision to judge whether these constraints are typical or if they pose an unreasonable burden. What I do know is that, now that everything has been discontinued, I probably shouldn’t spend much more time studying this hardware.

My interest in commercially available vision coprocessors has since shifted to the Luxonis OAK-D and related products. In addition to a camera array (two monochrome cameras for stereoscopic vision and one color camera for object detection) it is built around the Luxonis OAK SoM (System on Module), which in turn is based on the newer Movidius Myriad MA2485 chip. Luxonis has also provided far more software support and product documentation for their OAK modules than Google ever did for their AIY Vision Bonnet.
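As a taste of that software support, here is a rough sketch of the general shape of their depthai Python API, assuming the depthai package is installed and an OAK device is attached over USB. I have not run this against the hardware, so treat it as an illustration rather than a recipe.

```python
# Minimal DepthAI sketch (assumes the depthai package and an OAK device on USB).
import depthai as dai

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)   # onboard color camera
xout = pipeline.create(dai.node.XLinkOut)     # stream frames back to the host
xout.setStreamName("preview")
cam.preview.link(xout.input)

with dai.Device(pipeline) as device:
    frame = device.getOutputQueue("preview").get().getCvFrame()
    print("Received frame with shape:", frame.shape)
```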

I didn’t notice much of interest on the back side of the AIY Vision Bonnet. The most prominent chip is marked U2, an Atmel (now Microchip) SAM-D.

The remainder of the hardware consists of a large clear button with three LEDs (red, green, and blue) embedded within. That button hosts a small circuit board that connects to the Vision Bonnet via a small ribbon cable. It also hosts connectors for the piezo buzzer and the camera activity (“privacy”) LED. The button module appears identical to its counterpart in the AIY Voice kit (right side of picture for comparison) but since the Voice kit lacked a piezo buzzer and LED, it did not need the additional circuit board.

Google AIY Vision Kit

A few years ago, I tried out the Google AIY “Voice” and “Vision” kits. They featured very novel hardware, but that alone was not enough. Speaking as someone not already well-versed in AI software of the time, there was not enough documentation to get people like me on board to do interesting things with that novel hardware. People like me could load the default demo programs and make minor modifications to them, but using that hardware for something new required climbing a steep learning curve.

At one point I mounted the box to my Sawppy rover’s instrument mast, indicating my aspirations to use it for rover vision, but I never got much of anywhere.

The software stack also left something to be desired: it built on top of Raspberry Pi OS but was fragile and easily broken by Raspberry Pi updates. Reviewing my notes, I realized I had published my notes on the AIY Voice kit but the information on the AIY Vision kit was still sitting in my “Drafts” section. Oops! Here it is for posterity before I move on.

Google AIY Vision Kit

The product packaging is wonderful. This was from the era of Google building retail products from easily recycled cardboard. All parts were laid out and neatly labeled in a cardboard box.

Google AIY online instructions

There was no instruction booklet in the box, just a pointer to assembly instructions online. While fairly easy to follow, those instructions were written for people who already know how to handle bare electronic circuit boards: handle them by the edges, avoid touching components (especially electrical contacts), and so on. Complete beginners unaware of those basics might ruin their AIY kit.

Google AIY Vision Kit major components

From a hardware architecture perspective, the key piece is the AIY Vision Bonnet that sits on top of a Raspberry Pi Zero WH. (W = WiFi, H = presoldered header pins.) In addition to connecting to all Pi Zero GPIO pins, it also connects to the Pi camera connector for direct access to the camera feed. (Normal data path: Camera –> Pi Zero. AIY Vision data path: Camera –> Vision Bonnet –> Pi.) In addition to the camera, there is a piezo buzzer for auditory feedback, a standalone green LED to indicate the camera is live (“Privacy LED”), and a big arcade-style button with embedded LEDs.

Once assembled, we could install and run several visual processing models posted online. If we want to go beyond that, there are instructions on how to compile trained TensorFlow models for hardware-accelerated inference on the AIY Vision Bonnet. And if those words don’t mean anything (they didn’t to me when I played with the AIY Vision kit) then we’re up a creek. That was bad back then, and now that a few years have gone by, things have gotten worse.

  1. The official Google AIY system image for Raspberry Pi hasn’t been updated since April 2021. And we can’t just take it and pick up more recent updates, because doing so breaks bonnet functionality.
  2. The vision bonnet model compiler is only tested to work on Ubuntu 14.04, whose maintenance updates ended in 2019.
  3. Example Python code is in Python 2, whose support ended January 1st, 2020.
  4. Example TensorFlow information is for the now-obsolete TensorFlow 1. TensorFlow 2 was a huge breaking change, and it takes a lot of work — not to mention expertise — to migrate from TF1.x to TF2, as the sketch below illustrates.
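To illustrate the scale of that breaking change, here is a sketch of the same trivial computation written both ways. The graph-and-session boilerplate on the TF1 side is the style the AIY examples assume; the TF1 portion is shown as comments so the block runs as-is under TensorFlow 2.

```python
import tensorflow as tf

# TensorFlow 1.x style (what the AIY-era examples assume):
# build a graph of placeholders, then execute it inside a Session.
#   x = tf.placeholder(tf.float32, shape=(None, 3))
#   w = tf.Variable(tf.ones((3, 1)))
#   y = tf.matmul(x, w)
#   with tf.Session() as sess:
#       sess.run(tf.global_variables_initializer())
#       print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))

# TensorFlow 2.x style: eager execution, no placeholders or sessions.
w = tf.Variable(tf.ones((3, 1)))
x = tf.constant([[1.0, 2.0, 3.0]])
y = tf.matmul(x, w)
print(y.numpy())   # [[6.]]
```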

All of these factors together tell me the Google AIY Vision Bonnet has been left to the dusty paths of history. My unit has only ever run the default “Joy Detection” demo, and I expect this AIY Vision Bonnet will never run anything else. Thankfully, the rest of the hardware (Raspberry Pi Zero WH, camera, etc.) should have better prospects of finding another use in the future.

Creality Ender-3 Motion Axis Rollers

After resolving an initial issue with Z-axis movement, my Creality Ender-3 V2 enters my 3D printing pool. I mainly got it because it was becoming a hassle to switch my MatterHackers Pulse XE between PLA and PETG printing. My intent is to leave the Ender-3 V2 loaded with PLA and the Pulse XE loaded with PETG. I’m sure that’ll only last for as long as it takes for one of these printers to develop a problem, but I’ll figure out how to cross that bridge when I come to it.

The best news is that the extra cost of the V2 was worthwhile: this printer operates without the whiny buzz of older stepper motor drivers. It’s not completely silent, though. Several cooling fans on the machine have a constant whir, and there are still some noises as a function of movement. Part of that is the belts running against motor pulleys, and part is the roller wheels against aluminum.

These rollers are the biggest mechanical design difference between Creality’s Ender line and all of my prior 3D printers. Every previous printer constrained movement on each axis via cylindrical bearings traversing over precision-ground metal rods. One set for X-axis, one set for Y-axis, and one set for Z-axis. To do the same job, the Ender design replaces them with hard plastic rollers traversing over aluminum extrusion beams.

The first and most obvious advantage to this design is cost. Precision-ground metal rods are more expensive to produce than aluminum extrusions, and we need them in pairs (or more) to constrain motion along an axis. In contrast, Ender’s design manages to constrain motion by using rollers on multiple surfaces of a single beam. In addition to lower parts cost, I suspect the assembly cost is also lower: getting multiple linear bearings properly lined up seems more finicky than bolting on several hard plastic rollers.

Rollers should also be easier to maintain, as they roll on ball bearings that keep their lubrication sealed within, unlike the metal guide rods that require occasional cleaning and oiling. The cleaning is required because those rods are exposed and thus collect dust, which then sticks because of the oil, and the resulting goop is shoved to either end of the range of travel. Fresh oil then needs to be applied to keep up with this migration.

But using rollers also means accepting some downsides. Such a design is theoretically less accurate, as hard plastic rollers on aluminum allow more flex than linear bearings on precision rods. Would lower theoretical accuracy actually translate to less accurate prints? Or would that flex be completely negligible for the purpose of 3D printing? That is yet to be determined.

And finally, I worry about wear and tear on these roller wheels. Well-lubricated metal on metal has very minimal wear, but the hard plastic on aluminum started grinding out visible particles within days of use. I expect the reduced theoretical accuracy is minimal while the printer is new but will become more impactful as the wheels wear down. Would it affect the proper fit of my 3D printed parts? That is also yet to be determined. But to end on a good note: even if worn wheels cause problems, they should be pretty easy to replace.

Creality Ender-3 V2 Z-Axis Alignment

The first test print on my assembled Creality Ender-3 V2 showed some artifacts. General symptoms are that some layers look squished and overall height is lower than it should be.

These two parts were printed from the same 3D file on two separate print jobs. The piece on the right was printed before my modification, showing many problematic layers, and its overall height is lower than it should be. Hypothesis: my Z-axis is binding, occasionally failing to move the specified layer height. Verification: with the motor powered off, it is very hard to turn the Z-axis leadscrew by hand.

This put the Z-axis motor mount under scrutiny.

Looking closer, I saw it was not sitting square. There is an uneven gap forced by the motor, which is slightly fatter around its black midsection than at its silvery ends. This means when the motor mounting block is tightened against the vertical Z-axis extrusion beam, the motor rotates and its output shaft tilts off vertical.

A tiny gap would not have caused a problem, because the shaft coupler could translate motion through a small twist. However, this gap is larger than the shaft coupler can compensate for, causing it to bind up. I saw two ways to reduce the gap: (1) grind the side of the Z-axis stepper motor to eliminate the bulge, or (2) insert a spacing shim into the motor mount. I don’t have a precision grinder to do #1, but I do have other 3D printers to do #2, so I printed some shims of different thicknesses on my MatterHackers Pulse XE.
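For anyone attempting something similar, a parametric CAD script makes it easy to generate a whole set of candidate thicknesses in one go. Here is a rough sketch using the CadQuery Python library; the 42 mm x 10 mm footprint is a made-up placeholder, not the actual dimensions of the Ender-3 V2 motor mount gap.

```python
# Hypothetical shim generator using CadQuery; footprint dimensions are placeholders.
import cadquery as cq

for thickness in (0.2, 0.4, 0.6, 0.8):           # candidate shim thicknesses in mm
    shim = cq.Workplane("XY").box(42, 10, thickness)
    cq.exporters.export(shim, f"shim_{thickness}mm.stl")
```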

I tried each of those to find the one that allowed the smoothest Z-axis leadscrew turning by hand.

After this modification, the print visual quality and Z-axis dimensional accuracy improved tremendously. The piece on the left was printed after this modification.

So that’s my first problem solved, and I think this printer will work well for the immediate future. However, looking at how it was built, I do have some concerns about long-term accuracy.

Creality Ender-3 V2 Assembly

From what I can tell, Creality’s Ender-3 is now the go-to beginner 3D printer. It works sufficiently well in stock form out of the box, and if anyone wants to go beyond stock, the Ender-3 has a large ecosystem of accessories and enhancements. These are exactly the same reasons I bought the Monoprice Select V2 several years ago. So when Creality held one of their frequent sales, I picked up an Ender-3 V2. The key feature I wanted in the V2 is Creality’s new controller board, which uses silent stepper motor drivers, a nice luxury that previously required a printer mainboard replacement.

When the box arrived, I opened it to find all components came snugly packaged in blocks of foam.

The manual I shall classify as “sufficient”. It has most of the information I wanted from a manual, and the information that exists is accurate. However, it is missing some information that would be nice, such as a recommended unpack order.

And this is why: the Ender-3 came as many pre-assembled components, and when they are all tightly encased in foam it’s not clear which ones were already attached to each other, such as the printhead already being connected to the base. I’m glad I didn’t yank too hard on this!

That minor issue aside, it didn’t take long before all pieces were removed from the box and laid out.

The Z-axis leadscrew is something to be careful with, as even slight damage would cause Z-layer issues in prints. It was cleverly packed inside a bundle of aluminum extrusion beams, protected by a tube of what looks and feels like shrink wrap tubing.

All fasteners are bagged separately by type and labeled. This is very nice.

As far as I can tell, all of the tools required for assembly are bundled in the box. The stamped-steel crescent wrenches weren’t great, so I used real wrenches out of my toolbox. In contrast the hex keys were a pleasant surprise, as they had ball-ends for ease of use. I considered that a premium feature absent from most hex keys.

I was initially annoyed at the instructions for the filament spool holder, because it looked like the channel was already blocked by some bolts.

But then I realized the nuts are not perfectly rectangular. Their shape allows them to be inserted into the slot directly, without having to slide in from the ends of the beams. As the fastener is tightened, they rotate into place within the channel. These are “post-assembly nuts”, so called because they allow pieces to be added to an extrusion beam after the ends have been assembled. They are more expensive than generic extrusion beam nuts and a nice touch for convenience.

Here is the first test print. It’s pretty good for a first print, but not perfect: the uneven vertical wall indicates issues with Z-axis movement.