HP Split X2 (13-r010dx) SSD Upgrade: Round 2

I now have an M.2 SATA SSD available for experimentation, mounted on a Sintech ST-NG8784 adapter circuit board that lets me plug an M.2 SSD into an SFF-8784 connector. This unusual slim connector is used by an HP Pavilion Split X2 (13-r010dx) tablet/laptop convertible computer, which foiled an earlier attempt at an SSD upgrade. This time I am better prepared.

Here’s the “Before” picture, with the stock SFF-8784 hard drive in the center. From the factory the interior of this device had a lot more tape and foil, including foil completely wrapping the hard drive. They’ve all been pulled off in earlier adventures.

In addition to disconnecting the AC adapter, the connector at the center of the image (with black wires, not white ribbon cable) is for the battery and should be disconnected before working on this machine.

Four screws held the drive in place using some metal brackets, which were in turn mounted to the drive via screws on its sides. Removing them was trivial, but that exposed the next problem: the adapter PCB only has mounting holes on its bottom face, while these brackets need holes on the sides, so there’s no good way to fasten the drive.

I decided not to worry about proper fastening for the moment, because I had no idea if the computer would even accept this drive with adapter. Some temporary painter’s tape is enough to make sure the board doesn’t flop around while I experiment.

Examining Sintech M.2 to SFF-8784 SATA Adapter (ST-NG8784)

It took a second try before I received an adapter card that looked good enough to proceed. The objective of the exercise is to put a common M.2 2280 SATA SSD into an old HP Pavilion Split X2 (13-r010dx) convertible laptop, which came with an unusually thin (5mm) spinning-platter hard drive in the super rare SFF-8784 form factor. That form factor foiled my first attempt at an SSD upgrade for this computer, but now I have an M.2 SATA SSD available for experimentation, and I have the adapter card I bought (*) for the project.

This card is made and sold by Sintech, which has a product page for this item where I learned it is designated model ST-NG8784. I was fascinated by how simple the adapter is. There are only a few surface mount components and very few traces on the circuit board. C1 and C2 are obviously capacitors, but I’m not sure what U1 is. Searching on “84-33 2012DC” didn’t result in anything enlightening, but by its general shape and arrangement of nearby capacitors I guess it is a voltage regulator.

The M.2 connector has many, many pins but the SFF-8784 plug has significantly fewer, resulting in a superficially simple layout. I guess that makes sense: after all, the S in SATA stands for Serial, so it wouldn’t need many pins to do its thing. I count just two differential pairs on top for data. Most of the other connections are either power or ground. But it does highlight the fact that there is no active signal conversion on this adapter: this would only work for SATA M.2 SSDs and I would not expect it to work with NVMe M.2 SSDs.

Mechanically, this adapter card has provisions for several of the popular M.2 card lengths. A threaded standoff has been press-fit into the spot corresponding to the longest supported size M.2 2280. If the user has a SATA SSD in one of the shorter form factors, there is a small Ziploc bag with a screw-on standoff to be installed in the appropriate slot. Since my M.2 SATA SSD is in the 2280 format, I did not need the Ziploc bag. I installed my SSD into this adapter and turned attention to the laptop.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

SSD Upgrade Project Delayed By Shipping Damage

I’ve been aware that the performance of an old HP Pavilion Split X2 (13-r010dx) is constrained by its hard drive to some degree. But when I tried to remove that constraint by upgrading it to a commodity SATA SSD, I found that it did not use the connector type I was familiar with. Rather, it used a much thinner and rarer variant called SFF-8784. Native SFF-8784 drives are expensive due to their low volume, but I found an Amazon vendor selling SFF-8784 adapter circuit boards that accept mSATA or M.2 SATA SSDs. I resolved to come back to this project later, once I had a spare M.2 SATA SSD to try.

It is now later. Thanks to some end-of-year sales, my computers received upgrades and the cascade of hand-me-downs freed up an M.2 SATA SSD for this experiment. I proceeded to order the M.2 to SFF-8784 adapter board I found earlier (*), eager to see how it might improve the old HP’s responsiveness.

Unfortunately, the first adapter arrived damaged. It was shipped in an anti-static bag and enclosed in a padded envelope. The padding was apparently not enough, because the M.2 connector was crushed out of shape. I doubted it would accept an M.2 SATA SSD, and I didn’t want to risk a perfectly good SSD to find out.

I contacted Sintech and they sent a replacement. When the replacement arrived, I noticed a modification. It was still in an anti-static bag inside a padded envelope, but this time a block of pink foam had been added to protect the M.2 connector.

With the help of this pink foam block, the onboard M.2 connector survived shipping and looked good enough for this SSD upgrade project to begin.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

I Started Learning Jamstack Without Realizing It

My recent forays into learning about static-site generators, and the earlier foray into Angular framework for single-page applications, had a clearly observable influence on my web search results. Especially visible are changes in the “relevant to your interests” sidebars. “Jamstack” specifically started popping up more and more frequently as a suggestion.

Web frameworks have been evolving very rapidly. This is both a blessing, when bug fixes and new features are added at a breakneck pace, and a curse, because knowledge is quickly outdated. There are so many web stacks I can’t even keep track of what’s what. With Hugo and Angular on my “devise a project for practice” list, I had no interest in adding yet another concept to my to-do list.

But with Jamstack being pushed into my search results more and more often, it was only a matter of time before an unintentional click took me to Jamstack.org. I read the title claim in the time it took for me to move my mouse cursor towards the “Back” button on my browser.

The modern way to build [websites & apps] that delivers better performance

Yes, of course, they would all say that. No framework would advertise itself as the old way, or claim to deliver worse performance. So the claim itself isn’t the least bit interesting, but before I clicked “Back” I noticed something else: the list of logos scrolling by included Angular, Hugo, and Netlify. All things that I have indeed recently looked at. What’s going on?

So instead of clicking “Back”, I continued reading and learned proponents of Jamstack are not promoting a specific software tool like I had ignorantly assumed. They are actually proponents of an approach to building web applications. JAM stands for (J)avaScript, web (A)PIs, and (M)arkup. Tools like Hugo and Angular (and others on that scrolling list) are all under that umbrella. An application developer might have to choose between Angular and its peers like React and Vue, but no matter the decision, the result is still JAM.

Thanks to my click mistake, I now know I’ve started my journey down the path of Jamstack philosophy without even realizing it. Now I have another keyword I can use in my future queries.

Sawppy Documentation: Change Preview and Other Notes

I am optimistic that one of the popular static site generators can help me reach my goals for an improved Sawppy documentation site. But the site generator itself is not enough; there are a few other details I’ll have to investigate. The primary one is the ability to preview changes to pages earlier rather than later in the pipeline.

Today’s flow of editing Markdown files can deliver immediate feedback because GitHub has a “Preview” tab on its built-in Markdown editor. But once we’re no longer directly using GitHub’s simple Markdown transformation, we’ll lose that ability. Contributors should not be expected to set up an SSG environment on their computer in order to see the results of their work, and asking maintainers to review every change in their own SSG environment would not scale. (I say this using the plural as if Sawppy has a big maintenance staff… it’s actually just me.)

The answer is to borrow from the continuous integration world, where tools exist to preview changes to a website before deploying them live. Some use services like Netlify, which is not itself open source but offers a free tier.

One example: look at the repository for the Write the Docs website. Open one of the pending pull requests and click on “Show all checks”. One of them is “Read the Docs build succeeded!” and clicking “Details” will bring up a version of the site built with the changes in the pull request. This is an interesting avenue of investigation to learn more about.

This was the point where I ran out of steam, and the Write the Docs meeting ran out of time, but I have a big treasure trove of pointers to investigate and keep me busy for a while.


Sawppy Documentation Suggestion: Static Site Generators

I’m glad I had the chance to learn about the terminology and tools of industrial-strength documentation. They are great for their respective niches, but adopting any of them would require major changes and I’m not ready to take such a big leap just yet. Which brings us to static site generators (SSGs). This category of software tools sees a lot of open-source development, giving us many options to choose from.

Background: As input, SSGs take content in various formats. A set of rules and templates are applied to that content, generating as output a set of HTML and CSS (and maybe JavaScript) files that can be served by any web server. “Static” in this context means the web server does not have to run any code to modify the files; they are transmitted to users’ web browsers as-is. (As opposed to dynamic systems like PHP.)

A large number of SSGs accept Markdown as a text content input format, so Sawppy’s existing Markdown documentation could be used with small modifications rather than complete rewrites. This also preserves the advantages of using Markdown, meeting the ease-of-use challenges A and B.

Every SSG offers customization of the rules and templates that it applies to content. At minimum there are themes for cosmetic appearance, but plugins and extensions allow more extensive functionality. This is where I hope to create something that meets my challenge C, including a lightweight BOM tool. Around this point Eric spoke up to say that the JPL Open Source Rover documentation system has a script that generates a parts list from document content, but the generated dependency tree is not exposed to the viewer. I want to build upon that precedent and also make this kind of information available to rover builders.

To be pedantic, Sawppy documentation is already using an SSG, because GitHub does not display raw Markdown files. A simple transformation to HTML has been applied for display. However, the reason I started this investigation is that the simple GitHub transformation is very limited. GitHub is aware of this and, as an upgrade, has built-in support for an SSG called Jekyll for generating GitHub Pages.

As another example, most of us have had experience reading technical software documentation on some subdomain of ReadTheDocs. All of that content was generated by an SSG. MkDocs and Sphinx are two popular SSGs for ReadTheDocs, and a lot of their default functionality (automatic indexing, references, etc.) useful for technical software documentation would be useful for Sawppy as well.

But features like a lightweight BOM tool would be outside the scope of software documentation, so there were several recommendations to investigate Hugo. It is apparently the current darling for flexibly transforming content (including Markdown text) into many different presentations. Associated with Hugo is Docsy, a Hugo theme focused on technical documentation.

Hugo and Docsy would be a bigger step up in complexity than something like Jekyll, but I’m optimistic that the benefit will justify the cost. I plan to use that as my starting point and expand my experimentation from there. But they’ll only be a part of the solution, because no matter which static site generator I use, I will still want a way to preview changes.

Sawppy Documentation Suggestion: BOM and UML

I had the dream of a documentation system for Sawppy that doesn’t seem to fit anything out-of-the-box, but I had the chance to ask a group of documentation experts for what they thought might apply. It looks like DITA is the super flexible Swiss army knife for documentation. It is a free open standard, but the only freely available DITA software tool I could find works at a low level and I would have to put in a lot of time to build up the system I dream of. In addition, some of the problems I wanted to solve edge into other well-established problem areas.

Bill of Materials

Similar to documentation, the challenge of tracking parts and components is well-trodden ground for engineering projects. People at the meeting with industry experience suggested a few terms. MRP, or Material Requirements Planning, was one, plus several variants on BOM, or Bill of Materials. This area is big business! Sadly, free open-source tools are scarce. Someone gave a pointer to OpenBOM.com, but upon further research I was disappointed to find that despite the name, this tool is not actually open.

That leaves us with few choices between “use a spreadsheet” and big-ticket enterprise software. Even if numerous choices were available, such tools focus on only a small subset of my overall problem. I do want to set up some kind of parts management, but bringing in a full-fledged BOM tool adds a lot of complexity I’m not sure is justified for a small-scale project like Sawppy.

Model All The Things

I had the same concern about another series of suggestions: fully model everything about Sawppy using a modeling language like SysML or PlantUML. I agree doing so will result in a complete picture, breaking down every part of the rover project. Such data could then feed into software packages that generate visualizations plotting dependencies between components. That sounds good, but the amount of work also felt disproportionate to the benefit it would bring to a project of this scale.


What I hoped would be a better fit for a project of Sawppy’s scale are the documentation systems already available for open-source software projects. While they would not have some of the hardware-focused features — such as the BOM or UML tools above — they are more approachable than DITA and promise to be amenable to customization, perhaps even enough to provide a lightweight subset of what the big BOM and UML tools do.

Sawppy Documentation Suggestion: DITA

I outlined my Sawppy project and the challenges I want to tackle to the combined Write the Docs LA/SGVLUG meetup on the evening of October 8th. Sawppy’s current system of a loose set of Markdown files scores highly on ease of contribution (challenge A) and ease of management (challenge B), but falls flat on querying dependencies (challenge C). What can I look into that helps improve information presentation without giving up the rest?

Fundamentally speaking, challenge C is not new, as it would be desirable in any large scale engineering project. The novel twist here is the desire to do it all with a system that is inviting for public contribution. As a general rule, documentarians for large engineering projects are professionals who have undergone training and have licenses for proprietary software tools.

DITA

Most such tools are excluded from consideration due to cost, but many of them deal with DITA, an open XML-based data model for authoring and publishing under the custody of the OASIS consortium. It is the standard answer to reassure customers wary of being locked into proprietary file formats. And since it is an open format, there exists a DITA Open Toolkit to transform DITA data into desired output formats… HTML, PDF, even Markdown! There are learning resources at https://learningdita.com/.

As an XML (and thus text) based format, DITA would be GitHub friendly for branching and merging. It is very flexible for creating any organization we want (by creating a “Subject Scheme” for DITA) but taking advantage of that power will take work. The DITA Open Toolkit functions at a lower level relative to the other tools discussed later. A quick web search found many commercial offerings built on DITA (example: https://easydita.com/) but failed to find a free open-source counterpart.

So DITA is powerful, but that power comes at a cost, and I’ll have to decide if the cost/benefit analysis comes out in favor. This also applies to several other professional documentation concepts.

Sawppy Documentation System Challenges

I want to improve the usability of Sawppy documentation. Keeping in mind some example problems I want solved, I started my quest: to find a system to document and track these types of relationships without losing too many of the advantages of my current system. This means every candidate system must meet the following challenges:

Challenge A: Easy to Contribute

When a Markdown file is hosted on GitHub, a potential contributor can click the “Edit” button to create a fork of the repository, make a quick fix, and create a pull request, all via the GitHub website. This presents an extremely low barrier to entry for contributors, which is a feature I want to preserve. If contributors were required to install and learn some piece of documentation software, that would discourage many people from participating before we even talk about the learning curve for that software.

Challenge B: Easy to Manage

When using GitHub’s web interface to edit a Markdown file, visualizing the change is as simple as clicking over to the “Preview” tab of the editor. Sadly such ease can’t be matched by any system external to GitHub, but it would be nice to have some way to let a contributor see what the end results look like. Failing that, I must have a way to visualize the final impact of changes before I merge a pull request. It is unacceptable to not see changes until after merging into the main branch.

Challenge C: Easy to Query

The desired documentation system will take metadata written by the document author and build an interactive presentation. This way rover builders can find information without being constrained by the current linear narrative. Here are some examples of questions I’ve received for Sawppy, rephrased in terms of the wheel axle (a rough sketch of the kind of metadata that might answer them follows the list).

  • (Easy: positive query) I’m on the wheel hub assembly page, and it needs the wheel axle which I guess I forgot to build. Where do I need to go?
  • (Hard: negative query) My order of 8mm shafts got delayed. What can I work on now that’s not dependent on having these shafts?
  • (Both of the above) How can a teacher most effectively divide up rover tasks so multiple student teams can build the rover in parallel?
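I don’t yet know which tool will answer these, but purely as a hypothetical sketch of the idea (the interfaces, page names, and functions below are all invented for illustration; nothing like this exists in Sawppy’s repository today), here is the kind of lightweight metadata and the two query directions I have in mind, written in TypeScript for concreteness:

```typescript
// Hypothetical metadata a documentation page could declare about the parts
// its steps create and consume. All names here are invented examples.
interface DocPage {
  title: string;
  produces: string[]; // parts completed by following this page
  consumes: string[]; // parts that must already exist before starting
}

const pages: DocPage[] = [
  { title: 'Parts List',         produces: ['8mm shaft'],  consumes: [] },
  { title: '8mm Shaft Cutting',  produces: ['wheel axle'], consumes: ['8mm shaft'] },
  { title: 'Wheel Hub Assembly', produces: ['wheel hub'],  consumes: ['wheel axle'] },
];

// Positive query: which page produces the part I am missing?
function whereIsMade(part: string): DocPage[] {
  return pages.filter((p) => p.produces.includes(part));
}

// Negative query: which pages do not directly require the delayed part?
// (A real tool would also follow the dependency chain transitively.)
function notBlockedBy(part: string): DocPage[] {
  return pages.filter((p) => !p.consumes.includes(part));
}

console.log(whereIsMade('wheel axle').map((p) => p.title));  // ["8mm Shaft Cutting"]
console.log(notBlockedBy('8mm shaft').map((p) => p.title));  // ["Parts List", "Wheel Hub Assembly"]
```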

Challenge D: Free

It would be nice for the system to be free as in freedom, but at the very least it must be free as in beer. The design for Sawppy is given away at no cost to rover fans everywhere, so there is no revenue to cover the monetary expense of a commercial documentation system.

Once I laid out these challenges to the group and opened the meeting to discussion, people started offering suggestions. Some professional documentarians brought up DITA as an avenue for investigation.

Sawppy Documentation Shortcoming Example: Wheel Axles

To illustrate problems with Sawppy’s documentation, I’ll use a single component as an example: the Sawppy wheel axle. There are at least three entirely separate pages relating to the wheel axle:

  1. The parts list page, telling a builder to buy an 8mm metal shaft.
  2. The page for 8mm shaft modification, where I describe how to cut the long shaft into shorter segments, followed by steps to turn some of those segments into wheel axles. Other segments become steering shafts and rocker-bogie suspension pivots.
  3. The page for wheel hub assembly, which incorporates a single segment of the 8mm wheel axle shaft.

While these three files are all linked from the index page, there’s no obvious way to retrieve the relationship between them in the context of wheel axles. I can manually add links between them, but this is time-consuming and perpetually incomplete. Even worse, as the number of relationships grows, it will quickly become a maintenance nightmare.

Thus I started my quest to find a system to document and track these types of relationships without losing (too many of) the advantages of my current system.

Sawppy Documentation Could Be Better

When I decided to release Sawppy to the world, I thought briefly about how best to organize all the information I want to convey for rover assembly. I quickly fell into a state of analysis paralysis and, as a path out of that state, decided that it was better to have something written down, whatever the format. No matter how unorganized, it is still better than keeping it all in my head.

I first tried putting it in the “Build Instructions” section of Sawppy’s Hackaday.io page, but that feature has some strange and unpredictable limitations that became annoying as the length of instructions grew. The final straw was when I noticed that images and instructions for earlier steps were disappearing as I added later steps. That made me… unhappy, so I went to something else.

The second attempt is what I have as of today: a loose collection of Markdown files in a GitHub repository. Editing in a code editor rather than a word processor, I struggled with typos and grammatical errors because I lacked the automated proofreading tools a word processor provides. Still, with a large helping of assembly pictures, it was just barely enough to help other people build their own rovers.

I was painfully aware that there is a ton of obvious room for improvement. This was just the “get it written down” first stage, and at some point I need to revisit the various problems still open. The most significant is the lack of structure beyond an index page with links to all the other pages. The index suggested a relative ordering that matched my personal assembly order, but that doesn’t necessarily work for anyone else. And worse, they would be stuck if they wanted to ask specific questions my layout is unable to answer.

Cardboard Absurdity: Sexy Minion

I abandoned the first draft of my cardboard Mike Wazowski for another attempt later, but I did not abandon my other HalloWing. An idea to put it to use came courtesy of Emily’s reply to my cardboard minion tweet.

This “sexy minion” is certainly not something I would have found on my own, and my initial reaction was probably what Emily intended: vague disturbance and resignation to the fact that I can’t un-see that image.

But it’s on Twitter now, and it’s also in my brain now, and I do have an extra HalloWing on hand, so I decided to play along with the joke with a minimum-effort project. I imported the image into a photo editor and scaled it so the eye was the right size to use with a HalloWing. Fortunately it fit on a sheet of standard letter-sized paper, so I didn’t have to crop any part of it off.

I tried to print it on my color inkjet printer, but that thing hadn’t printed anything in months (possibly years) so naturally its print nozzles were clogged. A standard unclog routine did not fix it and I didn’t want to spend time troubleshooting. (Remember: minimal effort.) So I printed on my monochrome laser printer and colored it in manually afterwards using markers.

Given the minimal-effort goal, I didn’t try to trace the outline curves with my new favorite cardboard tool, the Canary knife: just a rectangular piece of cardboard with the marker-colorized paper taped on top. My X-Acto blade made quick work of the eyehole, and the second HalloWing was taped in place. I set it up in my convertible photo studio for a quick video and threw it up on Twitter.

This silly little project couldn’t have taken more than half an hour (even less if I subtract fussing with the clogged inkjet) and it turned out to be unexpectedly (or is that disturbingly?) popular. As a result I have the sinking feeling this is not the last I’ve seen of “sexy minion”.

I also felt a bit bad that I didn’t put in the time to research where that drawing came from. It wasn’t a big deal when I thought it’d just be a throwaway joke between friends, but with thousands of views the artist’s name should have been attached. I can’t edit my original tweet, but I could at least credit the artist @nicoisesalade here:

Cardboard Companion: Mike Wazowski

My trial run using a Canary cardboard cutter was far more successful than I had expected, resulting in a little cardboard companion minion perched on my shoulder. I was extremely happy and joined this month’s (Virtual) Wearables Wednesdays event at CRASHSpace to show off my minion as a wearable electronic project. And also to thank Barb (who usually attends the event) for telling me about the Canary cutter.

Barb immediately (and correctly) recognized the minion’s eye as the Adafruit HalloWing default program. She had several sets of similar eyes on hand, some incorporated into projects, but all the units within reach came as pairs, so there was no immediate advice on how to get my two units to synchronize. But by now I didn’t really want to synchronize them anyway, because that would mean taking apart my minion, which I’m not ready to do just yet.

So I asked the attendees what I should do with the other eye. People started brainstorming and tossing out ideas. They were fine ideas, but none captured my imagination as much as when Liz said “Mike Wazowski”. I said “Yes!” and got started immediately while the meeting was still underway. This is falling back on old patterns, as it is pretty typical for work to happen during non-virtual Wearables Wednesdays meets.

I found a picture of Mike Wazowski on the internet and traced out a rough outline on cardboard. For the minion I wanted to keep the eyehole small so none of the electronics are visible. For Mike I thought I’d explore how things looked if the eye hole was larger.

Once I had Mike cut out and popped my second HalloWing into the eyehole, I decided I did not like how it looked. I much preferred the minion approach where the circuit board was hidden. If I wanted to build a Mike Wazowski with a properly obscured HalloWing eyehole while still maintaining proportions, I would need to cut a smaller Mike. There’s also a second reason to want a smaller Mike: this one is too wide to sit properly on my shoulder. Maybe someone with much broader shoulders could pull it off, but this Mike’s butt is too wide for me to carry around.

I will abandon this cardboard cutout and stick “try again with a smaller Mike” on the to-do list. This is the beauty of experimenting with cardboard: the cost of failure is low, and the speed of iteration is fast. I could very quickly follow up this abandoned project with an absurd project.

A Canary Corrugated Cardboard Cutter Convert

One of the bonus motivations for building my cardboard companion minion was a test run of the Canary Corrugated Cardboard Cutter (*). After my experience in that project, I am now a big fan of this tool.

I learned of the Canary cutter via CRASHSpace, a longstanding maker community in the greater LA area. As they are on the opposite side of downtown LA, it is not trivial for me to visit. But now that everything is virtual, I actually have more interaction with members of that community than I would have otherwise.

One recent discovery started with watching Barb Noren‘s session “Tinkering @ Home” for Virtually Maker Faire 2020. One of the topics was their Tinkering Toolkit, and the Canary cutter in that kit caught my eye. Given the popularity of home delivery in these times, many of us are going through a large number of corrugated cardboard boxes. We could throw them in the recycle bin, but Barb Noren asserts that is a waste: they are useful raw material for projects! And the Canary cutter is how reDiscover Center can set children as young as seven loose on cardboard, under adult supervision.

I’ve built many projects with corrugated cardboard, using X-Acto blades for fine detail and large box cutter knives for large cuts. And yes, I’ve had my share of accidental cuts and so I was immediately interested in the idea of a much safer cutting tool. I was willing to trade off some cutting effectiveness if it would gain me more safety. And after asking Barb a few questions about it at a virtual CRASHSpace event, I ordered one of my own to try.

When my Canary cutter arrived, I saw a well-built tool with a plastic handle for manipulating the metal cutting blade, which was edged with fine serrations. It looked fine but did not inspire great expectations. That attitude changed as soon as I took a test cut. I had expected the serrated teeth to tear rough edges in the cardboard, and I had expected the less-scary blade to also be less effective than a sharp blade at cutting.

I was wrong on both counts.

The Canary cutter cut through corrugated cardboard amazingly quickly, with less effort than box cutter blades, and left a pretty clean edge. Yes, if I compare it side-by-side with something cut by a sharp knife I can see a difference, but when we’re working with corrugated cardboard we’re not exactly working with precision tolerances anyway. And the serrated edge cuts enough clearance that the blade does not get stuck, which my box cutter knives tend to do. Freeing a stuck sharp knife is the major cause of my crafting injuries, so just by eliminating that scenario, things became a whole lot safer.

However, it is still a cutting knife that demands respect, as I’ve already managed to draw my own blood once. But it is much less dangerous than putting big box cutter knives in the hands of children. Since Barb’s session video, reDiscover Center has posted another video about using the Canary cutter.

I’m pretty amazed at how well the Canary cutter worked. This lowers the barrier to entry for corrugated cardboard projects in the future. As the above video stated, it is not suitable for all cuts. We’d still need scissors and our old friend the X-Acto blade for fine detail, but for large cuts the Canary cutter is pretty amazing. Anyone who wants to unleash their creativity on corrugated cardboard should get one. (*)

Naturally, with my hands on such a fun new tool, I didn’t stop at just one project and found another cardboard project to start on.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Cardboard Companion: Minion

I’ve long admired the robot companions built by Alex Glow and Odd Jayy but never dedicated the time and effort to build a good one of my own. I still haven’t done so… but I have spent roughly an hour or two building a low-effort companion out of cardboard.

This project was kicked off when I was moving a few boxes around and noticed the HalloWing I received at Supercon 2018, almost two full years ago. When I wrote about it earlier I thought it was full of promise and should be a lot of fun to play with. That is still true, it just never came to the top of my priority list. I actually have two of them now, as Emily gave me hers saying she’d never do anything with it. I said I would definitely find something fun to do, but nothing had happened since.

So when I saw them again, I had an urge to do something with them right now. Today. The pair of HalloWings deserved to be dusted off, literally and figuratively. If I couldn’t do something unique and cool, I could at least verify that they still function.

When I plugged them into a USB power bank, they started right up. A good start!

I thought I’d use them both by following Adafruit’s instructions to synchronize two of them. Unfortunately I made a mistake somewhere and the two eyes remained stubbornly independent. So I switched to a backup plan: what do I know of that has a single eye? The first thing that came to mind was a minion from the movie Despicable Me.

I made a rough sketch and cut out the shape of a minion. I wanted the minion to sit on my shoulder, so the outline was placed such that the existing fold for this box lid is roughly at the (not terribly well defined) waist of the minion. The cutting tool visible in this picture is a Canary corrugated cardboard cutter. This was my first time using it and I am now a big fan.

After I cut out the eyehole, a quick size comparison test confirmed it was in the ballpark. I decided to stop cutting at this point. A hole that’s slightly too small like this will obscure a portion of the eye, which is not a big deal. In contrast, a hole that’s slightly too big will show the wires at the edge of the LCD module or the circuit board underneath, either of which would spoil the look, so that’s something I wanted to avoid.

A black marker helped make the cardboard look more like a minion.

The minion’s work overalls courtesy of blue highlighter marker.

I used cardboard to build a tripod to help the minion sit on my shoulder, but it is top-heavy with the HalloWing and prone to falling over. So I decided to tape some magnets to the bottom of the minion.

Once I set the minion on my shoulder, I could install matching magnets inside my shirt. The two magnets pinch fabric of my shirt, holding the minion in place.

Voila! A low effort cardboard companion.

It only scratches the surface of what the HalloWing can do, but it’s far better than letting it gather dust. Will it find a place in a cooler and more sophisticated project? Check back in two years!

Angular CLI as WSL 2.0 Test Case

I’ve been fascinated by the existence of Windows Subsystem for Linux (WSL) ever since its introduction, and I’ve played with it occasionally, such as trying to run ROS on it. This time I thought I’d try installing the Angular CLI on a WSL instance, with a twist: this is now WSL 2, a big revamp of the concept. Architecturally, there’s now much more of an actual Linux distribution running inside the environment, which promises even better Linux compatibility and performance in Linux-native scenarios. The tradeoff is a reduction in the performance of Windows/Linux interoperation, but apparently the team decided it was worthwhile.

But first, I had to run through the installation instructions which, on my Windows 10 version 2004 machine, stopped with an error requiring a Linux kernel update:

WSL 2 requires an update to its kernel component. For information please visit https://aka.ms/wsl2kernel

Then I could install it from the Microsoft Store, followed by an installation of Ubuntu. Then I installed Node.js for Ubuntu, followed by the Angular CLI tools. The last step ran into the same permissions issue I saw on macOS with node_modules ownership. Once I took ownership, I got an entirely new error:

Error: EACCES: permission denied, symlink '../lib/node_modules/@angular/cli/bin/ng' -> '/usr/bin/ng'

The only resolution I found for this was “run as root”. Unsatisfying, and I would be very hesitant if this were a full machine, but I’m willing to tolerate it for a small virtual machine.

Once I installed the Angular CLI, I cloned my “Tour of Heroes” tutorial repository into this WSL instance and tried ng serve. This triggered the error:

Cannot find module '@angular-devkit/build-angular/package.json'

This turned out to be a Node.js beginner mistake. Looking up the error, I found this StackOverflow thread where I learned that cloning the repository was not enough. I also needed to run “npm install” in that directory to set up the Node dependencies.

Once those issues were resolved, I was able to run the application, where I found two oddities: (1) I had somehow not completely removed the mock HEROES reference on my Mac, and (2) package.json and package-lock.json had lots of changes I did not understand.

But neither of those issues were as important as my hopes for more transparent networking support in WSL 2. Networking code running in WSL was not visible from elsewhere in my local network unless I jumped through some hoops with Windows Firewall, which was what made ROS largely uninteresting earlier for multi-node robots. WSL 2 claimed to have better networking support, but alas my Angular application’s “ng serve” was similarly unreachable from another computer on my local network.

Even though this test was a failure, judging by the evolution from WSL to WSL 2 I’m hopeful that work will continue to make this more seamless in the future. At the very least I hope I won’t have to use the “run as root” last resort.

Until the next experiment!

Notes on Angular Architecture Guide

After completing two beginner tutorials, I returned for another pass through the Angular Architecture Guide. These words now make a lot more sense when backed by the hands-on experience of the two tutorials. There are still a few open question marks, though; the long-standing one at the top of my list is Angular modules. I think each of the tutorials is contained in a single module? That would explain why I haven’t seen them in action yet. It doesn’t help that JavaScript also has a “module” concept, which makes things confusing for a beginner like myself. And as if that’s not confusing enough, there are also “libraries” which are different from modules… how? I expect these concepts won’t become concrete until I tackle larger projects that incorporate code from multiple different sources.
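For my own future reference, here is roughly the shape of a single-module application as I understand it so far: the default AppModule the Angular CLI generates (a generic sketch, not code from my tutorial projects).

```typescript
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';

import { AppComponent } from './app.component';

@NgModule({
  declarations: [AppComponent], // components, directives, and pipes owned by this module
  imports: [BrowserModule],     // other NgModules whose exported pieces this module uses
  providers: [],                // services registered with this module's injector
  bootstrap: [AppComponent],    // root component created when the application starts
})
export class AppModule {}
```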

In contrast, Angular components have become very concrete, and these pages even use excerpts from the “Tour of Heroes” tutorial to illustrate their concepts. I feel I have a basic grasp of components now, but I’m also aware I still need to read up on a few things:

  • Component metadata so far has been a copy-and-paste affair with limited explanation. It’ll be a challenge to strike out on my own and get the metadata correct, not just because I don’t know what I should do, but also because I haven’t seen what the error messages look like.
  • Data binding, with its chart of four types of binding, looks handy. I hope it’s repeated and expanded in more detail elsewhere in the documentation so I can get some questions answered. One example: I read “Angular processes all data bindings once for each JavaScript event cycle” but I have to ask… what’s a JavaScript event cycle? The answer lies elsewhere.
  • Pipes feel like something super useful and powerful. I’m looking forward to playing more with the concept and maybe even creating a few of my own.
  • Directives seem to be a generic concept whose umbrella is still fuzzy to me, but I’ve been using task-specific directives in the tutorial: *ngFor and *ngIf are structural directives, and an attribute directive was used for two-way data binding. Both types have their own guide pages. (A quick sketch of the binding and directive syntax I’ve seen so far follows this list.)
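To cement the syntax in my head, here is a small self-contained sketch (my own invented component, not from the tutorial) recapping the four binding flavors plus the two structural directives:

```typescript
import { Component } from '@angular/core';

@Component({
  selector: 'app-binding-demo',
  template: `
    <h2>{{ title }}</h2>                              <!-- interpolation -->
    <img [src]="logoUrl" alt="logo" />                <!-- property binding -->
    <button (click)="onClick()">Click me</button>     <!-- event binding -->
    <input [(ngModel)]="name" />                      <!-- two-way binding; needs FormsModule imported -->

    <p *ngIf="name">Hello, {{ name }}!</p>            <!-- structural directive: conditionally render -->
    <ul>
      <li *ngFor="let item of items">{{ item }}</li>  <!-- structural directive: repeat per item -->
    </ul>
  `,
})
export class BindingDemoComponent {
  title = 'Binding demo';
  logoUrl = 'assets/logo.png';
  name = '';
  items = ['one', 'two', 'three'];

  onClick(): void {
    console.log('button clicked');
  }
}
```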

Reading the broad definition of services, I started thinking “this feels like my UWP Logger, maybe I can implement a logger service as an exercise,” only to find out they’re way ahead of me – the example is a logger (to console) service! I had hoped this section would help clear up the concept of service providers, but it is still fuzzy in my head. I understand the default is root, which is a single instance for everything, and this is what was used in the tutorials. I will need to go find examples of providers at other granularities, which apparently can be as specific as per component, where each instance of a component gets its own instance of that service. When would that be useful? I have yet to learn the answer. But at least I’ve written these questions down so I’m less likely to forget.
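As a note to myself, here is a minimal sketch of what I imagine such a logger service looks like, following the providedIn: 'root' pattern used in the tutorials (the class name and messages are my own, not the documentation’s example):

```typescript
import { Injectable } from '@angular/core';

// 'root' registers a single shared instance with the application-wide injector.
// Listing the service in a component's own `providers` array instead would give
// each instance of that component its own private copy of the service.
@Injectable({ providedIn: 'root' })
export class LoggerService {
  log(message: string): void {
    console.log(`[log] ${message}`);
  }

  error(message: string): void {
    console.error(`[error] ${message}`);
  }
}

// Usage: declare it as a constructor parameter and Angular's injector supplies it.
// constructor(private logger: LoggerService) { this.logger.log('component created'); }
```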

Fixing Warnings of TypeScript Strict Mode Violation After “Tour of Heroes” Tutorial

Once I reached the end of the Angular “Tour of Heroes” tutorial, I went back to address something from the beginning. When I created my Angular project, I added the --strict flag for the optional “Strict Mode“. My motivation was the belief that, since I’m starting from scratch, I might as well learn under enforcement of best practices. I soon realized this also added an extra burden when following the tutorial, because the example code does not always follow strict mode. This means when something goes wrong, I might have copied the tutorial code incorrectly, but it’s also possible I copied correctly and the code just doesn’t conform to strict mode.

As a beginner, I really didn’t need this extra work, since I had enough problems with the basics on my own. But I decided to forge onward and figure it out as I went. During the tutorial, I fixed strict mode errors in two categories:

  1. Something so trivial that I could fix quickly and move on.
  2. Something so serious that compilation failed and I had to fix it before I could move on.

In between those two extremes were errors that were not trivial, but only resulted in compilation warnings that I could ignore and address later. They are visible in this GitHub commit:

I first had to understand what the code was doing, which was why I imported MessageService to gain access to logging. The compiler warnings were both about uninitialized variables, but the correct solution was different for the two cases.

For the hero input field, the tutorial code logic treats undefined as a valid case. It is in fact dependent on an undefined hero to know when not to display the detail panel. Once I understood this behavior, I felt declaring the variable type as a Hero OR undefined was the correct path forward.
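The change amounted to widening the declared type so undefined is explicitly allowed. A sketch of what that looks like (the component is boiled down to the relevant property, and the Hero interface stands in for the one the tutorial defines in hero.ts):

```typescript
import { Component, Input } from '@angular/core';

// Stand-in for the Hero interface the tutorial defines in hero.ts.
export interface Hero {
  id: number;
  name: string;
}

@Component({
  selector: 'app-hero-detail',
  // The detail panel only renders when a hero is present, so undefined is a
  // perfectly valid state meaning "show nothing".
  template: `<div *ngIf="hero">{{ hero.name }} details go here</div>`,
})
export class HeroDetailComponent {
  // "Hero OR undefined": strict mode is satisfied because undefined is now
  // part of the declared type instead of an uninitialized surprise.
  @Input() hero?: Hero; // shorthand for: hero: Hero | undefined;
}
```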

For the Observable collection of Heroes, the correct answer is different. The code never counts on undefined being a valid value, so I did not need to do the if/else checks to handle it as an “OR undefined” value. What I needed instead was an initial value of a degenerate Observable that does nothing until we can get a real Observable. Combing through RxJS documentation I saw this was recognized as a need and was actually done in a few different ways that have since been deprecated. The current recommendation is to use the constant defined as EMPTY, which is what I used to resolve the strict mode errors.
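A sketch of the difference for the Observable case, using the EMPTY constant from RxJS as the initial value so strict mode sees the property as initialized (the property name follows the $ suffix convention; the real Observable gets assigned later, such as in ngOnInit()):

```typescript
import { EMPTY, Observable } from 'rxjs';

// Stand-in for the tutorial's Hero interface.
interface Hero {
  id: number;
  name: string;
}

export class HeroListExample {
  // Before (strict mode error): property has no initializer and is not
  // definitely assigned in the constructor.
  // heroes$: Observable<Hero[]>;

  // After: start with a degenerate Observable that emits nothing and completes
  // immediately, then replace it with the real Observable once it is available.
  heroes$: Observable<Hero[]> = EMPTY;
}
```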

Notes on “Tour of Heroes” Tutorial: Other Web Server Interactions

The Angular “Tour of Heroes” tutorial section 6 “Get Data From Server” covered the standard interactions: Create, Read, Update, and Delete, commonly referred to as CRUD. But it didn’t stop there and covered a few other server operations. I was intrigued that the next section was titled “Search by name” because I was curious how a client-side single-page application could search data on a static web server.

It turns out the application does not actually perform the search; the “search by name” example is built around sending a GET request with a “name=” query parameter and processing the results. So the example here isn’t actually specific to searching, it’s just a convenient example to demonstrate a general server query/response outside the simplified CRUD model. It can be argued that this fits under the umbrella of the R of CRUD, but that’s as far as I’ll pick that nit.

Lucky for us, the search query parameter appears to be part of the feature set of the in-memory web API we’re using to stand in for a web server. Like the rest of the tutorial CRUD operations, none of this will actually work on a static web server. But that’s fine, we get to see the magic of Observable used in a few different ways, like the new-to-me convention of ending an observable’s name with the $ character and feeding it through the async pipe into an *ngFor directive.
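Boiled down, the service side of this looks something like the sketch below. The class and property names are my approximations rather than the tutorial’s exact code; the key points are that the term travels as a name= query parameter and that the actual matching happens on the server (here, the in-memory web API).

```typescript
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable, of } from 'rxjs';

export interface Hero {
  id: number;
  name: string;
}

@Injectable({ providedIn: 'root' })
export class HeroSearchService {
  // URL intercepted by the in-memory web API standing in for a real server.
  private heroesUrl = 'api/heroes';

  constructor(private http: HttpClient) {}

  // GET heroes whose name contains the search term. The client only sends the
  // query; the filtering happens on the (simulated) server side.
  searchHeroes(term: string): Observable<Hero[]> {
    if (!term.trim()) {
      return of([]); // empty search term: skip the round trip entirely
    }
    return this.http.get<Hero[]>(`${this.heroesUrl}/?name=${term}`);
  }
}
```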

For me, the most interesting new concept introduced in this section is the rate-limiting functionality of the debounceTime() filter from RxJS. Aside from being a piece of commonly needed (and thus commonly re-implemented) functionality, it shows the power of using RxJS for asynchronous operations, where we can stick these kinds of filters into the pipeline to accomplish our goals. I don’t fully understand how it works or how I might reuse it in other contexts. I think I have to learn more about Subject<T> first? But anyway, what we’ve seen here is pretty cool and worth following up with more reading later.
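My current understanding of the pipeline, written out as a sketch (it reuses the Hero and HeroSearchService names from the sketch above, so treat those imports as placeholders): each keystroke pushes the text into a Subject, and the operators shape that stream before any request goes out.

```typescript
import { Observable, Subject } from 'rxjs';
import { debounceTime, distinctUntilChanged, switchMap } from 'rxjs/operators';

import { Hero, HeroSearchService } from './hero-search.service'; // placeholder path from the sketch above

export class HeroSearchPipeline {
  heroes$: Observable<Hero[]>;
  private searchTerms = new Subject<string>();

  constructor(private heroService: HeroSearchService) {
    this.heroes$ = this.searchTerms.pipe(
      debounceTime(300),      // wait for 300 ms of keyboard silence before acting
      distinctUntilChanged(), // skip a term identical to the previous one
      switchMap((term) => this.heroService.searchHeroes(term)), // drop stale in-flight searches
    );
  }

  // Called from the template on every keystroke.
  search(term: string): void {
    this.searchTerms.next(term);
  }
}
```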

Notes on “Tour of Heroes” Tutorial: C and D of CRUD

This Angular tutorial on server interactions started with the (R)ead and (U)pdate operations, then we moved to the C(reate) of CRUD by adding a hero to the Tour of Heroes. I found it interesting that content creation was not the first thing to be covered, even though it is the obvious first step in the life cycle of a piece of information, which I suppose is why it is the first letter of CRUD. The authors of this tutorial gave us the luxury of a stub set of data, so we could explore other concepts before worrying about data creation.

Even with that background, creation was still confusing to me. For example, our input element has a hash: #heroName. I assume the hash prefix tells some piece of Angular infrastructure to do something… but what? That was completely unexplained and I have no idea how to use it myself later. Even worse, they didn’t even give me a keyword to search for, so I’ll start with the input element documentation and hunt from there.
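A note for my future self coming back to this: the hash appears to declare what Angular calls a template reference variable, a name for that element usable elsewhere in the same template. A sketch of roughly how the tutorial’s add form uses it (component simplified, with a console.log standing in for the real service call):

```typescript
import { Component } from '@angular/core';

@Component({
  selector: 'app-add-hero',
  template: `
    <!-- #heroName declares a template reference variable: a handle on this
         <input> element usable anywhere else in this template. -->
    <input #heroName placeholder="Hero name" />

    <!-- The click handler reads the element's value through that variable,
         then clears the field. -->
    <button (click)="add(heroName.value); heroName.value = ''">Add</button>
  `,
})
export class AddHeroComponent {
  add(name: string): void {
    name = name.trim();
    if (!name) {
      return;
    }
    console.log(`would add a hero named "${name}"`); // stand-in for the real service call
  }
}
```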

Another piece of auto-magic is in the generation of the hero ID. I felt that was a mistake, because the identifier will be the first piece of information we’ll need to understand in any debugging task. The tutorial authors may not think these details are important, but I do, so I’ll have to chase down the details later.

And finally we have D(elete) of CRUD. The mind-boggling part for me was learning that RxJS only cares about delivering information to subscribers. If there are no subscribers, RxJS will decide it is not important and won’t do the thing. This had to be called out because in this context it meant we must hang a subscriber on a delete operation, even if we don’t do anything with the response, or else RxJS will not perform the operation. On the one hand, emphasizing that an Observable does nothing unless there are subscribers is a very valuable point to bring up. On the other hand, this feels like an inelegant hack.
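A sketch of the pattern as I understand it: the HTTP DELETE below is only issued because of the subscribe() call at the end, even though we ignore the response entirely (the URL and class name are my own placeholders).

```typescript
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

export class HeroDeleteExample {
  constructor(private http: HttpClient) {}

  delete(id: number): void {
    // Building the Observable does not send anything yet; it only describes
    // work that will happen once somebody subscribes.
    const request$: Observable<unknown> = this.http.delete(`api/heroes/${id}`);

    // Even with no callbacks, this subscribe() call is what actually triggers
    // the DELETE request. Omit it and nothing ever reaches the server.
    request$.subscribe();
  }
}
```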

I can accept this as an oddity of a system that we just have to learn to live with in the world of Angular development. Even though I can see it biting me in the future if I ever forget, I can see how it is a worthwhile tradeoff to get everything else RxJS offers to make server interaction easier.