Learned About Home Assistant From ESPHome Via WLED

I thought Adafruit’s IO platform was a great service for building network-connected devices. If my current project were something I wanted to be internet-accessible, reacting to data as it arrives, Adafruit IO would be a great choice. However, my current intent is for something local to my home, and I wanted the option to query and analyze long-term data, so I started looking at Home Assistant instead.

I found out about Home Assistant in a roundabout way. The path started with a tweet from GeekMomProjects that pointed me toward WLED.

A cursory look at WLED’s home page told me it is a project superficially similar to Ben Hencke’s Pixelblaze: an ESP8266/ESP32-based platform for building network-connected projects that light up individually addressable LED arrays. The main difference I saw was in network control. A Pixelblaze is well suited for standalone execution, and its network interface primarily exposes a web-based UI for programming effects. There are ways to control a Pixelblaze over the network, but they are more advanced scenarios. In contrast, WLED’s own interface for standalone effects is dominated by less sophisticated lighting schemes. For anything more sophisticated, WLED has an API for control over the network from other devices.

The Pixelblaze sensor board is a good illustration of this difference: it is consistent with Pixelblaze design to run code that reacts to its environment with the aid of a sensor board. There’s no sensor board peripheral for WLED: if I want to build a sensor-reactive LED project using WLED, I would build something else with a sensor, and send commands over the network to control WLED lights.
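
To make that concrete, WLED exposes a JSON API over HTTP, so a sensor node (or any computer on the network) can change the lights with a single request. Here is a minimal Python sketch; the IP address, brightness, and color values are placeholders, and the exact JSON fields accepted can vary by WLED version:

```python
# Minimal sketch: command a WLED controller over its HTTP JSON API.
# The address below is a placeholder for wherever WLED lives on the LAN.
import requests

WLED_HOST = "192.168.1.50"  # hypothetical address of the WLED device

payload = {
    "on": True,    # turn the LEDs on
    "bri": 128,    # brightness, 0-255
    "seg": [{"col": [[255, 96, 0]]}],  # set the first segment to an orange color
}

response = requests.post(f"http://{WLED_HOST}/json/state", json=payload, timeout=5)
response.raise_for_status()
print("WLED accepted the command, HTTP status:", response.status_code)
```

In a sensor-reactive project, a request like this would be issued whenever the sensor reading crosses whatever threshold we care about.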

So what would these other network nodes look like? Following some links led me to the ESPHome project. This is a platform for building small network-connected devices using an ESP8266/ESP32 as the network gateway, with a library full of templates we can pick up and use. It looks like WLED is an advanced and specialized relative of ESPHome nodes, such as ESPHome’s adaptation of the FastLED library. I didn’t dig deeper to find exactly how closely related they are. What’s more interesting to me right now is that a lot of other popular electronics devices are available in the ESPHome template library, including the INA219 power monitor I’ve got on my workbench. All ready to go, no coding on my part required.

Using an inexpensive ESP as a small sensor input or output node, and offloading processing logic somewhere else? This can work really well for my project, depending on that “somewhere else.” If we’re talking about some cloud service, then we’re no better off than with Adafruit IO. So I was happy to learn ESPHome is tailored to work with Home Assistant, an automation platform I could run locally.

Unity DOTS = Data Oriented Technology Stack

Looking over resources for the Unity ML-Agents toolkit for reinforcement learning AI algorithms, I’ve come across multiple discussion threads about how it has difficulties scaling up to take advantage of modern multicore computers. This is not just an ML-Agents challenge, this is a Unity-wide challenge. Arguably it is even a software development industry-wide challenge. When CPUs stopped getting faster clock rates and started gaining more cores, games had problems taking advantage of them. Historically, while a game engine is running, one CPU core runs at max. The remaining cores may help out with a few auxiliary tasks but mostly sit idle. This is why gamers have been focused on single-core performance in CPU benchmarks.

Having multiple CPUs running in parallel isn’t new, nor are the challenges of leveraging that power in full. From established software toolkits to leading edge theoretical research papers, there are many different approaches out there. Reading these Unity forum posts, I learned that Unity is working on a big revamp under the umbrella of DOTS: Data-Oriented Technology Stack.

I came across the DOTS acronym several times without understanding what it was. But after it came up in the context of better physics simulation and a request for ML-Agents to adopt DOTS, I knew I couldn’t ignore the acronym anymore.

I don’t know a whole lot yet, but I’ve got the distinct impression that working under DOTS will require a mental shift in programming. There were multiple references to Unity developers saying it took some time for the concepts to “click”, so I expect some head-scratching ahead. Here are some resources I plan to use to get oriented:

DOTS is Unity’s implementation of Data-oriented Design, a more generalized set of concepts that helps write code that will run well on modern machines with many cores and multiple levels of memory caches. An online eBook for Data-oriented Design is available, which might be good to read so I can see if I want to adopt these concepts in my own programming projects outside of Unity.

And just to bring this full circle: it looks like the ML-Agents team has already started DOTS work as well. However it’s not clear to me how DOTS will (or will not) help with the current gating performance factor: Unity environment’s communication with PyTorch (formerly TensorFlow) running in a Python environment.

Today I Learned: MuJoCo Is Now Free To Use

I’ve contemplated going through OpenAI’s guide Spinning Up in Deep RL. It’s one of many resources OpenAI made available, and it builds upon the OpenAI Gym system of environments for training deep reinforcement learning agents. They range from very simple text-based environments, to 2D Atari games, to full 3D environments built with MuJoCo, whose documentation explains the name is shorthand for the type of interactions it simulates: “Multi Joint Dynamics with Contact.”
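
For context on what “environment” means here, the Gym interface boils down to a reset/step loop. Here is a minimal sketch with a random agent, using the license-free CartPole environment and the Gym API of that era (reset returning an observation, step returning four values):

```python
# Minimal Gym loop with a random agent. MuJoCo-based environments such as
# "HalfCheetah-v2" expose the same reset/step interface; they just required
# a MuJoCo license to install at the time.
import gym

env = gym.make("CartPole-v1")
observation = env.reset()

total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()                  # pick a random action
    observation, reward, done, info = env.step(action)  # advance one timestep
    total_reward += reward

print("Episode reward:", total_reward)
env.close()
```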

I’ve seen MuJoCo mentioned in various research contexts, and I’ve inferred it is a better physics simulation than something that we would find in, say, a game engine like Unity. No simulation engine is perfect, they each make different tradeoffs, and it sounds like AI researchers (or at least those at OpenAI) believe MuJoCo to be the best one to use for training deep reinforcement learning agents with the best chance of being applicable to the real world.

The problem was that, when I looked at OpenAI Gym the first time, MuJoCo was expensive. This time around, I visited the MuJoCo page hoping they had launched a more affordable tier of licensing, and there I got the news: sometime in the past two years (I didn’t see a date stamp) DeepMind acquired MuJoCo and intends to release it as free open source software.

DeepMind was itself acquired by Google and, when the collection of companies was reorganized, it became one of several companies under the parent company Alphabet. At a practical level, that meant DeepMind had indirect access to Google money for buying things like MuJoCo. There’s lots of flowery wordsmithing about how opening MuJoCo will advance research; what I care about is the fact that everyone (including myself) can now use MuJoCo without worrying about the licensing fees it previously required. This is a great thing.

At the moment MuJoCo is only available as compiled binaries, which is fine by me. Eventually it is promised to be fully open-sourced at a GitHub repository set up for that purpose. The README of the repository makes one thing very clear:

This is not an officially supported Google product.

I interpret this to mean I’ll be on my own to figure things out without Google technical support. Is that a bad thing? I won’t know until I dive in and find out.

Today I Learned About Flippa

I received a very polite message from Jordan representing Flippa who asked if I’d be interested in selling this site https://newscrewdriver.com. Thank you for the low-pressure message, Jordan, but I’m keeping it for the foreseeable future.

When I received the message, I didn’t know what Flippa was, so I went and took a cursory look. On the surface it seems fairly straightforward: a marketplace to buy and sell online properties, anything from a bare domain to e-commerce sites with established operational history. The latter made sense to me because a valuation can be calculated from an established revenue stream. The rest I’m less confident about, such as domain names, whose valuations are speculation on how they might be monetized.

But there’s a wide spectrum between those two endpoints of “established business” and “wild speculation.” I saw several sites for sale set up by people who started with an idea, set up a site to maximize search engine traffic over a few months, then sold the site based on that traffic alone. Prices range wildly. At the time of my browsing, the auction for one such site was about to close at $25, but they seem to range up to several thousand dollars, so I guess it’s possible to make a living doing this if your ideas (and SEO skills) are good.

Mine are not! But money was not the intent when I set up this site anyway. It is a project diary of stuff I find interesting to learn about and work on. I made it public because there’s no particular need for privacy and some of this information may be useful to others. (Most of it is not useful to anybody, but that’s fine too.) So it’s all here in written text format for easy searching, both by web search engines and with the browser’s “find text” once readers arrive.

I haven’t even tried to put ads on this page. (Side note: I’m thankful modern page ads have evolved past the “Punch the Monkey” phase.) I also understand that if my intent were to generate advertising revenue, I should be doing this work in video format on YouTube. But video files are hard to search and skim through, defeating the purpose of making this project diary easily available for others to reference. I had set up a New Screwdriver YouTube channel and made a few videos, but even my low-effort videos took far more work than typing some words. For all these reasons I decided to primarily stay with the written word and reserve video for specific topics best shown in video format.

The only thing I’ve done to try monetizing this site is joining the Amazon Associates program, where my links to stuff I’ve bought on Amazon can earn me a tiny bit of commission. The affiliate links don’t add to the cost of buying those things. And even though I’ve had to add a line of disclosure text, that’s still less jarring of an interruption than page ads. So far Amazon commissions have been just about enough to cover the direct costs of running this site (annual domain registration fee and site hosting fee) and I’m content to leave it at that.

But hey, that is still revenue associated with this site! Browsing Flippa for similar sites based on age, traffic, and revenue, my impression is that market rate is around $100. (Low confidence with huge error margins.) Every person has their price, but that’s several orders of magnitude too low to motivate me to abandon my project diary.

Shrug. New Screwdriver sails on.

TIL Some Video Equipment Supports Both PAL and NTSC

Once I sorted out memory usage of my ESP_8_BIT_composite Arduino library, I had just one known issue left on the list. In fact, it was the very first one I filed: I don’t know if the PAL video format is properly supported. When I pulled this color video signal generation code from the original ESP_8_BIT project, I worked to keep all the PAL support code intact. But I live in NTSC territory, so how am I going to test PAL support?

This is where writing everything on GitHub paid off. Reading my predicament, [bootrino] passed along a tip that some video equipment sold in NTSC geographic regions also supports PAL video, possibly as a menu option. I poked around the menu of the tube TV I had been using to develop my library, but didn’t see anything promising. For the sake of experimentation I switched my sketch into PAL mode just to see what would happen. What I saw was a lot of noise with a faint ghost of the expected output, as my TV struggled to interpret a signal in a format it could almost, but not quite, understand.

I knew the old Sony KP-53S35 RPTV I helped disassemble was not one of these bilingual devices. When its signal processing board was taken apart, there was an interface card to host an NTSC decoder chip, strongly implying that PAL support required a different interface card. It also implies newer video equipment has a better chance of multi-format support, as it would have been built in an age when manufacturing a single worldwide device was cheaper than manufacturing separate region-specific hardware. I dug into my hardware hoard looking for a relatively young piece of video hardware. Success came in the shape of a DLP video projector, the BenQ MS616ST.

I originally bought this projector as part of a PC-based retro arcade console project with a few work colleagues, but that didn’t happen for reasons not important right now. What’s important is that I bought it for its VGA and HDMI computer interface ports, so I didn’t know if it had composite video input until I pulled it out to examine its rear input panel. Not only does this video projector support composite video in both NTSC and PAL formats, it also has an information screen that indicates whether NTSC or PAL format is active. This is important, because seeing the expected picture isn’t proof by itself. I needed the information screen to verify my library’s PAL mode was not accidentally sending a valid NTSC signal.

Further proof that I was exercising a different code path: I saw a visual artifact at the bottom of the screen that was absent from NTSC mode. It looks like I inherited a PAL bug from ESP_8_BIT, where rossumur was working on some optimizations for this area but left it in a broken state. This artifact would have easily gone unnoticed on a tube TV, as tube TVs tend to crop off the edges with overscan. However, this projector does not perform overscan, so everything is visible. Thankfully the bug was easy to fix by removing an errant if() statement that caused PAL blanking lines to be, well, not blank.

Thanks to this video projector fluent in both NTSC and PAL, I can now confidently state that my ESP_8_BIT_composite library supports both video formats. This closes the final known issue, which frees me to go out and find more problems!

[Code for this project is publicly available on GitHub]

Jumper Wire Headaches? Try Cardboard!

My quick ESP32 motor control project was primarily to practice software development for FreeRTOS basics, but to make it actually do something interesting I had to assemble associated hardware components. The ESP32 development kit was mounted on a breadboard, to which I’ve connected a lot of jumper wires. Several went to a Segger J-Link so I had the option of JTAG debugging. A few other pins went to potentiometers of a joystick so I could read its position, and finally a set of jumper wires connected ESP32 output signals to an L298N motor control module. The L298N itself was connected to DC motors of a pair of TT gearboxes and a battery connector for direct power.

This arrangement resulted in an annoying number of jumper wires connecting these six separate physical components. I started doing this work on my workbench and the first two or three components were fine. But once I got up to six, things started going wrong. While working on one part, I would inadvertently bump another part, which tugged on its jumper wires and occasionally pulled them out of the breadboard. At least wires pulled completely free were clearly visible; the annoying cases were wires pulled only partially free, causing intermittent connections. Those were a huge pain to debug, and of course I would waste time thinking it was a bug in my code when it wasn’t.

I briefly entertained the idea of designing something in CAD and 3D-printing it to keep all of these components together as one assembly, but I rejected that as sheer overkill. Far too complex for what’s merely a practice project. All I needed was a physical substrate to temporarily mount these things; there must be something faster and easier than 3D printing. The answer: cardboard!

I pulled a box out of my cardboard recycle bin and cut out a sufficiently large flat panel using my Canary cutter. The joystick, L298N, and TT gearboxes had mounting holes so a few quick stabs to the cardboard gave me holes to fasten them with twist ties. (I had originally thought to use zip ties, but twist ties are more easily reused.) The J-Link and breadboard did not have convenient mounting holes, but the breadboard came backed with double-sided adhesive so I exposed a portion for sticking to the cardboard. And finally, the J-Link was held down with painter’s masking tape.

All this took less than ten minutes, far faster than designing and 3D printing something. After securing all components of this project into a single cardboard-backed physical unit, I no longer had intermittent connection problems from jumper wires accidentally pulled loose. Mounting everything on a sheet of cardboard was time well spent, and since cardboard is so easily modified, it will also be simple to swap out the L298 motor driver used in this prototype.

I Started Learning Jamstack Without Realizing It

My recent forays into learning about static-site generators, and my earlier foray into the Angular framework for single-page applications, had a clearly observable influence on my web search results. Especially visible were changes in the “relevant to your interests” sidebars. “Jamstack” specifically started popping up more and more frequently as a suggestion.

Web frameworks have been evolving very rapidly. This is both a blessing, when bug fixes and new features are added at a breakneck pace, and a curse, because knowledge is quickly outdated. There are so many web stacks I can’t even begin to keep track of what’s what. With Hugo and Angular on my “devise a project for practice” list, I had no interest in adding yet another concept to my to-do list.

But with the increasing frequency of Jamstack being pushed on my search results list, it was a matter of time before an unintentional click took me to Jamstack.org. I read the title claim in the time it took for me to move my mouse cursor towards the “Back” button on my browser.

The modern way to build [websites & apps] that delivers better performance

Yes, of course, they would all say that. No framework would advertise they are the old way, or that they deliver worse performance. So none of the claim is the least bit interesting, but before I clicked “Back” I noticed something else: the list of logos scrolling by included Angular, Hugo, and Netlify. All things that I have indeed recently looked at. What’s going on?

So instead of clicking “Back”, I continued reading and learned proponents of Jamstack are not promoting a specific software tool like I had ignorantly assumed. They are actually proponents of an approach to building web applications. JAM stands for (J)avaScript, web (A)PIs, and (M)arkup. Tools like Hugo and Angular (and others on that scrolling list) are all under that umbrella. An application developer might have to choose between Angular and its peers like React and Vue, but no matter the decision, the result is still JAM.

Thanks to my click mistake, I now know I’ve started my journey down the path of Jamstack philosophy without even realizing it. Now I have another keyword I can use in my future queries.

Randomized Dungeon Crawling Levels for Robots

I’ve spent more time than I should have on Diablo III, a video game where our hero adventures through an endless series of challenges. Each level in the game has a randomly generated layout, so it’s not possible to memorize where the most rewarding monsters live or where the best treasures are hidden. This keeps the game interesting because every level is an exploration of an environment I’ve never seen before and will never see an exact duplicate of again.

This is what came to my mind when I learned of WorldForge, a new feature of AWS RoboMaker. For those who don’t know: RoboMaker is an AWS offering built around ROS (Robot Operating System) that lets robot builders leverage the advantages of AWS. One example most closely relevant to WorldForge is the ability to run multiple virtual robot simulations in parallel across a large number of AWS machines. It’ll cost money, of course, but less than buying a large number of actual physical computers to run those simulations.

But running a lot of simulations isn’t very useful when they are all running the same robot through the same test environment, and this is where WorldForge comes in. It’s a tool that accepts a set of parameters, then generates a set of simulation worlds that randomly place or replace features according to those parameters. Then virtual robots can be set loose to do their thing across AWS machines running in parallel. Consistent successful completion across different environments builds confidence that our robot logic is properly generalized and not just memorizing where the best treasures are buried. So basically, a randomized dungeon crawler adventure for virtual robots.

WorldForge launched with the ability to generate randomized residential environments, useful for testing robots intended for home use. To broaden the appeal of WorldForge, other types of environments are coming in the future. So robots won’t get bored with the residential tileset: they’ll also get industrial and business tilesets, with more to come.

I hope they appreciate the effort to keep their games interesting.

The Great Webcam Shortage of 2020

Sometimes a project idea comes up and is hampered by the most unexpected of problems. The visual dimension measuring machine project (I need to come up with a better name) is mechanically speaking a camera bolted to the carriage of a former 3D printer. I wanted to explore how precisely controlled camera movements can help capture precise dimensions of an object in view.

I’ve been working on the former 3D printer, bringing a retired Geeetech A10 back up and running. I then started looking into the webcam side. I wanted something better than the average camera built into the screen bezel of a laptop. Ideally this meant a good optical lens assembly with auto focus capability, and not the more common fixed-focus cameras with mediocre lenses barely better than a pinhole camera.

And… all the good webcams are sold out! I had not known we were in the middle of the Great Webcam Shortage of 2020, but it made sense. A lot of office workers have switched to working at home, and I’m not the only one dissatisfied with the camera built into our laptops. Thus the demand for an upgraded camera shot up well past historical norms, and manufacturers are scrambling to meet it.

So the first draft of my project will have to make do with a webcam I already had on hand, which is probably a good idea for a prototype anyway. I have an HP Webcam HD 4310 I can draft for the purpose. I had been using it to monitor my 3D prints via OctoPi, but that printer is currently offline anyway, so the camera is available.

The print on the outside proclaims “1080P HD” and “Auto Focus.” I’m not sure the former is true: it does have the option to output 1080P, but the visual quality is rather less than I expected at that resolution. I strongly suspect that 1080P is not the native sensor resolution but an upscaled image. The alternate explanation is that a genuine 1080P sensor is hampered by a poor lens. Either way, it’ll have to do, and at least it does offer auto focus! Let’s find out what’s inside.

Learning DOT and Graph Description Languages Exist

One of the conventions of ROS is the /cmd_vel topic. Short for “command velocity,” it is commonly how high-level robot planning logic communicates “I want to move in this direction at this speed” to lower-level chassis control nodes. In ROS tutorials, this is usually the first practical topic that gets discussed. This convention helps with one of the core promises of ROS: portability of modules. High-level logic can be created and output to /cmd_vel without worrying about low-level motor control details, and robot chassis builders know teaching their hardware to understand /cmd_vel allows them to support a wide range of different robot modules.
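
To show how lightweight that contract is, here is a minimal ROS 1 (rospy) sketch of a high-level node publishing to /cmd_vel; the node name and speed values are arbitrary placeholders:

```python
#!/usr/bin/env python
# Minimal ROS 1 (rospy) sketch: high-level logic publishing a velocity
# command to /cmd_vel, which any compliant chassis node can consume.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("drive_forward_demo")          # placeholder node name
pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)

cmd = Twist()
cmd.linear.x = 0.2    # forward at 0.2 m/s (placeholder value)
cmd.angular.z = 0.5   # turn at 0.5 rad/s (placeholder value)

rate = rospy.Rate(10)  # re-publish at 10 Hz so the chassis keeps moving
while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()
```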

Sounds great in theory, but there are limitations in practice, and every once in a while a discussion arises on how to improve things. I was reading one such discussion when I noticed one message had an illustrative graph accompanied by a “source” link. That went to a GitHub Gist with just a few simple lines of text describing the graph, and it took me down a rabbit hole learning about graph description languages.

In my computer software experience, I’ve come across graphical description languages like OpenGL, PostScript, and SVG. But they are complex and designed for general purpose computer graphics; I had no idea there were entire languages designed just for describing graphs. This particular file was DOT, with more information available on Wikipedia, including the limitations of the language.
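
I haven’t written raw DOT by hand yet, but as a taste of how little text a graph takes, here is a sketch using the Python graphviz package (assuming it is installed) to build a tiny node graph and print its DOT source:

```python
# Sketch: build a small graph of ROS nodes connected via /cmd_vel and
# emit its DOT description using the "graphviz" Python package.
from graphviz import Digraph

g = Digraph("cmd_vel_example")
g.edge("planner", "/cmd_vel")
g.edge("teleop", "/cmd_vel")
g.edge("/cmd_vel", "base_controller")

print(g.source)
# Prints DOT text roughly like:
#   digraph cmd_vel_example {
#       planner -> "/cmd_vel"
#       teleop -> "/cmd_vel"
#       "/cmd_vel" -> base_controller
#   }
# g.render("cmd_vel_example", format="png") would draw it, given the
# Graphviz binaries are installed on the system.
```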

I’m filing this under the “TIL” (Today I Learned) section of the blog, but it’s more accurately a “How did I never come across this before?” post. It seems like an obvious and useful tool but not adopted widely enough for me to have seen it before. I’m adding it to my toolbox and look forward to the time when it would be the right tool for the job, versus something heavyweight like firing up Inkscape or Microsoft Office just to create a graph to illustrate an idea.

Change Is Only Possible If People Have Hope

It’s been over six weeks since the United States added “Widespread Civil Unrest” to the list of everything else going wrong with the year 2020. I personally chose to reduce my workshop activities and make time to read up on some things that were left out of my school history textbooks. There were a lot of important events missing! I was a good student who paid attention and did well on tests, but that only covered what was in the book.

On the national stage, I’m glad to see this wasn’t “just another thing” getting brushed aside (as much as some people in positions of leadership tried), but the majority of the immediate positive responses are just symbolic gestures. Painting “Black Lives Matter” across a street won’t do anything to actually make Black lives matter.

But that doesn’t mean such symbolic gestures are useless. They set a low bar that is easy to clear, a basic floor for discussion on how we can move forward. When that fails to establish common ground, when that becomes controversial, it is really informative. If people can’t even agree on the basic premise that Black lives matter, it really lowers the chances we can have productive discussion on how to provide liberty and justice for all. If some people aren’t even willing to support symbolic gestures, how will they react to real and meaningful changes?

And real and meaningful changes will be required, because ignoring all the underlying problems won’t make them go away. The bad news is that real change takes time, meaning it’s too early to declare either victory or defeat. There are a lot of policy decisions to be made, legislation to be enacted or revoked, and court decisions to be handed down before we can point to any real change in direction. That is far too slow to be noticeable in this age of instant gratification and fleeting social media exposure, so we’ll just have to wait and see. But as long as people hold on to hope for a better society where Black lives do matter, change is possible.


Notes from workshop tinkering will resume tomorrow, starting with previously scheduled backlog.

Words of Hope, Words of Change

Notes from workshop tinkering are on hold, reading words by others instead.

How to Make this Moment the Turning Point for Real Change by Barack Obama. I think he’s qualified to say a few words, based on his firsthand experience with politics in the United States.

With a decades-long career in journalism, Dan Rather has seen some shit. His recent essay posted on Facebook acknowledges things are pretty bad now, but things have been really bad before, too. He wants to remind us that every time before, people holding on to the ideals of this nation carried it through, and that can happen again.

NASA R5 Valkyrie Humanoid Robot

When I was researching my Hackaday post about the DARPA Subterranean Challenge, I learned that there’s a virtual track of the competition using just digital robots inside Gazebo. I also learned it was not the first time a virtual competition with prize money took place within Gazebo: there was also the NASA Space Robotics Challenge, where competitors submitted software to control a humanoid robot on a Mars habitat.

What I didn’t know at the time was that the virtual humanoid robot was actually based on a physical robot, the NASA R5. Also called Valkyrie, this robot is the size of a full human adult, with a seven-digit price tag putting it quite far out of my reach. This robot was originally built for the 2013 DARPA Robotics Challenge. It appeared the robot had no shortage of ingenious mechanical design (I like the pair of series elastic actuators for the ankle joint). It was not lacking in sophisticated sensors, and it was not lacking in electric power. What it lacked was the software to tie them all together, and an unfortunate network configuration issue hampered performance on the actual day of DARPA competition.

After the competition, Valkyrie visited several research institutions interested in advancing the state of the art in humanoid robotics. I assume some of that research ended up as published papers, though I have not yet gone looking for them. Their experience likely fed into how the NASA Space Robotics Challenge was structured.

That competition was where Valkyrie got its next round of fame, albeit in digital form inside Gazebo. Competitors were given a simulation environment in which to perform the required list of tasks. Using a robot simulator meant people didn’t need a huge budget and a machine shop to build robots to participate. NASA said the intent was to open up the field to nontraditional sources, to welcome new ideas by new thinkers they termed “citizen inventors.” This proved to be a valid approach, as the winner was a one-person team.

As for the physical robot, I found a code repository seemingly created by NASA to support research institutions that have borrowed Valkyrie, but it feels rather incomplete and has not been updated in several years. Perhaps Valkyrie has been retired and there’s a successor (R6?) underway? A writer at IEEE Spectrum noticed a job listing that implied as such.

(Image source: NASA)

Old School Engraving With Gravoply

There’s a certain aesthetic I associate with older labels, signs, and equipment control panels: two contrasting colors and a three-dimensional feel. That look has mostly faded away by now, replaced by crisp flat printing with multiple solid colors or even full-color halftone. I hadn’t thought much about those old panels until I had the opportunity to look over some dusty retired equipment for making them.

This particular material was “Gravoply” and it is still available for order. We can choose from a wide selection of colors, though the core (rear) color selection is a little more limited than the selection for surface (front) color. The dimensional feel is a function of how the material is used to create signage: a rotary engraving tool cuts away the surface layer and exposes the core layer, producing a display with two contrasting colors.

This rotary tool was held in a pantograph to trace templates onto Gravoply. Laying out a particular design meant working with individual letter templates, a simplified version of how past typesetters did their jobs. While in concept a pantograph could allow arbitrary scaling, it appears scaling was limited in this particular implementation; otherwise there would be no need for multiple sizes of letter templates.

Gravoply Templates

This technique is unforgiving of mistakes. If the rotary tool went off track, it would cut away surface material that was not intended to be cut. When this happens, the user has no choice but to start over, which explains why these pieces haven’t been used in years: they moved away from this system as soon as a cost-effective and less frustrating alternative was available.

Visiting the web site of Gravograph today, I see the products on their front page are computer motion controlled machines. Though they still make pantographs for doing things the old-fashioned way, materials for mechanical removal like Gravoply are typically cut with small CNC vertical mills, and they also offer material designed for engraving by laser. Technology has moved on, and the company behind Gravoply has evolved with it.

I found the pantograph an interesting mechanism and I might ask to use it in the future for the sake of getting some hands-on time with a mechanical anachronism. But I’m not likely to actually create anything significant using a pantograph, at least not as long as I have a CNC engraver at my disposal.

Eyoyo EM15H USB-C Portable Monitor Actually Worked The Way I Hoped It Would

Once upon a time I decided it was a good idea to turn an old laptop screen into a portable external monitor. That was a fun project and I learned a lot, but technology has advanced and now there isn’t much point in doing the same thing again. The final nail in the coffin was the opportunity to play with an Eyoyo EM15H Portable USB-C Monitor. (*) It is just one example of a now-prolific product category that barely existed when I started my project.

The key enabling technology is the growing maturity of USB-C. Yes, it’s still something of a mess, but engineers have continued working away at chasing the dream of a universal connector. For the purpose of portable monitors, the most useful feature is the ability to carry data and power on a single cable. That makes a portable monitor much easier to set up and use than my project, where I had to wrangle both power and data cables.

Another technological evolution is how thin screens have become, driven primarily by the quest for ever-thinner laptop computers. This particular monitor, complete inside its plastic enclosure, is thinner than the display I used for my project without its enclosure. I know the move from CCFL to LED backlighting has something to do with it, but I’m sure that’s only part of the story. The modern product is a fraction of the size and weight of my project.

The final piece of the puzzle is a standardized way to communicate data to the monitor. Early USB external monitors worked by presenting themselves to the system as their own unique video devices. This required device-specific drivers, and all video processing was done by the USB monitor. The cheap, low-powered models were only useful for mostly-static content such as PowerPoint presentations. They could not handle full-screen video and provided no 3D acceleration for games.

USB-C allows a better way. Supporting alternate modes such as DisplayPort and Thunderbolt means a USB-C display can leverage all the graphics processing power on the computer and display just the rendered results. However, since USB-C is backwards compatible with old USB, it’s hard to be sure how a particular monitor is implemented until we test it firsthand. I connected this monitor to the USB-C port of my Dell 7577, then loaded up a few video games with the graphics detail turned up high.

If the monitor is a dumb frame buffer video device, graphics performance would plummet or possibly not display at all. There’s no way a cheap external monitor can match the graphics performance of the NVIDIA GTX 1060 GPU inside the laptop.

But we got full graphics performance: full detail running at 60 frames per second. This is convincing proof the monitor is showing images rendered by the GPU inside the laptop. A lightweight, portable, single-cable, easy-to-set-up external monitor with full performance is now a reality for about $150. (*) At that price point, I’m unlikely to build another external monitor of my own.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Hypocycloid Drive Calculator by Otvinta

The best part of maker/hacker gatherings is the opportunity to meet and chat with people who introduce me to ideas and resources. At Sparklecon 2020 I met Allen Phuong who saw Sawppy roaming around and wanted to learn more. Sadly he had missed my Sawppy presentation because he was busy participating in the battle bot competition taking place at the same time, but I gave him an abbreviated version and we talked about many projects on our respective to-do lists, robotic and more.

Allen got me interested in hypocycloid gears again. It was something I had briefly examined while looking for ways to build a gearbox with low speed and high torque, but without the backlash present in typical gearboxes. Right now the standard solution in robotics is the harmonic drive, an expensive solution with specific requirements on the material used to build the flexible spline. 3D printer plastic does not meet all the requirements, and hence 3D-printed harmonic drives always involve trade-offs that made me less interested.

Cycloidal drives do not have a flexible component with strict material behavior requirements; all parts remain rigid while in operation. For (near) zero backlash operation, however, they require high dimensional accuracy. I had dismissed them for this reason, as 3D printing is not very precise. However, Allen asserted that 3D printers can reach the required levels, so maybe it’s worth a second look. And even if I can’t get my 3D printer to meet my dimensional accuracy goals, I now have access to a few tools that I didn’t have before, ranging from a laser cutter, to my project CNC mill, to a resin printer, all capable of far higher accuracy than my 3D printer.

There are a few tools available online to help generate profiles based on parameters I specify. Allen pointed me to the Hypocycloid Gear Calculator on Otvinta, which looks like a worthwhile starting point. The author of this site has decided to focus on Blender as the 3D tool, so if I want to make use of the results, I’ll have to learn how to translate it into Onshape or Fusion 360. But first, I can get a taste via a ready-made project.
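
Before translating anything into CAD, it helped me to remember how simple the underlying curve is: a hypocycloid is traced by a point on a circle of radius r rolling inside a fixed circle of radius R. Here is a minimal Python sketch of the standard parametric equations; note this is just the basic curve, not the complete cycloidal-drive disc profile that the Otvinta calculator generates, which also accounts for eccentricity, pin count, and pin radius.

```python
# Sketch of the basic hypocycloid curve: a point on a circle of radius r
# rolling inside a fixed circle of radius R. This is only the underlying
# curve, not a complete cycloidal-drive disc profile.
import math

def hypocycloid(R, r, steps=720):
    points = []
    for i in range(steps + 1):
        t = 2.0 * math.pi * i / steps
        x = (R - r) * math.cos(t) + r * math.cos((R - r) / r * t)
        y = (R - r) * math.sin(t) - r * math.sin((R - r) / r * t)
        points.append((x, y))
    return points

# R/r = 4 traces the classic four-cusped astroid; print a few sample points.
for x, y in hypocycloid(R=40.0, r=10.0, steps=8):
    print(f"{x:8.3f} {y:8.3f}")
```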

One Amazon Order, Three Identical Units, Three Shipping Boxes

Earlier I shared a tale of wasteful packaging from McMaster-Carr: using their standard box and bag system to ship a single little spacer. It’s not great, but there was a reason for the situation: the single part replaced a flawed component in an earlier (less wastefully packaged) order.

And now a different story from Amazon, whose business success is dependent on the efficiency of their logistics system. So when I ordered six Traxxas remote control monster truck wheels, I had expected them to be packed in a single box.

This looks reasonable, right?

Traxxas tire shipping expectation

That is, unfortunately, not what happened. These wheels are sold in pairs, so my order for six wheels was an order for three identical pairs, and they came in three separate boxes (each with copious packing material) as shown at the top of this post.

Thinking this was bizarre, I looked for clues as to how this situation might have come to be. Examining the labels on those boxes, I saw they originated from three different distribution centers. Did Amazon’s stocking system decide to keep a single pair of these wheels at every warehouse? That seems very strange, but it is the least strange explanation I can think of for this latest episode of unnecessary packaging. The second-place guess is that I ordered this product at the end of its stocking period and just happened to catch the time when there was a lone unit waiting at each of three nearby distribution centers. That seems quite unlikely, but the remaining guesses are even less likely as we move down the list.

I doubt I’ll ever know the real answer, but it will continue to puzzle me.

Toyota Mirai Water Release Switch

I have always been a fan of novel engineering and willing to spend my own money to support adventurous products. This is why, back in 2003, I was cross shopping two cars that had nothing in common except novel engineering: the Mazda RX-8 with its Wankel rotary engine and the Toyota Prius gas-electric hybrid.

Side note: it is common for car salesmen to ask what other cars a particular shopper is considering. When I told them, it was fun to watch their faces as they worked to process the answer.

Eventually I decided on a Mazda RX-8, which I still own. Since then I have also leased a Chevrolet Volt plug-in hybrid for three years; in fact, it was the exact Volt shown at the top of my Hackaday post memorializing the car. Both of those cars are no longer being manufactured. Meanwhile, Toyota’s gas-electric hybrids have become mainstream, making them less personally interesting to me.

But Toyota has an entirely different car to showcase novel engineering: the hydrogen fuel cell Mirai. I had the chance to join a friend evaluating the car. He was serious about getting one; I just wanted to check it out and was not contemplating one of my own. While we were waiting for his appointment, we got in the showroom model and started looking around.

And since we were engineers, this also included digging into the owner’s manual sitting in the glovebox. The Mirai ownership experience is a fascinating blend of the familiar and the unusual, and the strangest item that caught our attention was this water release switch. The manual only said it was for ‘certain situations’ but did not elaborate. We asked the sales rep and learned it was there so water can be dumped before entering places where water could cause problems.

Two potential examples were actually in front of us: the Mirai parked in their showroom was sitting on a carpeted surface, where water could leave a stain. Elsewhere in the showroom, cars are parked on tile or polished concrete where water could leave a slippery surface causing people to fall. The button allows a Mirai to drain its water before moving into the showroom.

Right now the Mirai is in a commercially tough spot. It is at the end of the current product cycle, where three-year-old units from the same generation can be purchased off lease at significant depreciation while a far better-looking next generation is on the horizon. Toyota has a lot of incentives on offer for potential Mirai shoppers. When leasing for three years, in addition to a discount up front, all regular checkups and maintenance are free (no oil and filter changes here, but things like checking for hydrogen leaks instead), and the deal includes a $12,000 credit for hydrogen fuel.

It was not enough to entice my friend, and I was not interested either. I believe my next car will be a battery electric vehicle.

Undersized Spacer Promptly Replaced By McMaster-Carr

Living in the Los Angeles area has its ups and downs. As a maker and tinkerer, one of the “ups” is close proximity to a major McMaster-Carr distribution facility. When introducing McMaster-Carr to friends who are not already aware of them, I say “they sell everything you’d need to set up a factory.” It is a valuable resource that becomes even more valuable when deadlines loom, thanks to their quick service and willingness to ship orders of any quantity. I receive my orders the next day, and in case of a real crunch, I can fight LA traffic to get same-day satisfaction at their will-call pickup window.

Selection, speed, and customer service are their strengths, but those come with tradeoffs in cost and efficiency. Nothing illustrated this more clearly than a recent experience with one of my McMaster-Carr orders. My shipment included a number of small aluminum spacers of a particular inner/outer diameter. And the length is obviously the most critical dimension for a spacer… but one of them was too short. It appears these were cut on automated CNC lathes and an incomplete end piece of stock fell into the pile of finished products.

I reported this to McMaster-Carr and they immediately sent out a replacement spacer delivered the next day.

One.

Single.

Spacer.

As a customer I can’t complain: I reported my problem and they fixed it immediately at their expense. It does make me happy that I only had to wait an extra day and I plan to continue buying from McMaster-Carr for my hardware needs. I don’t have an alternative to propose, so this was probably the best possible outcome.

All that said, it still feels incredibly wasteful.

Wasteful McMaster Carr packaging

Looping Video Advertisement Player Module

While I was at Costco for grocery shopping and checking out rechargeable batteries, I walked through the electronics section. For certain items, the actual merchandise is not available in the shopper-accessible warehouse. Instead, the warehouse pallets hold sheets of cardboard that shoppers take to the cashier. Once paid for, the receipt is shown to an attendant at a secure caged area who delivers the actual merchandise.

Familiar with this system, I was not surprised to see pallets stacked full of cardboard sheets in the camera section and didn’t think much of it until my peripheral vision reported unexpected motion. GoPro camera packaging always advertises with beautiful people having amazing adventures, but one of these was moving. Cardboard doesn’t do that.

Bluefin Ad Player 20-3000-1232

Stopping to investigate, I found one of the cardboard sheets had been modified. A rectangular hole was cut, and a video-playing LCD screen complete with associated electronics was inserted. A USB flash drive presumably held the GoPro promotional video, and that was the extent of the modification. There was no rear enclosure, so it was easy for me to take a picture for further research once I returned home.

Given the information visible, I searched for Bluefin Technology “Ad Player” Model 20-3000-1232. This led to the manufacturer’s website and some minimal specifications. While the product label clearly marked the device as made in China, the web site lists an office in Georgia that I presume is their USA distributor. So I was surprised that I couldn’t seem to find this module for purchase online; the only units I found for sale were secondhand on eBay. Most surprisingly, typing the model number into Alibaba and AliExpress also came up empty! I infer this to mean the company only sells to other businesses and there’s no retail sales channel.

I had thought this device would make a promising platform for hacks, depending on price. The secondhand eBay Buy-It-Now price of $70 is not terribly promising; I had been hoping for something closer to $30. But until I find a retail source or decide to buy in bulk directly from the manufacturer, none of that matters.