Window Shopping Budibase

Occasionally I’ve had project ideas that require a database to keep track of everything, and then I would shelve the idea because I didn’t want to spend the time. Or even worse, try to do it with a spreadsheet and suffer all the ills of a square peg in a round hole. I want an easier way to use databases. The good news is that I’m not the only one with this problem and many solutions exist under the marketing umbrella of “low-code” or “no-code” web app platforms. Skimming available information, I think Budibase is worth a closer look.

Budibase is one of the “low-code” platforms and I understand this to mean I don’t have to write any code if my project stays within the lane of a supported usage pattern. But if I must go outside those boundaries, I have the option of inserting my own code to customize Budibase instead of being left on my own to re-implement everything from scratch. Sounds good.

On the business side, Budibase is a free open source project with source code hosted on GitHub. Compiled binaries for self-managed deployments are available a few different ways, including Docker containers which is how I’d test things out. Users are allowed to run their own apps on cloud infrastructure of their choice. However, the licensing agreement forbids reselling Budibase itself as a cloud-hosted service. That privilege is reserved exclusively to Budibase themselves and serves as an income stream alongside consultancy contracts to build apps for businesses. I can support this arrangement as a tradeoff between proprietary commercial software and totally free open-source projects without a reliable income stream. This way Budibase is less likely to shut down and, even if it does, I should be able to limp along on a self-hosted setup until I can find a replacement.

Which brings me to the final point: the database. For the sake of simplicity, some of these low-code/no-code app platforms are built around their own database and assume full control over that database from beginning to end. In some cases this puts my data into a black box I might not be able to port to another platform. In contrast, I can build a Budibase interface on top of an existing database. And that database will still be available independently if Budibase goes away, or if I just want to explore something else. I like having such options and the data security it implies.

I like what I see so far, more than good enough for me to dive into Budibase documentation and learn how I can translate its theoretical benefits into reality, starting with an excellent quick start guide.

Window Shopping Plex Alternative Jellyfin

I’ve been using Plex for a few years to run my home local network media server, with the vast majority of usage centered around my music CD collection in the form of MP3 files. It’s a fairly simplistic usage pattern so I haven’t encountered any problems. Most of the problems I had running my Plex server were my own fault, because I also used Plex as a test case to explore various home server technologies. That’s not Plex’s fault unless you want to say “Plex makes so many different deployment methods available” is a fault. At the moment I am not motivated to move off Plex. But if I ever do, I recently learned of Jellyfin as an alternative.

I used Plex at its free tier for a while before choosing to pay the one-time lifetime subscription fee to unlock a few features that looked interesting. I ended up not using most of those features, but that was fine. A company needs revenue to operate and I paid the company for a product I found useful so the current situation is fine. But like most businesses, Plex is constantly trying to expand its revenue stream. So far that’s been in the form of paid features like video rentals but it does make me a bit suspicious. Tech industry history has many tales of companies alienating their existing customers in pursuit of money. I would be disappointed (but not surprised) if one day Plex decides my one-time lifetime subscription should become more than one-time or less than lifetime.

I have my media on my TrueNAS server, and I am running Plex Media Server software on a virtual machine of my Proxmox server. All of this capability lives on my home local network, but I have to periodically sign in to my Plex account to verify my subscription status. If something happens to be offline at this critical time, I would be locked out of my own Plex server. This has actually happened a few times, but not (yet) often enough to motivate me to ditch Plex.

Jellyfin improves on both of those problems. It is a free software project so as long as it stays true to that ethos, I shouldn’t fear my account status changing or getting charged more money. In fact, I should be able to run everything on my home network server and never have to authenticate against an account server on the internet. Jellyfin is a less mature project with fewer features than Plex, but it has the “keeping MP3 collection organized” feature already and that’s most of what I need. If Plex (either the business or the software) gives me enough grief, I will likely migrate to Jellyfin for my home local network media server needs.


Jellyfin logo is from their UX repository https://github.com/jellyfin/jellyfin-ux

Window Shopping Unofficial Firmware for Canon Cameras

My project turning an Adafruit Memento camera into a thermal camera was fun, but it didn’t utilize any of Memento’s programmable photography features. This is fine because I felt the advantage of Adafruit hardware is the ability to mix in more hardware for fun as I did. Besides, if a project’s goal lies strictly within the realm of photography, it might be better served by a commercial camera product running an unofficial firmware project.

Modern digital cameras run on microcontrollers and motivated enthusiasts have found ways into those systems. I have historically purchased Canon cameras, so unofficial firmware projects most relevant to me are CHDK for Canon PowerShot point-and-shoot cameras and Magic Lantern for Canon’s EOS interchangeable lens cameras. I read through the FAQs for both and learned that’s not exactly correct. Each project is actually specific to the operating system Canon runs on their cameras, and occasionally Canon would release an interchangeable lens camera running their point-and-shoot operating system (or is that vice versa?) for reasons known only to Canon.

Magic Lantern builds on the work pioneered by CHDK, and Magic Lantern’s author fully credits the CHDK team for paving the way. Apparently CHDK bootstrapped starting from obtaining control over a camera status LED and using it to blink out data that told the team where to go from there. That’s an amazing tale of hardware hacking. This relationship also means both projects use the same software architecture. Canon’s camera firmware is not overwritten by these projects; the downloaded file lives on the memory card. And that CHDK/ML download has to match a specific version of Canon camera firmware.

These facts tell me CHDK/ML found a way to run their code from the memory card. And once they obtained execution control, they could call routines in Canon’s firmware to perform actual tasks. The upside of this approach is that actual hardware interfacing is handled by Canon driver code, reducing the chance of a bug causing irreversible damage. This approach is also how CHDK/ML can expose features not present in original Canon firmware: by calling Canon routines with parameters that the factory firmware chose not to utilize. The downside is that CHDK/ML is limited to features that can be composed from existing building blocks. A hypothetical example: I can snap a picture and save it with custom JPEG compression parameters, but I can’t save to an entirely different image format Canon doesn’t support. Like WebP, not that I’d want WebP anyway.

And to bring the discussion full circle: both CHDK and ML offer scripting engines so we can write and run small programs. CHDK supports uBASIC and Lua, while ML seems to have just Lua. Skimming through the documentation, the scripting API is focused on photo and video and little else. It’s no CircuitPython, but that’s expected and perfectly fine for projects in that domain. This all looks interesting enough for me to see if I have a CHDK/ML-compatible camera.

Window Shopping Hugging Face LeRobot

Recently there has been a lot of excitement around several fields of machine learning that broke out of their research paper phase, meaning they got good enough for everyday people to see what they could do without having to chew through dense research lingo. Most of the hype centered around large language models powering ChatGPT and friends. Generative systems have also come of age. I played around with image generators briefly, but there are also audio and video generators out there.

None of those systems are specifically applicable to a rover brain, so while technically interesting I didn’t see anything I wanted to pursue yet. I knew it was only a matter of time, though, before those technologies pioneered enough infrastructure to lead to rover-applicable machine learning systems. This might have just happened, with the launch of LeRobot by Hugging Face.

According to Wikipedia, the Hugging Face company has a few related divisions. One of them is Hugging Face Hub, a repository of machine learning models and datasets. Sort of like how GitHub is a repository of source code, but at a higher conceptual level and focused on machine learning. Models on Hugging Face Hub are tagged by topic. If I click on any of the “Natural Language Processing” tags today, they return tens of thousands of models. But if I click on “Robotics”, there are only 27 models. LeRobot is an effort by Hugging Face to jumpstart this field and I’ll be watching to see if it takes off or fizzles out.

Rerun.io

Regardless of LeRobot’s success or failure, a quick skim through its documentation taught me a lot and I didn’t even follow all its links to other projects. Out of those I looked at, the most promising gem is data visualization tool Rerun.io. Skimming through its documentation and examples, Rerun sounded like something right out of my dreams: a time-based data visualization tool that can plot not just 2D graphs like Grafana, but also visualize data in 3D space. From simple spinning LIDAR to depth cameras, Rerun claims to put all data into a single time-indexed visualization. I will definitely take a closer look in the near future.

My Desired Hyundai Doesn’t Exist (Yet?)

I had contemplated buying a Hyundai IONIQ 5, but poor dealership experience turned me off of that path. Which was just as well, because my ideal Hyundai isn’t available for purchase anyway. An IONIQ 5’s SUV form factor is not my first choice as it is far larger than I want in a car. What I would love is to have that design language in a 2-door sports car.

We got a preview of that idea’s greatness with Hyundai’s N Vision 74 concept. It has a hydrogen-electric hybrid powertrain featuring fuel cells and batteries feeding electric motors, optimized for high performance on the race track, complete with aerodynamic assists like a deep front splitter and a huge rear wing. But most importantly, it applied the IONIQ 5 school of design to a sports car form factor. The concept has been very well received, with many calling for Hyundai to put it into production. I’ve read that Hyundai is contemplating building a very limited number of them as six-figure track toys for the rich, which isn’t actually what I had hoped for when I wished for a production version.

I don’t want the N Vision 74 as-is; I want the production car that the concept race car implied. Something without race aerodynamics and with an interior usable for daily living. Remove the hydrogen tank and fuel cell, leaving a pure battery electric car. And since I’m wishing anyway, sell it around the IONIQ 5 price range and make it smaller. According to numbers floating around on the internet, the N Vision 74 is about twenty inches longer and eight inches wider than my 2004 Mazda RX-8. That’s enormous! Some of that length might be aerodynamic bits and some of that width comes from race car tires, but it clearly casts a big shadow.

For an analogy, Hyundai recently released a race-ready variant of the IONIQ 5 N called the eN1 Cup car, featuring suspension, tire, and aerodynamic aids appropriate to track duty. The relationship between that eN1 Cup car and a normal IONIQ 5 is the same relationship between the N Vision 74 and the electric two-door coupe of my dreams. Too bad that car does not exist but, if it ever comes into being, it might entice me to walk back into a Hyundai showroom.


Images from Hyundai news room

Window Shopping ESP32 Pulse Counter (PCNT)

To help me understand internal workings of a Canon Pixma MX340 multi-function inkjet, I would like some signal analysis tools. Specifically for recording data coming from a quadrature encoder attached to the paper feed motor gear train. After I found out Saleae Logic could not do this, I started reading about sigrok. Thanks to a documented protocol, sigrok could run with not-officially-supported data acquisition hardware such as an ATmega328 Arduino or ESP32.

It started to look like too much of a distraction, though, so I refocused on my specific problem at hand: I just need quadrature decoding. And for that specific purpose, ESP32 has a hardware peripheral for the job. Pulse Counter (PCNT) can certainly do what its name says and count pulses on a single input, but it can be more general than that. Espressif designers had added provision for PCNT to act in response to multiple inputs and configure their interaction. One specific configuration, demonstrated in an official example, turns PCNT into a quadrature decoder.

For my purpose I need something that can keep up with quadrature phase changes of this encoder, roughly on the order of 10-20 kHz. I found a forum thread ESP32 pulse counter speed (max frequency) which says PCNT can keep up with signals up to 40 MHz. That’s plenty fast for my needs!

In fact, that might be too fast. At that high rate of sensitivity, small changes — like the little dip visible in one phase when the other phase is pulled to ground — may register with PCNT and that would spell trouble. Fortunately Espressif engineers thought of that too: PCNT includes an optional glitch filter to reject signals changing outside of its configured speed range. This may be a useful tool in my toolbox if I see spurious data.
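To make sure I understood what PCNT would be doing for me, I sketched the quadrature decoding logic in Python. This is purely my own illustration of the concept, not Espressif code: the hardware tracks transitions between the two phase inputs, counting up for one rotation direction and down for the other.

```python
# Python sketch of the quadrature decoding that ESP32 PCNT performs in
# hardware. This is my own illustration of the concept, not Espressif code.
# Each (previous, current) pair of phase states maps to a count delta:
# +1 stepping forward through the Gray-code cycle, -1 stepping backward.

# Gray-code sequence for one rotation direction: 00 -> 01 -> 11 -> 10 -> 00
DELTA = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b01, 0b00): -1, (0b11, 0b01): -1, (0b10, 0b11): -1, (0b00, 0b10): -1,
}

def decode(samples):
    """Count quadrature transitions in a sequence of (A, B) phase samples."""
    count = 0
    prev = samples[0][0] << 1 | samples[0][1]
    for a, b in samples[1:]:
        state = a << 1 | b
        if state != prev:
            count += DELTA.get((prev, state), 0)  # ignore invalid 2-bit jumps
            prev = state
    return count

# One full Gray-code cycle in each direction:
forward = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
print(decode(forward))        # 4
print(decode(forward[::-1]))  # -4
```

PCNT does exactly this bookkeeping in silicon, so the CPU only has to read an accumulated count, which is how it can keep up with MHz-range signals.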

ESP32 PCNT looks like a promising approach for me to build the tool I want. I would have to install ESP-IDF (probably in VSCode Extension form) before I could compile the official sample and start modifying it for my needs. Seems pretty easy on paper, but then I realized I had an even easier option I should try first.


This teardown ran far longer than I originally thought it would. Click here to rewind back to where this adventure started.

Notes on Arduino and sigrok

I want to better understand the motions made by motors in a Canon Pixma MX340 multi-function inkjet, and thought I might be able to gain insight by recording data output by a quadrature encoder attached to its paper feed motor. Finding that my Saleae Logic analyzer’s software wasn’t up to the job, I searched for alternatives and found sigrok, the open source signal analysis software.

In addition to quadrature decoding capability, sigrok has another advantage: lack of hardware tie-in. I looked for names I recognized on the list of supported hardware and found a few. Curiously the list continues onward to “Work in progress/planned” hardware, and that list included an entry for Arduino. Wow, really? A humble hobbyist-accessible microcontroller can act as data acquisition hardware for sigrok?

Reading that page, I believe the answer is “kinda… well, actually… no.” The basic idea is that (1) sigrok supports any signal acquisition hardware that can report data via the SUMP protocol and (2) people have written Arduino sketches that tell the ATmega328 chip to sample its IO pins and report their state that way. Unfortunately, based on information on that page, the situation is a mess. Sounds like an Arduino could work for certain scenarios but not others. A user would need to understand implementation details in order to know its limitations, and would need to understand the ATmega328 to know workarounds. It’s not a plug-and-play solution and the wiki page has not been edited since September 2020. I don’t think this will ever graduate to “supported” status.
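For reference, the SUMP protocol those sketches implement is quite simple, which is why a humble microcontroller can attempt it at all. Here is my understanding of the handshake as a Python sketch against a fake serial port; the command bytes and the “1ALS” ID string are from my reading of SUMP protocol descriptions, so treat them as assumptions rather than gospel:

```python
# Sketch of the SUMP handshake sigrok uses to probe acquisition hardware,
# written against a fake serial port so it runs without hardware. Command
# bytes and the ID string are from my reading of SUMP protocol
# descriptions (an assumption): 0x00 = reset, 0x02 = query ID.
SUMP_RESET = b"\x00"
SUMP_ID = b"\x02"

def probe(port):
    """Return True if the device on `port` answers the SUMP ID query.
    `port` is anything with write()/read(), e.g. a pyserial Serial."""
    for _ in range(5):  # multiple resets to resynchronize long commands
        port.write(SUMP_RESET)
    port.write(SUMP_ID)
    return port.read(4) == b"1ALS"

class FakePort:
    """Stand-in for a USB-serial port wired to a SUMP-speaking sketch."""
    def __init__(self):
        self.received = b""
    def write(self, data):
        self.received += data
    def read(self, n):
        # Pretend to be a SUMP device: answer only the ID query.
        return b"1ALS" if self.received.endswith(SUMP_ID) else b""

print(probe(FakePort()))  # True
```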

Still, I was glad to see this work targeted the ATmega328, the original chip at the heart of original Arduino boards. I had half expected it to require a much newer processor with Arduino core support. Speaking of which, I searched for ESP32 + SUMP and found the esp32_sigrok project by GitHub user Ebiroll, along with a corresponding thread on the ESP32 forums. This project also has known problems and the last commit was over three years ago.

Based on these findings, I got the distinct feeling that building a signal acquisition system from general-purpose hardware is really hard. Fortunately, for my purpose today I do not need general-purpose capability. I can focus on just quadrature decoding, and it turns out the ESP32 was designed with a peripheral ideally suited for the task: pulse counter (PCNT).



Logic Analyzer Quadrature Decoder

I would like to record internal behavior of a Canon Pixma MX340 multi-function inkjet, specifically the paper feed roller position encoder and two photo interrupter sensors. The sensors are binary on/off affairs that change relatively infrequently, so they should not be a challenge. The interesting element would be the quadrature encoder, reporting how far and how fast the paper feed roller motor is turning that shaft.

A rough calculation told me I need something that can sample two encoder outputs at a rate of at least 20 kHz. My Saleae Logic 8 hardware is advertised to sample at speeds up to 100 MHz. Maximum sampling rate drops if it needs to cover more channels, but 2 channels should be good for 50 MHz, plenty of headroom. Unfortunately Saleae’s Logic 2 software does not perform quadrature decoding. I don’t know why I had assumed it would be part of the analysis toolkit, but I did, and so I was surprised to find it absent. A quick online search confirmed quadrature decoding is still on their “user requested feature” list for some undefined time in the future. And as I established earlier, Saleae’s analyzer extension framework doesn’t support writing my own decoder across multiple channels.
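For the curious, here is the kind of back-of-envelope arithmetic behind that 20 kHz figure. The encoder resolution and motor speed below are assumptions for illustration, not measured values from the MX340:

```python
# Back-of-envelope arithmetic for the required sampling rate. These encoder
# and motor numbers are assumptions for illustration, not MX340 measurements.
encoder_edges_per_rev = 1200  # quadrature state changes per shaft revolution
max_shaft_rpm = 1000          # guess at peak paper feed motor speed

edges_per_second = encoder_edges_per_rev * max_shaft_rpm / 60
sample_rate = 2 * edges_per_second  # sample at least twice the edge rate

print(f"{edges_per_second:,.0f} edges per second")  # 20,000 edges per second
print(f"{sample_rate:,.0f} samples per second")     # 40,000 samples per second
```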

During this search for quadrature decoding support in logic analyzers, I came across mentions that it is a capability of sigrok, pointing to a terse page in sigrok documentation. I understand sigrok to be (very) roughly analogous to Saleae’s Logic 2 software, with support for lots of different hardware performing a wide variety of tasks. Perusing sigrok’s hardware support page I saw two names I recognized. Bus Pirate is listed as supported, but very limited due to its basic hardware. GreatFET One is also on the list, and it has a much more robust list of capabilities. I consider this a good reason to add GreatFET One to my investigation list for future purchase. If quadrature decoding analysis becomes a recurring need, it would be enough to motivate me to switch from Saleae’s proprietary solution to an open-source alternative.

Another advantage of sigrok’s open source nature is that, unlike Saleae’s proprietary solution, the software is not tied to the hardware. Not even officially supported hardware. It has support for documented interface protocols like SUMP, so any hardware can theoretically act as signal acquisition hardware for sigrok. However, this is apparently easier said than done.



Window Shopping SerialTool

I seek tools to help me understand the communication between the main board and control panel of a Canon Pixma MX340 multi-function inkjet. The data is asynchronous serial 250000-8-E-1, which is something a Bus Pirate can interface with, but only as one end of the communication with a transmit/receive pair. I want to listen in on communication between two existing endpoints, which means I need to set up something with two receive wires and no transmit wires. Continuing my search, I found SerialTool, which is very close to what I want for this purpose, but not exactly.

SerialTool’s web site claims it came from engineers for embedded devices, which puts it in the right ballpark audience for this task. Like Wireshark, it is an application that can run on Windows, MacOS, or Linux PCs. The hardware side of SerialTool is any serial port device recognized by the operating system of choice. In my case, it will be USB to serial adapters like the trusty unit (*) I’ve been using for many projects.

Of course, a standard USB to serial adapter has the same limitation as a Bus Pirate: a single transmit wire and a single receive wire. For two receive wires, I would need to use two adapters. Which gets into one of the headline features of SerialTool: its ability to work across multiple serial devices simultaneously. Two ports are possible on the free version; the Professional edition raises the limit to 4 simultaneous serial ports.
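With two adapters, the listening-only setup I want boils down to timestamping bytes arriving on two receive-only ports and merging the two logs into one conversation. The merge step can be sketched in a few lines of Python; in real use the records would come from two pyserial readers, but here they are hand-written for illustration:

```python
import heapq

# Sketch of the two-receiver logging I have in mind: each adapter yields
# (timestamp, direction, byte) records, and merging by timestamp rebuilds
# the conversation. In practice the two record streams would come from two
# pyserial readers; these hand-written ones are for illustration only.

def merge_logs(*logs):
    """Interleave timestamped capture logs into one time-ordered list.
    Each input log must already be sorted by timestamp."""
    return list(heapq.merge(*logs))

main_rx = [(0.001, "main->panel", 0x04), (0.002, "main->panel", 0x75)]
panel_rx = [(0.003, "panel->main", 0x20)]

# Prints the three bytes in time order, tagged by direction.
for t, direction, byte in merge_logs(main_rx, panel_rx):
    print(f"{t:.3f}s  {direction}  0x{byte:02X}")
```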

SerialTool’s “Auto Answer” feature looks interesting. It is a mechanism to quickly set up “if (receive thing) then (send answer)” which is helpful to stub out one placeholder end of a serial communication link. In my current example, I could potentially set it up as a dummy control panel and acknowledge with “0x20” every command sent by the main board.

But that’s not what I need right now. I want something to examine incoming data for expected patterns and alert me if something unexpected comes through. I thought SerialTool’s “Trigger Alarm” feature was promising, but as I read the document I realized it compares incoming data and an alert fires if a match is found. This is the opposite of what I wanted: an alert in case of no match.
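The inverse behavior I want is simple enough to express in code, which is part of why I’m tempted to roll my own. A minimal Python sketch, where the expected patterns are hypothetical stand-ins for sequences seen in my own captures:

```python
# Sketch of the "alert when NOTHING matches" behavior I want, the inverse
# of SerialTool's Trigger Alarm. The expected patterns here are
# hypothetical stand-ins for sequences observed in my own captures.
EXPECTED = {
    bytes([0x20]),        # control panel acknowledgement
    bytes([0x04, 0x75]),  # an example command seen regularly
}

def unexpected(frames):
    """Return only the frames that match none of the expected patterns."""
    return [f for f in frames if f not in EXPECTED]

captured = [bytes([0x20]), bytes([0x04, 0x75]), bytes([0xFF, 0x01])]
print(unexpected(captured))  # [b'\xff\x01']
```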

One of the advertised uses for SerialTool is testing and verification of embedded devices, so I was curious why my wished-for feature is absent. For testing and verification purposes, I thought it would be natural to have a mechanism to raise an alert in case of unexpected anomalous data. But if SerialTool can do such a thing, I failed to find it.

I found references to a few other pieces of software that appear to be competitors to SerialTool, but they all target professional embedded systems engineers and not hobbyists. Or more specifically, their price tags say “business expense” and there is no free version. Whether they do what I want or not is irrelevant if I can’t justify that kind of expense.

Oh well, I guess I’m going to create my own tool.


Header image: screen shot from SerialTool web site.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Window Shopping Bus Pirate

I’m looking for tools to help me analyze the communication between the main board and control panel of a Canon Pixma MX340 multi-function inkjet. The data is asynchronous serial 250000-8-E-1 so Wireshark was the wrong tool for the job. Wireshark can perform much of the data manipulation tasks I wished for, but it works for network packets and not general serial data.

Focusing my search on serial data tools for an electronics hobbyist, I came across several mentions of something called Bus Pirate. This is something I had been aware of. In fact, I bought one several years ago upon recommendation by another hobbyist, but I have yet to put it to work for any of my own projects.

The core usage scenario for a Bus Pirate is programmatic access to some basic common electronic interfaces like SPI, I2C, and my current focus: asynchronous serial. The idea is that we can experiment with these interfaces using interactive commands, versus writing a microcontroller program that we have to compile and upload to flash on every change. That gives us immediate turnaround versus a 20-30 second compile-and-upload cycle, time that really adds up when performing rapid iteration. For example, in the tentative early stages of bootstrapping a new piece of hardware.

In addition to the data interfaces, the Bus Pirate also provides control over some interface-adjacent tools. The one that caught my eye is the ability to provide 5V and 3.3V DC power, and to turn them on/off on command. Again, this is very useful in early stages of working with new hardware, because if we mess up commands and accidentally put the hardware in a bad state, we can quickly turn it off and back on again with a command instead of unplugging anything and plugging it back in.

In hindsight, a Bus Pirate would have been really useful when I was experimenting with salvaged I2C LCD screens, allowing faster experimentation. At the time, I had forgotten I had a Bus Pirate gathering dust on the shelf. I don’t think a Bus Pirate would have been useful for a car audio face plate, though, because those used Sanyo’s CCB protocol and that’s not something a Bus Pirate understands.

I think Bus Pirate will be useful soon, because after I take this MX340 completely apart I want to try repurposing the control panel. I should be able to impersonate the main board by using my own hardware to play back the command sequences I’ve captured and analyzed. Getting that up and running should be a perfect opportunity to finally use my Bus Pirate for its designed purpose.

But a Bus Pirate won’t be helpful for me right now, because I want to listen in on existing communication and not interfere. Which means I want two UART receive wires, and the Bus Pirate only has a single UART with a single transmit wire and a single receive wire. The type of data filtering I want to perform isn’t part of core Bus Pirate functionality, either. As Bus Pirate itself has long been discontinued, I quickly skimmed over the spec sheet for its successors GoodFET (also discontinued) and GreatFET One (currently active product), but they don’t seem to have added the kind of features I want right now. My search continues.



Window Shopping Wireshark

I’ve taken apart a Canon Pixma MX340 multi-function inkjet and I want to reverse-engineer the communication data between its main board and its control panel. After making an attempt at articulating characteristics of a tool I want, I went online to see if such a thing existed.

When searching with keywords like “reverse-engineering”, “communication protocol analysis”, “data filtering” and such, one name keeps coming up: Wireshark. It is a very powerful piece of analysis software and a tool on my to-do list to add to my toolbox. It certainly has some capabilities I want. One example is being able to annotate raw data to give it context. If I’ve determined a particular two-byte command 0x04 0x75 means “put the screen to sleep”, that translation can be automated so I don’t have to recognize and translate it in my head. Wireshark also has extensive filtering capability, so I could exclude common high-traffic data and focus on the interesting unique parts. Though there seems to be a bit of a caveat: Wireshark has two separate filtering mechanisms. A less powerful mechanism that runs on live data during capture, and an entirely separate and more powerful filtering mechanism for analyzing data after capture. For the type of filtering I wish for, Wireshark might only be capable of post-processing captured data.
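That annotation idea is easy to picture in code. A minimal Python sketch, where the command meanings are my hypothetical examples and not confirmed MX340 protocol:

```python
# Sketch of the annotation capability I want: translate known byte
# sequences into human-readable labels so I don't have to do it in my
# head. The command meanings are hypothetical examples, not confirmed
# MX340 protocol.
ANNOTATIONS = {
    bytes([0x04, 0x75]): "put the screen to sleep",
    bytes([0x20]): "acknowledge",
}

def annotate(frame):
    """Label a captured frame, falling back to a hex dump for unknowns."""
    return ANNOTATIONS.get(frame, "unknown: " + frame.hex(" "))

print(annotate(bytes([0x04, 0x75])))  # put the screen to sleep
print(annotate(bytes([0x99])))        # unknown: 99
```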

Regardless of possible limitations, Wireshark is not the right tool for my current project, because it is designed to analyze network traffic. Wired Ethernet, Wi-Fi, and so on. I tried to exclude Wireshark from my search by requiring “+serial” but Wireshark still comes up. It took some reading through Wireshark documentation to figure out why. It turns out Wireshark supports capturing and analyzing network traffic transmitted over asynchronous serial via point-to-point protocol (PPP) or its predecessor serial line internet protocol (SLIP). Neither of which is applicable to my MX340 project, but search engines don’t know enough to understand that distinction. Oh well. I adjusted my query to explicitly exclude Wireshark from my results in order to see what else is out there.


As an aside, I was amused by this snippet:

Can you help me fill out this compliance form so that I can use Wireshark?

Please contact the Wireshark Foundation and they will be able to help you for a nominal fee.

Can you sign this legal agreement so that I can use Wireshark?

Please contact the Wireshark Foundation and they will be able to help you for a somewhat less nominal fee.

— from Wireshark FAQ


Window Shopping Custom Saleae High Level Analyzer Extension

I’ve been examining the internal data communication of a Canon Pixma MX340 multi-function inkjet, trying to understand the data sent between its system main board and its control panel. Out of all the data sequences I’ve captured and analyzed, the LCD “screen saver” deactivation/reactivation sequence was the easiest to understand. Others were more difficult, though I still think I’ve picked up bits and pieces. Enough to form a foundation from which to make more detailed observations.

But how would I go about such observations? There’s enough data involved that scrolling through the timeline of my Saleae Logic 8 analyzer software and decoding things manually is not practical. I need to bring additional tools to the problem. This is not an unusual task and Saleae has some provisions for users to parse and understand data captured by their logic analyzers. It is possible to write custom extensions for their Logic 2 analyzer software, plugging in our custom processing. For my specific scenario, where I want to apply context to a stream of decoded data, the type of extension is called a High-Level Analyzer (HLA) because it sits in the processing above the low-level asynchronous serial decoder.

I like the idea of writing a bit of Python code to leverage all the infrastructure that already existed in Saleae Logic 2 software. Unfortunately, as I read into their documentation, I realized it would fall short of my needs for this specific project in two important ways.

The first shortfall is a HLA can only process data from a single channel. This means a HLA can be configured to interpret bytes from main board to control board. (“This is a burst of data to update LCD screen”.) Or it can be configured on the channel sending bytes from control board back to main board. (“This 0x80 value means no buttons are currently pressed.”) But if I want to look at both channels (“This is a LCD update, followed by a single byte 0x20 as acknowledgement sent in return”) I’m out of luck. Multi-channel HLAs are a long-standing feature request that, as of this writing, is still not available.

The second shortfall is a HLA’s output is pretty much restricted to adding text annotation on the Saleae data timeline, or logging text to the developer console. I want to be able to parse the LCD screen data into a bitmap shown on screen, and I found no such facility to do so within Saleae Logic’s Extension framework.
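To be fair, for what it does cover, an HLA looks easy to write: it is a Python class whose decode() method is called once per input frame. The per-byte logic at its core can be sketched independently of the Saleae plumbing, using the hypothetical byte meanings from above:

```python
# Sketch of the per-byte logic that would live inside an HLA's decode()
# method, kept free of Saleae plumbing so it runs anywhere. The byte
# meanings are hypothetical examples for this illustration.
def classify(byte):
    """Annotate one decoded serial byte from the control-board channel."""
    if byte == 0x80:
        return "no buttons pressed"
    if byte == 0x20:
        return "acknowledge"
    return f"data 0x{byte:02X}"

for b in (0x80, 0x20, 0x42):
    print(classify(b))  # no buttons pressed / acknowledge / data 0x42
```

In a real extension this function's return value would be wrapped in an AnalyzerFrame so it shows up as a text annotation on the timeline, which circles back to the second shortfall: text annotations are all it can produce.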

There must exist software tools that I can leverage to perform the analysis I want, but so far I have failed to think up the correct keywords to find them online. I may have to roll my own. In the course of researching how to do so, I expect to learn the techniques and terminologies that help me find the right software, making my own project superfluous. It wouldn’t be the first time I’ve done that, but at the end I would have learned what I want. That’s what matters.

And you know what else I’ve done many times in the past? Overthinking a problem! I could get started without writing any code, by using a tool I don’t usually associate with my electronics projects: Microsoft Excel.

Window Shopping Marko JS

While I’m window-shopping open-source software like Godot Game Engine, I might as well jot down a few notes in another open-source package. Marko is a web app framework for building user interfaces. Many web development frameworks are free open source so, unlike Godot, that wasn’t why Marko got on my radar.

The starting point was this Mastodon post boosted by someone on my follow list, linking to an article “Why not React?” explaining how React (another client-side web development framework) was built to solve certain problems but solved them in a way that made it very hard to build a high performance React application.

The author asserted that React was built so teams at Facebook could work together to ship features, isolating each team from the implementation details of components created by other teams. This was an important feature for Facebook, because it also meant teams were isolated from each other’s organizational drama. However, such isolation meant many levels of runtime indirection that take time every time some little event happens.

There was also a brief mention of Angular, the web framework I’ve occasionally put time into learning. And here the author asserted that Angular was built so Google can build large-scale desktop applications that run in a web browser. Born into a world where code lives and runs in an always-open browser tab on a desktop computer, Angular grew up in an environment of high bandwidth and plentiful processing power. Then Google realized more of the world’s internet access is done from phones than from desktop computers, and Angular doesn’t work as well in such environments of limited bandwidth and processing power.

What’s the solution? The author is a proponent of streamed HTML, a feature so obscure that it doesn’t even have a consistent name. The underlying browser support has existed for most of the history of browsers, yet it hasn’t been commonly used. Perhaps (this is my guess) the fact it is so old made it hard to stand out in the “oh new shiny!” nature of web evolution and the ADHD behavior it forces on web developers. It also breaks the usual patterns of many popular web frameworks, in a way roughly analogous to Unity DOTS and entity component systems.
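
The mechanics are mundane on the server side: emit the top of the page immediately, then flush more HTML as slow content becomes ready, so the browser can render the shell before the response is complete. A minimal sketch in Python (the names are mine, not from any particular framework):

```python
# Streamed HTML in miniature: a generator yields chunks of the page as
# they become available, instead of buffering the whole response.

def stream_page(fetch_slow_items):
    # The page shell goes out right away so the browser can start rendering
    yield "<!DOCTYPE html><html><head><title>Demo</title></head><body>"
    yield "<h1>Results</h1><ul>"
    # Each slow item is flushed as soon as it is computed
    for item in fetch_slow_items():
        yield f"<li>{item}</li>"
    yield "</ul></body></html>"
```

A WSGI app can return such a generator directly, and most servers will flush each chunk to the browser as it is yielded.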

So what do we use? A framework that supports streamed HTML at its core. Marko was originally developed by eBay to help make browsing auctions fast and responsive. While it has evolved from there (here’s a history of Marko up to its publishing date in 2017) it has maintained that focus on keeping things responsive for users. One of the more recent evolutions allows Progressive Web Apps (PWA) to be built with Marko.

It all sounds very good, but Marko is incompatible with one of my usage scenarios. Marko’s magic comes from server-side and client-side runtime components working together. They minimize the computational requirements on the client browser (friendlier to phones) and also minimize the amount of data transmitted (friendlier to data plans.) However, this also means Marko is not compatible with static web servers where there is no server-side computation at all. (Well, not impossible. But if Marko is built for static serving, it loses much of its advantage.) This means when I build a web app to be served from an ESP32, Marko might not be the best option. Marko grew up in an environment where the server-side resources (of eBay) are vastly greater than whatever is on the client. When I’m serving from an ESP32, the situation is reversed: the phone browser has more memory, storage, and computing power.

I’m going to keep Marko in mind if I ever build a web app that can benefit from Marko’s advantages, but from this cursory glance I already know it won’t be the right hammer for every nail. As opposed to today’s web browsers, which try to do it all and hence have grown to become the Swiss Army Knife of software, packed with capabilities like PDF document viewers.

Window Shopping Godot Engine

I think now is a good time to take a quick look at Godot Engine, and I thought I’d start by looking at how Godot measures up against Unity counterparts that have caught my attention in the past.

Platform Support

The most important value proposition of a game engine is cross-platform capability. Godot has that pretty well covered. Godot editor can run on all the desktop platforms I care about: Windows, Linux, and MacOS. There’s even a native Apple Silicon build, something Unity only started offering a few months ago.

Godot engine can export to all the target platforms I care about: All the editor platforms I listed above plus iOS, Android, and web. Android and web are actually supported editor platforms as well, but I do not expect to use them. I do not expect support for IE11 (Windows Phone browser) but neither does Unity so that is at parity.

Unlike the commercial offerings, Godot has no official solution for console games, because console SDKs are all under NDA protection at odds with an open-source model. It’s possible but unsupported, though I don’t care about consoles myself anyway.

VR Support

Godot supports OpenXR, the open standard entry point to hardware like Valve SteamVR and Oculus headsets. Godot also has support for Apple’s ARKit, but I didn’t see any mention of Google’s ARCore. Fundamentally, XR support in Godot is a function of whatever support they receive for it, whether in the form of volunteer contributors, donated test hardware, or just straight-up cash.

Entity Component System/Data Oriented Design

Based on customer feedback, Unity offers their DOTS “Data Oriented Technology Stack” built on an entity component system. This opens things up for those who want to adopt data-oriented design for their software architecture. (An alternative to object-oriented programming.)

Godot explains why they’re not terribly eager to follow in these footsteps: data-oriented design requires a different mindset, one not terribly intuitive for the human mind, partially because we have to consider the nuts and bolts of CPU cache behavior. We usually have the luxury of ignoring those details behind an abstraction! All that said, Godot does not forbid data-oriented development; the document even shares a few links to helpful resources. But it’s not on the main path and unlikely to ever be.
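
The mindset difference can be illustrated even in Python, far from any game engine: object-oriented code keeps each entity’s fields together in one object, while data-oriented code keeps each field in its own contiguous array so a pass over one field walks memory sequentially. This toy comparison is mine, not Godot or Unity API:

```python
from array import array

# Object-oriented layout: one object per entity, fields scattered in memory
class Particle:
    def __init__(self, x, vx):
        self.x = x
        self.vx = vx

def step_objects(particles, dt):
    for p in particles:
        p.x += p.vx * dt

# Data-oriented layout: one contiguous array per field, the kind of
# struct-of-arrays arrangement an entity component system favors
def step_arrays(xs: array, vxs: array, dt: float):
    for i in range(len(xs)):
        xs[i] += vxs[i] * dt
```

In Python the cache effect is mostly drowned out by interpreter overhead, but in C++ or an engine core the contiguous version is the one that keeps the CPU cache warm.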

Reinforcement Learning

I briefly played with the reinforcement learning subset of modern deep machine learning, using Unity as the learning environment. (“Gym”.) I thought the general idea was great, but the field is still quite young. Right now, an absurd amount of computational power is required to learn what a human brain would perceive as simple behavior. I’ve also learned that Unity ML-Agents was not terribly performant, bottlenecked by communication between Unity engine and machine learning frameworks.

A cursory search found several GitHub repositories of people working to bolt Godot and RL frameworks together; whether they share the same performance issues, I do not know.

Educational Resources

Learning guides are where Unity has a huge head start, with their longtime investment into Unity Learn building up a huge library of examples and tutorials for all sorts of different domains for a wide range of audiences.

Godot would not be able to match or exceed that in the short term. But supposedly the current Unity exodus has led to a spike in the number of Godot tutorials being published. A rise in quantity is no guarantee of quality, but I’m optimistic that there will be sufficient resources for Godot beginners to start with whenever I choose to join those ranks.

With this quick survey, I saw no deal-breakers against using Godot. The next time I have a project idea that would benefit from being built on top of a game engine, I’ll likely use that as the focus for learning Godot. In the meantime Godot will sit on the “something to learn once I have an appropriate project” list alongside Marko.

Single Cell Lithium-Ion Battery Management System Module (4056)

My solar panel power monitor project incorporated an old USB power bank for its battery and charger circuit, bypassing its broken USB power output circuit. After a year and a half of daily cycling, the charging circuit has broken down as well. I’m happy I got that extra life out of a USB power bank circuit board that would otherwise have been disposed of. The battery is still going, but I will need a replacement circuit to manage charging it.

With the widespread adoption of lithium-ion battery power, I have many solutions to choose from. The lowest bidder du jour on Amazon was this vendor selling a multipack of 40 BMS modules (*). It arrived as a single 10×4 sheet for me to break apart as needed, similar to a batch of buck converters I bought earlier. I like this approach much better than loosely packed pieces that may damage each other in shipping.

The main chip in the center of this module had “4056H” printed on top. Many vendors on Amazon/AliExpress/etc. also sell single-cell BMS modules with this general design, not necessarily with “4056H” on top but all with some variation of “4056” with different prefix/suffix letters. A search for “4056” returned many chips from different companies that seem to be interchangeable. I assume someone had a successful product that was then copied by many others, but I don’t know who the original was. The same goes for this particular breakout board module design. It looks like both the module and the chip at the center of it have become commodities.

This module also advertised protection against battery over-discharge with a 3A current limit and 2.5V voltage limit, but that’s beyond the scope of a 4056 chip. This module must have additional components handling such protection. I see one chip labeled 8205A, which appears to be a dual N-channel MOSFET chip suited for output cutoff controlled by the chip at position U2. I don’t know how to dig deeper because U2 is unmarked on my purchase, but I have learned enough to put this module to work.

The module with its six soldering points is fairly straightforward to incorporate into my project:

  • There are pads on either side of the USB micro-B socket for 5V power input, one of them marked “+”. They are soldered to the existing buck converter dropping solar panel DC power down to 5V.
  • Pins labeled “B+” and “B-” connect to the positive and negative terminals of the 18650 battery cell.
  • Remaining pins labeled “OUT+” and “OUT-” are connected to the ESP8266 microcontroller module.

This configuration successfully recharged the solar monitor battery for two days, verifying everything worked as expected before I proceeded to lower the charging rate from its default of 1 Amp.
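
For 4056-family chips, the charge current is set by the resistor on the PROG pin, commonly approximated from the TP4056 datasheet as I ≈ 1200 / R. I’m assuming my module’s unmarked variant follows the same curve; that should be verified against an actual datasheet before swapping resistors:

```python
# Rough TP4056-style relationship between PROG resistor and charge current.
# The 1200 ohm-amp constant is the commonly-cited datasheet approximation;
# a 4056-family clone may deviate from it.

def charge_current_amps(prog_ohms: float) -> float:
    """Approximate charge current for a given PROG resistor value."""
    return 1200.0 / prog_ohms

def prog_resistor_ohms(target_amps: float) -> float:
    """Approximate PROG resistor needed for a desired charge current."""
    return 1200.0 / target_amps
```

By this estimate, the stock 1.2 kΩ resistor gives the default 1 A, and halving the charge rate to 500 mA means swapping in roughly 2.4 kΩ.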


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Window Shopping CadHub

While window shopping a few different projects for generating CAD models in browser, including replicad, I came across the occasional mention of CadHub and decided to take a look. I like what I see, but the project seems to have lost momentum.

The most visible component of the project is a browser-based interface for code-based CAD, much like what Cascade Studio has built, but generalized across multiple systems. It supports OCCT-based CadQuery as well as OpenSCAD with its own CSG system. But that was merely the first step on the list of ambitions. Its documentation homepage listed how code-based CAD can realize many of the items on my collaborative CAD wishlist, including automatic design verification and visual change comparison (diff) tooling. These and many other long-term ambitions are described on the “Blog” side of documentation page, along with this very informative survey of code-based CAD solutions.

That all looks great in theory, but how does the reality look? Sadly, things don’t look as rosy. Despite all the theoretical advantages of code-based CAD in general, it appears that only OpenSCAD has found any significant adoption, and I’m not a fan. (That’s a separate post I should write up.) On paper, CadHub supports CadQuery as one of several kernels, but as of this writing CadQuery capability has been disabled. The “it’s just Python” power of CadQuery became its downfall: since running CadQuery requires a Python environment, people have abused CadHub to do non-CAD things like trying to run security exploits or mine cryptocurrency using free CPU cycles. This sounds very much like the reasons Heroku free tiers went away.

Another “things didn’t work out” problem with CadHub is a consequence of the fact that it presents a web-based IDE, which is great until it tries to work with something that has its own web-based IDE, like Cascade Studio. After multiple hacks trying to get the two systems to work together, CadHub threw in the towel.

These and other setbacks must have been discouraging, and probably contributed to the project losing momentum judging by its GitHub commit history. In 2021 it saw updates almost daily, sometimes multiple commits a day from multiple authors. It was still quite active going into January 2022, but the rest of 2022 saw only four commits. The most recent update was in January 2023, the lone 2023 update to date.

This is unfortunate. I really liked where this project intended to go, as it aligns with much of my own wishes. Since it is open source, I suppose I could fork the project and see if I can run with it, but I’d need to learn a lot more web development before I can even understand what’s already been done. Never mind trying to add to it. Even if I don’t use CadHub directly, though, it taught me a lot more about OpenSCAD I hadn’t known before.

Window Shopping replicad

I thought Cascade Studio was a very interesting project, providing a 3D modeling environment that can run entirely in the browser. Even offline if desired, as a locally-installed PWA. It is a code-based design system like CadQuery. While they both build on top of the OpenCascade Technology kernel, the code-based API differences are larger than just the difference in language. (Python for CadQuery, JavaScript for Cascade Studio.) I found a lot to like, but also a few implementation details that I’m not fond of. That’s OK, there are other projects out there, including replicad. (Hackaday post.)

Both replicad and Cascade Studio run entirely within the browser thanks to OpenCascade.js, which compiled the 3D kernel into WebAssembly. And despite the fact that they both wrap OpenCascade concepts with JavaScript, their APIs are different. Reading through replicad documentation, I learned their target scenarios are also different: Cascade Studio aims to be a full in-browser 3D modeling environment, presenting the JavaScript code as well as a 3D rendering. replicad is intended for people to share their designs online for others to use, by default presenting just the 3D object; the underlying code is not directly visible. But the viewer can make changes to model parameters and have the shape recomputed. This reminds me of Thingiverse Customizer, which is limited to OpenSCAD models.

Cascade Studio had the “Slider” UI option to allow customization as well, and one difference immediately jumped out at me: Cascade Studio allows the design author to specify maximum and minimum values for the slider, but replicad doesn’t seem to allow setting limits on model parameters. This seems like an oversight.

One significant advantage I noticed in the replicad API is its way of avoiding FreeCAD’s topological naming problem, which Cascade Studio also seems to share. Instead of specifying entities like edges with names or numbers, replicad has a system called finders to find elements that meet a specified set of conditions. For example, it allows finding all edges at a particular Z height, letting us apply a fillet without worrying about their specific names/numbers. This makes replicad closer to CadQuery, specifically its concept of selectors.
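
The concept is easy to mimic in plain Python to see why it sidesteps renaming: the reference is a predicate over geometry, re-evaluated against whatever edges exist after a rebuild, rather than an index stored from an earlier rebuild. This toy model is mine, not the actual replicad or CadQuery API:

```python
from dataclasses import dataclass

@dataclass
class Edge:
    # Toy stand-in for a B-rep edge: just the Z heights of its endpoints
    z_start: float
    z_end: float

def edges_at_height(edges, z, tol=1e-6):
    """Finder-style selection: match edges lying entirely at height z by
    predicate, instead of referring to them by a stored edge number."""
    return [e for e in edges
            if abs(e.z_start - z) < tol and abs(e.z_end - z) < tol]
```

If the model is regenerated and the edge numbering shuffles, the predicate still finds the top edges, which is the whole point of finders and selectors.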

I didn’t see any references to constraint solving. Based on some of the examples, I believe the author expects us to write JavaScript code to compute what we need directly within our 3D object design code. It’s a valid approach, but maybe not my favorite answer. I also didn’t see any references to creating multipart assemblies. Perhaps I could find an answer in a larger-scale overview like CadHub.

Window Shopping Cascade Studio

Describing 3D objects with Python code is CadQuery’s goal, something I find interesting for later exploration. Browser access is possible by running CadQuery in Jupyter Lab, making it accessible to low-end Chromebooks, but that still requires another computer serving Jupyter Lab. What if everything can run entirely standalone within the web browser? That is the laudable goal for Cascade Studio. (Hacker News post) (Hackaday post)

Projects like Cascade Studio were made possible by the OpenCascade.js project, which compiles the open-source OCCT kernel code into WebAssembly (WASM). No more hassling with separate build chains for Windows/MacOS/Linux desktop apps like FreeCAD; now a 3D modeling system can run entirely within the browser no matter the underlying operating system. There must be some performance tradeoffs for such flexibility, but I haven’t dug deeply enough to know what they are.

Looking over how Cascade Studio was built, I see it leverages a lot of other open modules beyond OpenCascade.js. Like using Three.js for rendering the 3D model, and Monaco for the code editor. Oh right — code editor. Cascade Studio also describes 3D objects with code, except here it’s a JavaScript-based interface on top of OCCT concepts. It also leverages a lot of web technologies, like conforming to Progressive Web Apps (PWA) requirements so it can be installed locally to run entirely offline.

A valuable source of information is an unofficial Cascade Studio manual, written by a fan and not the author. (If the author wrote instructions, I have yet to find them.) It tries to cover everything a person would need to use Cascade Studio, with some basic 3D model concepts and basic JavaScript concepts. But what I really appreciated was the condensed digest of this fan’s experience with Cascade Studio, documenting many minor quirks and — even more valuable — their workarounds.

I was really enchanted by Cascade Studio’s possibilities until I got to the fillet edge section. Our code needs to provide a reference to the object (obviously) and a list of edges (expected) by number (record scratch noise.) Wait, where would those numbers come from? We have to use the GUI to click on the individual edges we want; the GUI will in turn display a number for each, which we can then write down to give as parameters. I inferred these numbers were generated by the OCCT kernel and are subject to change in response to changes in the underlying geometry. If so, this would mean FreeCAD’s topological naming problem is present here, except as a topological numbering problem. Is there anything about Cascade Studio’s code-based model that would mitigate this? I don’t have an answer for that.

Constraints were a notable absence from this manual. I want a mechanism to specify things should be parallel or perpendicular, lines that should be tangent to arcs, helping to capture intent of the underlying geometry. It appears some constraint solving capability is part of OCCT, but it might be missing from Cascade Studio or at least missing from the unofficial manual.

Also absent was information on working with assemblies of parts. Onshape had the concept of “mates” to describe the physical relationship between different parts. Sawppy’s suspension articulation was captured as rotate mates, each with a single degree of freedom rotating about an axis. There are other types of mates: “slide” is a single degree of freedom translating along an axis, “fasten” has zero degrees of freedom, etc. I saw nothing similar here.

One item I thought was very interesting was the Slider control, which allows me to declare a user-adjustable parameter on screen. For Sawppy, the most valuable application of such a feature is letting a rover builder adjust the diameter of holes for heat-set inserts. This has caused grief for multiple Sawppy builds, because the outer diameter of M3 inserts is not standardized and every 3D printer prints to a slightly different tolerance. It can even be argued that most rover builders don’t care about modifying the design significantly; most would only need a few sliders to dial in a design to suit their tools and parts. If that is indeed the primary scenario, perhaps replicad would be a better tool.

Window Shopping CadQuery

When I started learning about FreeCAD, I also learned about its 3D modeling core OpenCascade Technology (OCCT). OCCT is not exclusive to FreeCAD and it forms the core of several other open-source CAD solutions, each implementing a different design intent. In the time I’ve been keeping my eyes open, I’ve come across several projects that might be interesting.

First up on this survey is CadQuery, a Python API on top of OCCT. (Hackaday post) Which is very interesting considering FreeCAD already has a Python API. From a brief look, those two APIs have different intentions on how to expose OCCT capability to code-based construction. FreeCAD’s Python API primarily enables macros, scripts, and extensions to supplement projects created in the FreeCAD UI. CadQuery removes the need for a graphical UI entirely.

But this is not the whole picture. It’s also possible to run FreeCAD without a UI, so I will have to dig deeper to really understand the tradeoffs between the two approaches. CadQuery actually started out as something built within FreeCAD. CadQuery became its own independent project when the team started feeling constrained by FreeCAD limitations around selection. That tells me CadQuery is working to get away from the well-known FreeCAD problem of topological naming.

Being code-centric means a CadQuery design is a Python program and can take advantage of all the software development tools available. That satisfies my CAD wishlist item for Git-like abilities to fork, pull request, etc. The problem is “diff”, which will show the Python code changes but not a visual representation of those changes. This can probably be solved by using CadQuery to process the before/after views and render the difference between them. (Such a tool may already exist.)

Since CadQuery is not dependent on any graphical user interface, there are multiple ways to play with it. CQ-editor is a native desktop application letting people use CadQuery in a similar manner to OpenSCAD. Another way is to work with CadQuery Python code in a Jupyter notebook, giving it a browser-based interface. And the one that really caught my eye: cq-directive, which runs as part of the Sphinx documentation generator. In theory this allows diagrams in documentation to stay in sync with the CadQuery design files. Keeping CAD in sync with documentation would resolve one of my recurring headaches with Sawppy documentation.

CadQuery looks like a very promising avenue for investigation, but trying to go hands-on was stymied by Python versioning support. As of this writing, the latest public version of Python is 3.11 and it’s been around long enough that most infrastructure like Jupyter Lab has updated. However, CadQuery is still tied to 3.10 and not expected to move up to 3.11 until later this year. Version conflict is nothing new in the Python world and can be solved with a bit of time, but I chose to put CadQuery on hold and read up on other options, starting with Cascade Studio.

Window Shopping Cool Retro Term

I experimented with building a faux VFD effect on modern screens. Just a quick prototype without a lot of polish. Certainly not nearly as much as some other projects out there putting a retro look on modern screens. One of those I’ve been impressed with is cool-retro-term. (Mentioned as almost an aside in this Hackaday article about a mini PDP-11 project.) I installed it on my Ubuntu machine and was very amused to see a window pop up looking like an old school amber CRT computer monitor.

The amber color looks perfect, and the text received a coordinate transform to make the text area look like a curved surface. Not visible in a screenshot are bits of randomness added to the coordinate transform emulating the fact CRT pixels aren’t as precisely located as LCD pixels. There is also a slight visual flicker effect simulating CRT vertical refresh.

The detail I found most impressive is the fact effects aren’t limited to the “glass” area: there is even a slight reflection of text on the “bezel” area!

So how was all this done? Poking around the GitHub repository, I think this was written using the Qt native UI framework. Qt was something I had ambitions to learn, but I’ve put more time into learning web development because of all the free online resources out there. I see a lot of files with the *.qml extension, indicating this is the newer way to create Qt interfaces: QML markup versus API calls from code. Looking around for something that looks like the core of emulating imperfect CRTs, the most promising candidate for a starting point is the file ShaderTerminal.qml. I see mentions of CRT visual attributes like static noise, curvature, flickering, and more.
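
The curvature portion of such a shader boils down to a coordinate remap: each output pixel samples the flat text texture at a point pushed away from the center, more strongly near the edges, bowing the image like a CRT face. Here is the math sketched in Python; the formula and coefficient are my guesses at a typical barrel distortion, not cool-retro-term’s actual shader:

```python
def crt_curvature(u: float, v: float, strength: float = 0.15):
    """Barrel-distortion remap of screen coordinates (u, v in 0..1)."""
    # Work in coordinates centered on the middle of the screen
    x, y = u - 0.5, v - 0.5
    # Push samples outward in proportion to squared distance from center,
    # so the middle stays put and the edges bow the most
    r2 = x * x + y * y
    scale = 1.0 + strength * r2
    return x * scale + 0.5, y * scale + 0.5
```

The center pixel maps to itself while corners map outside the 0..1 range, which the shader can paint as the bezel (complete with that reflected text).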

It should be possible to make an online browser version of this effect. If the vertex shaders in cool-retro-term are too complex for WebGL, it should be possible to port them to WebGPU. Turning that theory into practice would require me to actually get proficient with WebGPU, and learn enough Qt to understand all the nuts and bolts of how cool-retro-term works so I can translate them. Given my to-do list of project ideas, this is unlikely to rise to the top unless some other motivation surfaces.