I Do Not (Yet?) Meet The Prerequisites For Multiple View Geometry in Computer Vision

Python may not be required for computer vision, with or without OpenCV, but it does make exploration easier. Unfortunately there are limits to the magic of Python, contrary to glowing reviews both humorous and serious. Extracting world geometry from an image remains an active and very challenging area of research, and it is very important for robots that need to understand their surroundings for navigation.

My understanding of computer vision says image segmentation is very close to an answer here, and while it is useful for robotic navigation applications such as autonomous vehicles, it is not quite the whole picture. In the example image, pixels are assigned to a nearby car, but that assignment doesn’t tell us how big the car is or how far away it is. For a robot to successfully navigate that situation, it doesn’t even really need to know whether a certain blob of pixels corresponds to a car. It just needs to know there’s an object, and it needs to know the movement of that object to avoid colliding with it.

For that information, most of today’s robots use an active sensor of some sort: expensive LIDAR for self-driving cars capable of highway speeds, repurposed gaming peripherals for indoor hobby robot projects. But those active sensors each have their own limitations. For the Kinect sensor I had experimented with, the limitations were a very short range and the fact that it only worked indoors. Ideally I would want something using passive sensors like stereoscopic cameras to extract world geometry, much as humans do with our eyes.

I did a bit of research, following citations, to figure out where I might get started learning the foundations of this field. One title that came up frequently is the text Multiple View Geometry in Computer Vision (*). I found the web page for this book, where I was able to download a few sample chapters. Those sample chapters were enough for me to decide I do not (yet) meet the prerequisites for this material. Having a robot make sense of the world via multiple cameras and computer vision is going to take a lot more work than telling Python to import vision.

Given the prerequisites, it looks pretty unlikely I will do this kind of work myself. (Or more accurately, I’m not willing to dedicate the amount of study I’d need to do so.) But that doesn’t mean it’s out of reach; it just means I have to find some related previous work to leverage. “Understand the environment seen by a camera” applies to more than just robotics.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Notes On OpenCV Outside of Python

It was fun taking a brief survey of PyImageSearch.com guides for computer vision, and I’m sure I will return to that site, but I’m also aware there are large areas of vision which are regarded as out of scope.

The Python programming language is obviously the focus of that site; it’s right in the name PyImageSearch. However, Python is not the only or even the primary interface for OpenCV. According to the official OpenCV introduction, it started as a C library and has since moved to a C++ API. Python is but one of several language bindings on top of that API.

Using OpenCV via its Python binding is advantageous not only because of Python itself, but also because it opens access to a large world of Python libraries. The most significant one in this context is NumPy. Other languages may have similar counterparts, but Python and NumPy together are a powerful combination. There are valid reasons to use OpenCV without Python, but those projects would have to find their own counterpart to NumPy for the number-crunching heavy lifting.
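
Just to illustrate what that combination looks like in practice, here is a minimal sketch (my own, with a made-up file name) of how an OpenCV image in Python is simply a NumPy array:

import cv2
import numpy as np

# cv2.imread returns a plain NumPy array: height x width x 3 channels, in BGR order.
image = cv2.imread("example.jpg")   # hypothetical file name
print(type(image), image.shape, image.dtype)

# Because it is just a NumPy array, NumPy operations apply directly:
blue_channel = image[:, :, 0]          # slice out one color channel
cropped = image[100:300, 200:400]      # cropping is array slicing
brighter = np.clip(image.astype(np.int16) + 40, 0, 255).astype(np.uint8)  # brighten without overflow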

Just for the sake of the exercise, I looked at a few of the other platforms I had recently examined.

OpenCV is accessible from JavaScript, or at least Node.js, via projects like opencv4nodejs. This also means OpenCV can be embedded in desktop applications written in ElectronJS, as demonstrated by the example project opencv-electron.

If I wanted to use OpenCV in a Universal Windows Platform app, it appears some people have uploaded compiled forms of OpenCV to Microsoft’s NuGet repository. As I understand it, NuGet is to .NET as PyPI is to Python. Maybe there are important differences, but it’s a good enough analogy for a first cut. Microsoft’s UWP documentation describes using OpenCV via an OpenCVHelper component. And since UWP can be written in C++, and OpenCV is written in C++, there’s always the option of compiling from source code.

As promising as all this material is, it is merely the foundation for applying computer vision to the kind of problems I’m most interested in: helping a robot understand its environment for mapping, obstacle avoidance, and manipulation. Unfortunately that field starts to get pretty complex for a casual hobbyist to pick up.

Notes After Skimming PyImageSearch

I’m glad I learned of PyImageSearch from Evan and took the time to sit down and look it over. The amount of information available on this site is large enough that I resorted to skimming, with the intent to revisit specific subjects later as the need arises.

I appreciate the intent of making computer vision accessible to beginners; it is always good to make sure people interested in exploring an area are not frustrated by problems unrelated to the problem domain. Kudos to the guides on command line basics, and on Python’s NoneType errors that are so bewildering to beginners.

That said, this site does frequently dive into areas that I felt lacked sufficient explanation for beginners. I remember the difficulty I had in understanding how matrix math related to computer graphics. The guide on rotation discussed the corresponding rotation matrix. Readers get the assurance “This matrix looks scary, but I promise you: it’s not,” but the explanation that follows would not have enlightened me back when I was learning the topic. Perhaps a link to more details would be helpful? Still, the effort is appreciated.
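
For my own future reference, here is a small sketch (my code, not the site’s) of the matrix in question as OpenCV produces it; the image dimensions are made up:

import cv2
import numpy as np

(h, w) = (480, 640)              # made-up image height and width
center = (w // 2, h // 2)

# getRotationMatrix2D returns a 2x3 affine matrix of the form
#   [  cos(a)*s   sin(a)*s   tx ]
#   [ -sin(a)*s   cos(a)*s   ty ]
# where tx/ty shift the result so rotation happens about 'center'.
M = cv2.getRotationMatrix2D(center, 45, 1.0)
print(M)

# warpAffine applies that matrix to every pixel coordinate in the image.
image = np.zeros((h, w, 3), dtype=np.uint8)    # placeholder image
rotated = cv2.warpAffine(image, M, (w, h))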

There are also bits of Python code that would be confusing to a beginner. Not just the Python itself, but also the way it leverages the very powerful NumPy library. I had no idea what was going on between tuple and argmin in the code on this page:

extLeft = tuple(c[c[:, :, 0].argmin()][0])

Right now it’s a black box of voodoo magic to me, a string of non-alphanumeric operators that more closely resembles something I associate with Perl programming. At some point I need to sit down with the Python documentation and work through this step by step in a Python REPL (read-eval-print loop) to understand the syntax. It would be good if the author included footnotes linking to the appropriate Python terminology for these operations.
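
Working it out for myself later (so treat this breakdown as my own guess, not the author’s explanation), the one-liner unpacks into a few ordinary NumPy steps:

# 'c' is a single contour from cv2.findContours: a NumPy array of shape (N, 1, 2)
# where each entry is an (x, y) pixel coordinate.
xs = c[:, :, 0]           # all the x coordinates, shape (N, 1)
index = xs.argmin()       # index of the point with the smallest x, i.e. leftmost
point = c[index][0]       # that point as a length-2 array [x, y]
extLeft = tuple(point)    # converted to a plain Python tuple (x, y)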

A fact of life of learning from PyImageSearch is the sales pitch for the author’s books. It’s not necessarily a good thing or a bad thing, but it is very definitely a thing: a constant, repetitive reminder of “and in my book you will also learn…” on every page. This site exists to draw people in and, if they want to take it further, sell them the book. I appreciate this openly stated routine over the underhanded ways some other people make money online, but that doesn’t make it any less repetitive.

Likely related to the above is the fact that this site also wants to collect e-mail addresses. None of the code download links takes us to an actual download; they instead take us to a form where we have to fill in an e-mail address before we are given a link to download. Fortunately the simple bits I’ve followed along with so far are easy to recreate without the download, but I’m sure that will become unavoidable if I go much further.

And finally, this site is focused on OpenCV in Python running on Unix-derived operating systems. Other language bindings for OpenCV are out of scope, as is the Windows operating system. For my project ideas that involve embedded platforms without Python, or that will be deployed on Windows, I would need to go elsewhere for help.

But what is within scope is covered well, with an eye towards beginner friendliness, and available freely online in a searchable collection. For that I am thankful to the author, even as I acknowledge that there are interesting OpenCV resources beyond this scope.

Skimming Remainder Of PyImageSearch Getting Started Guide

Following through part of, then skimming the rest of, the first section of the PyImageSearch Getting Started guide taught me there’s a lot of fascinating information here. Certainly more than enough for me to know I’ll be returning to consult it as I tackle project ideas in the future. For now I wanted to skim through the rest and note the problem areas it covers.

The Deep Learning section immediately follows the startup section, because deep learning is a huge part of recent advancements in computer vision. Like most tutorials I’ve seen on deep learning, this section goes through how to set up and train a convolutional neural network to act as an image classifier. Discussions about training data, tuning training parameters, and applications are built around these tasks.
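
I have not followed along with this section yet, but the general shape of such a classifier in Keras looks something like the following. This is my own minimal sketch under the assumption of 32×32 RGB images and ten classes, not code from the site:

import tensorflow as tf
from tensorflow.keras import layers

# A tiny convolutional network that maps a 32x32 RGB image to one of 10 classes.
model = tf.keras.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Training then boils down to feeding it labeled images:
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)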

After the Deep Learning section are several more sections, each focused on a genre of popular applications for machine vision.

  • Face Applications range from recognizing the presence of human faces to recognizing individual faces, and the applications thereof. (A minimal detection sketch follows this list.)
  • Optical Character Recognition (OCR) helps a computer read human text.
  • Object detection is a more generalized form of detecting faces or characters, and there’s a whole range of tools here. It will take time to learn which tools are the right ones for specific jobs.
  • Object tracking: once an object is detected, sometimes we want it tracked.
  • Segmentation: Detect objects and determine which pixels are and aren’t part of that object.
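
As a small taste of the face detection entry above, here is a minimal sketch using the classic Haar cascade that ships with OpenCV. This is my own code with a made-up file name, not one of the site’s deep-learning detectors; the cv2.data path works with the pip-installed OpenCV packages:

import cv2

# Classic (pre-deep-learning) face detection with OpenCV's bundled Haar cascade.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("people.jpg")                 # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                       # each detection is a bounding box
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)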

To deploy the algorithms described above, the guide then talks about hardware. Apart from theoretical challenges, there are also hardware constraints, which are especially acute on embedded hardware like the Raspberry Pi, Google Coral, etc.

After hardware, there are a few specific application areas, from medical computer vision to video processing to image search engines.

This is an impressively comprehensive overview of computer vision. I think it’ll be a very useful resource for me in the future, as long as I keep in mind a few characteristics of this site.

Skimming “Build OpenCV Mini-Projects” by PyImageSearch: Contours

Getting a taste of OpenCV color operations was interesting, but I didn’t really understand what made OpenCV more powerful than other image processing libraries until we got to contours, which cover most of the second half of PyImageSearch’s Start Here guide Step 4: Build OpenCV Mini-Projects.

This section started with an example of finding the center of a contour, in this case by examining a picture of a collection of non-overlapping paper cut-out shapes. The most valuable concept here is that of image moments, which I think of as a “summary” of a particular shape found by OpenCV. We also got names for operations we’ve seen earlier: binarization operations turn an image into a binary yes/no highlight of potentially interesting features, and edge detection and thresholding are the two we’ve seen.
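
My mental model of that “summary”, in code form. This is my own sketch, assuming ‘c’ is a single contour returned by cv2.findContours:

import cv2

M = cv2.moments(c)               # dictionary of image moments for this shape
cX = int(M["m10"] / M["m00"])    # centroid x = first moment in x divided by area
cY = int(M["m01"] / M["m00"])    # centroid y = first moment in y divided by area
area = M["m00"]                  # m00 is the contour area (a real program would guard against zero)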

Things get exciting when we start putting contours to work. The tutorial starts out easy by finding the extreme points in contours, which breaks down roughly what goes on inside OpenCV’s boundingRect function. Such code is then used in tutorials for calculating the size of objects in view, which is close to a project idea on my to-do list.

A prerequisite for that project is code to order coordinates clockwise which, to my surprise when reading the code, was done in Cartesian space. If the objective is clockwise ordering, I thought it would have been a natural candidate for processing in polar coordinates. This algorithm was apparently originally published with a boundary-condition bug that, as far as I can tell, would not have happened if the coordinate sorting had been done in polar coordinates.
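
To check my intuition, I sketched what a polar-coordinate version might look like. This is entirely my own guess, not the corrected algorithm from the site:

import numpy as np

def order_clockwise(pts):
    # pts: NumPy array of shape (N, 2) holding (x, y) coordinates.
    center = pts.mean(axis=0)    # centroid of the points
    # Angle of each point around the centroid. Image y grows downward, so sorting
    # by increasing angle yields a clockwise ordering as seen on screen.
    angles = np.arctan2(pts[:, 1] - center[1], pts[:, 0] - center[0])
    # Note this only fixes the rotational order, not which corner comes first.
    return pts[np.argsort(angles)]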

These components are brought together beautifully in an example document scanner application that detects the trapezoidal shape of a receipt in the image and performs perspective correction to deliver a straight rectangular image of the receipt. This is my favorite feature of Office Lens, and if I ever decide to write my own I shall return to this example.
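
As I understand it, the core of that perspective correction step is a single pair of OpenCV calls. The corner coordinates and output size below are made up for illustration:

import cv2
import numpy as np

image = cv2.imread("receipt.jpg")                      # hypothetical photo

# Four corners of the receipt as found in the photo: top-left, top-right,
# bottom-right, bottom-left (made-up numbers).
src = np.float32([[120, 80], [520, 95], [560, 700], [90, 690]])
# Where those corners should land in the output: a clean 400x600 rectangle.
dst = np.float32([[0, 0], [400, 0], [400, 600], [0, 600]])

M = cv2.getPerspectiveTransform(src, dst)              # 3x3 homography
scanned = cv2.warpPerspective(image, M, (400, 600))    # straightened result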

By the end of this section I was suitably impressed by what I had seen of OpenCV, but I also had the feeling a few of my computer vision projects would not be addressed by the parts of OpenCV covered in the rest of PyImageSearch’s Start Here guide.

Skimming “Build OpenCV Mini-Projects” by PyImageSearch: Colors

I followed PyImageSearch’s introductory Step 3: Learn OpenCV by Example (Beginner) line by line, both to get a feel for using the Python binding of OpenCV and to learn this particular author’s style. Once I felt I had that, I started skimming at a faster pace just to get an idea of the resources available on this site. For Step 4: Build OpenCV Mini-Projects I only read through the instructions without actually following along with my own code.

I was impressed that the first part of Step 4 is dedicated to Python’s NoneType errors. The author is right: this is a very common thing to crop up for anyone experimenting with Python. It’s the inevitable downside of Python’s lack of static type checking. I understand the upsides of flexible runtime types and really enjoy the power they give Python programmers, but when things go bad they can go really bad, and only at runtime. NoneType is not the only way this can manifest, but it is certainly the most common, and I’m glad there’s an overview of what beginners can do about it.
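
The most common flavor of this, as far as I can tell, is cv2.imread silently returning None when given a bad path. A minimal sketch:

import cv2

image = cv2.imread("typo_in_path.jpg")    # OpenCV does NOT raise an error on a bad path...
if image is None:                         # ...it just returns None,
    raise SystemExit("Could not read image; check the file path.")
# Without that check, the failure shows up later as something like
# "AttributeError: 'NoneType' object has no attribute 'shape'".
print(image.shape)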

Which made the following section more puzzling. The topic was image rotation, and the author brought up the associated rotation matrix. I feel that anyone who needs an explanation of NoneType errors would not know how a mathematical matrix is involved in image rotation. Most people only know image rotation from selecting a menu item in Photoshop or, at most, grabbing the rotate handle with a mouse. Such beginners to image processing would need an explanation of how matrix math is involved.

The next few sections were focused on color, which I was happy to see because most of Step 3 dealt with grayscale images stripped of their color information. OpenCV enables some very powerful operations I want to revisit when I have a project that can make use of them. I am most fascinated by the CIE L*a*b color space, something I had never heard of before. A color space focused on how humans perceive color, rather than how computers represent it, means code working in that space will have more human-understandable results.
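
Switching into that color space is a one-liner; the interesting part is what the channels mean. A sketch with a hypothetical input image:

import cv2

image = cv2.imread("photo.jpg")                  # hypothetical BGR image
lab = cv2.cvtColor(image, cv2.COLOR_BGR2LAB)     # convert to CIE L*a*b*

# L: lightness from black to white
# a: green (negative) to red/magenta (positive)
# b: blue (negative) to yellow (positive)
L, a, b = cv2.split(lab)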

But operations like rotation, scaling, and color spaces are relatively common things shared with many other image manipulation libraries. The second half goes into operations that make OpenCV uniquely powerful: contours.

Notes On “Learn OpenCV by Example” By PyImageSearch

Once basic prebuilt binaries of OpenCV had been installed in an Anaconda environment on my Windows PC, Step #2 of the PyImageSearch Start Here Guide goes into command line arguments. This section is an introduction for people who have little experience with the command line, so I was able to skim through it quickly.
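
The pattern boils down to argparse plus cv2.imread. A stripped-down sketch of the idea, not the guide’s exact code:

import argparse
import cv2

# Read the image path from the command line instead of hard-coding it.
parser = argparse.ArgumentParser()
parser.add_argument("-i", "--image", required=True, help="path to input image")
args = vars(parser.parse_args())

image = cv2.imread(args["image"])
cv2.imshow("Image", image)
cv2.waitKey(0)    # wait for a key press before closing the window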

Step #3 Learn OpenCV by Example (Beginner) is where I finally got some hands-on interaction with basic OpenCV operations, starting with basic image manipulation routines like scale, rotate, and crop. These are pretty common to any image library, and they are illustrated with a still frame from the movie Jurassic Park.
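
For my own notes, those basics look roughly like this in OpenCV. My sketch with a made-up file name and coordinates, not the tutorial’s code:

import cv2

image = cv2.imread("jurassic_park.jpg")          # hypothetical still frame
(h, w) = image.shape[:2]

resized = cv2.resize(image, (w // 2, h // 2))    # scale to half size; dsize is (width, height)
cropped = image[60:160, 320:420]                 # cropping is just array slicing
flipped = cv2.flip(image, 1)                     # 1 = mirror horizontally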

The next two items were more specific to OpenCV: edge detection attempts to extract edges from an image, and thresholding drops detail above and below certain thresholds. I’ve seen thresholding (or a close relative) in some image libraries, but edge detection is new to me.
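
The two operations in code form; the threshold numbers here are arbitrary:

import cv2

image = cv2.imread("photo.jpg")                  # hypothetical input
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Thresholding: every pixel brighter than 150 becomes white (255), everything else black.
_, thresh = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY)

# Canny edge detection: the two numbers are the lower and upper gradient thresholds.
edges = cv2.Canny(gray, 30, 175)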

Then we return to relatively common image manipulation routines, like drawing operations on an image. This is not unique to OpenCV, but it is very useful because it allows us to annotate an image for human-readable interpretation. Most commonly that means drawing boxes to mark regions of interest, but it also covers masking out areas not of interest.
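
For reference, the annotation and masking operations look like this. Coordinates are made up:

import cv2
import numpy as np

image = cv2.imread("photo.jpg")                  # hypothetical input

# Draw a box and a label to mark a region of interest.
cv2.rectangle(image, (50, 50), (200, 150), (0, 255, 0), 2)
cv2.putText(image, "object", (50, 40), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

# Masking: keep only the pixels inside a region, black out everything else.
mask = np.zeros(image.shape[:2], dtype=np.uint8)
cv2.circle(mask, (300, 200), 80, 255, -1)        # white filled circle on the mask
masked = cv2.bitwise_and(image, image, mask=mask)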

Past those operations, the tutorial concludes with a return to OpenCV specialties in the form of contour and shape detection algorithms, executed on a very simple image with a few Tetris shapes.
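
That contour pass boils down to something like the sketch below. It is my own sketch, and it assumes the image has already been thresholded into a binary image named ‘thresh’:

import cv2

# findContours wants a binary image, such as the output of thresholding.
# (OpenCV 4.x returns two values here; OpenCV 3.x returned three.)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    peri = cv2.arcLength(c, True)                    # contour perimeter
    approx = cv2.approxPolyDP(c, 0.04 * peri, True)  # simplify the outline to a polygon
    if len(approx) == 3:
        shape = "triangle"
    elif len(approx) == 4:
        shape = "square or rectangle"
    else:
        shape = "something else"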

After following along through these exercises, I wanted to try those operations on one of my own pictures. I selected a recent image on this blog that I thought would be ideal: high contrast with clear simple shapes.

Xbox One

As expected, my first OpenCV run was not entirely successful. I thought this would be an easy image for edge detection, and I learned I was wrong. There were false negatives caused by the shallow depth of field: vents on the left side of the Xbox towards the rear were out of focus, and their edges were not picked up. False positives in areas of sharp focus came from two major categories: molded texture on the front of the Xbox, and little bits of lint left by the towel I used to wipe off dust. In hindsight I should have taken a picture before dusting so I could compare how dust vs. lint behaved in edge detection. I could mitigate false positives somewhat by adjusting the threshold parameters of the edge detection algorithm, but I could not eliminate them completely.

Xbox Canny Edge Detect 30 175

With such noisy results, a naive application of the contour and shape detection algorithms used in the tutorial returned a lot of data I don’t yet know how to process. It is apparent those algorithms require more pre-processing of their input, and I still have a lot to learn to deliver what they need. But still, it was a fun first run! I look forward to learning more in Step 4: Build OpenCV Mini-Projects.

Notes on OpenCV Installation Guide by PyImageSearch

Once I decided to try PyImageSearch’s Getting Started guide, the obvious step #1 is installing OpenCV. Like many popular open source projects, it can be put on a computer in two ways: (1) use a package manager, or (2) build from source code. Since the focus here is using OpenCV from Python, the package manager of choice is pip.

Packages that can be installed via pip are not necessarily built by the original authors of a project. If a project is popular enough, someone will take on the task of building from the open source code and making those binaries available to others, and PyImageSearch’s pip install opencv guide says that is indeed the case here for OpenCV.

I appreciated the explanation of the differences between the four packages, which result from two independent yes/no options: headless or not, and contrib modules or not (the corresponding PyPI package names are opencv-python, opencv-contrib-python, opencv-python-headless, and opencv-contrib-python-headless). The headless option is appropriate for machines used strictly for processing that do not need to display any visual interface, and contrib describes a set of modules contributed by people outside the core OpenCV team that have grown popular enough to be offered as a packaged bundle.

What’s even more useful was an explanation of what is not in any of these packages available via pip: modules that implement patented algorithms. These “non-free” components are commonly treated as part of OpenCV, but they are not distributed in compiled binary form. We may build them from source code for exploration, but any binary distribution (for example, use in a commercial software product) requires dealing with lawyers representing the owners of those patents.

Which brings us to the less easy part of the OpenCV installation guide: building from source code. PyImageSearch offers instructions to do so on macOS, Ubuntu, and Raspbian for a Raspberry Pi. The author specifically does not support Windows as a platform for learning OpenCV. If I want to work in Windows, I’m on my own.

Since I’m just starting out, I’m going to choose the easy method of using pre-built binaries. Like most Python tutorials, PyImageSearch highly recommends a Python environment manager and includes instructions for virtualenv. My Windows machine already had Anaconda installed, so I used that instead to install opencv-contrib-python in an environment created for this first phase of OpenCV exploration.

Trying OpenCV Getting Started Guide By PyImageSearch

I am happy that I made some headway in writing desktop computer applications controlling hardware peripherals over a serial port, in the form of a test program that can perform a few simple operations with a 3D printer. But how will I put this idea to work doing something useful? I have a few potential project ideas that leverage the computing power of a desktop computer, several of them involving machine vision.

Which meant it was time to fill another gap in my toolbox for solving problems with software: getting a basic understanding of what I can and can’t do with machine vision. There are two meanings of “can” in that sentence, and both apply: the “is this even theoretically possible” sense and the “is this within the reach of my abilities” sense. The latter will obviously be more limiting, but it is a limit I can address by devoting the time to learn. Getting an idea of the former is also useful so I don’t go off on a doomed project trying to build something impossible.

Which meant it was time to learn about OpenCV, the canonical computer vision library. I have come across OpenCV in various contexts, but it had just been a label on a black box. I never devoted the time to sit down and learn more about this box and how I might leverage it in my own projects. Given my interest in robotics, I knew OpenCV was on my path; I just didn’t know when. I guess now is the time.

Given that OpenCV is the starting point for a lot of computer vision algorithms and education, there are many tutorials to choose from, and I will probably go through several different ones before I feel comfortable with OpenCV. Still, I need to pick a starting point. On a recommendation from Evan, whom I met at Superconference, I’ll try the Getting Started guide by PyImageSearch. First step: installing OpenCV.

Simple Logging To Text File

Even though I aborted my adventure into Windows ETW logging, I still wanted a logging mechanism to support future experimentation with the Universal Windows Platform. This turned into an educational project in itself, as I learned about other system interfaces of this platform.

Where do I put this log file?

UWP applications are not allowed arbitrary access to the file system, so if I wanted to write out a log file without explicit user interaction, there are only a few select locations available. I found the KnownFolders enumeration, but those were all user data folders, and I didn’t want these log files clogging up “My Documents” and the like. I ended up putting the log file in ApplicationData.TemporaryFolder. This folder is subject to occasional cleanup by the operating system, which is fine for a log file.

When do I open and close this log file?

This required a trip into the world of the UWP application lifecycle. I check whether the log file exists and, if not, create and open it from three places: OnLaunched, OnActivated, and OnResuming. In practice I mostly see OnLaunched. The flip side is OnSuspending, where the application template has already set up a suspension deferral, buying me time to write out and close the log file.

How do I write data out to this log file?

There is a helpful Getting Started with file input/output document. In it, the standard recommendation is to use the FileIO class. It links to a section in the UWP developer’s guide titled Files, folders, and libraries. The page Create, write, and read a file was helpful for seeing how these APIs differ from the classic C file I/O API.

These FileIO classes promise to take care of all the complicated parts, including async/await methods so the application is not blocked on file access. This way the user interface doesn’t freeze until a load or save operation completes; it remains responsive while file access is in progress.

But when I used the FileIO API naively, writing upon every line of the log, I received a constant stream of exceptions. Digging into the call stack of the exception (actually several levels deep in the chain) told me there was a file access collision problem. It was the page Best practices for writing to files that cleared things up for me: these async FileIO methods create a temporary file for each asynchronous write and copy it over the original file upon success. When I was writing once per line, too many operations were happening in too short a time, and the temporary files collided with each other.

The solution was to write less frequently: buffer up a set of log messages and write the larger batch with each FileIO access, rather than calling once per log entry. Reducing the frequency of write operations resolved my collision issue.

[This simple text file logging class is available on GitHub.]

Complexity Of ETW Leaves A Beginner Lost

When experimenting with something new in programming, it’s always useful to step through the code in a debugger the first time to see what it does. An unfortunate side effect is execution far slower than normal speed, which interferes with timing-sensitive operations. An alternative is a logging mechanism that doesn’t slow things down (as much), so we can read the logs afterwards to understand the sequence of events.

Windows has something called Event Tracing for Windows (ETW) that has evolved over the decades. This mechanism is implemented in the Windows kernel and offers dynamic control of which events to log. The mechanism itself was built to be lean, impacting system performance as little as possible while logging; the goal is for it to be fast and efficient enough to barely affect timing-sensitive operations. One of the primary purposes of ETW is to diagnose system performance issues, and it obviously can’t be useful for that if running ETW itself causes severe slowdowns.

ETW infrastructure is exposed to Universal Windows Platform applications via the Windows.Foundation.Diagnostics namespace, with utility classes that sounded simple enough at first glance: we create a logging session, we establish one or more channels within that session, and we log individual activities to a channel.

Trying to see how it works, though, can be overwhelming for a beginner. All I wanted was a timestamp and a text message, and optionally an indicator of the message’s importance. The timestamp is automatic in ETW. The text message can be done with LogEvent, and I can pass in a LoggingLevel to signify whether it is verbose chatter, an informative message, a warning, an error, or a critical event.

In the UWP sample library there is a logging sample application showcasing the use of these logging APIs. The source code looks straightforward, and I was able to compile and run it. The problem came when trying to read the log: as part of its low-overhead goal and powerful complexity, the output of ETW is not a simple log file I can browse through. It is a task-specific ETL file format that requires its own applications to read. Such tools are part of the Windows Performance Toolkit, but fortunately I didn’t have to download and install the whole thing: the Windows Performance Analyzer can be installed by itself from the Windows Store.

I opened up the ETL file generated by the sample app and… got no further. I could get a timeline of the application, and I could unfold a long list of events. But while I could get a timestamp for each event, I couldn’t figure out how to retrieve the messages. The sample application called LogEvent with a chunk of “Lorem ipsum” text, and I could not figure out how to retrieve it.

Long term I would love to know how to leverage the ETW infrastructure for my own application development and diagnosis. But after spending too much time unable to perform a very basic logging task, I shelved ETW for later and wrote my own simple logger that outputs to a plain text file.

Ubuntu and ROS on Raspberry Pi

Since I just discovered that I can replace Ubuntu with lighter-weight Raspbian on old 32-bit PCs, I thought it would be a good time to quickly jot down some notes about going the other way: replacing Raspbian with Ubuntu on a Raspberry Pi.

When I started building Sawppy in early 2018, I was already thinking ahead to turning Sawppy from a remote-controlled toy into an autonomous robot, which meant a quick survey of the state of ROS. At the time, ROS Kinetic was the latest LTS release, targeted at Ubuntu 16.

Unfortunately the official release of Ubuntu 16 did not include an armhf build suitable for running on a Raspberry Pi. Some people would build ROS from source code to make it run on Raspbian; I made one attempt, and the build errors took more time to understand and resolve than I wanted to spend. I then chose the less difficult path of finding a derived release of Ubuntu 16 that ran on the platform: Ubuntu Mate 16. An afternoon’s worth of testing verified basic ROS Kinetic capability, and I set it aside to revisit later.

Later in 2018, Ubuntu 18 was released, followed by ROS Melodic matching that platform. By then, armhf support for the Raspberry Pi had arrived in Ubuntu itself, which released both the snap-based Ubuntu Core and Ubuntu ‘classic’ for the Raspberry Pi. These are minimalist server images, but desktop UI components can be installed if needed; instructions can be found on the Ubuntu wiki, though obviously a UI is not a priority when I’m looking at robot brains. Besides, if I wanted a UI, Ubuntu Mate 18 is still available as well. For Ubuntu 20, released this year, the same choices continue to be offered, which should match well with ROS Noetic.

I don’t know how relevant this is yet for ROS on a Raspberry Pi, but I noticed that not only are 32-bit armhf binaries available, so are 64-bit arm64 binaries. The Raspberry Pi 3 and 4 have CPUs capable of running arm64 code, but Raspbian has remained 32-bit for compatibility with existing Pi software and with low-end devices like the Raspberry Pi Zero that are incapable of arm64. Beyond the ability to address more memory, moving to the arm64 instruction set was also a chance to break from some inconvenient bits of architectural legacy, which in turn allows better arm64 performance. Though the performance increases are minor as applied to a Raspberry Pi, ROS releases include precompiled arm64 binaries, so the biggest barrier to entry has already been removed and it might be worth a look.

Debian with Raspberry Pi Desktop on HP Mini (110-1134CL) and Dell Latitude X1

I went hunting for a lightweight Linux distribution for old computers. With a CPU running at about 1 GHz and 1GB of RAM, the HP Mini (110-1134CL) I had on hand was in the approximate league of a modern Raspberry Pi. I wished for something like the Debian-based Raspbian and was delighted to find that the Raspberry Pi Foundation releases a Debian distribution for x86 that is a counterpart to Raspbian. This meant much of my knowledge about working with Raspbian on a Raspberry Pi could be applied.

Obviously all the work specific to Pi hardware is absent, such as video playback hardware acceleration and the GPIO pins. Still, I think I’m in better shape here than with many other lightweight Linux distributions, because the Debian roots mean I can draw from Debian’s extensive library of drivers.

While installing on the HP Mini (110-1134CL), the installer reminded me of this fact by informing me I would need ucode15.fw. I thought I would have to install it manually, but by the time installation completed and I got to the Raspberry Pi desktop, WiFi was working. I guess that was taken care of for me, a huge plus in favor of the beginner friendliness of this distribution.

Generally speaking, it worked well on this HP Mini, feeling more responsive than Ubuntu Mate on the same machine. It is still no speed demon, but at least it is no longer an exercise in frustration. My general impression of the user experience is on par with a Raspberry Pi 3, with the notable exception of video playback: lacking the Pi’s specially tailored hardware-accelerated video engine, it can only consistently play YouTube videos at 480p. Trying to run 720p (most closely matching the screen resolution) dropped a lot of frames. This is a downside now that so many instructional videos are online, but 480p should still be enough to get the point across.

Encouraged by this result, I prepared to install on my Dell Latitude X1. Before I erased Ubuntu Mate, though, I wanted to get some objective numbers. I measured Ubuntu Mate boot on the Latitude X1, and the time from power button to desktop ready for user interaction was 2 minutes 37 seconds.

Installing on the Latitude X1 encountered a similar driver issue, this time with ipw2200-bss.fw. Again, after informing me, the installer took care of installing and setting it up without requiring any action from me. And once it was up and running, I measured boot time at only 1 minute 26 seconds: this operating system is ready for user input in almost half the time of Ubuntu Mate.

Repeating the measurement, I found that the younger HP Mini had the performance edge, taking just 58 seconds to go from power button to desktop ready. Both of these numbers are impressive considering both are running mechanical hard drives and not modern flash storage.

With these impressive results, Debian with Raspberry Pi desktop has now become my go-to operating system for computers with old 32-bit Intel CPUs.

Debian with Raspberry Pi Desktop Promising For Old Computers

During my first-pass evaluation of a HP Mini (110-1134CL), I tried a few modern graphical operating system options and failed to find anything satisfactory. Ubuntu Mate is designed to be a lighter-weight alternative to mainline Ubuntu, but it still felt sluggish. Chrome OS (available as Neverware CloudReady) now only supports 64-bit CPUs, which excludes old 32-bit machines.

The HP Mini works fine as a text-only command line machine, but that seems like a shame when it has a perfectly operable screen and video subsystem. All it needs is a Linux distribution even lighter weight than Ubuntu Mate. I’m sure there are many options out there; historically there has never been a shortage of Linux distributions, and websites like this one help sort through the options.

But I’d rather not learn yet another Linux distribution. I’m already juggling more than I strictly want to, plus some time in FreeBSD as part of my FreeNAS explorations. If only there were a Linux variant that I’m already familiar with, optimized for minimalist low-end hardware.

The poster child for minimalist low-end hardware is the Raspberry Pi, which is so minimalist it doesn’t even have a power switch. Raspbian, their Debian-derived Linux distribution, has been cut down so it runs on Pi hardware less powerful than the cell phones we carry around nowadays. What if someone took that work and put it in a distribution I can run on old x86 computers? At 1GB of RAM and a 1GHz CPU, the hardware spec of a HP Mini is quite similar to a Raspberry Pi.

An online search quickly found that such a thing exists. Not only had “someone” done the work, that “someone” is the Raspberry Pi Foundation itself. This was the result of someone at the foundation asking the exact same “What if…?” question, except they thought of it a few years earlier and had the resources to make it happen.

Thus old computers with 32-bit Intel CPUs have the option of running what is currently called Debian with Raspberry Pi Desktop: a beginner-friendly, super lightweight variant of Debian with almost all of the software packages that come pre-installed on Raspbian. Only Wolfram Mathematica and Minecraft are missing, due to licensing. It all sounds very promising. Time to try it on some old 32-bit machines and see how they run.

ESA ISS Tracker on Nexus 5

When I tried a Nokia Lumia 520 to see if I could use it as an ESA ISS Tracker display, I found its screen couldn’t quite manage. Displaying the entire map in a clear and legible way requires more than the 800×480 resolution of a Lumia 520’s screen. Which led to the next experiment: dusting off an old Nexus 5.

Nexus 5 Android support was discontinued several releases ago, but when new it was quite a compelling device. One of its signature features was a full HD 1920×1080 screen packed into just five inches of diagonal. And given Google’s track record with the mobile Chrome browser, I was confident it would be capable of rendering ESA’s HTML ISS tracker.

Unfortunately it proved to be even less suitable than the Lumia 520, due to its lack of hardware navigation buttons. This means the Android navigation bar is always on screen, obscuring part of the map. This is similar to how the Kindle behaves, except the Kindle’s bar runs across the bottom while the phone’s is over on the right.

Nexus 5 sleep timeouts

Another problem shared with the Kindle was the inability to keep the screen on. The screen inactivity sleep timeout can be set anywhere from 15 seconds to 30 minutes, but there isn’t a “Never” option like there is on Windows tablets or Windows Phone. This seems to be a persistent trend in Android devices, which is reasonable for portable personal electronics but annoying when I want to repurpose one as an around-the-clock status display. Android being Android, there’s probably a way around the limitation, but that’s not a very interesting project right now when I already have more cooperative devices at my disposal.

ESA ISS Tracker on Nokia Lumia 520

While the unfortunate Samsung 500T will be dropped from Windows 10 support in 2023, I don’t need to wait that long for a Microsoft end-of-life product. I have several old Windows Phone 8 devices on hand, and they’ve already ventured beyond the bounds of supported systems, which is bad for security if I wanted to use these devices for general internet activities. But if I have only a specific web property in mind, one I trust to be safe, then all I care about is whether it works. ESA’s online HTML ISS Tracker fits the bill.

The version of Internet Explorer built into Windows Phone 8 is far more compatible with the web than the IE of old, though it still has enough incomplete or missing features to make its web experience a little bumpy. It’s fine for most sites, and a quick test on a Nokia Lumia 520 proved that the ESA tracker is one of them.

Since this phone has hardware navigation buttons, there was no need to keep a navigation bar on screen as the Amazon Kindle did. This allowed the ISS tracker to actually have the full screen as intended. There is one cosmetic problem: the map occupies the top part of the screen, leaving a little black bar at the bottom instead of being vertically centered. But that’s a tiny nit to pick.

I could tell this phone never to turn off the screen, even after a period of inactivity, which is better than I could do with the Kindle. The phone should be able to run indefinitely on USB power, making it suitable for an around-the-clock display. The only thing I can gripe about is screen resolution. The 800×480 screen of this Windows Phone is just a little too low resolution for all the ISS tracking details to be clearly legible. I think an HTML-based status display will be a promising way to reuse obsolete Windows Phone hardware, but maybe for a different project, preferably one with lower information density. This shortcoming of the Lumia 520 motivated me to repeat the same experiment on a Google Nexus 5, another phone that has fallen out of support.

ESA ISS Tracker on Samsung 500T

Setting aside the HP Stream 7 as unsuitable for my current project, we reach the final piece of x86 Windows hardware in my pile of unused devices: the Samsung 500T. I guess 500T is shorthand for its full designation XE500T1C, though I don’t think it makes the name roll off the tongue much more easily.

This device has an 11.6 inch diagonal touchscreen. It was designed for Windows 8 and launched at around the same time. Its primary focus is tablet workloads, but it can become a convertible tablet/laptop like the HP Split X2 with the purchase of an optional keyboard base. Since the keyboard is optional, the 500T has more peripherals packed along its edges: not just a microSD expansion slot like the HP, but also full-size type A USB and micro HDMI connectors. HP delegated those tasks to its included base, which has two type A USB ports and a full-sized HDMI connector.

As a piece of Windows 8 hardware like the HP Split X2 and HP Stream 7, the 500T has a Windows license embedded in hardware. Thus I was also able to install Windows 10, erasing the existing installation of Windows 8 which was protected by a password I no longer remember. Once device drivers were installed, all features functioned as expected, including the ability to run on plug-in power and charge its battery.

That capability was inexplicably nonfunctional in my HP Stream 7, which means that, unlike with the HP Stream 7, I could run this display continuously on wired power around the clock. And showing the ESA HTML live space station tracker might be the best way to make use of this hardware. It would be a better end than collecting dust, since this tablet has failed to live up to its potential and generally soured me on buying any more computers from Samsung.

ESA ISS Tracker on HP Stream 7

After I found that the Amazon Fire HD 7 tablet was unsuitable for an always-on screen to display ESA’s HTML live tracker for the International Space Station, I moved on to the next piece of hardware in my inactive pile: a HP Stream 7. This tablet was an effort by Microsoft to prove that they would not cede the entry-level tablet market to Android. In hindsight we now know that effort did not pan out.

But at the time, it was an intriguing product because it ran Windows 10 on an Intel Atom processor. This overcame the lack of x86 application compatibility of the previous entry-level Windows tablets, which ran Windows RT on ARM processors. It was difficult to see how an expensive device with a from-scratch application ecosystem could compete with Android tablets, and indeed Windows RT was eventually withdrawn.

Back to this x86-based tablet: small and compact, with the 7″ diagonal screen that gave it its name, it launched at $120, which was unheard of for Windows machines. Discounts down to $80 (when I bought it) made it cheaper than a standalone license of Windows. Buying it meant I got a Windows license and basic hardware to run it.

But while nobody expected it to be a speed demon, its performance was nevertheless disappointing. At best, it was merely on par with similarly priced Android tablets. Sure, we could run standard x86 Windows applications… but would we want to? Trying to run Windows applications not designed with a tablet in mind was a pretty miserable experience, worse than on an entry-level PC. Though to be fair, it is impossible to buy an entry-level PC for $120, never mind $80.

The best I can say about this tablet is that it performed better than the far more expensive Samsung 500T (more on that later). And with a Windows license embedded in hardware, I was able to erase its original Windows 8 operating system (locked with a password I no longer recall) and clean-install Windows 10. It had no problems updating itself to the current version (1909) of Windows 10. The built-in Edge browser easily rendered the ESA ISS tracker, and unlike the Kindle I could set the screen timeout to “never”.

That’s great news, but then I ran into some problems with power management components that would interfere with around-the-clock operation.

ESA ISS Tracker on Kindle Fire HD 7 (9th Gen)

I wanted to play with old PCs, and that’s why I tried ESA’s ISS Tracker on a HP Mini and a Dell Latitude X1. But if I’m being honest, the job of a dedicated display is better suited to devices like tablets. They are designed for information consumption and are not hampered by the overhead of input devices like keyboards. I was not willing to dedicate my iPad to this task: it is too useful for other things. But I do have a pile of older devices that haven’t lived up to their promise.

Top of this pile (meaning most recent) is an Amazon Kindle Fire HD 7 tablet, 9th generation (*), purchased during a holiday sale at a significant discount. If there isn’t a sale today, wait a few weeks and another will be along shortly. I ended up paying roughly 20% of what I paid for my iPad, and I had been curious how it would perform. The verdict was that it had too many annoyances to be useful, and I ended up leaving it collecting dust. At 20% of the price with 0% of the utility, it was not a win.

But maybe I could dedicate its screen to a live ISS tracker? I brought up the Silk web browser and launched the ESA site. Switching to full screen mode unveiled a problem: the Kindle never removes its device navigation buttons from the bottom of the screen. The triangle/circle/square obscures part of the ISS tracker’s display.

I wondered if this behavior applied to native Kindle apps as it did to full screen web pages, so I searched the Amazon Kindle app store for an ISS tracker and found ISSLive (*) for experimentation. The answer: yes, the navigation bar is overlaid on top of native applications just as it is on web pages.

Kindle Fire HD 7 running ISSLive

But that was only visually annoying and not an outright deal breaker. That would be the Kindle’s sleep behavior: there is no option to keep the screen display active. The user can choose from one of several time durations for the tablet to wait before it turns off the display and goes to sleep, but there is no “never go to sleep” option.

Kindle Fire HD 7 Always Sleeps

The Kindle Fire HD 7 will not be suitable as a dedicated ISS tracker screen, so I’m moving on to the next device in the unused pile for investigation: a HP Stream 7.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

ESA ISS Tracker on Dell Latitude X1

My failed effort at an ISS Tracker web kiosk reminded me of my previous failure trying to get Ubuntu Core web kiosk up and running on old hardware. That computer, a Dell Latitude X1, was also very sluggish running modern Ubuntu Mate interactively when I had tried it. I was curious how it would compare with the HP Mini.

The HP Mini has the advantage of age: it is roughly ten years old, whereas the X1 is around fifteen years old. When it comes to computers, an age difference of five years is a huge gulf spanning multiple hardware generations. However, the X1 launched as a top-of-the-line premium machine for people who were willing to pay for thin and light, so it was designed under very different criteria than the HP Mini despite the similarity in form factor.

As one example: the HP Mini housed a commodity 2.5″ laptop hard drive, but the Dell Latitude X1 used a much smaller form factor hard drive that I have not seen before or since. Given the smaller market and lower volume, I think it is fair to assume the smaller hard drive came at a significant price premium, in exchange for saving a few cubic centimeters of volume and grams of weight.

Installing Ubuntu Mate 18.04 on the X1, I confirmed it is still quite sluggish by modern standards. However, this was a comparison test, and the Dell X1 surprised me by feeling more responsive than the five-years-younger HP Mini. Given that both use spinning-platter hard drives and have 1GB of RAM, I thought the difference was probably down to their CPUs. The Latitude X1 has an ULV (ultra low voltage) Pentium M 733 processor, a premium product showcasing the most processing power Intel could deliver while sipping gently on battery power. In comparison, the HP Mini has an Atom processor, an entry-level product optimized for cost. Their spec sheet comparison shows how closely an entry-level CPU matches up to a premium CPU from five years earlier, but the Atom has only one quarter of the CPU cache, and I think that was the decisive difference.

Despite its constrained cache, the Atom has two cores and a thermal design power (TDP) of just 2.5W. In contrast, the Pentium M 733 ULV has only a single core and a TDP of 5W. Twice the cores at half the electrical power makes the younger CPU far more power efficient. And it’s not just the CPU, it’s the whole machine: whereas the HP Mini 110 only needed 7.5W to display the ESA ISS Tracker, the Latitude X1 reports drawing more than double that, a little over 17W according to upower. Its aged battery, which has degraded to 43% of its original capacity, could only support that for about 40 minutes.

Device: /org/freedesktop/UPower/devices/battery_BAT0
  native-path:          BAT0
  vendor:               Sanyo
  model:                DELL T61376
  serial:               161
  power supply:         yes
  updated:              Thu 23 Apr 2020 06:19:06 PM PDT (69 seconds ago)
  has history:          yes
  has statistics:       yes
  battery
    present:             yes
    rechargeable:        yes
    state:               discharging
    warning-level:       none
    energy:              11.4663 Wh
    energy-empty:        0 Wh
    energy-full:         11.4663 Wh
    energy-full-design:  26.64 Wh
    energy-rate:         17.2605 W
    voltage:             12.474 V
    time to empty:       39.9 minutes
    percentage:          100%
    capacity:            43.0417%
    technology:          lithium-ion
    icon-name:          'battery-full-symbolic'
  History (rate):
    1587691145	17.261	discharging

Putting a computer to work showing the ESA tracker uses only its display; it doesn’t involve the keyboard. Such information-consumption tasks are performed just as well by touchscreen devices, and I have a few to try, starting with an Amazon Kindle Fire HD 7.