Arduino Interface for Mitutoyo SPC Data Port

I started looking for an inexpensive electronic indicator with a digital output port, and ended up splurging on a genuine Mitutoyo. Sure, it is over five times the cost of the Harbor Freight alternative, but I thought it would be worth the price for two reasons. One: Mitutoyo is known for high quality precision instruments, and two: they are popular enough that the data output port should be documented somewhere online.

The second point turned out to be moot because the data output port was actually documented in a pamphlet inside the box, no need to go hunting online. But I went online anyway to get a second opinion, and found this project on Instructables. Most of the information matched up, but the wiring pinout specifically did not. Their cable schematic had a few apparent inconsistencies. (Example: one end of the cable had two ground pins and the other end did not.) They also had a “Menu” button that I lacked. These may just be the result of different products, but in any case it is information on the internet to be taken with a grain of salt. I took a meter to my own cable to verify that my instrument matched the pinout described in the pamphlet.

Their Arduino code matched the pamphlet description, so I took that code as a starting point. I then released my derivative publicly on GitHub with the following changes:

  • Calculate distance within the numeric domain instead of converting to string and back.
  • Decimal point placement with a single math expression instead of a list of six if statements.
  • Their code did not indicate whether the value was in inches or millimeters, so I added units to the output (see the sketch below).
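
Here is a minimal sketch of those changes, assuming the standard Digimatic SPC frame of thirteen nibbles where the first four are the 0xF sync pattern. The array layout follows the pamphlet as I remember it, so treat the indices as illustrative rather than a copy of my published code:

// Minimal decode sketch. Assumes nibble[0..3] sync (0xF), nibble[4] sign,
// nibble[5..10] six BCD digits (most significant first), nibble[11]
// decimal point position, nibble[12] units.
void printMeasurement(const uint8_t nibble[13]) {
  long digits = 0;
  for (int i = 5; i <= 10; i++) {
    digits = digits * 10 + nibble[i];         // stay in the numeric domain
  }
  float sign = (nibble[4] == 8) ? -1.0 : 1.0; // 8 indicates a negative value
  // Decimal point placement as one expression instead of six if statements:
  float distance = sign * digits / pow(10, nibble[11]);
  Serial.print(distance, nibble[11]);         // match the displayed precision
  Serial.println((nibble[12] == 1) ? " in" : " mm");
}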

A limitation of their code (that I did not fix) is the lack of a recovery path should the Arduino fall out of sync. The Mitutoyo protocol was designed with a recovery provision: if the communication gets out of sync, we can sync back up by watching for the opening 0xFFFF. But since there’s no code watching for that situation, if it falls out of sync our code would just be permanently confused until reset by the user.
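
Such a recovery path could look something like this sketch, built around a hypothetical readNibble() helper (not part of the actual code) that clocks in one 4-bit nibble at a time:

// Hypothetical re-sync: discard nibbles until four consecutive 0xF sync
// nibbles are seen, instead of trusting a fixed frame boundary.
void waitForSync() {
  int syncCount = 0;
  while (syncCount < 4) {
    uint8_t n = readNibble();  // assumed helper: clocks in one 4-bit nibble
    syncCount = (n == 0xF) ? syncCount + 1 : 0;
  }
  // The next nine nibbles (sign, digits, decimal position, units) follow.
}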

For debugging I added the capability to output in raw hex. I was going to remove it once I had the distance calculation and decimal point code figured out, but I left it in place behind a compile-time parameter just in case it becomes handy in the future. Sending just hexadecimal data and skipping the conversion to human-readable text allows faster loops.
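
The switch is just a preprocessor flag. A sketch of the idea, reusing printMeasurement() from the earlier sketch, with a hypothetical flag name since the one in my code may differ:

// Hypothetical compile-time switch between raw hex and formatted output.
#define OUTPUT_RAW_HEX 0

void reportFrame(const uint8_t nibble[13]) {
#if OUTPUT_RAW_HEX
  for (int i = 0; i < 13; i++) {
    Serial.print(nibble[i], HEX);  // raw nibbles: minimal work per loop
  }
  Serial.println();
#else
  printMeasurement(nibble);        // human-readable distance with units
#endif
}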

Note that this Mitutoyo Statistical Process Control (SPC) protocol has no interactive control — it just reports whatever is on the display. Switching units, switching direction, zeroing, all such functions are done through device buttons.

Once it all appeared to work on the prototyping breadboard, I again soldered up a compact version and put it inside a custom 3D printed enclosure.

Mitutoyo SPC Arduino interface, compact version

Seeed Studio Odyssey X86J4105 Has Good ROS2 Potential

If I were to experiment with upgrading my Sawppy to ROS2 right now, with what I have on hand, I would start by putting Ubuntu ARM64 on a Raspberry Pi 3 for a quick “Hello World”. However, I would also expect to quickly run into limitations of a Pi 3. If I wanted to buy myself a little more headroom, what would I do?

The Pi 4 is an obvious step up from the 3, but if I’m going to spend money, the Seeed Studio Odyssey X86J4105 is a very promising candidate. Unlike the Pi, it has an Intel Celeron processor on board, so I can build x86_64 binaries on my desktop machine and copy them straight over. I hope ROS2 cross compilation to ARM eventually becomes just as painless, but we’re not there yet.

This board is larger than a Raspberry Pi, but still well within Sawppy’s carrying capacity. It’s also very interesting that they copied the GPIO pin layout from the Raspberry Pi; the idea that some HATs can just plug right in is very enticing, although that’s not a capability that would be immediately useful for Sawppy specifically.

The onboard Arduino co-processor is only useful for this application if it can fit within a ROS2 ecosystem, and the good news is that it is based on the SAMD21, which makes it powerful enough to run micro-ROS, an option not available to the old school ATmega32U4 on the LattePanda boards.

And finally, the electrical power supply requirements are very robot friendly. The spec sheet lists the DC input voltage requirement as 12V-19V, implying we can put 4S LiPo power straight into the barrel jack and let the onboard voltage regulators do the rest. (A 4S pack ranges from roughly 12.8V discharged to 16.8V fully charged, comfortably within that window.)

The combination of computing power, I/O, and power flexibility makes this board even more interesting than an Up Board. Definitely something to keep in mind for Sawppy contemplation and maybe I’ll click “Add to Cart” on this nifty little board (*) sometime in the near future.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Why I Still Like The 8-Bit Chips

I was sad to see that ROS2 seems to be leaving rosserial behind, which also abandons the simple 8-bit microcontrollers it supported. They may not measure up on the megahertz or megabyte numbers, but they still have their advantages for quick and simple prototyping.

My favorite part is the fact they tend to require minimal support components, or occasionally none at all. This feature allows me to do simple “deadbug” projects where I solder components directly to the pins of the chip, skipping the circuit board. This advantage is directly linked to their disadvantages. Modern processors usually require an external clock signal source, because it’s hard to build an accurate fast clock on board the chip; in contrast, the older and slower chips can make do with onboard oscillators because they’re not as dependent on an accurate clock. Similarly, many modern chips require external voltage regulation because they need to run within a narrow voltage range, while the older chips are much more relaxed about it. The PIC16F18345 I occasionally play with can be hooked up directly to a couple of AA batteries (or a single lithium ion cell) and as the battery voltage drops, the chip doesn’t seem to care.

The older, simpler chips also tend to be more robust. In addition to tolerating a wider range of input voltage, they also tolerate a larger output current. Some modern controllers’ output pins can’t even sustain 20 milliamps, the standard amount to illuminate an LED. It felt very weird to wire up a transistor just to drive an LED. In contrast, the PIC16F18345 is specified to support 50 milliamps.

And really, they’re just good simple tools for good simple projects. I was happy to dust off an old PIC program for the hacked-up VFD driver project. When I only need to flip some pins around, I really don’t need a Swiss army knife of fancy peripherals.

Notes on ROS2 and rosserial

I’m happy to see ROS2 improve over the past several releases, each one more mature and suitable for adoption than the last, tackling long standing problems like cross compilation as well as new frontiers. I know a lot of my current reservations about ROS2 are on the to-do list, but there’s a fairly significant item that appears to be deliberately absent: rosserial.

In ROS, the rosserial module is the default way for something simple to communicate with the rest of a ROS robot. It’s been a popular way for robot builders to add small dedicated modules that serve their little niche simply and effectively with only an Arduino or similar 8-bit microcontroller. By following its conventions for translating ROS messages into simple serial byte sequences, robot builders don’t have to constantly reinvent this wheel. However, it is only really applicable when we are in control of both the computer and the Arduino ends of the communication. When one side is outside of our control, as is the case for the LX-16A servos used on Sawppy, we can’t use the rosserial protocol and a custom node has to be created.

But while I couldn’t use rosserial for communication with the servos on my own Sawppy, I’ve seen it deployed for other Sawppy builds in different contexts. Rhys Mainwaring’s Curio rover ROS stack uses rosserial to communicate with its Arduino Mega, and Steve [jetdillo] has just installed a battery voltage monitor that reports via rosserial.
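
A publisher like that battery voltage monitor follows a simple pattern. Here is a minimal sketch using the rosserial_arduino library; the topic name and voltage divider scaling are placeholders I made up for illustration:

// Minimal rosserial publisher: the library handles turning the ROS message
// into serial byte sequences so we don't reinvent that wheel.
#include <ros.h>
#include <std_msgs/Float32.h>

ros::NodeHandle nh;
std_msgs::Float32 voltage_msg;
ros::Publisher voltage_pub("battery_voltage", &voltage_msg);

void setup() {
  nh.initNode();              // begin rosserial communication over Serial
  nh.advertise(voltage_pub);
}

void loop() {
  // Hypothetical 4:1 voltage divider feeding analog pin A0.
  voltage_msg.data = analogRead(A0) * (5.0 / 1023.0) * 4.0;
  voltage_pub.publish(&voltage_msg);
  nh.spinOnce();              // let rosserial process its queues
  delay(1000);
}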

With its proven value to budget robotics, I’m sad to see it’s not currently in the cards for ROS2. Officially, the replacement for rosserial is micro-ROS, built on the Micro XRCE-DDS Client. DDS is the standardized communication protocol used by ROS2, and XRCE stands for “eXtremely Resource Constrained Environment.” It’s an admirable goal to keep the protocol running with low resource consumption, but “low” is relative. Micro XRCE-DDS proudly lists its resource requirements thus:

From the point of view of memory footprint, the latest version of this library has a memory consumption of less than 75 KB of Flash memory and 2.5 KB of RAM for a complete publisher and subscriber application.

If we look at the ATmega328P chip at the heart of a basic Arduino, we see it has 32KB of Flash and 2KB of RAM, and that’s just not going to work. A straightforward port of rosserial was aborted due to intrinsic ties to ROS, but that GitHub issue still sees traffic because people want the thing that does not exist.

I found a ros2arduino library built on Micro XRCE DDS, and was eager to see how it managed to fit on a simple ATmega328. Disappointingly, it doesn’t. The “Arduino” in the name referred to newer high end boards like the Arduino MKR ZERO, leaving the humble ATmega328 Arduino boards out in the cold.

As far as I can tell, this is by design. Much as ROS2 has focused on 64-bit computing platforms over 32-bit CPUs, its “low end microcontroller support” is graduating from old school 8-bit chips to newer designs like the ESP32 and various ARM Cortex flavors such as the STM32 family. Given how those microcontrollers have fallen to single digit dollar costs, it’s hard to argue for the economic advantage of old 8-bit chips. (The processing power per dollar argument was lost long ago.) So even though the old 8-bit chips still hold some advantages, I can see the reasoning, and have resigned myself to accepting it as the way of the future.
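
For reference, on one of those newer microcontrollers, a micro-ROS publisher looks roughly like the following. This is adapted from my memory of the micro-ROS Arduino examples, so treat the setup calls and names as approximate rather than authoritative:

// Approximate micro-ROS publisher skeleton; error handling omitted, and
// exact initialization varies by library version and transport.
#include <micro_ros_arduino.h>
#include <rcl/rcl.h>
#include <rclc/rclc.h>
#include <std_msgs/msg/int32.h>

rclc_support_t support;
rcl_node_t node;
rcl_publisher_t publisher;
std_msgs__msg__Int32 msg;

void setup() {
  set_microros_transports();  // serial link to the micro-ROS agent
  rcl_allocator_t allocator = rcl_get_default_allocator();
  rclc_support_init(&support, 0, NULL, &allocator);
  rclc_node_init_default(&node, "micro_node", "", &support);
  rclc_publisher_init_default(&publisher, &node,
      ROSIDL_GET_MSG_TYPE_SUPPORT(std_msgs, msg, Int32), "counter");
  msg.data = 0;
}

void loop() {
  rcl_publish(&publisher, &msg, NULL);  // fire and forget, ignoring errors
  msg.data++;
  delay(1000);
}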

Update on ARM64: ROS2 on Pi

When I last looked at running ROS on a Raspberry Pi robot brain, I noticed Ubuntu now releases images for Raspberry Pi in both 32-bit and 64-bit flavors but I didn’t know of any compelling reason to move to 64-bit. The situation has now changed, especially if considering a move to the future of ROS2.

The update came courtesy of an announcement on ROS Discourse notifying the community that supporting 32-bit ARM builds has become troublesome, and furthermore, telemetry indicated that very few ROS2 robot builders were using 32-bit anyway. Thus support for that platform was demoted to tier 3 for the current release, Foxy Fitzroy.

This was made official in REP 2000, ROS 2 Releases and Target Platforms, which shows arm32 as a Tier 3 platform. As per that document, tier 3 means:

Tier 3 platforms are those for which community reports indicate that the release is functional. The development team does not run the unit test suite or perform any other tests on platforms in Tier 3. Installation instructions should be available and up-to-date in order for a platform to be listed in this category. Community members may provide assistance with these platforms.

Looking at the history of ROS 2 releases, we can see 64-bit has always been the focus. The first release, Ardent Apalone (December 2017), only supported amd64 and arm64. Support for arm32 was only added a year ago for Dashing Diademata (May 2019), and only at tier 2. It was kept at tier 2 for another release, Eloquent Elusor (November 2019), but now it is getting dropped to tier 3.

Another contributing factor is the release of the Raspberry Pi 4 with 8GB of memory, which exceeded the 4GB limit of 32-bit addressing. This was accompanied by an update to the official Raspberry Pi operating system, renamed from Raspbian to Raspberry Pi OS. It is still 32-bit, but with mechanisms to allow addressing 8GB of RAM across the operating system, even though individual processes are limited to 3GB. The real way forward is to move to a 64-bit operating system, and there’s a beta 64-bit build of Raspberry Pi OS.

Or we can go straight to Ubuntu’s release of 64-bit operating system for Raspberry Pi.

And the final note on ROS2: a bunch of new tutorials have been posted! The barrier for transitioning to ROS2 is continually getting dismantled, one brick at a time. And it’s even getting some new attention in long problematic areas like cross-compilation.

Xbox One Is Part Of Universal Windows Platform

Independent of my interest in learning a new pattern for asynchronous programming, I started reading about Microsoft’s Universal Windows Platform (UWP) because I wanted to see their latest take on dynamic layout concepts. This is something that web designers are familiar with, since their users might be using anything from a small touchscreen phone to a tablet to a laptop to a desktop computer. But I found that UWP had ambitions to scale across an even wider spectrum. On the low end, they wanted to cover IoT devices with small (or even no) screens. On the high end, the ambition surpasses desktop computers with the big screen TV in a living room connected to an Xbox One.

I’m intrigued by the possibility of writing code to run on my Xbox One game console. At the moment I have no idea what I would do with that potential, but just the possibility adds to my motivation to continue exploring UWP. The part that I haven’t figured out is how much of the UWP documentation I’m reading is real, what is still coming, and what might be aspirations fallen by the wayside. The now-defunct Windows Phone was supposed to be part of this spectrum, placing “UWP on phones” in the “fallen by the wayside” category.

There’s more hand-waving than I would like in the design guidelines for scaling UI all the way up to TV sizes. A website spanning the range of phones to computers has a hard enough time reconciling user input via touchscreens versus keyboard and mouse. The UWP range of IoT devices (with possibly no UI) up to an Xbox running UWP on the living room TV (with an Xbox game controller) is a very wide range to cover, and I didn’t find as much guidance as I had hoped.

Still, it’s a possibility. The most likely way for me to put my UWP education to use is to create some prototypes of interfacing with electronics hardware. I started looking at platforms like UWP from the perspective of machine control. Does that path make sense for an Xbox? Other than the Nintendo R.O.B., there haven’t been many console-controlled peripherals. Maybe I can have Sawppy join me in a game?

I looked over the UWP Limitations on Xbox page, and I didn’t see USB and serial APIs explicitly listed in the “Doesn’t Work” list. They’re not on the explicit “Does Work” list, either, so investigating this gray limbo zone might be a fun future exercise. For now, though, it’s time to start running hardware I know works.

Samsung 500T Disappointments

I pulled out my old Samsung 500T to see if it could run ESA’s ISS tracker. It could, and quite well, so I think that role might be the best use of this machine, because it has proven to be a huge disappointment in so many other ways.

I knew of the machine when it launched alongside Windows 8. This was when Microsoft also launched the original ARM-powered Surface tablet to demonstrate what was possible with ARM chips: look at how thin and light they are! Samsung & friends launched the 500T as a counterpoint: just as thin and light as the Windows RT tablets, but with an Intel CPU for full x86 compatibility. Judging by the spec sheet, it was a spectacular device to undercut and humiliate the “Windows on ARM made thin and light possible” story.

But that’s only on the spec sheet and not on the price tag. The 500T was expensive, and Surface tablets sold for far less. Because of that, I didn’t get my 500T until much later, when it showed up as a secondhand refurbished unit on Woot. When it arrived, I was sure there was something wrong with the machine. Maybe it somehow slipped past testing in refurbishment? It was completely unusable, taking several minutes to boot, and user commands took many seconds (5-10) to respond or were ignored entirely. I went online and found a firmware update, which took all night to apply and upgraded performance from “disastrous” to “merely horrible”.

The screen was another cool feature that didn’t pan out. Not just a touchscreen for fingers, it was also a pen digitizer compatible with the passive Wacom stylus used by the much thicker Surface Pro tablets, and the 500T even had a tiny stylus holder built in. It held the promise of a digital sketchpad with pressure sensitivity, making it superior to a contemporary iPad. But slow system response killed that dream. Who wants to sketch on a pad when strokes don’t show up until a few seconds after we draw them?

Judging by Windows Task Manager, this device’s major implementation flaw was its eMMC flash storage, which constantly showed 100% activity. The Atom CPU was not exactly a stellar performer, but it wasn’t the reason for the delays: the 500T was constantly waiting to read from or write to storage, ruining the user experience across the board.

Not to let Intel entirely off the hook, though, as its Atom Z2760 CPU turned out to be a long term liability as well. This CPU was part of Intel’s Clover Trail family, which had problems running Windows 10 features newly introduced in 2016. Intel had discontinued that line and declined to do anything about it, so Microsoft blocked Clover Trail devices from advancing beyond Windows 10 build 1607. They will still receive security fixes until January 2023, but features are stuck at July 2016 levels forever.

All of the above are things I might be able to overlook as the unfortunate results of factors outside Samsung’s control. The eMMC storage might have performed well when new and degraded with time, as solid state storage sometimes does. (The TRIM command could have helped, but only if the software made use of it.) And Samsung had no way of knowing Intel would just abandon Clover Trail.

But let’s talk about what Samsung chose to install on the machine. As is typical of machines of that age, there was the usual useless bucket of trials and discount offers. There were also Samsung features that duplicated existing Windows functionality, others that were thinly veiled advertisements for Samsung products, and more. The worst part? I could not get rid of them. I thought they would be gone once I wiped and installed Windows 10, but they were bundled with critical device drivers, so I had no choice but to reinstall them as well. Holding device drivers hostage to force users to accept unrelated software is consistent with the anti-user behavior I saw from Samsung across the board.

The image at the top of this post is just one example. SWMAgent.exe appears to be some sort of Samsung software update engine (what’s wrong with Windows Update?) and it asks for elevation. If the user declines to grant elevated privileges, the software is supposed to respect that choice and go away. But not Samsung! We see a black border around that dialog box, which might look strange at first glance. Windows 10 adds a subtle dark shadow to dialog boxes, so why this ugly black thing? It is because we’re not looking at a single SWMAgent.exe dialog box, but a huge stack of them, each popping up on top of the last, with another one added every minute or so. The thick black border is that subtle dark shadow stacked deep and combined, because Samsung would not take no for an answer.

I don’t need that in my life. The upside of this machine being disappointing was that I had no motivation to put up with it. Into the unused bin it went, and I haven’t bought a Samsung computer since.

HP Stream 7 Power Problems

I wanted to see if I could employ my unused HP Stream 7 as an International Space Station tracker at home, displaying ESA’s HTML application. The software side looked promising, but I ran into problems on the hardware side. Specifically, power management on this little tablet currently seems to be broken.

The first hint something was awry was the estimate of remaining battery runtime. It is unrealistically optimistic, as shown in the screen image above: 46% battery may run this little tablet for several hours, but there’s no way it would last 4 days 4 hours. At first I didn’t think it was a big deal. Battery-powered devices that I’ve dusted off frequently give wildly inaccurate initial battery readings, and it is common for a power management module to require a few charge-discharge cycles to re-calibrate.

In the case of my tablet, a few battery cycles did not help. Battery estimates remained wildly inaccurate after multiple cycles. But I was willing to ignore that estimate, since battery life is not a concern in a project that intends to run tethered to power around the clock. The bigger problem was the tablet’s behavior when plugged in.

HP Stream 7 plugged in not charging

Once power is plugged in, the battery life estimate disappears and “(plugged in)” is added to the description. This is fine, but I had expected to see something about charging the battery, and there was nothing. Not “charging”, not “2 hours until full”, not even the occasionally infuriating “not charging”. There is a complete lack of information about charging in any form.

Still I wasn’t worried: if the tablet wants to run off plug-in power and not charge the battery, that’s fine by me. In fact I am happy to leave the battery at around 50% charge, as that is the healthiest level for long term storage of a lithium chemistry battery. But that’s not the case, either: the tablet runs mostly on plug-in power, but still slowly drains the battery until it is near empty, at which time the tablet powers down.

Only after shutting down did this tablet begin to charge its battery. Now I am worried. If this tablet can’t run on plug-in power alone, and its battery can’t be charged while it is turned on, that combination would make it impossible to build an around-the-clock ISS tracker display.

What I wanted to do next was poke around with the hardware of this tablet and see if I could run it without the battery. Fortunately, unlike most modern compact electronics, the HP Stream 7 can be opened up for a look.

ESA ISS Tracker on HP Stream 7

After I found that the Amazon Fire HD 7 tablet was unsuitable as an always-on screen to display ESA’s HTML live tracker for the International Space Station, I moved on to the next piece of hardware in my inactive pile: an HP Stream 7. This tablet was part of an effort by Microsoft to prove that they would not cede the entry-level tablet market to Android. In hindsight we now know that effort did not pan out.

But at the time, it was an intriguing product as it ran Windows 10 on an Intel Atom processor. This overcame the lack of x86 application compatibility of the previous entry level Windows tablet, which ran Windows RT on an ARM processor. It was difficult to see how an expensive device with a from-scratch application ecosystem could compete with Android tablets, and indeed Windows RT was eventually withdrawn.

Back to this x86-based tablet: small and compact, with the 7″ diagonal screen that gave it its name, it launched at $120, which was unheard of for Windows machines. Discounts down to $80 (when I bought it) made it cheaper than a standalone license of Windows software. Buying it meant I got a Windows license and basic hardware to run it.

But while nobody expected it to be a speed demon, its performance was nevertheless disappointing. At best, it was merely on par with similarly priced Android tablets. Sure, we could run standard x86 Windows applications… but would we want to? Trying to run Windows apps not designed with a tablet in mind was a pretty miserable experience, worse than on an entry level PC. Though to be fair, it is impossible to buy an entry level PC for $120, never mind $80.

The best I can say about this tablet is that it performed better than the far more expensive Samsung 500T (more on that later). And with a Windows license embedded in hardware, I was able to erase its original Windows 8 operating system (locked with a password I no longer recall) and clean install Windows 10. It had no problems updating itself to the current version (1909) of Windows 10. The built-in Edge browser easily rendered the ESA ISS tracker, and unlike the Kindle, I could set the screen timeout to “never”.

That’s great news, but then I ran into some problems with power management components that would interfere with around-the-clock operation.

Key Press Timeline For Entering and Exiting Developer Mode on Toshiba Chromebook 2 (CB35-B3340)

When I got my hands on this Toshiba Chromebook 2 (CB35-B3340), its primary screen was cracked and unreadable. I was happy when I discovered I could use it as a Chromebox with an external HDMI monitor, but I quickly found the limitation of that approach when I tried to switch the device into developer mode: the critical screens were only visible on the display I couldn’t read.

Google has published instructions for putting a Chromebook into developer mode, but they weren’t specific enough for use with an unreadable screen. They don’t say how many menus are involved, or how much time to expect between events, probably because these details vary from device to device and are not suitable for a general document.

I looked around online for information from other sources and didn’t find enough to help me navigate the procedure. Most irritating are the sites that say things like “and from this point just follow the on screen prompts” when the whole point was that I couldn’t read those prompts at the time.

Now that I’ve installed a replacement screen, I can see what happens, and more importantly, I can now document the information I wish I had been able to find several weeks ago. The most surprising part to me was that it took roughly seven minutes to transition into developer mode, but less than a minute to transition out of it. Since all user data are erased in both of these transitions, I’m curious why the times are so different.

In any case, I’ve captured the process on video (embedded below) and here is the timeline for putting a Toshiba Chromebook 2 into developer mode even if the screen is not readable. Times are in (minutes:seconds).

  • Preparation: power down the device.
  • (0:00) While holding down [ESC] + [REFRESH], press [POWER].
  • (0:15) Press [CONTROL] + [D] to enter developer mode.
  • (0:25) Press [ENTER] to confirm.
  • (0:36) Press [CONTROL] + [D] to confirm.
  • (6:52) Two beeps warning user the Chromebook is in developer mode.
  • (7:25) Chrome OS booted up into developer mode and ready to be mirrored to external monitor.

Subsequent power-ups while in developer mode:

  • (0:00) Press [POWER] as normal (no other keys needed).
  • (0:10) Press [CONTROL] + [D] to bypass 30-second developer mode warning screen, also skipping the two beeps.
  • (0:20) Chrome OS booted up into developer mode and ready to be mirrored to external monitor.

Exiting Developer Mode:

  • Preparation: power down the device.
  • (0:00) Press [POWER] as normal.
  • (0:08) Press [SPACE] to re-enable OS verification.
  • (0:15) Press [ENTER] to confirm.
  • (0:48) Chrome OS booted up into normal mode and ready to be mirrored to external monitor.

Life with a Chromebook

Ever since I started looking at this Toshiba Chromebook 2 (CB35-B3340), I had been focused on how to break out of the constraints imposed by Chrome OS. Although I occasionally acknowledged the benefits of a system architecture focused on the most popular subset of all computing activities, I still wanted to know how to get out of it.

Now that my research has gotten far enough to learn of a plausible path to removing all constraints and turning this Chromebook into a normal Ubuntu laptop, I’m satisfied with my available options and have put all that aside for the moment. I took the machine out of Chrome OS Developer Mode so I could experience using a Chromebook for its original intended purpose and see how well it fills its designated niche.

Reviewing all the security protections of Chrome OS during the course of my adventure made me more willing to trust one. If I didn’t have my own computer on hand and needed to use someone else’s, I’m far more likely to log into a Chromebook than an arbitrary PC. Chrome OS receives frequent updates to keep it secure, raising the value of continued support. Since my original session fast-forwarding through four years of updates, I’ve received several more just in the few weeks I’ve been playing with this machine.

On the hardware side, this lightweight task-focused machine has lived up to the expectation that it would be more pleasant to use than a bulky jack-of-all-trades convertible tablet of similar vintage. Its secondhand replacement screen’s visual blemishes have not been bothersome, and the keyboard and trackpad have been responsive to user input. Its processing hardware is adequate, but not great. During light duty browsing, the estimated battery life stretches past eight hours. But if a site is loaded down with ads and tracking scripts, responsiveness goes down and the battery life estimate quickly nosedives under 4 hours.

The biggest disappointment in processing power comes from the display department. This machine was the upgraded model with a higher resolution 1920×1080 panel instead of the standard 1366×768. The higher resolution made for more pleasantly readable text, but constantly updating those pixels was too much for the machine to handle. It couldn’t sustain web video playback at 1080p. Short clips of a few seconds are fine, but settling down to watch something longer meant frustrating stutters unless the video resolution was lowered to 720p.

And finally, there’s the fact that when the network connection goes down, a Chromebook becomes largely useless. This is unfortunately more common than I would like at the moment as I’m having trouble with my new router. In theory Google offers ways for web sites to remain useful on a Chromebook in the absence of connectivity (like Progressive Web Apps) but adoption of such techniques has not yet spread to the sites I frequent. So when the router crashes and reboots itself, the Chromebook becomes just a picture frame for showing the “No internet” error screen. This is not a surprise since network connectivity is a fundamental pillar of Chrome OS, but it is still annoying.

All things considered, a Chromebook is a nice lightweight web appliance that makes sense for the right scenarios. Its focused scope and multilayered security provisions mean I would heartily recommend one for the technically disinclined. If they learn enough technology to find Chrome OS limiting, there’s always Mr. Chromebox. If I should buy one for my own use, I would want a high resolution panel, and now I also know to get a more powerful processor to go with it.

Chrome OS Alternatives On Toshiba Chromebook 2 (CB35-B3340)

If I can’t solve the challenges of getting ROS up and running with Ubuntu 18 under Crouton in Chrome OS, there is yet another option: erase Chrome OS completely and install Ubuntu in its place. I understand this would remove the developer mode warning and menu, and software startup could go straight into ROS via an Ubuntu service just like on any other Ubuntu machine.

The internet authority for this class of modification is Mr. Chromebox. I don’t know who this person is, but all my web searches on this topic inevitably point back to some resource on https://mrchromebox.tech, starting with the list of alternate operating system options for a Chromebook. Ubuntu is not the only option, but for the purposes of a robot brain, I’m most interested in the full UEFI ROM replacement, which would allow me to install Ubuntu like on any other UEFI computer.

In order to install Mr. Chromebox’s ROM replacement, the hardware must be on the list of supported devices. Fortunately the Toshiba Chromebook 2 (CB35-B3340) is represented on the list under its Google platform name “Swanky”, so it should be eligible for the firmware utility script that makes the magic happen.

Before running the script, though, there are some hardware modifications to be made. Firmware replacement can undercut security promises of a Chromebook, even more than developer mode, so there are protections that require deliberate actions by a technically capable user before the firmware can be replaced. For “Swanky” Chromebooks, this hardware write-protect switch is in the form of a screw inside the case that makes an electrical connection across two contacts on the circuit board. Before the firmware can be replaced, that screw must be removed and the two pads insulated so there is no longer electrical contact.

Having a hardware component to the protection makes it very difficult for a Chromebook to be compromised by software bugs. Yet the screw-and-PCB design is a deliberate provision allowing modification with just simple hand tools. Such provisions to bypass hardware security are not found in much other security-minded consumer hardware, for example gaming consoles. I appreciate Google’s effort to protect the user while still offering an option to bypass that protection if the user so chooses.

For the moment I am not planning to take this option, but it is there if I need it. Instead, I took this opportunity to get some firsthand experience living with a Chromebook used as originally intended, outside developer mode.

First Few Issues of ROS on Ubuntu on Crouton on Chrome OS

Some minor wrong turns aside, I think I’ve successfully installed ROS Melodic on Ubuntu 18 running within a Crouton chroot on a Toshiba Chromebook 2 (CB35-B3340). The first test is to launch roscore, verify it is up and running without errors, then run rostopic list to verify the default set of topics is listed.

With that done, the next challenge is to see if ROS works across machines. First I tried running roscore on another machine, and set ROS_MASTER_URI to point to that remote machine. With this configuration, rostopic list showed the expected list of topics.

Then I tried the reverse: I started roscore on the Chromebook and pointed another machine’s ROS_MASTER_URI at the IP address my WiFi router assigned to the Chromebook. In this case rostopic list failed to communicate with the master. There’s probably some sort of network translation or tunneling between Chrome OS and an installation of Ubuntu running inside a Crouton chroot, and that’s something I’ll need to dig into and figure out. Or it might be a firewall issue similar to what I encountered when running ROS under Windows Subsystem for Linux.
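
For the next round of debugging, a minimal roscpp talker like this sketch would let me publish from inside the chroot and watch from the other machine, taking the rostopic tool itself out of the equation. The node and topic names are placeholders:

// Heartbeat publisher: run roscore somewhere, point ROS_MASTER_URI at it,
// then watch for these messages from the other machine.
#include <ros/ros.h>
#include <std_msgs/String.h>

int main(int argc, char** argv) {
  ros::init(argc, argv, "chroot_talker");
  ros::NodeHandle nh;
  ros::Publisher pub = nh.advertise<std_msgs::String>("chatter", 10);
  ros::Rate rate(1);  // one message per second
  while (ros::ok()) {
    std_msgs::String msg;
    msg.data = "hello from the Crouton chroot";
    pub.publish(msg);
    ros::spinOnce();
    rate.sleep();
  }
  return 0;
}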

In addition to the networking issue, if I want to embed this Chromebook into a robot as its brain, I’ll also need to figure out the power-up procedure.

First: upon power-up, a Chromebook in developer mode puts up a dialog box notifying the user as such, letting normal users know a Chromebook in developer mode is not trustworthy for their personal data. This screen is held for about 30 seconds with an audible beep, unless the user presses a key combination prescribed onscreen. How might this work when embedded in a robot?

Second: when Chrome OS boots up, how do I also launch Ubuntu 18 inside the Crouton chroot? The good news is that this procedure is covered in the Crouton wiki; the bad news is that it is pretty complex and involves removing a few more Chromebook security provisions.

Third: once Ubuntu 18 is up and running inside the Crouton chroot, how do I launch ROS automatically? My current favorite “run on bootup” procedure for Linux is to create a service, but systemctl does not run inside a chroot, so I’ll need something else.

And that’s only what I can foresee right now; I’m sure there are others I haven’t even thought about yet. There will be several more challenges to overcome before a Chrome OS machine can be a robot brain. Perhaps instead of wrestling with Chrome OS, I should consider bypassing it entirely?

Developer Mode and Crouton on Toshiba Chromebook 2 (CB35-B3340)

Having replaced a cracked and illegible screen with a lightly blemished but perfectly usable module, I can finally switch this Toshiba Chromebook 2 (CB35-B3340) into developer mode. It’s not a complicated procedure, but the critical menus are displayed only on the main display and not on an external monitor. With the earlier illegible screen, there was no way to tell when I needed to push the right keys. I might have been able to do it blind if I had a timeline reference… which is a potential project for another day.

Today’s project was to get Crouton up and running on this Chromebook. Following instructions for the most mainstream path, I went through a bunch of procedures where I only had a vague idea of what was happening. Generally speaking it’s not a great idea to blindly run scripts downloaded from the internet, but Crouton is fairly well known and I had no personal data on this Chromebook, something enforced by Chrome OS.

Until I put this Chromebook into developer mode myself, I hadn’t known that user data is erased whenever a Chrome OS device transitions into or out of developer mode. This means whatever data is saved on a Chrome OS device can’t be snooped upon in developer mode. Also, any tools or utilities that might have been installed to view system internals in developer mode are erased and no longer usable once the machine is back in normal mode. This policy increased my confidence in the privacy and security of Chrome OS. I’m sure it’s not perfect, as all software has bugs, but it told me they had put thought into the problem.

What it meant for me today was that everything I had put on the Chromebook was wiped before I could start playing with Crouton, whose default instructions quickly got me up and running on Ubuntu 16 (Xenial) with the xfce desktop. Running two full user-mode GUIs on top of a single kernel noticeably stresses this basic machine, with user response becoming a little sluggish. Other than that, it felt much like any other Ubuntu installation, except it’s all running simultaneously with full Chrome OS on the exact same machine.

Raw performance concerns aside, it seemed to work well. And the wonder of chroot means it’s pretty easy to erase and restart with a different configuration, which is what I’ll tackle next, because ROS Melodic is intended for Ubuntu 18 (Bionic).

Secondhand Replacement Screen for Toshiba Chromebook 2 (CB35-B3340)

Once I discovered the support window for a Toshiba Chromebook 2 (CB35-B3340) extended longer than I had originally anticipated, I was more willing to spend money to bring it back to working condition. While shopping for a replacement screen earlier, I saw several offers for new units and a few scattered offers for secondhand units. I presume these were salvaged from retired machines and resold, which is fine by me as they come at a significant discount: $47 with taxes and shipping (*), as compared to $75 (before taxes and shipping) for a new unit.

That ~40% discount also came with a caveat: I clicked “Buy” on a unit rated “Grade B: Fully functional but with visible blemishes.” It was a bit of a gamble, but my primary requirement was only to see enough to enter developer mode, so I decided I would tolerate visual blemishes to save a few extra dollars. There was also a bit of a gamble in shipping: from my disassembly efforts I knew this panel is very thin and fragile. This time around, I did not mind the extensive packaging of Amazon orders.

I saw no physical blemishes on the panel during installation. Once it was installed, I was happy to see Chrome OS boot up and run. I had to work hard to see the visual blemishes that earned this panel its Grade B rating: only after setting the screen to full black, and artificially increasing contrast in a photo editor, could I see the magenta smudges. There are two light horizontal smudges, and two dots, one of which looks a bit smeared.

Toshiba Chromebook 2 CB35-B3340 used replacement screen defects

I’m not familiar with the failure modes of LCD modules, so I have no idea what’s going on here. Perhaps these were manufacturing defects? In any case, these flaws are only visible if I strain to look for them, and there is no physical damage to the screen, so I’m satisfied with my purchase.

Toshiba Chromebook 2 CB35-B3340 recovery screen now readable

The visual blemishes are not at all bothersome in normal usage. This level of performance is more than good enough for use as a normal Chromebook, if I wanted to use it as such. But the reason I got the screen was to access the Chrome OS recovery menu to enter developer mode, so I will try that first.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.

Old Chromebook Lifespan Longer Than Originally Thought

A cracked screen seemed to be the only problem with this Toshiba Chromebook 2 (CB35-B3340). I found no other hardware or software issues with this machine and it seemed to be up and running well with an external monitor. The obvious solution was to buy a replacement screen module, but I was uncertain if that cost would be worthwhile. I based my opinion on Google’s promise to support Chromebook hardware for five years, and it’s been five years since this model was introduced. I didn’t want to spend money on hardware that would be immediately obsolete.

I’ve since come across new information while exploring the device. This was the first Chrome OS device I was able to spend a significant time with, and I was curious about all the capabilities and limitations of this constrained-by-design operating system. While poking around in the Settings menu, under “About Chrome OS” I found the key quote:

This device will get automatic software and security updates until September 2021.

I don’t know how this September 2021 date was decided, but it is roughly seven years after the device was introduced. At a guess, I would say Google estimated a two year shelf life for this particular Chromebook hardware to be sold, and the promised five year support clock didn’t start until the end of that sales window. This would mean someone who bought this Chromebook just as it was discontinued would still get five years of support. If true, it is more generous than the typical hardware support policy.

Whatever the reason, this support schedule changes the equation. If I bought a replacement screen module, this machine could return to full functionality and support for a year and a half. It could just be a normal Chromebook, or it could be a Chromebook running in developer mode to open up a gateway to more fun. With this increased motivation, I resumed my earlier shopping for a replacement and this time bought a salvaged screen to install.

Inviting My FreeNAS Box To The Folding Party

Once my Luggable PC Mark I was up and running, there was one more functional desktop-class CPU in my household that had not yet been drafted into my Folding@Home efforts: it was recently put in charge of running FreeNAS. As a network attached storage device, FreeNAS is focused on its main job of maintaining files and serving them on demand. There are FreeNAS plug-ins to add certain features, such as a home Plex server, but there’s no provision for running arbitrary programs on this FreeBSD-based task-specific appliance.

What FreeNAS does have is the ability to act as a host for separate virtual environments that run independently of core FreeNAS capability. This extension capability is part of why I upgraded my FreeNAS box to more capable hardware. The lighter-weight mechanism is a “jail”, similar in concept to the Linux container (on which Docker was built) but for applications that can run under the FreeBSD operating system. However, Folding@Home has no native FreeBSD client, so we can’t run it in a jail and have to fall back to plan B: a full virtual machine under bhyve. This incurs more overhead, as a virtual machine needs its own operating system instead of sharing the underlying FreeBSD infrastructure, consuming hard disk storage and locking away a portion of RAM from FreeNAS.

But the overhead wasn’t too bad in this particular application. I installed the lightweight Ubuntu 18 server edition in my VM, and Folding@Home protein folding simulation is not a memory-intensive task. The VM consumed less than 10GB of hard drive space and only 512MB of memory. In the interest of always reserving some processing power for FreeNAS, I allocated only 2 virtual CPUs to the folding VM. The Intel Core i3-4150 processor has four logical CPUs, which are actually 2 physical cores with hyperthreading. Giving the folding VM 2 virtual CPUs should allow it to run at full speed on the two physical cores and still leave some margin to keep FreeNAS responsive.

Once the VM was up and running, the FreeNAS CPU usage report showed occasional workloads pushing it above 50% (2 out of 4 logical CPUs) load. CPU temperature also jumped well above ambient, to 60 degrees C. Since this Core i3 is far less powerful than the Core i5 in Luggable PC Mark I and II, it doesn’t generate as much heat to dissipate. I can hear the fan speed up to keep the temperature at 60 degrees, but the noise is minor relative to the other two machines.

Old AMD GPU for Folding@Home: Ubuntu Struggles, Windows Win

The ex-Luggable Mark II is up and running Folding@Home, chewing through work units quickly, thanks mostly to its RTX 2070 GPU. An old Windows 8 convertible tablet/laptop is also folding as fast as it can, though its best speed is far slower than the ex-Luggable’s. The next recruit for my folding army is Luggable PC Mark I, pulled out of the closet where it had been gathering dust.

My old AMD Radeon HD 7950 GPU was installed in Luggable PC Mark I. It is quite old now, and AMD stopped releasing Ubuntu drivers for it after Ubuntu 14. Given its age, I wasn’t sure it would even work for GPU folding workloads. It was designed and released near the dawn of the age when GPUs started finding work beyond rendering game screens, and its GCN1 architecture probably had problems typical of the first version of any technology.

Fortunately I also have an AMD Radeon R9 380 available. It was formerly in Luggable PC Mark II but during the luggable chassis decommissioning I retired it in favor of a NVIDIA RTX 2070. The R9 380 is a few years younger than the HD 7950, I know it supports OpenCL, and AMD has drivers for Ubuntu 18.

A few minutes of wrenching removed the HD 7950 from Luggable Mark I and put the R9 380 in its place, and I started working out how to install those AMD Ubuntu drivers. According to this page, the “All-Open stack” is recommended for consumer products, which I took to include my consumer-level R9 380 card. So the first pass started by running amdgpu-install. Then I installed clinfo to check whether the GPU was visible as an OpenCL device.

Number of platforms 0

Hmm. That didn’t work. On the advice of this page on the Folding@Home forums, I also ran sudo apt install ocl-icd-opencl-dev. That had no effect, so I went back to reread the instructions. This time I noticed the feature breakdown chart between “All-Open” and “Pro”, where OpenCL is listed as a “Pro” only feature.
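
As an aside, that “Number of platforms” line boils down to a single OpenCL call, so it makes a handy minimal test on its own. A sketch of the same check (link against OpenCL, e.g. g++ test.cpp -lOpenCL):

// Count OpenCL platforms the same way clinfo's first line does. Zero
// platforms means no OpenCL driver registered itself with the system,
// regardless of what hardware is present.
#include <CL/cl.h>
#include <cstdio>

int main() {
  cl_uint count = 0;
  clGetPlatformIDs(0, nullptr, &count);  // query the count only
  std::printf("Number of platforms %u\n", count);
  return 0;
}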

Armed with that knowledge, I uninstalled “All-Open” and installed the “Pro” stack. Once it was installed and I had rebooted, clinfo still showed zero platforms. Returning to the manual, on a different page I found fine print saying OpenCL is an optional component of the Pro stack. So I reinstalled yet again, this time with the --opencl=pal,legacy flag.

Running clinfo now returns:

Number of platforms 1
Platform Name AMD Accelerated Parallel Processing
Platform Vendor Advanced Micro Devices, Inc.
Platform Version OpenCL 2.1 AMD-APP (3004.6)
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_icd cl_amd_event_callback cl_amd_offline_devices
Platform Host timer resolution 1ns
Platform Extensions function suffix AMD

Platform Name AMD Accelerated Parallel Processing
Number of devices 0

NULL platform behavior
clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...) No platform
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...) No platform
clCreateContext(NULL, ...) [default] No platform
clCreateContext(NULL, ...) [other] <error: no devices in non-default plaforms>
clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL) No devices found in platform

Finally, some progress. This is better than before, but zero devices is not good. Back to the overview page, which says the PAL OpenCL stack supports Vega 10 and later GPUs. My R9 380 is from the Tonga line with the GCN 3 architecture, quite a bit older than Vega’s GCN 5. So I reinstalled with --opencl=legacy to see if it would make a difference.

It did not. clinfo still reported zero OpenCL devices. AMD’s GPU compute initiative is called ROCm, or RadeonOpenCompute, but it is restricted to hardware newer than what I have on hand. Getting OpenCL up and running on Ubuntu, on hardware this old, is out of scope for attention from AMD.

This was the point where I decided I was tired of this Ubuntu driver dance. I wiped the system drive and replaced Ubuntu with Windows 10 along with AMD’s Windows drivers. Folding@Home saw the R9 380 as a GPU compute slot, and I was up and running simulating protein folding. The Windows driver also claims to support my older 7950, so one potential future project would be to put both of these AMD GPUs in a single system and see if the driver support extends to GPU compute for multi-GPU folding.

For today I’m content to have just my R9 380 running on Windows. Ubuntu may have struck out on this particular GPU compute project, but it works well for CPU compute, especially virtual machines.

Naked HP Split X2 (13-r010dx) Sitting In A Breeze Runs Faster

Mobile computer processors must operate within tighter constraints than their desktop counterparts. They sip power to prolong battery life, and that power eventually ends up as heat that must be dissipated. Unfortunately both heat management mechanisms and batteries are heavy and take up space, so finding the proper balance is always a difficult challenge. It is typical for laptop computers to give up the ability to run sustained workloads at full speed. But if we’re not worried about voiding warranties or otherwise rendering a mobile computer immobile, we can lift some of the constraints limiting full performance: run on an AC adapter to provide power, and get creative on ways to enhance heat dissipation.

For this experiment I pulled out the most powerful computer from my NUCC trio of research project machines, the HP Split X2 (13-r010dx). The goal is to see if I can add it to my Folding@Home pool. Looking over the technical specifications published by Intel for the Core i3-4012Y CPU, one detail caught my eye: it lists two separate power consumption numbers where most processors only have one. The typically quoted “Thermal Design Power” figure is 11.5W, but this chip has an additional “Scenario Design Power” of 4.5W. This tells us the processor is designed for computers that only expect to run in short bursts. So even if the TDP is 11.5W, it is valid to design a system with only 4.5W of heat dissipation.

That is likely the case here, as I found no active cooling in this HP Split X2. The outer case is entirely plastic, meaning it doesn’t even have great thermal conduction to the environment. If I put a sustained workload on this computer, I expected it to run for a while and then start slowing itself down to keep the heat manageable. Which is indeed what happened: after a few minutes of Folding@Home, the CPU clock pulled back to roughly half speed, and utilization was pulled back by half again, meaning the processor was chugging along at only about a quarter of its maximum capability.

HP Split X2 13-r010dx thermal throttling

For more performance, let’s help that heat escape. Just as I did earlier, I pulled the core out of its white plastic case. This time for better ventilation rather than just curiosity.

HP Split X2 13-r010dx tablet internals removed from case

Removing it from its plastic enclosure helped only a tiny bit. Most of the generated heat was still trapped inside, so I pulled the metal shield off its main processor board. This exposed the slab of copper acting as the CPU heat sink.

HP Split X2 13-r010dx CPU heat sink under shield

Exposing that heat sink to ambient air helped a lot more, but passive convection cooling was still not quite enough. The final push was to introduce some active airflow. I was contemplating several different ideas on how to jury-rig an active cooling fan, but this low power processor didn’t actually need very much. All I had to do was set the computer down in the exhaust fan airflow from a PC tower case. That was enough for it to quickly climb back up to its full 1.5 GHz clock speed with 100% utilization, and sustain running at that rate.

HP Split X2 13-r010dx receiving cooling

It’s not much, but it is contributing. I can leave it folding proteins and move on to another computer: my Luggable PC Mark I.

Desktop PC Component Advantage: Sustained Performance

A few weeks ago I decommissioned Luggable PC Mark II and the components were installed into a standard desktop tower case. Heeding Hackaday’s call for donating computing power to Folding@Home, I enlisted my machines into the effort and set up my own little folding farm. This activity highlighted a big difference between desktop and laptop components: their ability to sustain peak performance.

My direct comparison is between my ex-Luggable PC Mark II and the Dell laptop that replaced it for my mobile computing needs. Working all out folding proteins, both of those computers heated up. Cooling fans of my ex-Luggable sped up to a mild whir, the volume and pitch of the sound roughly analogous to my microwave oven. The laptop fans, however, spun up to a piercing screech whose volume and pitch is roughly analogous to a handheld vacuum cleaner. The resemblance is probably not a coincidence, as both move a lot of air through a small space.

The reasoning is quite obvious when we compare the cooling solution of a desktop Intel processor against one for a mobile Intel processor. (Since my active-duty machines are busy working, I pulled out some old dead parts for the comparison picture above.) Laptop engineers are very clever with their use of heat pipes and other tricks of heat management, but at the end of the day we’re dealing with the laws of physics. We need surface area to transfer heat to air, and a desktop processor HSF (heat sink + fan) has tremendously more of it. When workload is light, laptops keep their fans off for silent operation whereas desktop fans tend to run even when lightly loaded. However, when the going gets rough, the smaller physical volume and surface area of laptop cooling solutions struggle.

This is also the reason why different laptop computers with nearly identical technical specifications can perform wildly differently. When I bought my Inspiron 7577, I noticed that there was a close relative in Dell’s Alienware line that has the same CPU and GPU. I decided against it as it cost a lot more money. Some of that is branding, I’m sure, but I expect part of it goes to more effective heat removal designs.

Since I didn’t buy the Alienware, I will never know if it would have been quieter running Folding@Home. To the credit of this Inspiron, that noisy cooling did keep its i5-7300HQ CPU at a consistent 3.08GHz with all four cores running full tilt. I had expected thermal throttling to force the CPU to drop to a lower speed, as is typical of laptops, so the fact this machine can sustain such performance was a pleasant surprise. I appreciate the capability but that noise got to be too much… when I’m working on my computer I need to be able to hear myself think! So while the ex-Luggable continued to crunch through protein simulations, the 7577 had to drop out. I switched my laptop to the “Finish” option where it completed the given work unit overnight (when I’m not sitting next to it) and fetched no more work.

This experience taught me one point in favor of a potential future Luggable PC Mark III: the ability to run high performance workloads on a sustained basis without punishing the hearing of everyone nearby. But this doesn’t mean mobile oriented processors are hopeless. They are actually a lot of fun to hack, especially when an old retired laptop doesn’t need to be mobile anymore.