Luggable Frame Experiment #1

The dimensions for my Luggable PC project were determined by the components within. The width and height, specifically, were dictated by the LCD screen module. Even though I made the CAD files public for anybody to build their own Luggable PC, in practical terms only people with the exact same LCD module would be able to use the files without modification.

A friend who saw the Luggable PC was interested in generalizing the concept and creating a frame for lugging a (not disassembled) screen alongside its (also not disassembled) PC. Relative to my project, it would be easier to build and less specialized to the components within, with a trade-off of larger size and heavier weight.

I thought it was a great idea to explore and joined in the experiment. We each came up with a design, and we built both of them at Tux-Lab to see how the ideas translated into reality.

This blog post is a brief summary of my first experiment.

The Components

The monitor is a Yamakasi Catleap, built around a 27″ IPS panel with 2560×1440 resolution. The specific dimensions don’t really matter, as it will be mounted via the standard 75mm VESA pattern on the back. Any large monitor with a 75mm VESA pattern would fit as-is, and only minor modifications would be necessary to accommodate monitors with a different mounting pattern.

The PC is an HP Z220, a small form factor PC from the HP business line available with a range of components to trade off processing power against price. For the purposes of this experiment, the important details are its height of 331mm and depth of 100mm. Though not a standardized dimension, many small form factor PCs are roughly the same size.

The Construction

The core of the frame is built from 15mm aluminum extrusions (Misumi HFS3) for strength, and the remainder of the frame is made from 6mm laser-cut acrylic fastened to the extrusions via M3 nuts and bolts.

Making the panels from laser-cut acrylic has the advantage of simpler modifications. Many of the critical dimensions in my Luggable PC 3D CAD file have the problem that, when changed, they trigger cascading changes that must be reconciled. When designing for the 2D tool path of laser cutting, it is easier to keep modifications in mind so that a change in one sheet does not cascade to other sheets.

Example #1: The frame has a 331mm x 100mm hole to fit the Z220 case. This can be adjusted to fit any other SFF case without cascading changes to other components.

Example #2: The monitor mount pattern can be changed, and the mount position can be moved up or down to adjust the elevation of the monitor.
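
To make these examples concrete, here is a minimal sketch (my illustration, not the actual project file) of how such a cutout might be generated with the ezdxf Python library. The sheet position is a placeholder; the point is that resizing for a different SFF case means editing two constants, with no cascade into other sheets.

    # A sketch of a parametric laser-cut cutout using the ezdxf library.
    import ezdxf

    CASE_HEIGHT = 331.0  # HP Z220 SFF case height in mm
    CASE_DEPTH = 100.0   # case depth in mm
    ORIGIN_X, ORIGIN_Y = 50.0, 50.0  # placeholder position on the sheet

    doc = ezdxf.new()
    msp = doc.modelspace()

    # One closed polyline; a different case only changes the constants above.
    pline = msp.add_lwpolyline(
        [
            (ORIGIN_X, ORIGIN_Y),
            (ORIGIN_X + CASE_HEIGHT, ORIGIN_Y),
            (ORIGIN_X + CASE_HEIGHT, ORIGIN_Y + CASE_DEPTH),
            (ORIGIN_X, ORIGIN_Y + CASE_DEPTH),
        ],
        format="xy",
    )
    pline.closed = True
    doc.saveas("frame_panel.dxf")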

The Result

I had never designed for laser cutting before and was happy for the chance to do something on the Tux-Lab laser cutter. I knew that, having little experience with the material, my first few designs would have some amateurish flaws. So frame #1 was fairly minimalist, just to see what happens.

I didn’t have a good grasp of how many fasteners I would need to hold everything together, so I laser-cut roughly double the number of fastener positions I thought I would need; it is easier to have too many options than too few. For the assembly I installed fasteners in only every other hole.

The screen mount was surprisingly successful. We had questioned whether 6mm acrylic would be suitable for holding up the Catleap monitor by its 75mm VESA mounts, and when we found some worrisome flex, suspicion went immediately to the acrylic. It turned out the Catleap monitor’s enclosure was the source of the flex.

When attempting to install the PC, we found that the case itself fit just fine but the rubber feet attached to the side of the case did not. I added cutouts in the CAD file, but it seemed wasteful to cut entirely new pieces of acrylic just for the little feet. For purposes of experimentation, a Dremel tool was used to cut gaps clearing the rubber feet.

After the frame was assembled with the screen and the PC, we started plugging in all the cables and wires, and I realized I had forgotten to account for the cables. There’s no good place to coil up the excess, so they kind of dangle, ready to catch on something inconvenient.

The entire assembly was built in a tiny fraction of the time of my Luggable PC and included a much larger monitor with a much higher resolution. The trade-off was a near doubling of the weight. The handle, part of the acrylic assembly, appeared sufficient to manage that weight.

I carried it across Tux-Lab and quickly encountered the first failure.

The Failure

Lesson of the Day: Sharp internal corners are bad.

My amateur mistake was cutting a sharply cornered rectangle to hold the PC. The sharp corners concentrated the physical load of the PC into a small point in the 6mm acrylic, which protested the poor design by breaking apart.
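
As an illustration of the fix (my sketch of the principle, not the actual revised file): giving each corner of the cutout a radius spreads the load along an arc instead of concentrating it at a point. In DXF terms that means arc segments, expressed as bulge values on the polyline; the 5mm radius is an assumed value.

    # A sketch of the same cutout with rounded corners, again using ezdxf.
    import math
    import ezdxf

    W, H = 331.0, 100.0  # cutout size as before
    R = 5.0              # assumed corner radius; any radius beats a sharp corner
    BULGE_90 = math.tan(math.radians(90) / 4)  # bulge value for a 90-degree arc

    doc = ezdxf.new()
    msp = doc.modelspace()

    # A bulge on a vertex curves the segment that starts there; closing the
    # polyline turns the final arc back into the first vertex.
    pline = msp.add_lwpolyline(
        [
            (R, 0, 0), (W - R, 0, BULGE_90),   # bottom edge, then corner arc
            (W, R, 0), (W, H - R, BULGE_90),   # right edge, then corner arc
            (W - R, H, 0), (R, H, BULGE_90),   # top edge, then corner arc
            (0, H - R, 0), (0, R, BULGE_90),   # left edge, then closing arc
        ],
        format="xyb",
    )
    pline.closed = True
    doc.saveas("frame_panel_rounded.dxf")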

The next experiment will incorporate this lesson.

Build, fail, learn, iterate, repeat.


My First Cloud Storage Failure

I count myself as a skeptic of the new cloud-based world. When I first learned of Dropbox I wasn’t willing to trust it with my data. When I read about the frustration of people whose data are still trapped on MegaUpload, I felt vindicated.

But the tide of progress moved forward, and now we have many cloud-based storage providers with enough of a track record for me to dip my toes in the water. In January 2016 I started using cloud-based storage for my personal needs, spreading my files across Microsoft OneDrive, Google Drive, and Amazon Drive.

That’s not to say I trust cloud-based storage yet. I still maintain my regimen of home offline backups on removable hard drives. And for the files that I feel are important, I duplicate storage across at least two of the cloud storage providers.

Almost a year and a half in, I admit I see a lot of benefit. I’ve become a lot less dependent on a specific computer, as most of my files are accessible from any computer with an internet connection. I can pick up work where I left off on another machine, and I can refresh and reformat a computer with far less worry that I’ll destroy any irreplaceable data.

And then there are the wacky outliers. When I wanted to obtain a library card, one accepted proof of local residency was a utility bill. Thanks to cloud storage, I was able to pull out my phone and retrieve a recent electric bill on the spot.

But just as with physical spinning hard drives, a failure is only a matter of time. And tonight, after almost 18 months of flawless cloud storage performance, we have our first winner. Or, more accurately, our first loser: I was not able to access my files on Amazon Drive, getting only a “Loading” spinner that never went away.

The underlying Amazon storage infrastructure seems ok. (AWS S3 status is green across the board.) This must be a failure in their consumer-level storage offering, which will probably get fixed soon.

In the meantime, they have one annoyed customer.

Fusion 360 vs. Onshape: Raspberry Pi

And now for something completely silly: let’s look at how our two competing hobbyist-friendly CAD offerings fare on the hobbyist-friendly single-board computer, the Raspberry Pi.

(Spoiler: both failed.)

Raspberry Pi

I have on hand the Raspberry Pi 3 Model B, featuring a far more powerful CPU than the original Pi, which finally made the platform usable for basic computing tasks.

When the Raspberry Pi Foundation updated its Raspbian operating system with PIXEL, they switched the default web browser from Epiphany to Chromium, the open-source project behind Google’s Chrome browser. Bringing in a mainstream HTML engine resulted in far superior compatibility with a wider range of web sites, supporting many of the latest web standards, including WebGL, which is what we’ll be playing with today.

Autodesk Fusion 360

Fusion 360 is a native desktop application compiled for Windows and MacOS, so we obviously couldn’t run that on the Pi. However, there is a web component: Fusion 360 projects can be shared on the Autodesk 360 collaboration service. From there, the CAD model can be viewed in a web browser via WebGL on non-Windows/MacOS platforms.

While such files can be viewed on a desktop machine running Ubuntu and Chromium, a Raspberry Pi 3 running Chromium is not up to the task. Only about half of the menu bar and navigation controls are rendered correctly, and in the area of the screen where the actual model data should be, we get only a few nonsensical rectangles.

Onshape

Before this experiment I had occasionally worked on my Onshape projects on my desktop running Ubuntu and Chromium, so I had thought the web-based Onshape would have an advantage in Raspberry Pi Chromium. It did, just not usefully so.

In contrast to A360’s partial menu UI rendering, all of Onshape’s menu UI elements rendered correctly. Unfortunately, the actual CAD model was absent in the Raspberry Pi Chromium environment as well: we got the “Loading…” circle, and it was never replaced by the CAD model.

Conclusion

Sorry, everyone, you can’t build a web-based CAD workstation with a $35 Raspberry Pi 3.

You can, however, use these WebGL sites as a stress test of the Raspberry Pi. I had three different ways of powering my Pi, and this experiment proved enlightening. (A small voltage-monitoring sketch follows the list below.)

  1. A Belkin-branded 12V to 5V USB power adapter: This one delivered good, steady voltage at light load, but when the workload spiked to 100%, the voltage dropped low enough for the Pi to brown out and reset.
  2. A cheap Harbor Freight 12V to 5V USB adapter: This one never delivered good voltage. Even at light load, the Pi would occasionally flash the low-voltage warning icon, though never low enough to trigger a reboot. When the workload spiked to 100%, the voltage was still poor but never dropped enough to trigger a reset. Hurray for consistent mediocrity!
  3. A wall-outlet AC to 5V DC power unit (specifically advertised to support the Raspberry Pi) worked as advertised – no low-voltage warnings and no resets.
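
To put rough numbers on this sort of test (my addition, not part of the original experiment): recent Raspbian builds ship a vcgencmd utility that reports the firmware’s throttle flags, and a small Python loop can poll it while a WebGL page hammers the CPU.

    # A sketch for watching Raspberry Pi power health during a stress test,
    # polling the firmware throttle flags via the vcgencmd utility.
    import subprocess
    import time

    UNDER_VOLTAGE_NOW = 1 << 0    # bit 0: under-voltage at this moment
    UNDER_VOLTAGE_SEEN = 1 << 16  # bit 16: under-voltage since boot

    while True:
        out = subprocess.check_output(["vcgencmd", "get_throttled"], text=True)
        flags = int(out.strip().split("=")[1], 16)  # e.g. "throttled=0x50005"
        if flags & UNDER_VOLTAGE_NOW:
            print("Under-voltage right now - expect the warning icon")
        elif flags & UNDER_VOLTAGE_SEEN:
            print("Under-voltage has occurred since boot")
        time.sleep(1)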

Static Web Site Hosting with Amazon S3 and Route 53

Web application frameworks have the current spotlight, which is why I started learning Ruby on Rails to get an idea of what the fuss was about. But a big framework isn’t always the right tool for the job. Sometimes it’s just a set of static files to be served upon request. No server-side smarts necessary.

This was where I found myself when I wanted to put up a little web site to document my #rxbb8 project. I just wanted to document the design & build process, and I had already registered the domain rxbb8.com. The HTML content was simple enough to create directly in a text editor and style with CSS from the Materialize library.

After I got a basic 1.0 version of my hand-crafted site, I uploaded the HTML (and associated images) to an Amazon S3 bucket. It takes only a few clicks to make files in an S3 bucket web-accessible via a long, cumbersome URL on an Amazon AWS domain: http://rxbb8.com.s3-website-us-west-2.amazonaws.com. Since I wanted this content to be accessible via the rxbb8.com domain I had already registered, I started reading up on the AWS service named, in geek-humor style, Route 53.
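
(As an aside, the bucket-side setup can also be scripted instead of clicked. Below is a minimal sketch with Python and boto3, assuming AWS credentials are already configured; the error page name is my invention, and the objects themselves must still be made publicly readable.)

    # A sketch of enabling static website hosting on an existing S3 bucket.
    import boto3

    s3 = boto3.client("s3", region_name="us-west-2")

    # The bucket name must exactly match the domain it will serve.
    s3.put_bucket_website(
        Bucket="rxbb8.com",
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "error.html"},  # assumed file name
        },
    )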

Route 53 is designed to handle the challenges of huge web properties, distributing workload across many computers in many regions. (No single computer could handle all global traffic for, say, netflix.com.) The challenge for a novice like myself is figuring out how to pull just the one tool I need out of this huge, complex Swiss army knife.

Fortunately this use case is popular enough for Amazon to have written a dedicated developer guide for it. Unfortunately, that page doesn’t have all the details. The writer helpfully points the reader to other reference articles, but those pages revert to talking about complex deployments, and again it takes effort to distill the simple basics out of the big feature list.

If you get distracted or lost, stay focused on this CliffsNotes version:

  1. Go into Route 53 dashboard, create a Hosted Zone for the domain.
  2. In that Hosted Zone, AWS has created two record sets by default. One of them is the NS type; write down the name servers it lists.
  3. Go to your domain registrar and tell them to point name servers for the domain to the AWS name servers listed in step 2.
  4. Create an S3 storage bucket for the site and enable static website hosting.
  5. Create a new Record Set in the Route 53 Hosted Zone. Set “Alias” to “Yes” and point the alias target to the S3 storage bucket from step 4.

Repeat steps 4 and 5 for each subdomain that needs to be hosted. (The AWS documentation creates example.com and then repeats steps 4-5 for www.example.com.)
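
For those who prefer scripting over console clicks, here is a hedged sketch of step 5 using Python and boto3. The hosted zone ID placeholder and the assumption that the site lives in us-west-2 are mine; the fixed zone ID for the S3 website endpoint below is the one AWS publishes for that region, worth verifying against current documentation.

    # A sketch of step 5 with boto3: an alias record pointing the bare
    # domain at its S3 website endpoint. Assumes AWS credentials are set up.
    import boto3

    route53 = boto3.client("route53")

    route53.change_resource_record_sets(
        HostedZoneId="Z_PLACEHOLDER",  # your domain's Hosted Zone from step 1
        ChangeBatch={
            "Changes": [{
                "Action": "CREATE",
                "ResourceRecordSet": {
                    "Name": "rxbb8.com.",
                    "Type": "A",
                    "AliasTarget": {
                        # Fixed zone ID AWS publishes for the us-west-2
                        # S3 website endpoint; verify for your region.
                        "HostedZoneId": "Z3BJ6K6RIION7M",
                        "DNSName": "s3-website-us-west-2.amazonaws.com.",
                        "EvaluateTargetHealth": False,
                    },
                },
            }]
        },
    )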

And then… wait.

The update in step 3 needs time to propagate to name servers across the internet. My registrar said it may take up to 24 hours. In my case, I started getting intermittent results within 2 hours, but it took more than 12 hours before everything stabilized to the new settings.
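
Out of impatience I could have watched the handoff happen; here is a tiny sketch (my addition) that polls the local resolver:

    # A sketch that polls DNS until the domain resolves from this machine.
    import socket
    import time

    while True:
        try:
            print("rxbb8.com ->", socket.gethostbyname("rxbb8.com"))
        except socket.gaierror:
            print("rxbb8.com -> not resolving yet")
        time.sleep(300)  # check again every five minutes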

But it was worth the effort to see version 1.0 of my created-from-scratch static web site up and running on my domain! And since it’s such a small and simple site with little traffic, it will cost me only a few pennies per month to host in this manner.
