Hackaday Badge LCD Screen 2: Documented Limitations

Now that I’ve completed my overview of the Hackaday Belgrade 2018 badge (which the upcoming Hackaday Superconference 2018 badge is very close to) it’s time to dig deeper. First topic for a deep dive: that LCD screen. In my earlier brief look, I only established that the screen is fully graphics capable and not restricted to text only. What kind of graphics can we do with this?

18-bit panel, accepts 24-bit color

First topic: color depth. The default badge firmware’s BASIC programming interface seems to allow only 16 colors, which the documentation calls “EGA colors” (see color_table[16] in disp.c). I was confident a modern LCD module has more than 4-bit color, and page 2 of the LCD module datasheet calls it a “262K color display.” That works out to 18 bits, so the panel’s native color depth is likely 6 bits each for red, green, and blue. However, it is not limited to receiving information in 18-bit color: the display can be configured to communicate at a wide range of bit depths. Looking in tft_fill_area() in disp.c, we can see the existing firmware communicates in 24-bit color, 8 bits each for red, green, and blue.
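As a rough sketch of what that implies (assuming the panel really is 18-bit native; the controller handles the actual conversion internally, and the function here is purely illustrative), the low two bits of each 8-bit channel can’t be represented:

```python
def rgb888_to_rgb666(r: int, g: int, b: int) -> tuple:
    """Truncate 8-bit color channels to an 18-bit panel's native 6 bits.

    Two 24-bit colors differing only in the low two bits of each channel
    would render identically on an 18-bit panel.
    """
    return (r >> 2, g >> 2, b >> 2)

# 2**6 levels per channel -> 2**18 = 262,144 colors: the "262K" in the datasheet
assert (2 ** 6) ** 3 == 262144
```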

Not enough memory for full 24-bit off-screen buffer

If we don’t want to change modes on the fly, we’ll need to work with the panel in 24-bit color. Standard operating procedure is to draw using an off-screen buffer and, when the result is ready for display, send the buffer to the screen in a single fast operation. A 24-bit off-screen buffer is usually built from a 32-bit value representing each pixel as ARGB, even if we’re not using the A(lpha) channel. 320 wide * 240 high * 32 bits = 307,200 bytes, or 300 kilobytes. Unfortunately, the datasheet for our PIC32 processor shows it only has a total of 128 kilobytes of memory, so the straightforward full-screen buffer is out of the question. We’ll have to be more creative about this.
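A quick back-of-envelope check, plus one common workaround sketched with hypothetical numbers (the band height is an assumption, not anything from the badge firmware):

```python
WIDTH, HEIGHT = 320, 240
BYTES_PER_PIXEL = 4  # 32-bit ARGB per pixel, even if the alpha channel goes unused

full_buffer = WIDTH * HEIGHT * BYTES_PER_PIXEL
print(full_buffer)  # 307,200 bytes = 300 KiB: more than the PIC32's 128 KiB of RAM

# One creative alternative: draw into a small horizontal band that fits in RAM,
# send it to the screen, then reuse the same band for the next strip.
BAND_ROWS = 16  # hypothetical band height
band_buffer = WIDTH * BAND_ROWS * BYTES_PER_PIXEL
print(band_buffer)  # 20,480 bytes, repeated 15 times to cover the full screen
```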

NHD-2.4-240320CF-CTXI LCD module wiring diagram

No VSYNC

But we wouldn’t have been able to make full use of an off-screen buffer anyway. We need to send buffer data to the screen at the right time: if we send while the screen is in the midst of drawing, there will be a visible tear as the old content is partially mixed with the new. The typical way to avoid this is to listen for a vertical synchronization signal (VSYNC) to know when to perform the update. And while the ST7789 controller on board the LCD module has provision to signal VSYNC, the LCD module does not expose that information. There may be some other way to avoid visual tearing, but it ain’t VSYNC.

These limitations are relatively typical of embedded electronics projects, and they are part of the fun of writing software for this class of hardware. Sure, they are limitations, but they are also challenges to overcome, puzzles to solve, and a change of pace from the luxury of desktop computers where such constraints are absent.


Animated GIF For When A Screenshot Is Not Enough

Trying to write up yesterday’s blog post posed a challenge. I typically write a blog post with one or more images to illustrate the topic of the day. Two days ago I talked about lining up Phoebe’s orientation before and after a turn, and for that topic a screenshot was appropriate illustration. But yesterday’s topic was about Phoebe’s RViz plot of LIDAR data moving strangely. How do I convey “moving strangely” in a picture?

After thinking over the problem for a while, I decided that I can’t do it with a static image. What I need is a short video. This has happened before on this blog, but doing a full YouTube video clip seems excessive for this particular application. I only need a few frames. An animated GIF seems like just the thing.

I went online looking for ways to do this, and there were an overwhelming number of answers. Since my project is relatively simple, I didn’t want to spend a lot of time learning a new powerful tool that can do far more than what I need. When a tool is described as simple and straightforward, that gets my attention.

So for this project I went with Kazam, described on this page as a “lightweight screen recorder for Ubuntu.” Good enough for me. The only hiccup in my learning curve was the start/stop sequence: recording can be started by clicking the “Capture” button in the application dialog box, but there is no counterpart for stopping. Recording has to be stopped from the icon in the upper right corner of the screen.

Everything else was straightforward, and soon I had an MP4 video file of RViz displaying LIDAR movement. Then it was off to this particular Ubuntu answer page to turn the MP4 into an animated GIF using the command line tools ffmpeg and convert. However, that resulted in a rather large multi-megabyte GIF file, far larger than the MP4 source at a little over 100 kilobytes! Fortunately those instructions also pointed to the convert option -layers optimize, which reduced the size drastically. At over 200 kilobytes it was still double the size of the captured MP4, but at least it’s well under a megabyte. And more importantly, it is allowed for embedding at my blog hosting subscription tier.
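The pipeline looked roughly like this (filenames are hypothetical, and the exact invocations came from the linked answer page, so treat these as representative rather than verbatim):

```
ffmpeg -i capture.mp4 capture.gif                      # naive conversion: multi-megabyte GIF
convert capture.gif -layers optimize capture-opt.gif   # ImageMagick pass that shrank it drastically
```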

This RViz plot has only simple colors and lines, ideally suited for animated GIF, so it could have been even smaller. I suspect a tool dedicated to representing simple geometries on screen could produce a more compact animated GIF. If I ever need to do this on a regular basis, I’ll spend more time finding a better solution. But for today Kazam + ffmpeg + convert was good enough.

Electric Car Chargers Need To Keep Their Cool

A constant criticism of electric cars is their charging time. Despite all of their other advantages, charging takes noticeably longer than refueling a gasoline car, and this difference makes some people dismissive. When I leased a Volt for 3 years, charging time was a nonissue because it was parked in a garage with a charging station, meaning my car recharged overnight while I slept, just like my phone and laptop. I rarely ever charged the car away from home, and when I did it was usually more out of novelty than necessity.

Bob and Di Thompson Volt J1772 Charging

But real issue or not, charge time is something that needs to be addressed for consumer acceptance. So technology has been developing to make electric car charging ever faster. The rapid pace of development also means a lot of competition, with each company claiming to be faster than the last. The latest round of news has General Motors proclaiming that they’re working on a 400 kilowatt charging system.

The big headline number of 400 kilowatts is impressive, but engineers would dig deeper for an arguably more impressive number: 96.5% efficiency. The publicity material focuses on the economic and ecological advantages of wasting less energy, but that efficiency also makes adoption far more realistic. Wasting less power isn’t just good for the pocketbook and environment; it also means less power being turned into heat.

How much heat are we talking about? 96.5% efficiency implies 3.5% waste, so that 400 kilowatt charger is turning about 3.5%, or roughly 14 kilowatts, into heat. For comparison, cheap home electric space heaters usually top out at about 1500 watts, or 1.5 kilowatts. These car chargers need to deal with byproduct waste heat roughly ten times that generated by a home heater whose entire purpose is to generate heat. That’s a lot of heat to deal with! Heat management is a concern for all the high speed charging stations, from Tesla to Volkswagen. It’s good to see progress in efficiency so our super high power charging stations don’t cook themselves or anyone nearby.
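The arithmetic, spelled out with the numbers from the announcement:

```python
charger_kw = 400     # headline charging power
efficiency = 0.965   # claimed efficiency

waste_kw = charger_kw * (1 - efficiency)
print(round(waste_kw, 1))   # 14.0 kW turned into heat while charging

space_heater_kw = 1.5       # typical cheap home space heater
print(round(waste_kw / space_heater_kw, 1))  # 9.3: nearly ten space heaters' worth
```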

Duckietown Is Full Of Autonomous Duckiebots

Duckietown duckiebot

The previous blog post outlined some points of concern about using a Raspberry Pi 3 as the brains of an autonomous robot. But it was only an inventory of concerns, not a condemnation of the platform for robotics use. A Raspberry Pi is quite a capable little computer in its own right, even before considering its performance in light of its low cost. There are certainly many autonomous robot projects where a Raspberry Pi provides sufficient computing power for their respective needs. As one example, we can look at the robots ferrying little rubber duckies around the city of Duckietown.

According to its website, Duckietown started as a platform to teach a 2016 MIT class on autonomous vehicles. Browsing through their public Github repository, it appears all logic is expressed as a ROS stack and executed on board the Raspberry Pi, with no sending of work to a desktop computer over the network like the TurtleBot 3 does. A basic Duckiebot has minimal input and output to contend with – just a camera for input and two motors for output. No wheel encoders, no distance scanners, no fancy odometry calculations. And while machine vision can be computationally intensive, it’s the type of task that can be dialed back and shoehorned into a small computer like the Pi.

The task is made easier by Duckietown itself, an environment designed to help Duckiebots function by leveraging their strengths and mitigating their weaknesses. Roads have clear contrast to make vision processing easier. Objects have machine-friendly markers to aid object identification. And while such measures imply a Duckiebot won’t function very well away from a Duckietown, it’s still a capable little robotics platform for exploring basic concepts.

At first glance the “Duckiebooks” documentation area has a lot of information, but I was quickly disappointed to find many pages filled with TODO and links to “404 Not Found”. I suppose it’ll be filled out in coming months, but for today it appears I must look elsewhere for guidelines on building complete robots running ROS on Raspberry Pi.

Duckietown TODO

Embedding an Instagram Post with BBCode Without Plugin

Embedding an Instagram post is trivial on a WordPress blog like this one: copy the full Instagram URL (like https://www.instagram.com/p/BfryG0VnUmF/) and paste it into the visual editor window. Behind the scenes, that URL is parsed to create an embed as shown here.

There are plugins that add a similar embed tag to a BBCode-based web forum. But what if a forum does not have such direct support installed? This was the case for the web forum set up as community-driven support for JPL’s Open Source Rover.

On every Instagram post, there’s an “Embed” option that brings up a chunk of HTML (which links to some JavaScript) to create an embed. However, a BBCode-based web forum does not allow embedding arbitrary HTML like that.

Time to read the manual, which in this case is Instagram’s developer resources page about embedding. They prefer that people use fancy methods like that chunk of HTML we can’t use. But way down towards the bottom, they do describe how to use the /media/ endpoint to pull down just an image file with no active components.

Instagram Rover L

This is simple enough to use within the BBCode [IMG] tag. Then we can surround that image tag with a [URL] tag to turn it into a link to the Instagram post.

[URL=https://www.instagram.com/p/BfryG0VnUmF/][IMG]https://instagram.com/p/BfryG0VnUmF/media/?size=m[/IMG][/URL]

It’s not as fancy as the full embed code, but it does get the basic point across and provides an easy way to access the original Instagram post. Good enough for a SGVHAK Rover post on the JPL OSR web forum.

Diagnosing Periodic Artifact in 3D Print Due To Inconsistent Extrusion

A common error when setting up a 3D printer is entering motor control parameters that don’t actually match the installed physical hardware. Sometimes this is glaringly obvious: maybe the X-axis moves 5mm when it should move 10mm. Big errors are easy to find and fix, but the little “off by 0.5%” errors are tough to track down.

Within this category, one class of errors is specific to the Z-axis. While the X- and Y-axes are moving around printing a layer, the Z-axis needs to hold still for a consistent print. And when it’s time to print another layer, the Z-axis needs to move a precise and consistent amount for every layer. This is usually not a problem for the stepper motors typical of hobby-level 3D printer Z-axis control, as long as each layer corresponds to a whole number of steps.

When the layers don’t map cleanly to a whole number of steps, the Z-axis motor might attempt to hold position in between steps. This is fundamentally a difficult task for a stepper motor and its controller, and rarely successful, so most control boards round off to the nearest step instead. This rounding tends to cause periodic errors in the print as the Z-axis lands a tiny bit higher or lower than the desired position, failing to meet the “precise and consistent” requirement for a proper print.
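A quick sketch of the arithmetic, using hypothetical Z-axis hardware numbers (not my printer’s actual configuration), shows how some layer heights land cleanly on whole steps while others force rounding:

```python
# Hypothetical Z-axis: 200 full steps/rev, 16x microstepping,
# M8 threaded rod with 1.25 mm pitch -> 2560 microsteps per mm of travel.
STEPS_PER_MM = 200 * 16 / 1.25

for layer_mm in (0.1, 0.12, 0.09):
    steps = layer_mm * STEPS_PER_MM
    if abs(steps - round(steps)) > 1e-6:
        print(f"{layer_mm} mm layer = {steps} steps: rounds off, periodic Z error")
    else:
        print(f"{layer_mm} mm layer = {int(round(steps))} steps: clean")
```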

With a freshly configured Azteeg X5 Mini WiFi control board in my open-box Monoprice Maker Select printer, a periodic error along the Z-axis when printing Sawppy’s wheels immediately placed suspicion on the Z-axis motor configuration.

Debug Periodic Print Layer Artifact

Back to hardware measurement I went, reviewing motor control parameters. After over an hour of looking for problems in the Z-axis configuration, I came up empty-handed.

Then came a key observation from looking at the details under magnification: the error occurs every six layers, and not at a consistent location all around the print. The little bump actually forms a spiral around the wheel, which would not be the case if the cause were rounding off Z-axis steps.

Following this insight, I went to review the 3D printer G-Code file and saw the print path runs on a regular cycle printing the six spokes of the wheel. It prints the same way between five of those spokes, but the sixth is slightly different, and that slightly different behavior cycles through the six spokes as the print rises through each layer.

It turns out this print artifact is not a Z-axis configuration issue at all, but the result of inconsistent extrusion. When moving in one pattern (five of the spokes) the printer extrudes a certain amount; when moving in another (the final spoke) it ends up putting a tiny bit of extra plastic on the print, causing the artifact.

For Cheap Commodity Bearings, Search For 608

My thoughts went to bearings while contemplating a mechanical project. I have the luxury of adjusting the design to fit a bearing thanks to the wonders of 3D printing. Given this flexibility, the obvious first path to investigate is to learn where to get bearings – cheap!

I learned not to kill myself on roller blades some years back, so I started looking for roller blade bearings based on the logic that there’s enough roller blade production volume – and each pair of blades uses 16 bearings – to drop the price of bearings. I quickly found that skateboard wheels use the same size bearing, then I learned that fidget spinners also use the same size bearing.

608 bearing

Eventually I realized I had the logic backwards – these bearings are not cheap because they’re used in skates, they are used in skates because they are cheap. These bearings have been around far longer than any of those consumer products.

The industrial name for these mass-volume commodity bearings seems to be “608“. The 60 designates a series (Google doesn’t seem to know the origin of this designation) and the 8 designates the interior diameter of the bearing in millimeters. Letter suffixes after the 608 describe the type of seal around the bearings but do not change the physical dimensions.

Another misconception I had from roller blade advertising was the ABEC rating. It has come to imply smoother and faster bearings, but technically it only describes the manufacturing tolerances involved. While higher ABEC-rated bearings do have a tighter tolerance range, that by itself does not necessarily mean faster bearings. There are more variables involved (the lubricant used inside, etc.), but somebody decided the mechanical engineering details were too much for the average consumer to wade through, so the rating’s meaning was distorted for marketing. Oh well, it’s not the first time that has happened.

Such details may or may not be important, depending on the project requirements. Strict project demands (temperature, speed, load, etc.) will require digging deeper for those details. For projects where pretty much any bearing would do, the 608 designation is enough to guarantee physical dimensions for CAD design, and we’re free to order something cheap: either off Amazon (~$25 for 100 of them) or, for even larger quantities, straight from the factories on Alibaba.

WebAssembly: A High Level Take On Low Level Concepts

WebAssembly takes the concepts of low-level assembly language and brings them to web browsers. A likely first question is “Why would anybody want to do that?” And the short answer is: “Because not everybody loves JavaScript.”

People writing server-side “back-end” code have many options of technologies to use. There are multiple web application platforms built around different languages: Ruby on Rails and Sinatra, Django and Flask, PHP, Node.js, the list goes on.

In contrast, client-side “front-end” code running on the user’s web browser has a far more limited choice of tools and only a single choice of language: JavaScript. The language we know today was not designed with all of its features and capabilities up front; it grew organically alongside the web.

There have been multiple efforts to tame the tangle that is modern JavaScript and impose some structure. The Ruby school of thought led to CoffeeScript. Microsoft came up with TypeScript. Google invented Dart. What they all have in common is that none of them enjoys direct browser support the way JavaScript does. As a result, they all transpile into JavaScript for actual execution on the browser.

Such an approach does address problems with JavaScript syntax by staying within well-defined boundaries. Modern web browsers’ JavaScript engines have learned to look for and take advantage of such structure, enabling the resulting code to run faster. A project focused entirely on this aspect – making JavaScript easy for browsers to run fast – is asm.js. Limiting JavaScript to a very specific subset, sometimes with hints to tell the browser so, allows the code to be parsed down to something very small and efficient, even if it ends up being very difficult for a human to read.

Projects like asm.js make the resulting code run faster than general JavaScript, but only once the code starts running. Before it runs, it is still JavaScript transmitted over the network, and JavaScript that needs to be parsed and processed. The only way to reduce this overhead is to describe computation at a very low level in a manner more compact and concise than JavaScript. This is WebAssembly.

No web developer is expected to hand-write WebAssembly on a regular basis. But once WebAssembly adoption takes hold across the major browsers (and it seems to be making good progress) it opens up the field of front-end code. Google is unlikely to build TypeScript into Chrome. Microsoft is unlikely to build Dart into Edge. Mozilla is not going to bother with CoffeeScript. But if they all agree on supporting WebAssembly, all of those languages – and more – can be built on top of WebAssembly.

The concept can be taken beyond individual programming languages to entire application frameworks. One demonstration of WebAssembly’s potential runs the Unity 3D game engine, purportedly with smaller download size and faster startup than the previous asm.js implementation.

An individual front end web developer has little direct need for WebAssembly today. But if it takes off, it has the potential to enable serious changes in the front end space. Possibly more interesting than anything since… well, JavaScript itself.

One Month of Living With Moto X4

The Motorola X4 “Android One Edition” is the mid-range offering for Project Fi subscribers, a more affordable ($399) alternative to the flagship Google Pixel phone ($649). Here are some personal notes on how they compared, after using the Pixel for over a month followed by using the Moto X4 for over a month.

Moto X4

Exterior: The Pixel has a much more distinctive design compared to the relatively generic Moto X4. But the fragile aluminum & glass construction meant they both ended up encased in the cheapest TPU cases from Amazon, muting the design distinction. Result: Tie.

Display: The Pixel’s OLED screen promised brighter colors, longer battery life, and a more responsive display. The responsiveness part was a requirement for Daydream VR. Outside of VR, the day-to-day user experience is equivalent. Result: Pixel wins in VR, but otherwise a tie.

Fingerprint Sensor: The Pixel sensor is far more reliable at reading and unlocking the device. The X4 sensor can be frustrating at times, occasionally sending the user to the backup pattern unlock screen. Advantage: Pixel

Camera: The Pixel camera is pretty fantastic, but the X4 camera isn’t very far behind. The X4 adds a second camera with a wide-angle lens that was occasionally very useful. People who need a high-quality camera usually have a dedicated camera; a phone camera is useful as a backup in a pinch and needs to offer flexibility. Advantage: Moto X4.

Storage: They both start at 32GB of storage, but the Moto X4 augments that with a microSD slot for storage expansion. Apps need to stay in internal storage, but movies, music, pictures, and some other data (Google offline maps) can be shifted to the memory card. A Pixel owner who needs more storage has to buy a higher capacity device up front; if they find they need more storage later… they are out of luck. Advantage: Moto X4

Convenience: The Google Assistant on the Pixel hasn’t turned out to be terribly useful so far, and neither has the Pixel-specific launcher. In contrast, Motorola’s customizations to the X4 have proven to be more useful. A surprisingly handy feature is turning the flashlight on/off by shaking the phone twice in a karate-chop motion. This was initially dismissed as a gimmick, but it ended up being used multiple times a week. Advantage: Moto X4

Summary: There are clearly advantages to the flagship Pixel device, but if none of them are important to a user, they can save a lot of money with the competent Moto X4 and also get some nice features missing from the Pixel.

I Should Have Bought a Real Wire Stripping Tool a Long Time Ago

A lot of the talks at Hackaday Superconference 2017 were inspiring, informative, entertaining, or a combination of the above. But one of them was the first to have a significant impact on my hands-on projects, and that honor goes to the Wiring Bootcamp talk by Bradley Gawthrop.

Your first reaction is probably the same as mine: “wiring? really?” Yes, really. Wiring is at first glance a boring subject, but Bradley turned it into an engaging presentation. One portion of the talk preached the wonders of having an actual wire-stripping tool. After the talk I felt motivated enough to try the Knipex tool he recommended.

Knipex

After using it in a few projects, I found myself really enjoying the luxury of stripping wire insulation with a single motion. This purchase has thus been categorized under “Where have you been all my life?”

Knipex Jaw.jpg

Key to the magic is the relationship between the handle, the front jaw (black plastic), and the cutting blade (shiny metal). When the handle is first pulled, the motion goes towards closing this assembly. When the jaw closes on the wire insulation, the blade closes a little bit further to cut into the insulation. Beyond this point, motion on the handle is translated into horizontal movement, so the blade pulls the insulation away from the conductor.

There’s no obvious way to adjust the distance between the jaw and the blade; it is either fixed or inferred from some spring tension. This works fairly well. The only problem surfaces when cutting wires with very thin insulation: in these cases the blade bites too deeply and nicks the conductor.

But that is a minor nitpick. I certainly nicked conductors at a far higher rate when using my previous wire strippers, which have now been assigned the job of collecting dust while waiting as backup in case the Knipex breaks.

Retiree

I got myself a real wire stripping tool and loved it. You should do it, too.

Here’s the wiring talk posted on the Hackaday YouTube channel:

Hologram Working to Make Cellular Data Easy

One of the sponsors at Hackaday Superconference 2017 was Hologram.io. In the attendee bag I saw a sticker with their name and logo. It was just one of many name-and-logo stickers in the bag, so it didn’t make much of an impression beyond “I saw it”. The name “Hologram” made me think they were some sort of video or image related system, possibly VR. But when I dug deeper I found a SIM card with the company name and logo on it.

Hologram SIM

Well, now, this is different. Since video and VR are very data-intensive services, I doubted my initial guess was right. So they have something to do with the cellular network, but I had a badge to hack and thought I’d get more information later.

As it turned out, I didn’t have to go looking for more information; they came to me. Specifically, two people wearing T-shirts with the Hologram logo were walking through the badge hacking area and wanted to know more about my Luggable PC. I paused my project to answer their questions and generally chat to see what people are interested in. (A big part of the fun of hanging around Supercon.) I asked about their company and got the quick sales pitch: they make it easy to use cellular data.

Their SIM is just the starting point. It allows access to cellular data worldwide without having to worry about dealing with cellular carriers; Hologram takes care of that. To help curious experimenters get started, their entry-level “Developer” plan is free for the first megabyte of data each month. Additional data is $0.60/MB, which is not the cheapest rate, but if only a few megabytes a month are needed, it should still end up cheaper than the monthly fee charged by every other carrier.

That sounds great, but they go further: Hologram Nova is a USB device that acts as a cellular data modem and can be plugged into a Raspberry Pi, a Beaglebone, or basically any computer running Linux to give it cellular data connectivity.

What if a Linux computer is overkill for the task at hand? What about projects that could be handled by something simpler like an Arduino? They’ve got that covered, too. Their Hologram Dash is a board with self-contained cellular hardware and a CPU that can be programmed with the Arduino IDE. No computer necessary.

Now I’m impressed. I’ve had project ideas that would send data over the cellular network, but they were sitting in the low-priority stack because I didn’t feel motivated enough to deal with all the overhead of using cellular data. Now I know I could pay Hologram to deal with the ugly parts and focus on my idea.

I hadn’t heard of the name before Supercon, and now I’m contemplating projects that would use their service. Their sponsorship outreach effort is a success here.

TI eZ430-Chronos and ISM Bands for RF Projects

An event like Hackaday Superconference 2017 is supported by many sponsors who want to reach that audience. An important part of the outreach is the bag of goodies handed out to conference attendees. One item was from Texas Instruments, offering a discount for the “eZ430-Chronos wireless development tool in a watch”, which caught my interest.

Recent news in smart watches is dominated by Apple and Google: very powerful, but at price points I find unacceptable. So while I’m intrigued by the idea of a wrist computer I could write code for, I’m waiting for the market to mature and the price to drop.

ez430-chronos
Photo by Texas Instruments

It never occurred to me that there might be smart watch platforms offering less power and capability at a much lower price. If I had gone looking, maybe I would have found the TI Chronos watch earlier. A web search indicated it is about 7 years old, so hardly cutting edge, but it is a wristwatch I could program, for around the same money as a non-programmable Casio watch from Target. The development kit also includes two USB devices: one is a programmer to deploy code to the watch, and the other lets software running on a PC communicate with software on the watch via RF.

Following the instruction to search for “Chronos” on the store site, I got two results: eZ430-Chronos-868 and eZ430-Chronos-915. What distinguishes the -868 from the -915? I went looking for data sheets and other documentation to help me choose between them, but they all assumed the reader already knew which they’d want! It turns out this is an instance of a complete beginner tripped up by basic knowledge in the field: these numbers indicate the RF frequency the device operates on, 868 MHz vs. 915 MHz.

These are frequencies in the ISM (Industrial, Scientific, Medical) radio bands, open frequency ranges that people can use with minimal regulatory requirements. People who have worked with ISM RF would have recognized 868 MHz as the ISM band common in Europe and 915 MHz as the one for North America.

Well, we’re all beginners at some point. At least now I know.

Texas Instruments has a whole set of products under the SimpliciTI brand for people who want to build RF solutions in the ISM radio bands. I like the fact that these hardware components are available, but I’m less thrilled that the software development is based on tools by IAR Systems. I’m barely a beginner on Microchip’s MPLAB X; I really don’t want to learn another development stack right now.

I already have a set of things I want to gain proficiency in, and I have to choose where to spend my time. So as interesting as the TI smart watch development platform is, I’m going to have to set it aside as a distraction.

Sorry, TI!

Supercon 2017 Fun: Other People’s Projects

The Hackaday Superconference 2017 was full of people who have a long list of project ideas. And it is also a venue where it’s easy to chat people up and ask about their projects.

Here are some highlights from people I had a chance to talk to:


Yesterday’s post mentioned Ariane Nazemi’s Compaq Portable, the original luggable PC. While he is very obviously skilled at keeping old PCs running, he also does some pretty cool modern stuff. His talk was about mechanical keyboards, and his Dark Matter keyboard in particular.

Photo from Atom Computer web site’s Project Dark Matter page.

I was quite encouraged to learn that making my own custom mechanical keyboard wouldn’t be as crazy as I thought it might be. I’m rather particular about the feel of my keyboards, and the encroachment of cheap membrane keyboards means I have to pay more and more for mechanical keyboards with the feel I like. I’m now well into the gamer keyboards of the ~$100 range, which, according to Ari, is the point where I might as well start building my own. I’ll give it serious consideration.



I had the chance to chat with Sarah Petkus after her talk about her robotics projects, which look at robots from a refreshingly different perspective than most robot tinkerers I’ve met. Her projects are “personally expressive”, more works of art than functional tools. But they’re not just static sculptures! The projects are still real machines built from the same mechanical principles I’m familiar with, but they were born out of very different motivation.

I had not considered robots from her world view, and it was mind-opening to try to see and think about robots in a different way.

And it was a pleasure to meet Noodle in person.

Photo by Twitter @cameronjblocker

Sarah said Noodle doesn’t walk very well just yet, and there are a lot of challenges to solve along the way. I have ambitions of learning about control systems for leg-walking robots, but I’m not there now. Perhaps, if I ever get there, I can help her teach Noodle to walk. (Or better yet, help Noodle learn to walk.)


I was impressed by the Tomu project: an ARM microprocessor that fits mostly inside a USB port and costs roughly $10. It is in the very early stages of development and, like almost all open source projects, could use the help of more people. The creator was at Supercon to spread the word. As an incentive to join the effort, people who do something useful and submit a pull request on Github will receive a unit. I’ll look into this in more detail later.


The creator of OpenMV was walking around, showing off units and giving demos. This project is at a much more advanced stage than Tomu: it is a product, versus a project just getting off the ground. As a result the demo is less a recruitment for the effort and more of a sales pitch. Still, it looks pretty cool, and I’m definitely interested in machine vision. Once I learn enough about vision to understand what OpenMV can and can’t do for me, I’ll evaluate whether I’m interested in buying.

Microchip “Curiosity” Development Board and its Zero Ohm Resistors

When I purchased my batch of PIC16F18345 chips, Microchip offered a 20% discount off the standard price of its corresponding Curiosity development board (DM164137). I thought it might be interesting and added it to my order, but I hadn’t pulled it out of its packaging until today.

Today’s motivation is the mTouch button built onto the board. As part of my investigation into projects I might tackle with the Hackaday Superconference 2017 camera badge, I found that the capacitive touch capabilities of the MCU are unused, and thought it might be interesting to tie them into the rest of the camera badge. Before I try to fabricate my own touch sensors, I thought it’d be a good idea to orient myself with an existing mTouch implementation. Enter the Curiosity board.

Looking over the board itself and the schematics in the user’s guide, I noticed a generous scattering of zero ohm surface-mount resistors. If I had seen zero ohm resistors in isolation, I would have been completely mystified. Many electronics beginners like myself see a zero ohm resistor as something that does nothing and takes up space; there seems to be no point. For those beginners, a web search would have led them to this StackExchange thread, possibly the Wikipedia article, or maybe the Hackaday post.

Curiosity Zero Ohms

But I was not introduced to them in isolation – I saw them on the Curiosity board, and in this context their purpose was immediately obvious: a link between pins on the PIC socket and the peripheral options built on that board. If I wanted to change which pins connected to which peripherals, I would not have to cut traces on the circuit board; I would just have to un-solder the zero ohm resistor. Then I could change the connection on the board by soldering to the empty through-holes placed on the PCB for that purpose.

This was an illuminating “Oh that makes sense!” introduction to zero ohm resistors.

Ball Aerospace COSMOS: Open Source Command and Control

Today’s entry for “neat stuff I stumbled across on the web” is COSMOS by Ball Aerospace, an open-source command-and-control system for embedded systems. It has been added to my candidate list of software platforms to drive low-level hardware projects.

My primary target for high-level infrastructure has been and remains ROS, but COSMOS will have its place in projects yet to come. The strength of COSMOS is that it is already designed for specific scenarios around telemetry gathering and display, so it should be better suited for projects in that category. ROS also has telemetry capabilities, but it is less focused on displaying that data to the user.

A robot running ROS is concerned about the data, but it is more concerned about what it should do in response to that data. COSMOS has less focus there. A command-and-control system gathers the data, shows it to the operator, and the operator decides what to do. COSMOS can send commands to the systems it is monitoring, but the thinking between the input (data) and output (action) is mostly left to the human operator and/or task-specific custom software. It feels like a platform for building my own SCADA system. It will also be useful for times when the project is purely a data-gathering operation with no response necessary.

COSMOS is written in Ruby using the Qt framework. I have a working knowledge of Ruby thanks to my exploration of Ruby on Rails, and I also have a minimal working knowledge of Qt thanks to the Tux Lab thermoformer project with the Raspberry Pi GUI. That experience should make things easier if I ever decide to get serious about using COSMOS for a future project.

There’s More To Wire Twisting Than Meets the Eye

For as long as I’ve been playing with electronics, there have been bundles of wires held together by twisting the individual strands together. It’s so ubiquitous that I had never given it thought. It seems perfectly obvious how they are made: lay wires out alongside each other, hold the ends, twist, done. Right?

Today I learned: yes… mostly.

Certainly the simple straightforward way is sufficient for my daily life, because I only ever need short segments to be twisted. Household electrical projects using twist-on wire nuts only deal with 1-2cm worth of wire. Hobbyist projects can also get away with this kind of thing – sometimes assisted by a cordless screwdriver/drill – because we rarely need more than a few meters of wire. There are pliers designed for twisting, but again they are only for a meter or less of wire.

When the wires are twisted simply, the individual strands also pick up a rotational torque tension. Each strand will want to relieve this tension by un-twisting the bundle. For short runs, this tension can be mostly ignored. It is also less of a factor when the individual strands are relatively rigid: they’ll want to hold their shape more than they want to untwist. (Even more so if the strands are hammered together.) But for longer twists of flexible cable, it’s easy to see the wire bundle trying to untwist itself.

When the twisted wire bundle needs to be longer – much much longer – this tension will become unacceptable. So wire twisting machines that make long runs of cables (hundreds of feet or longer) have added complexity. For example, the twisted pairs in our CAT-5 networking cable. As the wires are getting twisted together, the individual strands also must rotate with the twisting motion to relieve the tension before being merged into the twisted bundle.

A simpler way to approximate this is to let the individual strands move freely while twisting. The built-up tension at the point of twist is relieved by the individual strands rotating about. This can be seen in videos of some wire-twisting machines. (Pay attention to the individual strands rotating in the feed tube.)

A twisted wire bundle built using this technique is less likely to fight to untwist itself. In this picture, I hand-twisted about 5cm of wire while letting the individual strands rotate to relieve the torque tension. I then held both ends and twisted another 5cm using the naive method. As soon as I released the stranded end, the second half of the bundle untwisted itself. The first half stayed twisted.

Wire twist test

UPDATE: I didn’t have any luck finding YouTube videos illustrating the twisting that needs to be done for the wire bundle to stay together – at least, not machines that twist wire. I found one that twists yarn, but it illustrates a similar principle.

Play Atari 2600 Games for Science

Games offer a predictable, controlled environment in which to develop and test artificial intelligence algorithms. Tic-tac-toe is usually the first adversarial game people learn when young, so it is ideal for a class teaching the basics of writing game-playing algorithms. Advanced algorithms tackle playing timeless games like Chess and Go.

While those games are extremely challenging, they fail to represent many of the tasks that are interesting to pursue in artificial intelligence research. For some of these research areas, researchers turn to video games.

I’ve seen research results presented for playing various classic Atari 2600 arcade games. One example was when Google’s DeepMind research algorithm played Breakout in a super efficient and very non-human way by hitting the bricks from behind the wall.

What I hadn’t realized until today was that there’s a whole infrastructure built up for this type of research. Anybody who wishes to dip their toes in this field (or dive in head first) would not have to recreate everything from scratch.

This infrastructure for putting an AI at the controls of an Atari 2600 is available via the Arcade Learning Environment, based on a game emulator and making all the inputs and outputs available in a program-friendly (instead of human-friendly) manner. I learned of this while reading about Maluuba’s announcement of their Hybrid-Reward Architecture, which they applied to an algorithm that learned how to get the maximum score in Ms. Pac-Man.

And if getting ALE from Github is still too much work to set up, people can go to places like the OpenAI Gym, which has built entire algorithm training environments. All it takes is a working knowledge of Python to access everything that is available.
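A minimal sketch of what that looks like with the classic Gym API (the environment name and setup are assumptions – Atari support needs the emulator extras installed, and API details vary by version):

```python
import gym  # pip install "gym[atari]" -- classic pre-0.26 API assumed here

env = gym.make("MsPacman-v0")
observation = env.reset()

done = False
score = 0.0
while not done:
    action = env.action_space.sample()  # stand-in for a real learning algorithm
    observation, reward, done, info = env.step(action)
    score += reward

env.close()
print(f"Random play scored {score} points")
```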

I’m impressed how barriers to entry have been removed for anybody interested in getting into this field of AI research. The only hard parts left are… well, the actual hard parts of algorithm design.


Plastic Bottle Upcycling with TrussFab

Image from TrussFab.

A perpetual limitation of 3D printing is the print volume of the 3D printer. Any creations larger than that volume must necessarily consist of multiple pieces joined together in some way. My Luggable PC project is built from 3D printed pieces (each piece limited in size by the print volume) mounted on a skeleton of aluminum extrusions.

Aluminum extrusions are quite economical for the precision and flexibility they offer, but such capabilities aren’t always necessary for a project. Less expensive construction materials are available offering varying levels of construction flexibility, strength, and precision depending on the specific requirements of the project.

The researchers behind TrussFab chose to utilize the ubiquitous plastic beverage bottle as a structural component. Mass-produced to exact specifications, these bottles have a predictable overall size, topped by a bottle cap mechanism that is necessarily precise in order to seal the contents. And best of all, empty bottles that have successfully served their primary mission of beverage delivery are easily available in quantity.

These bottles are very strong in specific ways but quite weak in others. TrussFab leverages their strength and avoids their weakness by building them into truss structures. The software calculates the geometry required at the joints of the trusses and generates STL files for them to be 3D printed. The results are human-scale structures with the arbitrary shape flexibility of 3D printing, made possible within the (relatively) tiny volume of a 3D printer.

TrussFab was recently presented at ACM CHI ’17 (the Association for Computing Machinery’s 2017 conference on Computer-Human Interaction). The biggest frustration is that the software is not yet generally available for hobbyists to play with. In the meantime, their project page has links to a few generated structures on Thingiverse and a YouTube video.


Thread Tapping Failure and Heat-Set Threaded Inserts

Part of the design for PEM1 (portable external monitor version 1.0) was a VESA-standard 100 x 100mm pattern to be tapped with M5 threads. This way I could mount it on an existing monitor stand and avoid having to design a stand of my own.

I had hand-tapped many M5 threads in 3D-printed plastic for the Luggable PC project, so I anticipated little difficulty here. I was surprised when I pulled the manual tapping tool away from one of the four mounting holes and realized I had destroyed the thread. Out of the four holes in the mounting pattern, two were usable, one was marginal, and one was unusable.

AcrylicTappedThreads
Right: usable #6-32 thread for circuit board standoff. Left: Unusable M5 thread for VESA 100 monitor mount.

A little debugging pointed to the laser-cut hole being too small for the tapping tool. But the fact remains: tapping threads in plastic is time-consuming and error-prone. I think it is a good time to pause the project and learn: what can we do instead?

One answer was literally sitting right in front of me: the carcass of the laptop I had disassembled to extract the LCD panel. Dell laptop cases are made from plastic, and the case screws (mostly M2.5) fasten into small metal threaded inserts that were heat-set into the plastic.

Different plastics have different behavior, so I thought I should experiment with heat-set inserts in acrylic before buying them in quantity. It doesn’t have to be M5 – just something to get a feel for the behavior of the mechanism. Where can I get my hands on some inserts? The answer is again in the laptop carcass: there are some right here!

Attempting to extract an insert by brute force instead served as an unplanned demonstration of the mechanical strength of a properly installed heat-set insert. That little thing put up quite a fight against departing from its assigned post.

But if heat helped set the insert during installation, perhaps heat could help soften the plastic for extraction. And indeed, it did. A soldering iron made it far easier to salvage the inserts from the laptop chassis for experimentation.

See World(s) Online

One of the longest-tenured items on my exploration “To-Do” list is to get the hang of the Google Earth API and learn how to create a web app around it. This was very exciting web technology when Google seemed to be moving Google Earth from a standalone application to a web-based solution. Unfortunately its web architecture was based on browser plug-ins, which eventually led to its death.

It made sense for Google Earth functionality to be folded into Google Maps, but that seemed to be a slow process of assimilation. It never occurred to me that there were other alternatives out there until I stumbled across a talk about NASA’s World Wind project. (A hands-on activity, too, with a sample project to play with.) The “Web World Wind” component of the project is a WebGL library for geo-spatial applications, which makes me excited about its potential for fun projects.

The Java edition of World Wind has (or at least used to have) functionality beyond our planet Earth. There were ways to have it display data sets from our moon or from Mars. Sadly the web edition has yet to pick up that functionality.

JPL does currently expose a lot of Mars information in web-browser-accessible form on the Mars Trek site. According to the speaker of the talk, it was not built on World Wind; he believes it was built on Cesium, another WebGL library for global data visualization.

I thought there was only Google Earth, and now I know there are at least two other alternatives. Happiness.

The speaker of the talk is currently working in the JPL Ops Lab on the OnSight project, helping planetary scientists collaborate on Mars research using Microsoft’s Hololens for virtual presence on Mars. That sounds like an awesome job.