Hologram Working to Make Cellular Data Easy

One of the sponsors at Hackaday Superconference 2017 is Hologram.io. In the attendee bag I saw a sticker with their name and logo. It was just one of many name-and-logo stickers in the bag, so it didn't make much of an impression beyond "I saw it". The name "Hologram" made me think they were some sort of video- or image-related outfit, possibly VR. But when I dug deeper I found a SIM card with the company name and logo on it.

Hologram SIM

Well, now, this is different. Since video and VR are very data-intensive services, I doubted my initial guess was right. So they had something to do with the cellular network, but I had a badge to hack and figured I'd get more information later.

As it turned out, I didn't have to go looking for more information; they came to me. Specifically, two people wearing T-shirts with the Hologram logo were walking through the badge hacking area and wanted to know more about my Luggable PC. I paused my project to answer their questions and generally chat to see what people are interested in. (A big part of the fun of hanging around Supercon.) I asked about their company and got the quick sales pitch: they make it easy to use cellular data.

Their SIM is just the starting point. It allows access to cellular data worldwide without having to worry about dealing with individual cellular carriers; Hologram takes care of that. To help curious experimenters get started, their entry-level "Developer" plan is free for the first megabyte of data each month. Additional data is $0.60/MB, which is not the cheapest rate, but if only a few megabytes a month are needed, it should still end up cheaper than the monthly fee charged by every other carrier.

That sounds great, but they go further: the Hologram Nova is a USB device that acts as a cellular data modem and can be plugged into a Raspberry Pi, a BeagleBone, or basically any computer running Linux to give it cellular data connectivity.

What if a Linux computer is overkill for the task at hand? What about projects that could be handled by something simpler like an Arduino? They’ve got that covered, too. Their Hologram Dash is a board with self-contained cellular hardware and a CPU that can be programmed with the Arduino IDE. No computer necessary.

Now I’m impressed. I’ve had project ideas that would send data over the cellular network, but they were sitting in the low-priority stack because I didn’t feel motivated enough to deal with all the overhead of using cellular data. Now I know I could pay Hologram to deal with the ugly parts and focus on my idea.

I hadn’t heard of the name before Supercon, and now I’m contemplating projects that would use their service. Their sponsorship outreach effort is a success here.

TI eZ430-Chronos and ISM Bands for RF Projects

An event like Hackaday Superconference 2017 is supported by many sponsors that want to reach that audience. An important part of the outreach is the bag of goodies handed out to conference attendees. One item was from Texas Instruments, offering a discount on the "eZ430-Chronos wireless development tool in a watch", which caught my interest.

Recent news in smart watches is dominated by Apple and Google: very powerful devices, but at price points I find unacceptable. So while I'm intrigued by the idea of a wrist computer I could write code for, I'm waiting for the market to mature and the price to drop.

ez430-chronos
Photo by Texas Instruments

It never occurred to me that there might be smart watch platforms offering less power and capability at a much lower price. If I had gone looking, maybe I would have found the TI Chronos watch earlier. A web search indicated it is about 7 years old, so hardly cutting edge, but it is a wristwatch I could program, for around the same money as a non-programmable Casio watch from Target. The development kit also includes two USB devices: one is a programmer to deploy code to the watch, and the other lets software running on a PC communicate with software on the watch via RF.

Following the instructions to search for "Chronos" on the store site, I got two results: eZ430-Chronos-868 and eZ430-Chronos-915. What distinguishes the -868 from the -915? I went looking for data sheets and other documentation to help me choose between them, but they all assumed the reader already knew which one they wanted! It turns out this is a case of a complete beginner being tripped up by knowledge considered basic in the field. These numbers indicate the RF frequency the device operates on: 868 MHz vs. 915 MHz.

These are frequencies in the ISM (Industrial, Scientific, Medical) radio bands, open frequency ranges that people can use with minimal regulatory requirements. Anyone who has worked with ISM RF would have recognized 868 MHz as the ISM band common in Europe and 915 MHz as the one used in North America.

Well, we’re all beginners at some point. At least now I know.

Texas Instruments has a whole set of products under the SimpliciTI brand for people who want to build RF solutions in the ISM radio bands. I like the fact that these hardware components are available, but I'm less thrilled that the software development is based on tools by IAR Systems. I'm barely a beginner on Microchip's MPLAB X; I really don't want to learn another development stack right now.

I already have a set of things I want to gain proficiency on and have to choose where to spend my time. So as interesting as the TI smart watch development platform is, I’m going to have to set it aside as a distraction.

Sorry, TI!

Supercon 2017 Fun: Other People’s Projects

The Hackaday Superconference 2017 was full of people with long lists of project ideas. It is also a venue where it's easy to chat people up and ask about their projects.

Here are some highlights from people I had a chance to talk to:


Yesterday's post mentioned Ariane Nazemi's Compaq Portable, the original luggable PC. While he is very obviously skilled at keeping old PCs running, he also does some pretty cool modern stuff. His talk was about mechanical keyboards, and his Dark Matter keyboard in particular.

Photo from Atom Computer web site’s Project Dark Matter page.

I was quite encouraged to learn that making my own custom mechanical keyboard wouldn't be as crazy as I had thought. I'm rather particular about the feel of my keyboards, and the encroachment of cheap membrane keyboards meant I had to pay more and more for mechanical keyboards with the feel I like. I'm now well into gamer keyboards in the ~$100 range. That, according to Ari, is the point where I might as well start building my own. I'll give it serious consideration.

 


I had the chance to chat with Sarah Petkus after her talk about her robotics projects. She looks at robots from a refreshingly different perspective than most robot tinkerers I've met. Her projects are "personally expressive", more works of art than functional tools. But they're not just static sculptures! They are still real machines built from the same mechanical principles I'm familiar with, but they were born out of a very different motivation.

I had never considered robots from her worldview, and it was mind-opening to try to see and think about robots in a different way.

And it was a pleasure to meet Noodle in person.

Photo by Twitter @cameronjblocker

Sarah said Noodle doesn't walk very well just yet, and there are a lot of challenges to solve on the way to getting there. I have ambitions of learning about control systems for walking robots, but I'm not there now. Perhaps, if I ever get there, I can help her teach Noodle to walk. (Or better yet, help Noodle learn to walk.)


I was impressed by the Tomu project: an ARM microcontroller board that fits mostly inside a USB port and costs roughly $10. It is in the very early stages of development and, like almost all open source projects, could use the help of more people. The creator was at Supercon to spread the word. As an incentive to join the effort, people who do something useful and submit a pull request on Github will receive a unit. I'll look into this in more detail later.


The creator of OpenMV was walking around showing off units and giving demos. This project is at a much more advanced stage than Tomu: it's a shipping product rather than a project just getting off the ground. As a result the demo is less a recruitment for the effort and more of a sales pitch. Still, it looks pretty cool and I'm definitely interested in machine vision. Once I learn enough about vision to understand what OpenMV can and can't do for me, I'll evaluate whether I'm interested in buying one.

Microchip “Curiosity” Development Board and its Zero Ohm Resistors

When I purchased my batch of PIC16F18345 chips, Microchip offered a 20% discount off the standard price of the corresponding Curiosity development board (DM164137). I thought it might be interesting and added it to my order, but I hadn't pulled it out of its packaging until today.

Today's motivation is the mTouch button built onto the board. As part of my investigation into projects I might tackle with the Hackaday Superconference 2017 camera badge, I found that the capacitive touch capabilities of the MCU are unused and thought it might be interesting to tie them into the rest of the camera badge. Before I try to fabricate my own touch sensors, I thought it'd be a good idea to orient myself with an existing mTouch implementation. Enter the Curiosity board.

Looking over the board itself and the schematics in the user's guide, I noticed a generous scattering of zero ohm surface-mount resistors. If I had seen zero ohm resistors in isolation, I would have been completely mystified. Many electronics beginners like myself see a zero ohm resistor as something that does nothing, takes up space, and serves no purpose. For those beginners, a web search would have led them to this StackExchange thread, possibly the Wikipedia article, or maybe the Hackaday post.

But I was not introduced to them in isolation. I saw them on the Curiosity board, and in this context their purpose was immediately obvious: a link between pins on the PIC socket and the peripheral options built onto the board. If I wanted to change which pins connected to which peripherals, I would not have to cut traces on the circuit board; I would just have to un-solder the zero ohm resistor. Then I could change the connection by soldering to the empty through-holes placed on the PCB for that purpose.

This was an illuminating “Oh that makes sense!” introduction to zero ohm resistors.

Ball Aerospace COSMOS: Open Source Command and Control

Today's entry for "neat stuff I stumbled across on the web" is COSMOS by Ball Aerospace, an open-source command-and-control system for embedded systems. It has been added to my candidate list of software platforms to drive low-level hardware projects.

My primary target for high-level infrastructure has been and remains ROS, but COSMOS will have its place in projects yet to come. The strength of COSMOS is that it is designed for specific scenarios around telemetry gathering and display, so it should be better suited for projects in that category. ROS also has telemetry capabilities, but it is less focused on displaying that data to the user.

A robot running ROS is concerned about the data, but it is more concerned about what it should do in response to that data. COSMOS has less focus there. A command-and-control system gathers the data, shows it to the operator, and the operator decides what to do. COSMOS can send commands to the systems it is monitoring but the thinking between the input (data) and output (action) is mostly left open for the human operator and/or task-specific custom software. It feels like a platform for building my own SCADA system. It will also be useful for times when the project is purely a data-gathering operation with no response necessary.

COSMOS is written in Ruby using the Qt framework. I have a working knowledge of Ruby thanks to my exploration of Ruby on Rails, and I also have a minimal working knowledge of Qt thanks to the Tux Lab thermoformer project with its Raspberry Pi GUI. That experience should make things easier if I ever decide to get serious about using COSMOS for a future project.

There’s More To Wire Twisting Than Meets the Eye

For as long as I've been playing with electronics, there have been bundles of wires held together by twisting the individual strands together. It's so ubiquitous that I had never given it much thought. It seemed perfectly obvious how they are made: lay wires out alongside each other, hold the ends, twist, done. Right?

Today I learned: yes… mostly.

Certainly the simple, straightforward way is sufficient for my daily life, because I only ever need short segments to be twisted. Household electrical projects using twist-on wire nuts only deal with 1-2 cm worth of wire. Hobbyist projects can also get away with this kind of thing – sometimes assisted by a cordless screwdriver/drill – because we rarely need more than a few meters of wire. There are pliers designed for twisting, but again they are only for a meter or less of wire.

When the wires are twisted this simple way, the individual strands also pick up torsional tension. Each strand will want to relieve this tension by un-twisting the bundle. For short runs, this tension can be mostly ignored. It is also less of a factor when the individual strands are relatively rigid: they'll want to hold their shape more than they want to untwist. (Even more so if the strands are hammered together.) But for longer twists of flexible cable, it's easy to see the wire bundle trying to untwist itself.

When the twisted wire bundle needs to be longer – much, much longer – this tension becomes unacceptable. So wire-twisting machines that make long runs of cable (hundreds of feet or longer) have added complexity; the twisted pairs in our CAT-5 networking cable are one example. As the wires are twisted together, the individual strands must also rotate with the twisting motion to relieve the tension before being merged into the twisted bundle.

A simpler way to approximate this is to let the individual strands move freely while twisting. The built-up tension at the point of twist is then relieved by the individual strands rotating on their own. This can be seen in videos of some wire-twisting machines. (Pay attention to the individual strands rotating in the feed tube.)

A twisted wire bundle built using this technique is less likely to fight to untwist itself. In this picture, I hand twisted about 5cm of wire while letting the individual strands rotate to relieve the torque tension. I then held both ends and twisted another 5cm in the naive method. As soon as I released the stranded end, the second half of the bundle untwisted itself. The first half stayed twisted.

Wire twist test

UPDATE: I didn't have any luck finding YouTube videos illustrating the kind of twisting that needs to be done for the wire bundle to stay together. At least, not machines that twist wire. I found one that twists yarn, but it illustrates a similar principle.

Play Atari 2600 Games for Science

Games offer a predictable, controlled environment for developing and testing artificial intelligence algorithms. Tic-tac-toe is usually the first adversarial game people learn when young, so it is ideal for a class teaching the basics of writing game-playing algorithms. Advanced algorithms tackle timeless games like Chess and Go.

While those games are extremely challenging, they fail to represent many of the tasks that are interesting to pursue in artificial intelligence research. For some of these research areas, researchers turn to video games.

I’ve seen research results presented for playing various classic Atari 2600 arcade games. One example was when Google’s DeepMind research algorithm played Breakout in a super efficient and very non-human way by hitting the bricks from behind the wall.

What I hadn’t realized until today was that there’s a whole infrastructure built up for this type of research. Anybody who wishes to dip their toes in this field (or dive in head first) would not have to recreate everything from scratch.

This infrastructure for putting AI at the controls of an Atari 2600 is available via the Arcade Learning Environment, based on a game emulator and making all the inputs and outputs available in a program-friendly (instead of human-friendly) manner. I learned of this while reading about Maluuba’s announcement of their Hybrid-Reward Architecture. They applied their system to an algorithm that learned how to get the maximum score in Ms. Pac-Man.

And if getting ALE from Github is still too much work to set up, people can go to places like OpenAI Gym, which has built entire algorithm training environments. All it takes is a working knowledge of Python to access everything that is available.

I’m impressed how barriers to entry have been removed for anybody interested in getting into this field of AI research. The only hard parts left are… well, the actual hard parts of algorithm design.

 

Plastic Bottle Upcycling with TrussFab

Image from TrussFab.

A perpetual limitation of 3D printing is the print volume of the 3D printer. Any creations larger than that volume must necessarily consist of multiple pieces joined together in some way. My Luggable PC project is built from 3D printed pieces (each piece limited in size by the print volume) mounted on a skeleton of aluminum extrusions.

Aluminum extrusions are quite economical for the precision and flexibility they offer, but such capabilities aren’t always necessary for a project. Less expensive construction materials are available offering varying levels of construction flexibility, strength, and precision depending on the specific requirements of the project.

The researchers behind TrussFab chose to use the ubiquitous plastic beverage bottle as a structural component. Mass-produced to exact specifications, bottles have a predictable overall size and are topped by a cap mechanism that is necessarily precise in order to seal the contents. And best of all, empty bottles that have successfully served their primary mission of beverage delivery are easily available in quantity.

These bottles are very strong in specific ways but quite weak in others. TrussFab leverages their strength and avoids their weakness by building them into truss structures. The software calculates the geometry required at the joints of the trusses and generates STL files for them to be 3D printed. The results are human-scale structures with the arbitrary shape flexibility of 3D printing made possible within the (relatively) tiny volume of a 3D printer.

TrussFab was presented recently at ACM CHI '17 (the Association for Computing Machinery's conference on Computer-Human Interaction, 2017). My biggest frustration with it is that the software is not yet generally available for hobbyists to play with. In the meantime, their project page has links to a few generated structures on Thingiverse and a YouTube video.

 

Thread Tapping Failure and Heat-Set Threaded Inserts

Part of the design for PEM1 (portable external monitor version 1.0) was a VESA-standard 100 x 100 mm hole pattern to be tapped with M5 threads. This way I can mount it on an existing monitor stand and avoid having to design a stand for it.

I had hand-tapped many M5 threads in 3D-printed plastic for the Luggable PC project, so I anticipated little difficulty here. I was surprised when I pulled the manual tapping tool away from one of the four mounting holes and realized I had destroyed the thread. Out of the four holes in the mounting pattern, two were usable, one was marginal, and one was unusable.

AcrylicTappedThreads
Right: usable #6-32 thread for circuit board standoff. Left: Unusable M5 thread for VESA 100 monitor mount.

A little debugging pointed to the laser-cut hole being too small for the tapping tool. But the fact remains that tapping threads in plastic is time-consuming and error-prone. I think it is a good time to pause the project and learn: what can we do instead?

One answer was literally sitting right in front of me: the carcass of the laptop I had disassembled to extract the LCD panel. Dell laptop cases are made from plastic, and the case screws (mostly M2.5) fasten into small metal threaded inserts that were heat-set into the plastic.

Different plastics behave differently, so I thought I should experiment with heat-set inserts in acrylic before buying them in quantity. They don't have to be M5 – just something to get a feel for the behavior of the mechanism. Where can I get my hands on some inserts? The answer is again in the laptop carcass: there are some right here!

Attempting to extract an insert by brute force instead served as an unplanned demonstration of the mechanical strength of a properly installed heat-set insert. That little thing put up quite a fight against departing from its assigned post.

But if heat softened the plastic to install the inserts, perhaps heat could soften the plastic to extract them as well. And indeed, it did. A soldering iron made it far easier to salvage the inserts from the laptop chassis for experimentation.

See World(s) Online

One of the longest-tenured items on my "to do" exploration list is to get the hang of the Google Earth API and learn how to create a web app around it. This was very exciting web technology back when Google seemed to be moving Google Earth from a standalone application to a web-based solution. Unfortunately its web architecture was based around browser plug-ins, which eventually led to its death.

It made sense for Google Earth functionality to be folded into Google Maps, but that seemed to be a slow process of assimilation. It never occurred to me that there were other alternatives out there until I stumbled across a talk about NASA's World Wind project. (A hands-on activity, too, with a sample project to play with.) The "Web World Wind" component of the project is a WebGL library for geo-spatial applications, which makes me excited about its potential for fun projects.

The Java edition of World Wind has (or at least used to have) functionality beyond our planet Earth. There were ways to have it display data sets from our moon or from Mars. Sadly the web edition has yet to pick up that functionality.

JPL does currently expose a lot of Mars information in a web-browser-accessible form on the Mars Trek site. According to the speaker of the talk, it was not built on World Wind; he believes it was built on Cesium, another WebGL library for global data visualization.

I thought there was only Google Earth, and now I know there are at least two other alternatives. Happiness.

The speaker is currently working in the JPL Ops Lab on the OnSight project, helping planetary scientists collaborate on Mars research using Microsoft's HoloLens for virtual presence on Mars. That sounds like an awesome job.

The Cost for Security

In the seemingly never-ending bad news of security breaches, a recurring theme is "they knew how to prevent this, but they didn't." This usually comes in the form of editorials condemning people as penny-pinching misers who care more about their operating costs than their customers.

The accusations may or may not be true; it's hard to tell without the other side of the story. What's unarguably true is that security has a cost. Performing encryption obviously takes more work than not doing any! But how expensive is that cost? Reports range wildly, anywhere from less than 5% to over 50%, and it likely depends on the specific situation involved.

I really had no idea of the cost until I stumbled across the topic in the course of my own Rails self-education project.

I had designed my Rails project with an eye towards security. The Google ID login token is validated against Google certificates, and the resulting ID is salted and hashed for storage. The code for this added security was deceptively minor, but it triggered a huge amount of work behind the scenes!
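
To give a sense of how one "minor" line can hide a lot of work, here is a minimal sketch of the idea. My project is Rails, but I've written the sketch in Node.js purely for illustration; it is not my actual code, and the environment variable check and iteration counts are made-up values for the example, not numbers from my project.

```javascript
// Minimal sketch, not my Rails code: salted, key-stretched hashing of an ID.
// The iteration counts are illustrative assumptions, not my project's values.
const crypto = require("crypto");

function hashId(id, iterations) {
  const salt = crypto.randomBytes(16);                                   // random per-record salt
  const digest = crypto.pbkdf2Sync(id, salt, iterations, 32, "sha256");  // deliberately slow work
  return { salt: salt.toString("hex"), hash: digest.toString("hex") };
}

// Production wants the work factor high; a test environment with no real
// user data to protect could dial it way down to keep the test suite fast.
const iterations = process.env.NODE_ENV === "test" ? 10 : 100000;
console.log(hashId("example-google-id", iterations));
```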

I started on this investigation because I noticed my Rails test suite ran quite slowly. Running the test suite for the Rails Tutorial sample app, the test framework ran through ~120 assertions per second. My own project's test suite ran at a snail's pace of ~12 assertions/second, 10% of the speed. What was slowing things down so much? A few hours of experimentation and investigation pointed the finger at the encryption measures.

Obviously security is good for the production environment and should not be altered there. However, for development and test, I could weaken those measures because there is no actual user data to protect. After I changed the code to bypass some of the work and reduce complexity elsewhere, my test suite speed rose to the expected >100 assertions/sec.

Granted, this is only an amateur at work, and I'm probably making other mistakes that make my security measures inefficient. But as a firsthand lesson that "Security Has A Cost," finding a tenfold performance penalty is eye-opening.

For a small practice exercise app like mine, where I only expect a handful of users, this is not a problem. But for a high-traffic site, having to pay ten times the cost could be the difference between making and breaking the business.

While I still don't agree with the decisions that led up to security breaches, at least now I have a better idea of the other side of the story.

Limiting Google Client ID Exposure

Today's educational topic: the varying levels of secrecy around cloud API access.

In the previous experiment with AWS, things were relatively straightforward: the bucket name is going to be public, all the access information is secret, and none of it is ever exposed to the user. Nor is it checked into the source code; it is set directly on the Heroku server as environment variables.

Implementing a web site using Google Identity got into a murky in-between zone for the piece of information known as the client ID. Due to how the OAuth system is designed, the client ID has to be sent to the user's web browser. Google's primary example exposes it as an HTML <meta> tag.

The fact the client ID is publicly visible led me to believe the client ID is not something I needed to protect, so I had merrily hard-coded it into my source and checked it into Github.

Oops! According to this section of the Google Developer Terms of Service document, that was bad. See the sections I highlighted in bold:

Developer credentials (such as passwords, keys, and client IDs) are intended to be used by you and identify your API Client. You will keep your credentials confidential and make reasonable efforts to prevent and discourage other API Clients from using your credentials. Developer credentials may not be embedded in open source projects.

Looks like we have a "secret but not secret" level going on: while the system architecture requires that the client ID be visible to a user logging on to my site, as a developer I am still expected to keep it secret from anybody just browsing code online.

How bad was this mistake? As far as security goofs go, this was thankfully benign. On the Google developer console, the client ID is restricted to a specific set of URIs. Another web site trying to use the same client ID will get an error:

google-uri-mismatch

The origin information can be spoofed, of course, but this mitigation makes abuse more difficult.

After this very instructive detour, I updated my project's server-side and client-side code to retrieve the client ID from an environment variable. The app will still end up sending the client ID in clear text to the user's web browser, but at least it isn't sitting in plain sight, searchable on Github.
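
My project does this in Rails, but the idea translates to any stack. Here is a minimal Node.js sketch of the pattern, written purely as an illustration: the variable name GOOGLE_CLIENT_ID and the port are assumptions for the example, not details from my project.

```javascript
// Minimal sketch (not my Rails code): keep the client ID out of source control
// and inject it into the served page at runtime instead.
const http = require("http");

// Set on the server (e.g. as a Heroku config var), never committed to the repo.
// The variable name GOOGLE_CLIENT_ID is an assumption for this example.
const clientId = process.env.GOOGLE_CLIENT_ID || "client-id-not-configured";

http.createServer((req, res) => {
  res.setHeader("Content-Type", "text/html");
  // The browser still receives the client ID in clear text, as the OAuth flow
  // requires; the goal is only to keep it out of the public source code.
  res.end(`<meta name="google-signin-client_id" content="${clientId}">\n`);
}).listen(3000);
```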

And to close everything out, I also went into the Google developer console to revoke the exposed client ID, so it can no longer be used by anybody.

Lesson learned, moving on…

Behavior Driven Development

My new concept of the day: Behavior Driven Development. As this beginner understands it, the ideal is that the plain-English customer demands on the software are formalized just enough to become part of automated testing. In hindsight, it is a perfectly logical extension of Test-Driven Development, which started by putting QA demands on the software up front as the horse instead of the cart. I think BDD can be a pretty fantastic concept, but I haven't seen enough to decide if I like the current state of the art in execution.

I stumbled into this entirely by accident. As a follow-up to the Rails Tutorial project, I took a closer look at one corner of the sample app. The image upload feature of the sample app used a gem called CarrierWave to do most of the work. In the context of the tutorial, CarrierWave was a magic black box that was pulled in and used without much explanation. I wanted to better understand the features (and limitations) of CarrierWave for use (or not) in my own projects.

As is typical of open-source projects, the documentation that exists is relatively thin and occasionally backed by the disclaimer “for more details, see source code.” I prefer better documentation up front but I thought: whatever, I’m a programmer, I can handle code spelunking. It should be a good exercise anyway.

Since I was exploring, I decided to poke my head into the first (alphabetically sorted) directory: /features/. And I was immediately puzzled by the files I read. The language is too formal to be conversational English for human beings, but too informal to be a programming language as I knew one. Some Google-assisted research led me to the web site for Cucumber, the BDD tool used by the developers of CarrierWave.
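
To give a flavor of what those files look like: a Cucumber feature file is written in a structured Given/When/Then format called Gherkin. The snippet below is a hypothetical example I made up for illustration; it is not taken from CarrierWave's /features/ directory.

```gherkin
# Hypothetical example for illustration -- not from CarrierWave's /features/ directory.
Feature: Image upload
  Scenario: Uploading a picture attaches it to the post
    Given I am logged in as a user
    When I attach "kitten.jpg" to a new post
    Then the post should display a thumbnail of "kitten.jpg"
```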

That journey was fun, illuminating, and I haven’t even learned anything about CarrierWave itself yet!

Cache is King

15Puzzle

C is an old familiar friend, so it is not part of my "new toolbox" push, but I went back to it for a bit of a refresher for old times' sake. The exercise is also an old friend: solving the 15-puzzle. The sliding tile puzzle is a problem space I studied a lot in college while looking into heuristic search.

For nostalgia's sake, I rewrote a textbook puzzle solver in C using the iterative-deepening A* (IDA*) algorithm with the Manhattan Distance heuristic. It rubbed off some rust and also let me see how much faster modern computers are. It used to be that most puzzles would take minutes, and the worst case would take over a week. Now most puzzles are solved in seconds, and the worst case topped out at "merely" a few tens of hours.
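
The project itself is C (it's in the Github repository mentioned at the end of this post), but the Manhattan Distance heuristic is simple enough to sketch in a few lines. Purely as an illustration of the idea, and not the solver's actual code, here it is in JavaScript: for every tile except the blank, add up how many rows and columns it sits away from its goal position.

```javascript
// Illustrative sketch of the Manhattan Distance heuristic for a 4x4 sliding
// tile puzzle -- not the actual C solver code from the project.
// `board` is an array of 16 values where 0 is the blank; the goal state is
// assumed to be tiles 1..15 in order with the blank in the last spot.
function manhattanDistance(board) {
  let distance = 0;
  for (let i = 0; i < 16; i++) {
    const tile = board[i];
    if (tile === 0) continue;                    // the blank doesn't count
    const goal = tile - 1;                       // goal index of this tile
    distance += Math.abs(Math.floor(i / 4) - Math.floor(goal / 4))   // rows apart
              + Math.abs((i % 4) - (goal % 4));                      // columns apart
  }
  return distance;                               // never overestimates moves needed
}

// Example: a board one move away from solved has a heuristic value of 1.
console.log(manhattanDistance([1,2,3,4, 5,6,7,8, 9,10,11,12, 13,14,0,15])); // 1
```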

Looking to further improve performance, I looked online for advances in heuristics research since the time I graduated, and found several. I decided to implement one named "Walking Distance", devised by Ken'ichiro Takahashi.

From the perspective of algorithmic effectiveness, Walking Distance is a tremendous improvement over Manhattan Distance. It is a far more accurate estimate of solution length. Solving the sliding tile puzzle with the Walking Distance eliminated over 90% of duplicated work within IDA*.

On paper, then, Walking Distance should be many orders of magnitude faster… but my implementation was not. Surprised, I dug into what was going on, and I think I know the answer: CPU cache. The Manhattan Distance algorithm and lookup table would easily fit within the 256 KB L2 cache of my Intel processor. (It might even fit in L1.) The Walking Distance data structures would not fit and would spill into the much slower L3 cache. (Or possibly even main memory.) It also takes more logical operations to perform a table lookup with Walking Distance, but I believe that is less important than the location of the lookup tables themselves.

In any case: with my implementation and running on my computer, it takes about 225 processor cycles to examine a node with Manhattan Distance. In contrast, a Walking Distance node averages over 81 thousand cycles. That’s 363 times longer!

Fortunately, the author was not blind to this. While building the Walking Distance lookup table, Takahashi also built a table that tracks how one lookup state transitions to another in response to a tile move. This meant we perform the full Walking Distance calculation only on startup. After the initial calculation, the updates are very fast using the transition link table, effectively a cache of Walking Distance computation.

Takahashi also incorporated the Inversion Distance heuristic as support. Sometimes the inversion count is higher than the walking distance, and we can use whichever is higher. Like Walking Distance, it comes with a set of optimizations so the updates are faster than a full recalculation.

Lastly, I realized that I neglected to compile with the most aggressive optimization settings. With it, the Manhattan Distance implementation dropped from ~225 cycles down to ~75 cycles per node.

The improvement for Walking Distance was much more drastic. By implementing lookups into the transition table cache, the per-node average dropped from 81 thousand cycles to ~207 cycles per node. With fully optimized code, that dropped further to ~52 cycles per node. Fewer cycles per node, combined with having to explore less than 10% of the nodes, makes Walking Distance a huge winner over Manhattan Distance. One test case that took tens of hours with Manhattan Distance takes tens of minutes with Walking Distance.

That was a fun exercise in low level C programming, a good change of pace from the high-level web tools.

For the curious, the code for this exercise is up on Github, under the /C/ subdirectory.

Minor Derailment Due To Infrastructure

One of the reasons I put Node.js education on hold and started with Ruby on Rails is my existing account at Dreamhost. Their least expensive shared hosting plan does not support Node.js applications. It does support Ruby on Rails, PHP, and a few others, so I started learning about Ruby on Rails instead.

The officially supported version of Ruby (and the associated Ruby on Rails) is very old, but their customer support wiki assured me it could be updated via RVM. However, it wasn't until I paid money and got into the control panel that I learned RVM is not supported on their shared hosting plan.

RVM Requires VPS

At this point I feel like the victim of a bait-and-switch…

So if I want to work with a non-ancient version of Ruby on Rails (and I do) I must upgrade to a different plan. Their dedicated server option is out of the question due to expense, so it’s a choice between their managed Virtual Private Server option or a raw virtual machine via DreamCompute.

In either case, I needn't have paused my study of Node.js, because it would have worked on these more expensive plans. Still, Ruby is a much more pleasant language than JavaScript, and Rails is a much better integrated stack than the free-wheeling Node.js ecosystem. So it wasn't a total loss.

Before I plunk down more money, though, I think I should look into PHP. It was one of the alternatives to Ruby when I learned Node.js wasn't supported on Dreamhost shared hosting. It is a server-side technology available on Dreamhost shared hosting, fully managed and kept up to date. Or at least I think it is! Maybe I'll learn differently as I get into it… again.

Dreamhost offers a 97-day satisfaction guarantee. I can probably use that to get off of shared hosting and move on to a VPS. It's also a chance to find out if their customer service department is any good.

UPDATE 1: Dreamhost allowed me to cancel my hosting plan and refunded my money, zero fuss. Two clicks on the web control panel (plus two more to confirm) and the refund was done. This is pretty fantastic.

UPDATE 2: I found Heroku, a PaaS service that caters to developers working in Rails and other related web technologies. (It started with Ruby on Rails then expanded from there.) For trial and experimentation purposes, there is a free tier of Heroku I can use, and I shall.

Neural network in JavaScript

When I was first introduced to neural networks, they were considered algorithms with extremely expensive computational requirements. Even the most trivial network required a high-end PC with lots of memory and floating-point math capability.

Of course, at the time a high-end PC processor ran at 90 megahertz, 32 megabytes of RAM was considered a lot, and floating-point math required a separate (and expensive) floating-point co-processor.

Now the cell phones in our pockets have faster processors and more memory than those powerful PCs of old. Every current processor has floating-point math capability built in; no extra chip required.

Which means what used to be the domain of specialized programmers, running on expensive hardware, is now possible everywhere: running in a web browser like the TensorFlow playground.

But it’s still hard for a human to grasp what’s going on inside a neural network as it learns and adjusts. While the accessibility of the technology (meaning how easy it is to obtain) has improved, the accessibility of the knowledge (meaning how easy it is to understand) hasn’t.

Computer brains have made great advances in the past years…

Human brains have not.

Upsetting the NPM apple cart

Decades-old words of wisdom from a computer science pioneer, proven true once again.

A distributed system is one in which the failure of a computer you didn’t even know existed can render your own computer unusable.

Leslie Lamport

In a coincidence of perfect timing, my education of NPM yesterday came just in time for me to understand the left-pad incident. The short version is simple enough to understand: unhappy programmer took his ball and went home, causing a lot of other people grief in the process. The bigger picture, though, needed a bit more knowledge to understand.

While going through the NodeSchool.io NPM workshop, I noticed a few things. The workshop used a dummy placeholder registry, but there was really no technical or policy reason why every Jane and Jack couldn't run the same lesson against the global registry, up to and including the fact that they could clean up (un-publish) their NPM package when the workshop was over.

I found that fact curious. Such open accessibility felt fragile, and I wondered what mechanisms were in place to fortify the registry against accidents or abuse. It wouldn't be something covered in a workshop, so I thought I'd find more details of this protection elsewhere.

Nope, I was wrong.

The left-pad story proved that there wasn’t any mechanism in place at all. A hilariously trivial package was yanked, causing many houses of cards to fall down.
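
Just how trivial? The entire package amounted to one small function that pads the left side of a string. The sketch below is my own approximation of the idea for illustration, not the exact source of the yanked package.

```javascript
// My own approximation of what a left-pad style function does -- not the
// exact source of the yanked package.
function leftPad(str, len, ch = " ") {
  str = String(str);
  while (str.length < len) {
    str = ch + str;          // prepend the pad character until long enough
  }
  return str;
}

console.log(leftPad("17", 5, "0")); // "00017"
```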

For all the wonders of NPM, there are downsides that have their share of critics, and this incident kicked the criticism into high gear. The NPM registry owner received a lot of fire from all sides and has pledged to update their procedures to avoid a repeat in the future. But I'm not sure that's enough for the famously anti-authoritarian OSS purists. For every "conflict resolution policy" there will be some who see "ruling with an iron fist."

 

JavaScript closures make my head spin

Coming from a world of strongly typed programming languages, JavaScript is weird. And the deeper I get, the weirder it gets.

I've had brushes with JavaScript closures in my learning to date, and the fragments I saw looked like evil black magic. Today I dove in headfirst to learn more about them. With my newfound knowledge, closures no longer feel like black magic.

It still feels evil, though.

Closures have all the appearance of something that "fell out" of the flexibility of the JavaScript type system. It feels like somebody, in an effort to solve some unrelated problems A, B, and C, accidentally opened a Pandora's box and closures emerged, bringing with them some bizarre behavior and huge potential for difficult-to-diagnose bugs. I'd hate to think it was designed to be that way. I prefer to believe it was an accident.
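
As a concrete taste of both sides, here is a small sketch I put together: a closure quietly keeping a variable alive as useful private state, followed by the classic loop pitfall where several callbacks unexpectedly share the same captured variable.

```javascript
// A closure as useful private state: `count` lives on after makeCounter
// returns, visible only to the returned function.
function makeCounter() {
  let count = 0;
  return function () {
    count += 1;
    return count;
  };
}
const counter = makeCounter();
console.log(counter(), counter(), counter()); // 1 2 3

// The classic pitfall: with `var`, all three callbacks close over the SAME
// variable, so every one of them sees its final value.
const callbacks = [];
for (var i = 0; i < 3; i++) {
  callbacks.push(function () { return i; });
}
console.log(callbacks.map(fn => fn())); // [3, 3, 3] -- probably not what was intended

// Declaring the loop variable with `let` gives each iteration its own binding.
const fixed = [];
for (let j = 0; j < 3; j++) {
  fixed.push(function () { return j; });
}
console.log(fixed.map(fn => fn())); // [0, 1, 2]
```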

Accident or not, it is a very powerful mechanism and people are using it in the world. Which means I will need to be able to read and understand code that uses closures. It is irrelevant whether I personally believe closures are evil.

It'll take a few more rounds of practice before I'm comfortable with the nuances. In the meantime, I'll be reviewing this page frequently, as I found it to be the most helpful. The author emphasizes repeatedly that hands-on experience with real closure code is more illuminating than reading a lot of rigorous, academic-style descriptions of closures. So that's exactly what I intend to do.

 

The best I can hope for is to start feeling comfortable with the power and pitfalls of closures. Maybe I'll even come to appreciate it as a necessary evil.

But I doubt I’ll ever come to think of it as A Good Thing.

Compilation of JavaScript resources

The benefit of JavaScript is that there are a ton of resources. The downside of JavaScript is that there is so much, it’s hard to figure out where to start and who to believe.

After the expected period of beginner fumbling, I now know that a few of the things I had picked up were incorrect. But more importantly, I now know many things have no single Right Answer(™). JavaScript is so flexible that there are many ways to do most things, and not much to differentiate one from another except personal preference.

This makes me wary of advice compiled on somebody’s blog, because that’s really their personal opinion and I don’t know if I necessarily agree with that person’s priorities.

But if the collection of resources was assembled by a group of people, that makes me a little more comfortable. So I was happy to stumble across JSTheRightWay.org.

The name seemed pompous and arrogant, but the introduction made me feel like I'd found a good thing:

This is a guide intended to introduce new developers to JavaScript and help experienced developers learn more about its best practices.

Despite the name, this guide doesn’t necessarily mean “the only way” to do JavaScript.

We just gather all the articles, tips, and tricks from top developers and put it here. Since it comes from exceptional folks, we could say that it is “the right way”, or the best way to do so.

I’ll be coming back to this page quite frequently. May it live up to my hopes!

The other “cloud development”

When I set out on this adventure, I knew I wanted to eventually cover the basics of the major cloud services: write some sample services to run on Amazon Web Services, Microsoft Azure, Google Cloud, and so on.

I was surprised to stumble into an entirely different meaning of "cloud development": writing code in a browser. I had seen the educational coding playgrounds of Codecademy, and I had seen small trial tools like JSFiddle, but I had no idea that was just the tip of the iceberg and things could get much fancier.

I had started a project to practice my newly learned jQuery skills, just with a text editor on my computer, running the HTML straight off the file system. As soon as I learned of these web-based development environments, I wanted to try one out by moving my project over.

The first I tried was Codenvy, whose whitepapers are quite grandiose about what it offers for improving developer productivity. Unfortunately the kinds of development Codenvy supports aren't things I know anything about today. But I'll revisit it in a few weeks to check again.

The second I tried was Cloud 9, which does support simple HTML+CSS+JS projects like what I wanted to do. Working in Cloud 9 gave me tools for static analysis and served my files off a real web server. It also integrates with Github, preserving my source control workflow.

After a JavaScript project of around 300 lines of code, I can comfortably say I’m quite impressed. In the areas of development-time tooling and integration experience, it far exceeded my expectations. However, there was an area of disappointment: the debugging experience was either hard to find or just wasn’t there at all.

When my little project went awry, I resorted to loading it up in a separate browser window and using the web browser's debugger. This is on par with simpler tools like JSFiddle and JSBin. I had hoped for better.

I’m cautiously optimistic I’ll find a tool with better debugging experience as I continue to explore.