Notes on Google I/O 2023: Angular Signals

It was reassuring to learn that the Google Chrome team is continuing to invest in debugging tools for web developers, but I need to practice writing more web apps before I have anything to debug. My final set of Google I/O presentations focuses on the Angular framework, which I’ve been learning about and intend to use for future projects.

What’s new in Angular

The biggest news for Google I/O 2023 is the release of Angular 16, and the rundown included a mixture of features I understood and features I did not. One of the latter is “Angular Standalone”, which I knew was introduced in an early state in Angular 15. I had mentally filed it away as an advanced feature I could postpone until later, but these presenters said it was intended to reduce the Angular learning curve. Oh yeah? Maybe I shouldn’t have shelved it. I should take a closer look.

New to Angular 16 is “Angular Signals”. I saw mentions of this when I was learning RxJS, but all I saw at the time was excitement from reactive programming proponents. Listening to its overview in this presentation, I quickly realized this was Angular catching up to what Vue.js already had with its reactive data system. For more details, we were pointed to another session…

Rethinking reactivity with Signals

This session covered Angular Signals in more detail. Conceptually, I understood an Angular signal to be the equivalent of a field in Vue data(). An Angular “computed” is the counterpart to a member of Vue’s computed, changing in response to signal changes. And finally, Angular’s “effect” is analogous to Vue’s watch, executing code in response to signal changes.

While the concepts may be similar, Angular’s implementation has some significant differences. For example, Angular is all-in on TypeScript, and incorporating type information into Angular Signals didn’t feel like a hacked-on afterthought the way it did for Vue data. As a fan of TypeScript, I found this very encouraging.
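
To make the comparison concrete, here is a minimal sketch of what those three primitives look like, based on the Angular 16 signals API. The counter component is a made-up example of mine, not code from the presentation:

    import { Component, signal, computed, effect } from '@angular/core';

    @Component({
      selector: 'app-counter',
      standalone: true,
      template: `<button (click)="increment()">{{ count() }} doubled is {{ double() }}</button>`,
    })
    export class CounterComponent {
      count = signal(0);                          // roughly a field in Vue data()
      double = computed(() => this.count() * 2);  // roughly a member of Vue computed

      constructor() {
        // roughly Vue watch: re-runs whenever a signal it reads changes
        effect(() => console.log(`count is now ${this.count()}`));
      }

      increment() {
        this.count.update((value) => value + 1);  // writes go through set() or update()
      }
    }

Reading a signal is a function call, and every write goes through set() or update(), which is how Angular knows precisely which computed values and effects need to re-run.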

Getting started with Angular Signals

After the concepts of Angular Signals were introduced and explained, this presentation followed up the theory with some hands-on practice. It is a companion to the code lab on Angular signals, which builds a sample cipher game. I want to go through that code lab myself, and I expect I can learn even more from the exercise beyond the main lesson of Angular signals. But first, I’m curious to learn what GPU programming looks like with WebGPU.

Notes on Google I/O 2023: Browser Debugging

It was fun to see which advanced browser capabilities Google wanted to call out in their I/O conference, but advanced capabilities or not, developers need to test and debug code. I checked into a few sessions about Chrome Developer Tools.

Learn how Chrome DevTools features help developers debug effectively

That title is a pretty big promise covering a huge area, so naturally a 14-minute video couldn’t deliver everything. The title page showed a more modest and accurate title “Reduce debugging friction.”

One thing I noticed when I started working with web frameworks like Angular and Vue is that I can’t find the code I wrote anywhere. Part of this is expected: TypeScript is not directly executed but translated into JavaScript for the browser. But that translated code is buried within a lot of framework code.

To make debugging easier, the Chrome team recognizes the reality that what the developer sees in the browser debug window bears little resemblance to what they wrote. C developers know well that when things crash into the debugger, we’re looking at optimized assembly code and not our C source code. Debug symbols allow C code to be matched up against its assembly output, and similarly browsers support “Source Maps” to provide this correlation between the “Authored View” (the code the developer wrote) and the “Deployed View” (the code being executed by the browser).

A few other browser debug features were covered: the ability to mark code to be ignored, useful for decluttering our screen of library code we don’t care about at the moment. Native code debugging concepts like conditional breakpoints are also available in the browser debugger. Another breakpoint derivative is the “logpoint”, which has all the usefulness of adding console.log() to our code without having to modify the code.

There was a brief blurb about a recorder function, whose output can then be used by Puppeteer. I thought this was a great way to document steps to reproduce bugs, making it much less ambiguous than writing text in a bug ticket. A little bit later, I realized this also meant we could incorporate those recorded steps into an automated regression test. Now that would be awesome. And speaking of browser automation…
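
I haven’t tried the recorder’s export myself, but a replayed flow would presumably look something like this Puppeteer sketch, where the URL and selectors are placeholders I made up:

    import puppeteer from 'puppeteer';

    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();

      // Each recorded user action becomes a scripted step that can run unattended.
      await page.goto('https://example.com/login');
      await page.type('#username', 'testuser');
      await page.click('#submit');

      await browser.close();
    })();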

WebDriver BiDi: The future of cross-browser automation

This presentation gave us a quick overview of browser automation past, present, and future, the future being represented by the WebDriver BiDi browser automation protocol. It is still in development, but on track to become a cross-browser solution (not just for Chrome). I’ve barely dabbled in Selenium, but I knew enough to understand that bidirectional communication between the test host and the browser under test will open up a lot of benefits, making tests more consistent and wasting less time waiting.

Build better forms

Here’s another session with a title far more grandiose than the actual topic. I have an alternate title: “How to make your site work with Chrome Autofill.” There’s more than Google’s self-interest at play, though. A form crafted so that Chrome autofill recognizes its semantics is also a form the browser’s accessibility tools can understand. Two birds, one stone. Most of this session boils down to following the standardized form autocomplete hints, plus the Chrome developer tools that help you get there. I’ve barely done any web development and I’ve already developed a dislike for how complex and annoying forms can be. Every framework tries to make forms less annoying, and I’m sure Angular has something to offer there, but first I want to see what’s new and shiny in Angular.
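
For reference, those hints are just standardized token values on each form field. Here is a minimal sketch with hypothetical fields, written against the DOM API, though the same tokens can simply be authored as autocomplete attributes in HTML:

    // Hypothetical checkout fields; the autocomplete tokens are the standardized part.
    const email = document.createElement('input');
    email.type = 'email';
    email.name = 'email';
    email.autocomplete = 'email'; // read by Chrome autofill and accessibility tools alike

    const shippingZip = document.createElement('input');
    shippingZip.type = 'text';
    shippingZip.name = 'zip';
    shippingZip.autocomplete = 'shipping postal-code'; // section token plus field token

    document.querySelector('form')?.append(email, shippingZip);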

Notes on Google I/O 2023: Advanced Web Browser Capabilities

After going through several Google I/O sessions on advancements in browser UI and animation, I headed over to a different section of web browser evolution: advanced hardware capabilities for performance and/or connectivity.

Introducing WebGPU

WebGPU recently became publicly available in a mainstream browser in the form of Chrome 113, and varying levels of support are underway for other browsers. This is a topic with interest and support from many different parties, and I’m cautiously optimistic it will become widespread in the future. WebGPU is an evolution from WebGL, following the precedent of desktop APIs Vulkan/Direct3D 12/Metal as evolutions of OpenGL.

One major aspect of this evolution was growing beyond just drawing operations, opening up hardware capabilities to non-drawing applications. Today this interest is focused on running machine learning algorithms, but it is more general-purpose than that. (Side question: how does that relate to WebML? Something to look into later.) Today people have shoehorned ML workloads into WebGL by rephrasing computation as drawing operations, with associated limitations and conversion overhead. WebGPU eliminates such pretense.
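
Even the first few lines of a WebGPU program reflect that shift: there is no canvas or drawing context required just to get a compute-capable device. A minimal sketch, assuming a WebGPU-capable browser (and the @webgpu/types package if you want TypeScript to know about navigator.gpu):

    // Acquire a GPU device without ever touching a canvas.
    if (!navigator.gpu) {
      throw new Error('WebGPU is not available in this browser');
    }
    const adapter = await navigator.gpu.requestAdapter();
    if (!adapter) {
      throw new Error('No suitable GPU adapter found');
    }
    const device = await adapter.requestDevice();

    // From here, compute work goes through device.createComputePipeline() and
    // command encoders, with no need to phrase it as drawing operations.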

Another major aspect is elimination of global state. Global state was fine when the focus of OpenGL was to support a single CAD rendering window, but very problematic in today’s world of large code modules running in parallel on different tasks.

One distinction between WebGPU and its desktop counterparts is that absolute raw performance does not override everything else. Some performance was left on the table in favor of gaining consensus across partners in standardization, in order to reach better cross-platform compatibility. I found this very interesting and promising for the future.

This presentation had an associated Codelab “Your first WebGPU App” to build a parallelized Conway’s Game of Life, no prior GPU experience required. I don’t know what I might use WebGPU for, but I’ll add this Codelab to my to-do list just to see what WebGPU is like firsthand.

WebAssembly: a new development paradigm for the web

In contrast to the hot-from-the-oven WebGPU, WebAssembly has been around for a while, letting browsers run code beyond JavaScript. The initial scenario was to allow legacy C code to be compiled into WebAssembly for high-performance browser execution, and that approach has found success. I knew about Google’s TensorFlow AI runtime running on WebAssembly, but I hadn’t known there were also ports of codebases like OpenCV and FFmpeg. That might be worth looking into later.
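
Consuming such a module from the JavaScript side is only a few lines. A minimal sketch, where the module name and its exported add function are placeholders I made up:

    // Fetch, compile, and instantiate a WebAssembly module in one call.
    const { instance } = await WebAssembly.instantiateStreaming(
      fetch('module.wasm'),
      {} // import object: any functions or memory the module expects from the host
    );

    // Call an exported function as if it were ordinary JavaScript.
    const add = instance.exports.add as (a: number, b: number) => number;
    console.log(add(2, 3)); // 5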

With the success of C and similar code that manages its own memory, recent attention has turned to supporting managed-memory code efficiently. Beyond JavaScript, that is, since JavaScript is already a language where the runtime manages memory on behalf of the developer. And when another such runtime is in the WebAssembly box, that means two different memory managers. At best they duplicate effort and waste memory; at worst, two separate reference-counting or garbage-collection systems risk falling out of sync and causing memory errors. This presentation gave a few examples of the kinds of messes we can get into, now resolved with the latest WebAssembly evolution putting all memory management under a single system.

One thing thrown out as an aside was a comment about “Node.js or other webasm server environments”, with no further elaboration in the presentation. WebAssembly on the server? Sure, why not. I don’t know all the reasons why people might be interested, but I also didn’t expect Node.js (JavaScript on the server) to be interesting and I was very wrong. If server-side WebAssembly takes off, I’ll cross paths with it soon enough.

Advanced web APIs in real world apps

I watched this presentation because I was curious about the latest progress in Project Fugu, Google’s effort to bridge the capability gap between web apps and native apps. The magnetometer web API, which I recently played with, was done under this umbrella, as were many other electronics-hobbyist-friendly technologies like Web Serial and Web USB.

This session didn’t cover any capabilities that were personally relevant to me. They were selections from the Project Fugu Showcase, demonstrating capabilities like accessing the local file system, using fonts installed on the system, etc. No hardware-related fun this time, oh well. I moved on to a different topic: tools to help debug web apps.

Notes on Google I/O 2023: CSS Viewport and Animations

After watching a few Google I/O presentations strictly for curiosity’s sake, I proceeded to a few sessions that might have some relevance on near-term project ideas. First up are a few sessions that cover new developments in web browser capabilities. Generally speaking, I expect I’d only use most of these indirectly via libraries, but it’s neat to see what’s possible.

What’s new in web

This was a quick overview of new web development capabilities that, as tracked by Web Platform Baseline, have enough support to become a realistic option for deploying in production. The part that caught my attention was the new CSS viewport units that let us account for space consumed by browser UI. More details were promised in another session, “What’s new in Web UI”, so that’s where I went next.

What’s new in Web UI

I got a little more information about CSS viewport units here: not just small (with all browser UI present) and large (as if all browser UI were absent) but also dynamic (adjusting as UI pieces move in and out). Nice! Related to this front are container queries, which allow layout decisions to be made at a finer-grained level: the parent container, instead of the entire viewport.

CSS nesting folds a major advantage of Sass into standard CSS. Cascade layers and scoped styles allow fine control over style sheet cascading and avoid collisions in the face of modern web platforms that combine all styles from components and bundle them into a single download.

We’ve always had window.alert() to ask the browser to pop up a modal dialog box, but it’s very crude. Trying to recreate that illusion inside our own web page required a lot of nasty DOM hackery. Popovers are still experimental, but they promise all the refinement of HTML under the author’s control while letting the browser and operating system handle all the windowing complexity.

A few quick demonstrations of nifty animations were given, with a pointer to another session, “What’s new in web animations”.

What’s new in web animations

This presenter opened with a great manifesto on how things should work: “animations that help users build an accurate mental model of how the interface works, thereby increasing the overall usability.” When animation works well, the user rarely even notices its presence. What we usually notice, to our annoyance, is the overzealous use of animation getting in our way!

One incoming experimental feature is the View Transitions API for managing CSS animation effects as they apply to elements entering and leaving the DOM. It caught my attention because this would be a standardized version of what I just saw in Vue with <Transition>, and something I’ve found mentioned in Angular Developer Guides as well.
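
The API surface is small: wrap the DOM update in a callback and the browser animates between the before and after states. A minimal sketch, assuming a browser that ships the experimental API; swapContent() is a placeholder for whatever adds or removes elements:

    function swapContent(): void {
      // ...update the DOM here: add, remove, or rearrange elements...
    }

    function navigateWithTransition(): void {
      // Fall back to an instant swap where the API is unavailable.
      if (!('startViewTransition' in document)) {
        swapContent();
        return;
      }
      // The browser snapshots the old and new states and animates between them with CSS.
      (document as Document & { startViewTransition(update: () => void): unknown })
        .startViewTransition(swapContent);
    }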

Most of the effects demonstrated here are things I’ve seen online with other websites, but now it can be done strictly with CSS. No JavaScript required, which is great for moving workloads off the main thread.

These are all good things coming down the line for visual layout, but usually I’m more interested in hardware interfacing browser capabilities.

Notes on Google I/O 2023: AR, Material 3, ChromeOS Kiosk

Google I/O 2023 developer conference materials are available online free of charge. Back when they were in-person events with an admission fee, I never felt I would benefit enough to be worth the cost and effort. But now the price is but a few hours of my time, so I looked around to see what information I could absorb. I started with sessions that were just for curiosity, with no short-term plans to get hands-on.

What’s new in Google AR

I examined Google’s ARCore earlier with an interest in adopting its structure-from-motion capabilities for robotics projects. I had hoped its “recognize things in the room for users to interact with” capability could be used for a robot to “recognize things in the room so the robot doesn’t run into them.” The two intents were different enough that I didn’t see a promising path forward. I thought I’d watch this presentation to see if that has changed. The answer is “No”, but it was still fun to see what they’re up to.

This video explained they are focusing on users wandering outdoors in an urban environment. Think Google Street View but augmented in 3D with “Streetscape Geometry”. Their updated “Geospatial Depth” and “Scene Semantics API” are optimized to work with street-scale landmarks, not indoor rooms like I wanted for my robots. There’s a separate session on the “Geospatial Creator” tool available to create AR content at this scale. As part of I/O they’ve released an AR demo, “Mega Golf”, that lets you play a game of golf through your real-world cityscape. Another showcase sometime later this year will be a Pokémon Go-style AR update to an old classic, “Space Invaders World Defence.” I’ll probably give those apps a whirl but won’t do much more until/unless a cool project idea arises.

What’s New in Material Design

I’ve long been fascinated watching Google evolve their “Material Design.” Their latest push is in creating “Spirited UI” to evoke positive user emotions via animation. They’re also placing an emphasis on letting individual designers establish their own unique look, deviating from rigid design rules. “Instead of laying down rules, it’s laying down a beat.”

I got pretty lost in the artistic design language, but I understood the engineering aspects. The primary platform for this design team is Android Jetpack Compose, followed by View and Flutter. Web is… somewhere after that and not mentioned in the presentation. I’ll keep an eye out for future developments.

Developing Kiosk Apps for ChromeOS

I’ve been interested in building web apps for single focused tasks. My Compass practice project is pretty kiosk-ish with its singular focus and full-screen experience. There was also an experiment earlier investigating using a Raspberry Pi to serve up a kiosk experience. I wanted to check out what ChromeOS has to offer on this front.

I only got partway through the session, stopping after they listed a ChromeOS management license requirement to enable kiosk functionality: either Chrome Enterprise (nope), Chrome Education (nope), or a Kiosk & Signage license at $25 per device per year. That’s more than I’m willing to pay for idle curiosity, so I’m moving onward to other presentations.

Sawppy at Space-Themed Episode of Hangout & NERDOUT

Roughly twenty-four hours from now, around 7PM Eastern time (4PM Pacific) on December 15th, 2022, I should be starting a chat with several other makers on a space-themed episode of Hackster.io/Make Hangout & NERDOUT. I will be one of three guest nerds invited to talk about their space-themed projects. Sawppy the Rover will be my topic for a ten-minute presentation, alongside similar presentations by the other guests. Then it’ll be an open Q&A where people can ask questions of the presenters (and presenters ask questions of each other!)

Sawppy has been a great adventure and it will be a challenge to compress the full story down to ten minutes, but I’ll give it my best shot. There’ll be a bit of Sawppy’s past, some of the rover present, and a look towards the future. The Q&A session will be very informative in telling me which aspects of Sawppy catch people’s interest. Or if one or both of the other two presentations turn out to be more interesting to the audience, that’ll tell me something too!

Hackster.io landing page for the event: https://www.hackster.io/news/hangout-nerdout-ep-4-on-december-15th-goes-out-into-space-b633c0b485e8

The rudimentary PowerPoint slide deck I created for this event is publicly visible here: “20221215 Hangout Nerdout”.

Links shared over chat during the event (for all presenters, not just Sawppy): https://www.one-tab.com/page/OINy1FRqQiKasbZWUjtaww

The Zoom Events session was recorded, and I believe the intent is for it to be published at some point in the future. When that happens, I will see if I can embed the video here.

Mars 2020 Perseverance Surface Operations Begin

I’m interrupting my story of micro Sawppy evolution today to send congratulations to the Mars 2020 entry/descent/landing (EDL) team on successful mission completion! As I type this, telemetry confirms the rover is on the surface and the first image from a hazard camera has been received showing the surface of Mars.

Personally, I was most nervous about the components which are new for this rover, specifically the Terrain Relative Navigation (TRN) system. Not that the rest of the EDL was guaranteed to work, but at least many of the systems were proven to work once with Curiosity EDL. As I read about the various systems, TRN stood out to be a high-risk and high-reward step forward for autonomous robotic exploration.

When choosing Mars landing sites, past missions had to pick areas that are relatively flat with minimal obstacles to crash into. Unfortunately those properties also make for a geologically uninteresting area for exploration. Curiosity rover spent a lot of time driving away from its landing zone towards scientifically informative landscapes. This was necessary because the landing site is dictated by a lot of factors beyond the mission’s control, adding uncertainty to where the actual landing site will be.

TRN allows Perseverance to explore areas previously off-limits by turning landing from a passive into an active process, adding an element of control. Instead of just accepting a vague location dictated by unknown Martian winds and other elements of uncertainty, TRN has cameras that look at the terrain and can maneuver the rover to a safe location, avoiding the nastier (though probably interesting!) parts of the landscape. While it has a set of satellite pictures for reference, they were taken at a much higher altitude than what it will see through its own cameras. Would it get confused? Would it be unable to make up its mind? Would it confidently choose a bad landing site? There are so many ways TRN could go wrong, but the rewards of TRN success mean a far more scientifically productive mission, making the risk worthwhile. And once it works, TRN successors will let future missions go places they couldn’t have previously explored. It is a really big deal.

Listening to the mission coverage, I was hugely relieved to hear “TRN has landing solution.” For me that was almost as exciting as hearing the rover is on the ground and seeing an image from one of the navigation hazard cameras. The journey is at an end, the adventure is just beginning.

[UPDATE: Video footage of Perseverance landing has shown another way my Sawppy rovers successfully emulated behavior of real Mars exploration rovers.]

“Surface operations begin” signals transition to the main mission on the surface of another planet. A lot of scientists are gearing up to get to work, and I return to my little rovers.

Remaining To-Do For My Next Unity 3D Adventure

I enjoyed my Unity 3D adventure this time around, starting from the LEGO microgame tutorials through to the Essentials pathway and finally venturing out and learning pieces at my own pace in my own order. My result for this Unity 3D session was Bouncy Bouncy Lights and while I acknowledge it is a beginner effort, it was more than I had accomplished on any of my past adventures in Unity 3D. Unfortunately, once again I find myself at a pause point, without a specific motivation to do more with Unity. But there are a few items still on the list of things that might be interesting to explore for the future.

The biggest gap I see in my Unity skill is creating my own unique assets. Unity supports 2D and 3D creations, but I don’t have art skills in either field. I’ve dabbled in Inkscape enough that I might be able to build some rudimentary things if I need to, and for 3D meshes I could import STL so I could apply my 3D printing design CAD skills. But the real answer is Blender or similar 3D geometry creation software, and that’s an entirely different learning curve to climb.

Combing through Unity documentation, I learned of a “world building” tool called ProBuilder. I’m not entirely sure exactly where it fits in the greater scheme of things, but I can see it has tools to manipulate meshes and even create them from scratch. It doesn’t claim to be a good tool for doing so, but supposedly it’s a good tool for whipping up quick mockups and placeholders. Most of the information about ProBuilder is focused on UV mapping, but I didn’t know that at the start. All the ProBuilder documentation assumes I already knew what UV meant, and all I could tell is that UV didn’t mean ultraviolet in this context. Fortunately, searching for UV in the context of 3D graphics gives me this Wikipedia article on UV mapping. There is a dearth of written documentation for ProBuilder; what little I found all points to a YouTube playlist. Maybe I’ll find the time to sit through them later.

I skimmed through the Unity Essentials sections on audio because Bouncy Bouncy Lights was to be silent, so audio is still on the to-do list. And like 2D/3D asset creation, I’m neither a musician nor a sound engineer. But if I ever come across motivation to climb this learning curve, I know where to go to pick up where I left off. I know I have a lot to learn, since my meager audio experimentation already produced one lesson: AudioSource.Play stops any prior playback of the sound on that source. If I want instances of the same sound to overlap, I have to use AudioSource.PlayOneShot.

Incorporating video is an interesting way to make Unity scenes more dynamic, without adding complexity to the scene or overloading the physics or animation engine. There’s a Unity Learn tutorial about this topic, but I found that video assets are not incorporated in WebGL builds. The documentation said video files must be hosted independently for playback by WebGL, which adds to the hosting complications if I want to go down that route.

WebGL
The Video Clip Importer is not used for WebGL game builds. You must use the Video Player component’s URL option.

And finally, I should set aside time to learn about shaders. Unity’s default shader is effective, but it has become quite recognizable and there are jokes about the “Unity Look” of games that never modified default shader properties. I personally have no problem with this, as long as the gameplay is good. (I highly recommend Overcooked game series, built in Unity and have the look.) But I am curious about how to make a game look distinctive, and shaders are the best tool to do so. I found a short Unity Learn tutorial but it doesn’t cover very much before dumping readers into the Writing Shaders section of the manual. I was also dismayed to learn that we don’t have IntelliSense or similar helpfulness in Visual Studio when writing shader files. This is going to be a course of study all on its own, and again I await good motivation for me to go climb that learning curve.

I enjoyed this session of Unity 3D adventure, and I really loved that I got far enough this time to build my own thing. I’ve summarized this adventure in my talk to ART.HAPPENS, hoping that others might find my experience informative in video form in addition to written form on this blog. I’ve only barely scratched the surface of Unity. There’s a lot more to learn, but that’ll be left to future Unity adventures because I’m returning to rover work.

Venturing Beyond Unity Essentials Pathway

To help beginners learn how to create something simple from scratch, Unity Learn set up the Essentials pathway, which I followed. Building from an empty scene taught me a lot of basic tasks that were already done for us in the LEGO microgame tutorial template. Enough that I felt ready to start building my own project for ART.HAPPENS. It was a learning exercise, running into one barrier after another, but I felt confident I knew the vocabulary to search for answers on my own.

Exercises in the Essentials pathway got me started on the Unity 3D physics engine, with information about setting up colliders and physics materials. Building off the rolling ball exercise, I created a big plane for balls to bounce around in and increased the bounciness for both ball and plane. The first draft was a disaster, because unlike real life it is trivial to build a perfectly flat plane in a digital world, so the balls keep bouncing in the same place forever. I had to introduce a tilt to make the bounces more interesting.

But while bouncing balls look fun (title image) they weren’t quite good enough. I thought adding a light source might help but that still wasn’t interesting enough. Switching from ball to cube gave me a clearly illuminated surface with falloff in brightness, which I thought looked more interesting than a highlight point on a ball. However, cubes don’t roll and would stop on the plane. For a while I was torn: cubes look better but spheres move better. Which way should I go? Then a stroke of realization: this is a digital world and I can change the rules if I want. So I used a cube model for visuals, but attached a sphere model for physics collisions. Now I have objects that look like cubes but bounce like balls. Something nearly impossible in the real world but trivial in the digital world.

To make these lights show up better, I wanted a dark environment. This was a multi-step procedure. First I did the obvious: delete the default light source that came with the 3D project template. Then I had to look up environment settings to turn off the “Skybox”. That still wasn’t dark, until I edited camera settings to change default color to black. Once everything went black I noticed the cubes weren’t immediately discernable as cubes anymore so I turned the lights back up… but decided it was more fun to start dark and turned the lights back off.

I wanted to add user interactivity but realized the LEGO microgame used an entirely different input system than standard Unity, and nothing on the Essentials path taught me about user input. Searching around on Unity Learn, I got very confused by contradictory information until I eventually figured out there are two Unity user input systems. There’s the “Input Manager”, which is the legacy system, and its candidate replacement, the “Input System Package”, which is intended to solve problems with the old system. Since I had no investment in the old system, I decided to try the new one. Unfortunately, even though there’s a Unity Learn session, I still found it frustrating, as did others. I got far enough to add interactivity to Bouncy Bouncy Lights but it wasn’t fun. I’m not even sure I should be using it yet, seeing how none of the microgames did. Now that I know enough to know what to look for, I could see that the LEGO microgame used the old input system. Either way, there’s more climbing of the learning curve ahead. [UPDATE: After I wrote this post, but before I published it, Unity released another tutorial for the new input system. Judging by that demo, Bouncy Bouncy Lights is using it incorrectly.]

The next to-do item was to add the title and interactivity instructions. After frustration with exploring a new input system, I went back to LEGO microgame and looked up exactly how they presented their text. I learned it was a system called TextMesh Pro and thankfully it had a Unity Learn section and a PDF manual was installed as part of the asset download. Following those instructions it was straightforward to put up some text using the default font. After my input system frustration, I didn’t want to get much more adventurous than that.

I had debated when to present the interactivity instructions. Ideally I would present them just as the audience gets oriented and recognizes the default setup, possibly starting to get bored and ready to move on, so I can give them interactivity to keep their attention. But I have no idea when that would be. When I read the requirement that the title of the piece should be in the presentation, I added that as a title card shown before the bouncing lights. And once I added a title card, it was easy to add another card with the instructions, also shown before the bouncing lights. The final twist was the realization I shouldn’t present them as static cards that fade out: since I already had all these physical interactions in place, they are presented as falling, bouncing objects in their own right.

The art submission instructions said to put in my name and a way for people to reach me, so I put my name and newscrewdriver.com at the bottom of the screen using TextMesh Pro. Then it occurred to me the URL should be a clickable link, which led me down the path of finding out how a Unity WebGL title can interact with the web browser. There seemed to be several different deprecated ways to do it, but they all point to the current recommended approach, and now my URL is clickable! For fun, I added a little spotlight effect when the mouse cursor is over the URL.

The final touch is to modify the presentation HTML to suit the Gather virtual space used by ART.HAPPENS. By default Unity WebGL build generates an index.html file that puts the project inside a fixed-size box. Outside that box is the title and a button to go full screen. I didn’t want the full screen option for presenting this work in Gather, but I wanted to fill my <iframe> instead of a little box within it. My CSS layout skills are still pretty weak and I couldn’t figure it out on my own, but I found this forum thread which taught me to replace the <body> tag with the following:

  <body>
      <div class="webgl-content" style="width:100%; height:100%">
        <div id="unityContainer" style="width:100%; height:100%">
        </div>
      </div>
  </body>

I don’t understand why we need to put 100% styles on two elements before it works, but hopefully I will understand whenever I get around to my long-overdue study session on CSS layout. The final results of my project can be viewed at my GitHub Pages hosting location. That is a satisfying result, but there is a lot more of Unity to learn.

Notes on Unity Essentials Pathway

As far as Unity 3D creations go, my Bouncy Bouncy Lights project is pretty simple, as expected of a beginner’s learning project. My Unity (re)learning session started with their LEGO microgame tutorial, but I didn’t want to submit a LEGO-derived Unity project for ART.HAPPENS. (And it might not have been legal under the LEGO EULA anyway.) So after completing the LEGO microgame tutorial and its suggested Creative Mods exercises, I still had more to learn.

The good news is that Unity Learn has no shortage of instruction materials; the bad news is a beginner gets lost on where to start. To help with this, they’ve recently (or at least since the last time I investigated Unity) rolled out the concept of “Pathways”, which organize a set of lessons targeted at a particular audience. People looking for something after completing their microgame tutorial are sent towards the Unity Essentials Pathway.

Before throwing people into the deep pool that is Unity Editor, the Essentials Pathway starts by setting us up with a lot of background information in the form of video interview clips with industry professionals using Unity. I preferred to read instead of watching videos, but I wanted to hear these words of wisdom so I sat through them. I loved that they allocated time to assure beginners that they’re not alone if they found Unity Editor intimidating at first glance. The best part was the person who claimed their first experience was taking one look and said “Um, no.” Closed Unity, and didn’t return for several weeks.

Other interviews covered the history of Unity, how it enabled creation of real-time interactive content, and how the tool evolved alongside the industry. There was also information for people who are interested in building a new career using Unity, introducing terminology and even common job titles that can be used to query sites like LinkedIn. I felt this section offered more applicable advice for that job field than I ever received in college for mine. I was mildly amused and surprised to see Unity classes ended with a quiz to make sure I understood everything.

After this background we are finally set loose on Unity Editor, starting from scratch. Well, an empty 3D project template, which is as close to scratch as I cared to get. The template has a camera and a light source but not much else, unlike the microgames which are already filled with assets and code. This is what I wanted to see: how do I start from geometry primitives and work my way up, pulling from the Unity Asset Store as needed for useful prebuilt pieces? One of the exercises was to make a ball roll down a contraption of our design (title image) and I paid special attention to this interaction. The Unity physics engine was the main reason I chose to study Unity instead of three.js or A-Frame, and it became the core of Bouncy Bouncy Lights.

I’ve had a lot of experience writing C# code, so I was able to quickly breeze through the C# scripting portions of Unity Essentials. But I’m not sure this is enough to get a non-coder up and running on Unity scripting. Perhaps Unity decided they’re not a coding boot camp and didn’t bother to start at the beginning. People who have never coded before will need to go elsewhere before coming back to Unity scripting, and a few pointers would be nice.

I skimmed through a few sections that I decided were unimportant for my ART.HAPPENS project. Sound was one of them: very important for an immersive gaming experience, but my project will be silent because the Gather virtual space has a video chatting component and I didn’t want my sounds to interfere with people talking. Another area I quickly skimmed through was using Unity for 2D games, which is not my goal this time, but perhaps I’ll return to it later.

And finally, there was information pointing us to Unity Connect and setting up a profile. At first glance it looked like Unity tried to set up a social network for Unity users, but it is shutting down with portions redistributed to other parts of the Unity network. I had no investment here so I was unaffected, but it made me curious how often Unity shuts things down. Hopefully not as often as Google, who have become infamous for doing so.

I now have a basic grasp on this incredibly capable tool, and it’s time to start venturing beyond guided paths.

Bouncy Bouncy Lights

My motivation for going through Unity’s LEGO microgame tutorial (plus associated exercises) was to learn Unity Editor in the hopes of building something for ART.HAPPENS, a community virtual art show. I didn’t expect to build anything significant with my meager skills, but I made something with the skill I have. It definitely fit with the theme of everyone sharing works that they had fun with, and learned from. I arrived at something I felt was a visually interesting interactive experience which I titled Bouncy Bouncy Lights and, if selected, should be part of the exhibition opening today. If it was not selected, or if the show has concluded and its site taken down, my project will remain available at my own GitHub Pages hosting location.

There are still a few traces of my original idea, which was to build a follow-up to Glow Flow: something colorful with Pixelblaze-controlled LED lights. But I decided to move from the physical to the digital domain, so now I have random brightly colored lights in a dark room, each reflecting off an associated cube. By default there isn’t enough light for the viewer to immediately see the whole cube, just the illuminated face. I want them to observe the colorful lights moving around for a bit before they recognize what’s happening, prompting the delight of discovery.

Interactivity comes in two forms: arrow keys will change the angle of the platform, which will change the direction of the bouncing cubes. There is a default time interval for new falling cubes. I chose it so that there’ll always be a few lights on screen, but not so many as to make the cubes obvious. The user can also press the space bar to add lights faster than the default interval. If the space bar is held down, the extra lights will add enough illumination to make the cubes obvious, and they’ll frequently collide with each other. I limited it to a certain rate because the aesthetics change if too many lights all jump in. Thankfully I don’t have to worry about things like ensuring sufficient voltage supply for lights when working in the digital world, but too many lights in the digital world add up to white, washing out the individual colors to a pastel shade. And too many cubes interfere with bouncing, and we get an avalanche of cubes trying to get out of each other’s way. It’s not the look I want for the project, but I left in a way to do it as an Easter egg. Maybe people would enjoy bringing it up once in a while for laughs.

I’m happy with how Bouncy Bouncy Lights turned out, but I’m even happier with it as a motivation for my journey learning how to work with a blank Unity canvas.

Notes on Unity LEGO Microgame Creative Mods

Once a Unity 3D beginner completed a tightly-scripted microgame tutorial, we are directed towards a collection of “Creative Mods”. These suggested exercises build on top of what we created in the scripted tutorial. Except now individual tasks are more loosely described, and we are encouraged to introduce our own variations. We are also allowed to experiment freely, as the Unity Editor is no longer partially locked down to keep us from going astray. The upside of complete freedom is balanced by the downside of easily shooting ourselves in the foot. But now we know enough to not do that, or know how to fix it if we do. (In theory.)

Each of the Unity introductory microgames has its own list of suggested modifications, and since I just completed the LEGO microgame I went through the LEGO list. I was mildly surprised to see this list grow while I was in the middle of working through it — as of this writing, new suggested activities are still being added. Some of these weren’t actually activities at all, such as one entirely focused on a PDF (apparently created from PowerPoint) serving as a manual for the list of available LEGO Behaviour Bricks. But most of the others introduce something new and interesting.

In addition to the LEGO themed Unity assets from the initial microgame tutorial, others exist for us to import and use in our LEGO microgame projects. There was a Christmas-themed set with Santa Claus (causing me to run through Visual Studio 2019 installer again from Unity Hub to get Unity integration), a set resembling LEGO City except it’s on a tropical island, a set for LEGO Castles, and my personal favorite: LEGO Space. Most of my personal LEGO collection were from their space theme and I was happy to see a lot of my old friends available for play in the digital world.

When I noticed the list of activities grew while I was working on them, it gave me the feeling this was a work in progress. That feeling continued when I imported some of these asset collections and fired up their example scenes. Not all of them worked correctly, with most problems centered around how LEGO pieces attach to each other, especially the Behaviour Bricks. Models detach and break apart at unexpected points. Sometimes I could fix it by using the Unity Editor to detach and re-attach bricks, but not always. This brick attachment system is not a standard Unity Editor feature but an extension built for the LEGO microgame theme, and I guess there are still some bugs to be ironed out.

The most exciting part of the tutorial was an opportunity to go beyond the LEGO prefab assets they gave us and build our own LEGO creations for use in Unity games. A separate “Build your own Enemy” tutorial gave us instructions on how to build with LEGO piece by piece within Unity Editor, but that’s cumbersome compared to using dedicated LEGO design software like BrickLink Studio and exporting the results to Unity. We don’t get to use arbitrary LEGO pieces, we have to stay within a prescribed parts palette, but it’s still a lot of freedom. I immediately built myself a little LEGO spaceship because old habits die hard.

I knew software like BrickLink Studio existed, but this was the first time I sat down and tried to use one. The parts palette was disorienting, because it was completely unlike how I work with LEGO in the real world. I’m used to pawing through my bin of parts looking for the one I want, not selecting parts from a menu organized under an unfamiliar taxonomy. I wanted my little spaceship to have maneuvering thrusters, something I add to almost all of my LEGO space creations, but they seemed to be absent from the approved list. (UPDATE: A few days later I found it listed under “3963 Brick, Modified 1 x 1 with 3 Loudspeakers / Space Positioning Rockets”) The strangest omission seems to be wheels. I see a lot of parts for automobiles, including car doors and windshields and even fender arches. But the only wheels I found in the approved list are steering wheels. I doubt they would include so many different fender arches without wheels to put under them, but I can’t find a single ground vehicle wheel in the palette! Oversight, puzzling intentional choice, or my own blindness? I lean towards the last, but for now it’s just one more reason for me to stick with spaceships.

My little LEGO spaceship, alongside many other LEGO microgame Creative Mods exercises (but not all since the list is still growing) was integrated into my variant of the LEGO microgame and uploaded as “Desert Dusk Demo“. The first time I uploaded, I closed the window and panicked because I didn’t copy down the URL and I didn’t know how to find it again. Eventually I figured out that everything I uploaded to Unity Play is visible at https://play.unity.com/discover/mygames.

But since the legal terms of LEGO microgame assets are restricted to that site, I have to do something else for my learn-and-share creation for ART.HAPPENS. There were a few more steps I had to take there before I had my exhibit Bouncy Bouncy Lights.

Notes on Unity LEGO Microgame Tutorial

To help Unity beginners get their bearings inside a tremendously complex and powerful tool, Unity published small tutorials called microgames. Each of them represents a particular game genre, with the recently released LEGO microgame as the default option. Since I love LEGO, I saw no reason to deviate from this default. These microgame tutorials are implemented as Unity project templates that we can launch from Unity’s Hub launcher; they’re just filled out with far more content than the typical Unity empty project template.

Once a Unity project was created with the LEGO microgame template (and after we accepted all the legal conditions of using these LEGO digital assets) we see the complex Unity interface. Well aware of how intimidating it may look to a beginner, the tutorial darkened the majority of options and highlighted just the one we needed for that step in the tutorial. Which got me wondering: the presence of these tutorial microgames implies the Unity Editor UI itself can be scripted and controlled, so how is that done? But that’s not my goal today, so I set that observation aside.

The LEGO microgame starts with the basics: how to save our progress and how to play test the game in its current state. The very first change is adjusting a single variable, our character’s movement speed, and test its results. We are completely on rails at this point: the Unity Editor is locked off so I couldn’t change any other character variable, and I couldn’t even proceed unless I changed the character speed to exactly the prescribed value. This is a good way to make sure beginners don’t inadvertently change something, since we’d have no idea how to fix it yet!

Following chapters of the tutorial gradually open up the editor, allowing us to use more and more editor options and giving us gradually more latitude to change the microgame as we liked. We are introduced to the concept of “assets” which are pieces we use to assemble our game. In an ideal world they snap together like LEGO pieces, and in the case of building this microgame occasionally they actually do represent LEGO pieces.

Aside from in-game objects, the LEGO microgame also allows us to define and change in-game behavior using “Behaviour Bricks”: assets that look just like any other LEGO block in the game, except they are linked to Unity code behind the scenes, giving them more functionality than a static plastic brick. I appreciated how this makes game development super easy, as the most literal implementation of “object-oriented programming” I have ever seen. However, I was conscious of the fact that these behavior bricks are limited to the LEGO microgame environment. Anyone who wishes to venture beyond will have to learn entirely different ways to implement Unity behavior, and these training wheels will be of limited help.

The final chapter of this LEGO microgame tutorial ended with walking us through how to build and publish our project to Unity Play, their hosting service for people to upload their Unity projects. I followed those steps to publish my own LEGO microgame, but what’s online now isn’t just the tutorial. It also included what they called “Creative Mods” for a microgame.

Unity Tutorial LEGO Microgame

Once I made the decision to try learning Unity again, it was time to revisit Unity’s learning resources. This was one aspect that I appreciated about Unity: they have continuously worked to lower their barrier to entry. Complete beginners are started on tutorials that walk us through building microgames, which are prebuilt Unity projects that show many of the basic elements of a game. Several different microgames are available, each representing a different game genre, so a beginner can choose whichever one appeals to them.

But first an ambitious Unity student has to install Unity itself. Right now Unity releases are named by year, much like other software such as Ubuntu. Today, the microgame tutorials tell beginners to install version 2019.4 but don’t explain why. I was curious why they tell people to install a version that is approaching two years old, so I did a little digging. The answer is that Unity designates specific versions as LTS (Long Term Support) releases. Unity LTS is intended to be a stable and reliable version, with the best library compatibility and the most complete product documentation. More recent releases may have shiny new features, but a beginner wouldn’t need them, and it makes sense to start with the latest LTS. Which, as of this writing, is 2019.4.

I vaguely recall running through one of these microgame exercises on an earlier attempt at Unity. I chose the karting microgame because I had always liked driving games. Gran Turismo on Sony PlayStation (the originals in both cases, before either got numbers) was what drew me into console gaming. But I ran out of steam on the karting microgame and those lessons did not stick. Since I’m effectively starting from scratch, I might as well start with a new microgame, and the newest hotness released just a few months ago is the LEGO microgame. Representing third-person view games like Tomb Raider and, well, the LEGO video games we can buy right now!

I don’t know what kind of business arrangement behind the scenes made it possible to have digital LEGO resources in our Unity projects, but I am thankful it exists. And since Unity doesn’t own the rights to these assets, the EULA for starting a LEGO microgame is far longer than for the other microgames using generic game assets. I was not surprised to find clauses forbidding use of these assets in commercial projects, but I was mildly surprised that we are only allowed to host them on Unity’s project hosting site. We can’t even host them on our own sites elsewhere. But the most unexpected clause in the EULA is that all LEGO creations depicted in our microgames must be creatable with real LEGO bricks. We are not allowed to invent LEGO bricks that do not exist in real life. I don’t find that restriction onerous, just surprising, though it made sense in hindsight. I’m not planning to invent an implausible LEGO brick in my own tutorial run, so I should be fine.

Checking In on Unity 3D

Deciding to participate in ART.HAPPENS is my latest motivation to look at Unity 3D, something I’ve done several times before. My first look was almost five years ago, and my most recent look was about a year and a half ago in the context of machine learning. Unity is a tremendously powerful tool and I’ve gone through a few beginner tutorials, but I never got as far as building my own Unity project. Will that finally change this time?

My previous look at Unity was motivated by an interest in getting into the exciting world of machine learning, specifically in the field of reinforcement learning. That line of investigation did not get very far, but as most machine learning tools are focused on Linux there was the question of Unity’s Linux support. Not just to build a game (which is supported) but also to run the Unity editor itself on Linux. My investigation was right around the time Unity Editor for Linux entered beta with expectation for release in 2020, but that has been pushed to 2021.

For my current motivation, it’s not as important to run the editor on Linux. I can just as easily create something fun and interactive by running Unity on Windows. Which led to the next question: could I output something that can work inside an <iframe> hosted within Gather, the virtual space for ART.HAPPENS? On paper the answer is yes. Unity has had the ability to render content using WebGL for a while, and their code has matured alongside browser support for WebGL. But even better is the development (and even more importantly, browser adoption) of WebAssembly for running code in a browser. This results in Unity titles that are faster to download and to execute than the previous approach of compiling Unity projects to JavaScript. These advancements are far more encouraging than what Unity competitor Unreal Engine has done, which was to boot HTML5 support out of core to a community project. Quite a sharp contrast to Unity’s continued effort to make web output a first class citizen among all of its platforms, and this gives me the confidence to proceed and dive in to the latest Unity tutorial for beginners: LEGO!

ART.HAPPENS Motivates Return to Unity 3D

I’ve been talking about rovers on this blog for several weeks nonstop. I thought it would be fun to have a micro Sawppy rover up and running in time for Perseverance landing on February 18th, but I don’t think I’ll make that self-imposed deadline. I have discovered I cannot sustain “all rovers all the time” and need a break from rover work. I’m not abandoning the micro rover project, I just need to switch to another project for a while as a change of pace.

I was invited to participate in ART.HAPPENS, a community art show. My first instinct was to say “I’m not an artist!” but I was gently corrected. This is not the fancy schmancy elitist art world; it is the world of people having fun and sharing their work. Yes, some of the people present are bona fide artists, but I was assured anyone who wants to share something done for the sheer joy of creating can join in the fun.

OK then, I can probably take a stab at it. Most of my projects are done to accomplish a specific purpose or task, so it’s a rare break to not worry about meeting objectives and build something for fun. My first line of thought was to build a follow-up to Glow Flow, something visually pleasing and interactive built out of 3D printed parts and illuminated by colorful LEDs controlled with a Pixelblaze. It’s been on my to-do list to explore more ideas on how else to use a Pixelblaze.

Since we’re in the middle of a pandemic, this art show is a virtual affair. I learned that people will be sharing photos and videos of their projects, shown in a virtual meeting space called Gather. Chosen partially because the platform was built to be friendly to all computer skill levels, Gather tries to eliminate the friction of digital gatherings.

I poked my head into Gather and saw an aesthetic that reminded me of old Apple //e video games that used a top-down tiled view. For those old games, it was a necessity due to the limited computing power and memory of an old Apple computer. And those same traits are helpful here to build a system with minimal hardware requirements.

Sharing photos and videos of something like Glow Flow would be fun, but wouldn’t be the full experience. Glow Flow looked good, but the real fun comes from handling it with our own hands. I was willing to make some compromises given the reality of the world today, until I noticed that individual projects will be shared as web content hosted in an <iframe>. That changes the equation, because it means I could build something interactive after all. If I have control of content inside an <iframe>, I can build interactive web content for this show.

I briefly looked at a few things that might have been interesting, like three.js and A-Frame. But as I read the documentation for those platforms, my enthusiasm was dampened by the shortcomings I came across. For example, building experiences incorporating physics simulation seems to be a big can of worms on those platforms. Eventually I decided: screw it, if I’m going to do this, I’m going to go big. It’s time to revisit Unity 3D.

Sewing Machine at CRASHspace Wearables Wednesdays

I brought a “naked” sewing machine to the February 2020 edition of Wearables Wednesdays. Wearables Wednesdays is a monthly meetup at CRASHspace LA, a makerspace in Culver City whose membership includes a lot of people I love to chat and hack with. But Culver City is a nontrivial drive from my usual range, so as much as I would love to frequently drop by and hang out, in reality I visit at most once a month.

The sewing machine belongs to Emily who received it as a gift from its previous owner. That owner retired the machine due to malfunction and didn’t care to have it repaired. At our most recent Disassembly Academy, one of the teams worked through the puzzle of nondestructively removing its outer plastic enclosure. There were several very deviously hidden fasteners holding large plastic pieces in place.

Puzzling through all the interlocked mechanisms consumed most of the evening. Towards the end, Emily soldered on a power cable (liberated from another appliance present at the event) to run its motor, which was the state the machine was in when I brought it to Wearables Wednesdays.

This event was focused on wearables, so everyone had some level of experience with a sewing machine. It was also an audience with experience and interest in mechanical design, making it the perfect crowd for poking around a sewing machine’s guts.

When the outer enclosure was removed, a broken-off partial gear fell out. The rest of the gear was found to be part of the mechanism for selecting a sewing pattern. At the end of Disassembly Academy, our hypothesis was that the machine had been retired because this broken gear left it unable to change patterns.

Further exploration at CRASHspace has updated the hypothesis: there is indeed a problem in pattern selection, but probably not because of this broken gear. We can see the large mechanical cam mechanism that serves as read-only memory for patterns, and we can see the follower mechanism that can read one of several patterns encoded on that cam. However, pushing on the internal parts of the mechanism, we couldn’t get the follower to move to a different track.

New hypothesis: There is a problem in the pattern mechanism but it’s not the gear. The pattern selection knob was turned forcefully to try to push past the problem, but that force broke the little gear. It was a victim and not the root cause.

Exploratory adventures of this sewing machine will continue at some future point. In the meantime, we have a comparison reference from a friend who owns a sewing machine that predated fancy pattern features.

MatterHackers 3D Printing And Space Event

Even though Santa Monica is technically in the same greater LA metropolitan area as my usual cruising range, the infamous LA traffic means attending events there requires a pretty significant effort. One such event worth the effort was the “3D Printing and Space” event hosted by MatterHackers, Ultimaker, and Spaceport LA.

Like the previous MatterHackers event I attended, the nominal main event was only part of the picture. Just as interesting and valuable was the time to mingle and chat with people and learn about their novel applications of 3D printing. Sometimes there is a show-and-tell area for people to bring in their projects, but it wasn’t clear from the event publicity materials whether there would be one this time. I decided to travel to Santa Monica via public transit, which meant Sawppy couldn’t come with me. That was just as well, since the exhibit area was minimal and mostly occupied by items brought by members of the speaking panel.

I started off on the wrong foot by mistaking Matthew Napoli of Made in Space for someone else. Thankfully he was gracious, and I learned his company built and operates the 3D printer on board the International Space Station. It was tremendously novel news a few years ago, and the company has continued to evolve the technology and widen its applications. Just for novelty’s sake I tried printing that wrench on my Monoprice Mini some time ago, with very poor results. Fortunately the Made in Space printer on board the ISS is a significantly more precise machine, and Matthew Napoli brought a ground-printed counterpart for us to play with. It was, indeed, far superior to what I had printed at home. A question he had to answer several times throughout the night was whether FDM 3D printing in space still requires support material, which we use to hold melted filament up against gravity. The answer is that (1) their testing found that even without gravity, filament extruded from the nozzle has momentum that needs to be accounted for, and (2) Made in Space designs their “production” parts to not require support material whether printed on Earth or in space.

On an adjacent table were several 3D printed mounting brackets brought by Christine Gebara. Each of them had identical mounting points, but drastically different structural members connecting them. Their shapes appeared to have been dictated by the numerical evolution algorithms becoming available under several names; Autodesk calls theirs “generative design”. Learning how to best take advantage of such structures is something Christine Gebara confirmed is under active development at JPL.

Kevin Zagorski of Virgin Orbit brought something I didn’t recognize beyond the fact it had bolt patterns and fittings to connect to other things. During the discussion he explained it was part of a test rocket engine. While the auxiliary connecting pieces were either commodity parts or conventionally machined, the somewhat tubular center structure was 3D printed on a metal sintering(?) printer. 3D printing allowed them to fabricate a precise interior profile for the structure, and the carbon deposits inside were a testament to the fact this piece had been test-fired. He also described a development I was previously unaware of: they are using machines that have both additive and subtractive tooling. This means they can build part of a metal structure, move in with cutters or grinders to obtain the desired surface finish on its interior, then proceed to build the remaining parts. This gives them the best of both worlds: geometries that would be difficult to make by machining alone, with interior surface finishes that would be difficult to achieve with 3D printing alone. Sadly he believes these machines serve a very narrow and demanding niche, so this capability is unlikely to propagate to consumer machines.

I didn’t know about Spaceport L.A. until this event, but I had been dimly aware of a cluster of “New Space” companies in the area. Southern California has been a hotbed of aerospace engineering for as long as that has been a field of engineering, though there have been some painful periods of transition, such as the severe industry downsizing at the end of the Cold War following the collapse of the Soviet Union. But with SpaceX serving as the poster child for a new generation of space companies, a new community is forming, and Spaceport L.A. wants to be the community hub for everyone in the area.

But even though some portray “Old Space” companies as dinosaurs doomed to extinction, in reality they are full of smart engineers who have no intention of being left behind. Representative of that was Andrew Kwas from Northrop Grumman and the entourage he brought with him. He said several times that the young Northrop Grumman engineers in his group will take the company into the future. It was fun to speak with a few of them as they had set up shop at one of the tables, presenting pieces from their 3D printing tests and research. One of them (I wish I remembered her name) gave me my first insight into support material for laser sintering metal 3D printing. I had thought that, since these parts are formed in a bed of metal powder, they would not need support material. It turns out I was wrong: support material is still required, both for mechanical hold and for thermal dissipation. I don’t know if I’ll ever have the chance to design for laser sintering printing, but that was a valuable first lesson.

And last but not least, I got to talk to Kitty Yeung about her projects that express a love of space through 3D printing. It’s a little different from the other speakers present, as she’s not dealing with spaceflight hardware, but her work is an important part of the greater community of space enthusiasm. In between all the esoteric space hardware, it was great to see projects that are immediately relatable to the hobbyists present.

I look forward to the next MatterHackers public event.

Sparklecon 2020 Day 2: Arduino VGAX

Unlike the first day of Sparklecon 2020, I had no obligations on the second day so I was a lot more relaxed and took advantage of the opportunity to chat and socialize with others. I brought Sawppy back for day two and the cute little rover made more friends. I hope that even if they don’t decide to build their own rover, Sawppy’s new friends might pass along information to someone who would.

I also brought some stuff to tinker with at the facilities made available by NUCC. Give me a table, a power strip, and WiFi and I can get a lot of work done. And having projects in progress is always a great icebreaker for fellow hardware hackers to come up and ask what I’m doing.

Last night I was surprised to learn that one of the lighting panels at NUCC is actually the backlight of an old computer LCD monitor. The LCD is gone, leaving the brilliant white backlight illuminating part of the room. That motivated me to dust off the giant 30-inch monitor I had, which suffered a bizarre failure mode that made it useless as a computer monitor. I wasn’t quite willing to modify it destructively just yet, but I did want to explore the idea of using the monitor as a lighting panel. By preserving the LCD layer, I can illuminate things selectively without worrying too much about the pixel accuracy problems that made it useless as a monitor.

The next decision was the hardest: which hardware platform to use? I brought two flavors of Arduino Nano, two flavors of Teensy, and a Raspberry Pi. There were solutions for the ESP32 as well, but I didn’t bring my dev board. I decided to start at the bottom of the ladder and searched for Arduino libraries that generate VGA signals.

I found VGAX, which can pump out a very low resolution VGA signal of 160 x 80 pixels. The color capability is also constrained, limited to a few solid colors that reminded me of old PC CGA graphics. Perhaps they share similar root causes!

To connect my Arduino Nano to my monitor, I needed to sacrifice a VGA cable and cut it in half to expose its wires. Fortunately NUCC had a literal bucketful of them and I put one to use on this project. An electrical testing meter helped me find the right wires to use, and we were in business.
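For reference, here is a minimal sketch along the lines of what I ran that day. I’m writing this from memory of the VGAX API (begin(), clear(), and putpixel() are the calls I remember using), so treat it as a rough starting point and consult the library’s bundled examples for the exact resolution, color values, and pin assignments for the HSYNC, VSYNC, and color wires of the sacrificed cable.

#include <VGAX.h>   // VGA signal generation for ATmega328-based boards like the Nano

VGAX vga;

void setup() {
  vga.begin();      // start generating the VGA signal
  vga.clear(1);     // fill the screen with one of the few available solid colors
}

void loop() {
  // Draw a short diagonal line to confirm pixels land where expected.
  for (byte i = 0; i < 50; i++) {
    vga.putpixel(i, i, 3);
  }
}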

Arduino VGAX breadboard

The results were impressive in that a humble 8-bit microcontroller could produce a color VGA signal, but less useful in that this particular library cannot generate full-screen video; only part of the screen was filled. I thought I might have done something wrong, but the FAQ covers “How do I center the picture,” so this was completely expected.

I would prefer to use the whole screen in my project, so my search for signal generation must continue elsewhere. But seeing VGAX up and running started gears turning in Emily’s brain. She had a few project ideas that might involve VGA. Today’s work gave a few more data points on technical feasibility, so some of those ideas might get dusted off in the near future. Stay tuned. In the meantime, I’ll continue my VGA exploration with a Teensy microcontroller.

Sparklecon 2020: Sawppy’s First Day

I brought Sawppy to Sparklecon VII because I’m telling the story of Sawppy’s life so far. It’s also an environment where a lot of people would appreciate a miniature Mars rover running amongst them.

Sparklecon 2020 2 Sawppy near battlebot arena

Part of it was because a battlebot competition was held at Sparklecon, with many teams participating. I’m not entirely sure what the age range of participants was, because some of the youngest may have just been siblings dragged along for the ride and the adults may have been supervising parents. While Sawppy is not built for combat, some of the participants still had enough of a general interest in robotics to take a closer look at Sawppy.

Sparklecon 2020 3 Barb video hosting survey

The first talk I attended was Barb relaying her story of investigating video hosting. The beginning of 2020 ushered in some very disruptive changes in YouTube policies on how they treat “For Kids” videos. But as Barb explained, this is less about swear words in videos and more about Google tracking. Many YouTube content authors, including Barb, were unhappy with the changes, so Barb started looking elsewhere.

Sparklecon 2020 4 Sawppy talk

The next talk I was present for was my own, as I presented Sawppy’s story. Much of the new material in this edition was the addition of pictures and stories of rovers built by other people around the country and around the world. Plus we recorded a cool climbing capability demonstration:

Sparklecon 2020 5 Emily annoying things

Emily gave a version of the talk she gave at Supercon. Even though some of us were at Supercon, not all of us were able to make it to her talk. And she brought different visual aids this time around, so even people who were at the Supercon talk had new things to play with.

Sparklecon 2020 6 8 inch floppy drive

After we gave our talks, the weight was off our shoulders and we started exploring the rest of the con. During some conversation, Dual-D of NUCC dug up an old school eight inch floppy drive. Here I am failing to insert a 3.5″ floppy disk in that gargantuan device.

Sparklecon 2020 7 sand table above

Last year after Supercon I saw photographs of a sand table and was sad that I had missed it. This year I made sure to scour every location so I would find it if it was present. I found it in the display area of the Plasmatorium, drawing “SPARKLE CON” in the sand.

Sparklecon 2020 8 sand table below

Here’s the mechanism below – two stepper motors with belts control the works.

Sparklecon 2020 9 tesla coil winding on lathe

There are a full-sized manual (not CNC) lathe and mill at the 23b shop, but I didn’t get to see them run last year. This year we got to see a Tesla coil winding being built on the lathe.

For last year’s Sparklecon Day 2 writeup, I took a picture of a rather disturbing Barbie doll head transplanted on top of a baseball trophy. And I hereby present this year’s disturbing transplant.

Sparklecon 2020 WTF

Sawppy has no idea what to do about this… thing.