Mars 2020 Perseverance Surface Operations Begin

I’m interrupting my story of micro Sawppy evolution today to send congratulations to the Mars 2020 entry/descent/landing (EDL) team on a successful landing! As I type this, telemetry confirms the rover is on the surface, and the first image from a hazard camera has been received showing the surface of Mars.

Personally, I was most nervous about the components that are new for this rover, specifically the Terrain Relative Navigation (TRN) system. Not that the rest of the EDL was guaranteed to work, but at least many of the systems were proven to work once with Curiosity’s EDL. As I read about the various systems, TRN stood out as a high-risk, high-reward step forward for autonomous robotic exploration.

When choosing Mars landing sites, past missions had to pick areas that are relatively flat with minimal obstacles to crash into. Unfortunately those properties also make for a geologically uninteresting area to explore. Curiosity spent a lot of time driving away from its landing zone towards scientifically informative landscapes. This was necessary because the landing site is dictated by many factors beyond the mission’s control, adding uncertainty to where the rover will actually touch down.

TRN allows Perseverance to explore areas previously off-limits by turning landing from a passive into an active process, adding an element of control. Instead of just accepting a vague location dictated by unknown Martian winds and other elements of uncertainty, TRN uses cameras to look at the terrain and can maneuver the rover to a safe location, avoiding the nastier (though probably interesting!) parts of the landscape. While it has a set of satellite pictures for reference, those were taken at much higher altitude than what it will see through its own cameras. Would it get confused? Would it be unable to make up its mind? Would it confidently choose a bad landing site? There are so many ways TRN could go wrong, but the reward of TRN success is a far more scientifically productive mission, making the risk worthwhile. And now that it works, TRN successors will let future missions go places they couldn’t have previously explored. It is a really big deal.

Listening to the mission coverage, I was hugely relieved to hear “TRN has landing solution.” For me that was almost as exciting as hearing the rover is on the ground and seeing an image from one of the navigation hazard cameras. The journey is at an end, the adventure is just beginning.

[UPDATE: Video footage of Perseverance landing has shown another way my Sawppy rovers successfully emulated behavior of real Mars exploration rovers.]

“Surface operations begin” signals transition to the main mission on the surface of another planet. A lot of scientists are gearing up to get to work, and I return to my little rovers.

Remaining To-Do For My Next Unity 3D Adventure

I enjoyed my Unity 3D adventure this time around, starting from the LEGO microgame tutorials through to the Essentials pathway and finally venturing out and learning pieces at my own pace in my own order. My result for this Unity 3D session was Bouncy Bouncy Lights, and while I acknowledge it is a beginner effort, it was more than I had accomplished on any of my past adventures in Unity 3D. Unfortunately, once again I find myself at a pause point, without a specific motivation to do more with Unity. But there are a few items still on the list of things that might be interesting to explore in the future.

The biggest gap I see in my Unity skill is creating my own unique assets. Unity supports 2D and 3D creations, but I don’t have art skills in either field. I’ve dabbled in Inkscape enough that I might be able to build some rudimentary things if I need to, and for 3D meshes I could import STL so I could apply my 3D printing design CAD skills. But the real answer is Blender or similar 3D geometry creation software, and that’s an entirely different learning curve to climb.

Combing through Unity documentation, I learned of a “world building” tool called ProBuilder. I’m not entirely sure where it fits in the greater scheme of things, but I can see it has tools to manipulate meshes and even create them from scratch. It doesn’t claim to be a good tool for doing so, but supposedly it’s a good tool for whipping up quick mockups and placeholders. Most of the information about ProBuilder is focused on UV mapping, but I didn’t know that at the start. All ProBuilder documentation assumes I already knew what UV meant, and all I could tell is that UV didn’t mean ultraviolet in this context. Fortunately, searching for UV in the context of 3D graphics led me to the Wikipedia article on UV mapping. There is a dearth of written documentation for ProBuilder; what little I found all points to a YouTube playlist. Maybe I’ll find the time to sit through it later.

I skimmed through the Unity Essentials sections on audio because Bouncy Bouncy Lights was to be silent, so audio is still on the to-do list. And like 2D/3D asset creation, I’m neither a musician nor a sound engineer. But if I ever come across motivation to climb this learning curve, I know where to go to pick up where I left off. I know I have a lot to learn, since even meager audio experimentation already produced one lesson: AudioSource.Play stops any prior occurrence of the sound. If I want the same sound to overlap itself, I have to use AudioSource.PlayOneShot.
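That lesson can be captured in a small script. This is a sketch of my understanding rather than code from the project; the component and clip names are made up:

```csharp
using UnityEngine;

// Sketch: why PlayOneShot matters for overlapping sounds.
// Play() restarts the clip on this AudioSource, cutting off any sound
// already in progress; PlayOneShot() layers a new instance on top.
// Attach to a GameObject that has an AudioSource component.
public class BounceSound : MonoBehaviour
{
    public AudioClip bounceClip; // assigned in the Inspector (hypothetical asset)
    private AudioSource source;

    void Awake()
    {
        source = GetComponent<AudioSource>();
    }

    void OnCollisionEnter(Collision collision)
    {
        // source.Play() here would silence a still-ringing earlier bounce.
        source.PlayOneShot(bounceClip); // lets rapid bounces overlap
    }
}
```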

Incorporating video is an interesting way to make Unity scenes more dynamic, without adding complexity to the scene or overloading the physics or animation engine. There’s a Unity Learn tutorial about this topic, but I found that video assets are not incorporated in WebGL builds. The documentation said video files must be hosted independently for playback by WebGL, which adds to the hosting complications if I want to go down that route.

WebGL
The Video Clip Importer is not used for WebGL game builds. You must use the Video Player component’s URL option.
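In practice, this appears to mean pointing the Video Player component at an externally hosted file instead of an imported clip. A sketch of how I understand it would look; the URL is a placeholder:

```csharp
using UnityEngine;
using UnityEngine.Video;

// Sketch: WebGL-compatible video playback uses a URL source
// rather than an imported Video Clip asset.
public class WebVideo : MonoBehaviour
{
    void Start()
    {
        var player = gameObject.AddComponent<VideoPlayer>();
        player.source = VideoSource.Url;
        player.url = "https://example.com/clip.mp4"; // must be hosted separately
        player.Play();
    }
}
```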

And finally, I should set aside time to learn about shaders. Unity’s default shader is effective, but it has become quite recognizable and there are jokes about the “Unity Look” of games that never modified default shader properties. I personally have no problem with this, as long as the gameplay is good. (I highly recommend Overcooked game series, built in Unity and have the look.) But I am curious about how to make a game look distinctive, and shaders are the best tool to do so. I found a short Unity Learn tutorial but it doesn’t cover very much before dumping readers into the Writing Shaders section of the manual. I was also dismayed to learn that we don’t have IntelliSense or similar helpfulness in Visual Studio when writing shader files. This is going to be a course of study all on its own, and again I await good motivation for me to go climb that learning curve.

I enjoyed this session of Unity 3D adventure, and I really loved that I got far enough this time to build my own thing. I’ve summarized this adventure in my talk to ART.HAPPENS, hoping that others might find my experience informative in video form in addition to written form on this blog. I’ve only barely scratched the surface of Unity. There’s a lot more to learn, but that’ll be left to future Unity adventures because I’m returning to rover work.

Venturing Beyond Unity Essentials Pathway

To help beginners learn how to create something simple from scratch, Unity Learn set up the Essentials pathway, which I followed. Building from an empty scene taught me a lot of basic tasks that were already done for us in the LEGO microgame tutorial template, enough that I felt ready to start building my own project for ART.HAPPENS. It was a learning exercise, running into one barrier after another, but I felt confident I knew the vocabulary to search for answers on my own.

Exercises in the Essentials pathway got me started on the Unity 3D physics engine, with information about setting up colliders and physics materials. Building off the rolling ball exercise, I created a big plane for balls to bounce around on and increased the bounciness of both ball and plane. The first draft was a disaster because, unlike real life, it is trivial to build a perfectly flat plane in a digital world, so the balls kept bouncing in the same place forever. I had to introduce a tilt to make the bounces more interesting.

But while bouncing balls look fun (title image) they weren’t quite good enough. I thought adding a light source might help but that still wasn’t interesting enough. Switching from ball to cube gave me a clearly illuminated surface with falloff in brightness, which I thought looked more interesting than a highlight point on a ball. However, cubes don’t roll and would stop on the plane. For a while I was torn: cubes look better but spheres move better. Which way should I go? Then a stroke of realization: this is a digital world and I can change the rules if I want. So I used a cube model for visuals, but attached a sphere model for physics collisions. Now I have objects that look like cubes but bounce like balls. Something nearly impossible in the real world but trivial in the digital world.
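One way to build such an object, sketched from my understanding of the technique rather than copied from the project: remove the cube’s own collider and substitute a sphere collider, so rendering and physics use different shapes.

```csharp
using UnityEngine;

// Sketch: an object that renders as a cube but collides as a sphere.
// The SphereCollider and Rigidbody drive the physics; the cube mesh is
// purely visual, so the object tumbles and bounces like a ball.
public static class CubeBall
{
    public static GameObject Create(PhysicMaterial bouncy)
    {
        var cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
        Object.Destroy(cube.GetComponent<BoxCollider>()); // discard the cube-shaped collider

        var sphere = cube.AddComponent<SphereCollider>(); // collide as a sphere instead
        sphere.material = bouncy;                         // high-bounciness physic material

        cube.AddComponent<Rigidbody>(); // hand the object to the physics engine
        return cube;
    }
}
```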

To make these lights show up better, I wanted a dark environment. This was a multi-step procedure. First I did the obvious: delete the default light source that came with the 3D project template. Then I had to look up environment settings to turn off the skybox. That still wasn’t dark, until I edited camera settings to change the default color to black. Once everything went black I noticed the cubes weren’t immediately discernible as cubes anymore, so I turned the lights back up… but decided it was more fun to start dark and turned the lights back off.
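For reference, those same settings can be made from a script. A sketch, assuming the default Main Camera is still in the scene:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch of the darkening steps: no skybox, no ambient light,
// and a camera that clears to solid black.
public class DarkRoom : MonoBehaviour
{
    void Start()
    {
        RenderSettings.skybox = null;                  // turn off the skybox
        RenderSettings.ambientMode = AmbientMode.Flat; // flat ambient color...
        RenderSettings.ambientLight = Color.black;     // ...set to black

        var cam = Camera.main;
        cam.clearFlags = CameraClearFlags.SolidColor;  // replace skybox clear...
        cam.backgroundColor = Color.black;             // ...with solid black
    }
}
```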

I wanted to add user interactivity, but realized the LEGO microgame used an entirely different input system than standard Unity, and nothing on the Essentials path taught me about user input. Searching around on Unity Learn I got very confused by contradictory information until I eventually figured out there are two Unity user input systems. There’s the “Input Manager,” which is the legacy system, and its candidate replacement, the “Input System Package,” which is intended to solve problems with the old system. Since I had no investment in the old system, I decided to try the new one. Unfortunately, even though there’s a Unity Learn session, I still found it frustrating, as did others. I got far enough to add interactivity to Bouncy Bouncy Lights, but it wasn’t fun. I’m not even sure I should be using it yet, seeing how none of the microgames did. Now that I know enough to know what to look for, I could see that the LEGO microgame used the old input system. Either way, there’s more climbing of the learning curve ahead. [UPDATE: After I wrote this post, but before I published it, Unity released another tutorial for the new Input System. Judging by that demo, Bouncy Bouncy Lights is using the input system incorrectly.]
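For the record, the shape of the new Input System I eventually got working looks roughly like this. It is a sketch that polls the keyboard directly, skipping the action-asset workflow the tutorials teach, and it assumes the Input System package is installed:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Sketch: polling the new Input System package directly each frame.
// Keyboard.current is null when no keyboard is attached.
public class SpawnOnSpace : MonoBehaviour
{
    void Update()
    {
        var keyboard = Keyboard.current;
        if (keyboard != null && keyboard.spaceKey.wasPressedThisFrame)
        {
            // project-specific reaction goes here (e.g. spawn another light)
        }
    }
}
```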

The next to-do item was to add the title and interactivity instructions. After frustration with exploring a new input system, I went back to LEGO microgame and looked up exactly how they presented their text. I learned it was a system called TextMesh Pro and thankfully it had a Unity Learn section and a PDF manual was installed as part of the asset download. Following those instructions it was straightforward to put up some text using the default font. After my input system frustration, I didn’t want to get much more adventurous than that.

I had debated when to present the interactivity instructions. Ideally I would present them just as the audience gets oriented and recognizes the default setup, possibly starting to get bored and ready to move on, so I can give them interactivity to keep their attention. But I have no idea when that would be. When I read the requirement that the title of the piece should be in the presentation, I added it as a title card before showing the bouncing lights. And once I added a title card, it was easy to add another card with the instructions, shown before the bouncing lights. The final twist was the realization that I shouldn’t present them as static cards that fade out: since I already had all these physical interactions in place, they are presented as falling, bouncing objects in their own right.

The art submission instructions said to put in my name and a way for people to reach me, so I put my name and newscrewdriver.com at the bottom of the screen using TextMesh Pro. Then it occurred to me the URL should be a clickable link, which led me down the path of finding out how a Unity WebGL title can interact with the web browser. There seemed to be several different deprecated ways to do it, but they all point to the current recommended approach, and now my URL is clickable! For fun, I added a little spotlight effect when the mouse cursor is over the URL.
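The approach boils down to calling Application.OpenURL from a click handler. A sketch under my assumptions (the component name is mine, and the clickable object needs a collider or UI raycast target plus an EventSystem to receive pointer events):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Sketch: make an on-screen object act as a clickable link.
// Under WebGL, Application.OpenURL opens the page in a new browser tab.
public class ClickableLink : MonoBehaviour, IPointerClickHandler
{
    public string url = "https://newscrewdriver.com";

    public void OnPointerClick(PointerEventData eventData)
    {
        Application.OpenURL(url);
    }
}
```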

The final touch is to modify the presentation HTML to suit the Gather virtual space used by ART.HAPPENS. By default, a Unity WebGL build generates an index.html file that puts the project inside a fixed-size box. Outside that box is the title and a button to go full screen. I didn’t want the full screen option for presenting this work in Gather; I wanted to fill my <iframe> instead of a little box within it. My CSS layout skills are still pretty weak and I couldn’t figure it out on my own, but I found this forum thread which taught me to replace the <body> tag with the following:

  <body>
      <div class="webgl-content" style="width:100%; height:100%">
        <div id="unityContainer" style="width:100%; height:100%">
        </div>
      </div>
  </body>

I don’t understand why we need to put 100% styles on two elements before it works, but hopefully I will understand whenever I get around to my long-overdue study session on CSS layout. The final results of my project can be viewed at my GitHub Pages hosting location. That is a satisfying result, but there is a lot more of Unity to learn.
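If my reading of CSS is right (an assumption on my part, not something from the forum thread): a percentage height is computed from the parent element’s height, so every ancestor up the chain needs a height of its own before the innermost 100% resolves to anything. Inside an <iframe>, that chain starts at the html and body elements:

```html
<!-- Sketch: each level must declare a height for the next level's
     percentage to resolve against, from html down to unityContainer. -->
<html style="height:100%">
  <body style="height:100%; margin:0">
    <div class="webgl-content" style="width:100%; height:100%">
      <div id="unityContainer" style="width:100%; height:100%"></div>
    </div>
  </body>
</html>
```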

Notes on Unity Essentials Pathway

As far as Unity 3D creations go, my Bouncy Bouncy Lights project is pretty simple, as expected of a beginner’s learning project. My Unity (re)learning session started with their LEGO microgame tutorial, but I didn’t want to submit a LEGO-derived Unity project for ART.HAPPENS. (And it might not have been legal under the LEGO EULA anyway.) So after completing the LEGO microgame tutorial and its suggested Creative Mods exercises, I still had more to learn.

The good news is that Unity Learn has no shortage of instruction materials; the bad news is that a beginner gets lost on where to start. To help with this, they’ve recently (or at least since the last time I investigated Unity) rolled out the concept of “Pathways,” which organize a set of lessons targeted at a particular audience. People looking for something after completing their microgame tutorial are sent towards the Unity Essentials Pathway.

Before throwing people into the deep pool that is Unity Editor, the Essentials Pathway starts by setting us up with a lot of background information in the form of video interview clips with industry professionals using Unity. I preferred to read instead of watching videos, but I wanted to hear these words of wisdom so I sat through them. I loved that they allocated time to assure beginners that they’re not alone if they find Unity Editor intimidating at first glance. The best part was the person who claimed their first experience was taking one look, saying “Um, no,” closing Unity, and not returning for several weeks.

Other interviews covered the history of Unity, how it enabled the creation of real-time interactive content, and how the tool evolved alongside the industry. There was also information for people interested in building a new career using Unity, introducing terminology and even common job titles that can be used to query sites like LinkedIn. I felt this section offered more applicable advice for that job field than I ever received in college for mine. I was mildly amused and surprised to see Unity classes ended with a quiz to make sure I understood everything.

After this background we are finally set loose on Unity Editor, starting from scratch. Well, an empty 3D project template, which is as close to scratch as I cared to get. The template has a camera and a light source but not much else, unlike the microgames which come already filled with assets and code. This is what I wanted to see: how do I start from geometry primitives and work my way up, pulling from the Unity Asset Store as needed for useful prebuilt pieces? One of the exercises was to make a ball roll down a contraption of our design (title image) and I paid special attention to this interaction. The Unity physics engine was the main reason I chose to study Unity instead of three.js or A-Frame, and it became the core of Bouncy Bouncy Lights.

I’ve had a lot of experience writing C# code, so I was able to quickly breeze through the C# scripting portions of Unity Essentials. But I’m not sure this is enough to get a non-coder up and running on Unity scripting. Perhaps Unity decided they’re not a coding boot camp and didn’t bother to start at the beginning. People who have never coded before will need to go elsewhere before coming back to Unity scripting, and a few pointers would be nice.

I skimmed through a few sections that I decided were unimportant for my ART.HAPPENS project. Sound was one of them: very important for an immersive gaming experience, but my project will be silent because the Gather virtual space has a video chatting component and I didn’t want my sounds to interfere with people talking. Another area I quickly skimmed through was using Unity for 2D games, which is not my goal this time, but perhaps I’ll return later.

And finally, there was information pointing us to Unity Connect and setting up a profile. At first glance it looked like Unity tried to set up a social network for Unity users, but it is shutting down, with portions redistributed to other parts of the Unity network. I had no investment here so I was unaffected, but it made me curious how often Unity shuts things down. Hopefully not as often as Google, which has become infamous for doing so.

I now have a basic grasp on this incredibly capable tool, and it’s time to start venturing beyond guided paths.

Bouncy Bouncy Lights

My motivation for going through Unity’s LEGO microgame tutorial (plus associated exercises) was to learn Unity Editor in the hopes of building something for ART.HAPPENS, a community virtual art show. I didn’t expect to build anything significant with my meager skills, but I made something with the skill I have. It definitely fit with the theme of everyone sharing works that they had fun with, and learned from. I arrived at something I felt was a visually interesting interactive experience which I titled Bouncy Bouncy Lights and, if selected, should be part of the exhibition opening today. If it was not selected, or if the show has concluded and its site taken down, my project will remain available at my own GitHub Pages hosting location.

There are still a few traces of my original idea, which was to build a follow-up to Glow Flow: something colorful with Pixelblaze-controlled LED lights. But I decided to move from the physical to the digital domain, so now I have random brightly colored lights in a dark room, each reflecting off an associated cube. By default there isn’t enough light for the viewer to immediately see the whole cube, just the illuminated face. I want them to observe the colorful lights moving around for a bit before they recognize what’s happening, prompting the delight of discovery.

Interactivity comes in two forms. Arrow keys change the angle of the platform, which changes the direction of the bouncing cubes. There is a default time interval for new falling cubes; I chose it so that there’ll always be a few lights on screen, but not so many as to make the cubes obvious. The user can also press the space bar to add lights faster than the default interval. If the space bar is held down, the extra lights will add enough illumination to make the cubes obvious, and they’ll frequently collide with each other. I limited it to a certain rate because the aesthetics change if too many lights all jump in. Thankfully I don’t have to worry about things like ensuring sufficient voltage supply for lights when working in the digital world, but too many lights in the digital world add up to white, washing out the individual colors to a pastel shade. And too many cubes interfere with bouncing, and we get an avalanche of cubes trying to get out of each other’s way. It’s not the look I want for the project, but I left in a way to do it as an Easter egg. Maybe people will enjoy bringing it up once in a while for laughs.
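The spawn logic described above can be sketched as a simple rate limiter: one timer serves both the automatic interval and the space bar, with a hard floor on how fast cubes may appear. All names and numbers here are illustrative, not the project’s actual values (and I use the classic Input class for brevity, not the project’s actual input code):

```csharp
using UnityEngine;

// Sketch: cubes spawn on a default interval; holding space spawns them
// faster, but never faster than minInterval, preserving the aesthetics.
public class CubeSpawner : MonoBehaviour
{
    public GameObject cubePrefab;      // light-carrying cube, assigned in Inspector
    public float defaultInterval = 2f; // seconds between automatic cubes (illustrative)
    public float minInterval = 0.2f;   // hard floor while space is held (illustrative)

    private float lastSpawn;

    void Update()
    {
        float elapsed = Time.time - lastSpawn;
        bool spaceHeld = Input.GetKey(KeyCode.Space);

        if (elapsed >= defaultInterval || (spaceHeld && elapsed >= minInterval))
        {
            // drop a new cube from above the platform at a random spot
            var position = new Vector3(Random.Range(-3f, 3f), 10f, Random.Range(-3f, 3f));
            Instantiate(cubePrefab, position, Random.rotation);
            lastSpawn = Time.time;
        }
    }
}
```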

I’m happy with how Bouncy Bouncy Lights turned out, but I’m even happier with it as a motivation for my journey learning how to work with a blank Unity canvas.

Notes on Unity LEGO Microgame Creative Mods

Once a Unity 3D beginner completes a tightly-scripted microgame tutorial, we are directed towards a collection of “Creative Mods.” These suggested exercises build on top of what we created in the scripted tutorial, except now individual tasks are more loosely described and we are encouraged to introduce our own variations. We are also allowed to experiment freely, as the Unity Editor is no longer partially locked down to keep us from going astray. The upside of complete freedom is balanced by the downside of easily shooting ourselves in the foot. But now we know enough not to do that, or know how to fix it if we do. (In theory.)

Each of the Unity introductory microgames has its own list of suggested modifications, and since I just completed the LEGO microgame I went through the LEGO list. I was mildly surprised to see this list grow while I was in the middle of doing it: as of this writing, new suggested activities are still being added. Some of these weren’t actually activities at all, such as one entirely focused on a PDF (apparently created from PowerPoint) serving as a manual for the list of available LEGO Behaviour Bricks. But most of the others introduce something new and interesting.

In addition to the LEGO themed Unity assets from the initial microgame tutorial, others exist for us to import and use in our LEGO microgame projects. There was a Christmas-themed set with Santa Claus (causing me to run through Visual Studio 2019 installer again from Unity Hub to get Unity integration), a set resembling LEGO City except it’s on a tropical island, a set for LEGO Castles, and my personal favorite: LEGO Space. Most of my personal LEGO collection were from their space theme and I was happy to see a lot of my old friends available for play in the digital world.

When I noticed the list of activities grew while I was working on them, it gave me a feeling this was a work in progress. That feeling continued when I imported some of these asset collections and fired up their example scenes. Not all of them worked correctly, with problems mostly centered around how LEGO pieces attach to each other, especially the Behaviour Bricks. Models detach and break apart at unexpected points. Sometimes I could fix it by using the Unity Editor to detach and re-attach bricks, but not always. This brick attachment system is not a standard Unity Editor feature but an extension built for the LEGO microgame theme, and I guess there are still some bugs to be ironed out.

The most exciting part of the tutorial was an opportunity to go beyond the LEGO prefab assets they gave us and build our own LEGO creations for use in Unity games. A separate “Build your own Enemy” tutorial gave us instructions on how to build with LEGO piece by piece within Unity Editor, but that’s cumbersome compared to using dedicated LEGO design software like BrickLink Studio and exporting the results to Unity. We don’t get to use arbitrary LEGO pieces; we have to stay within a prescribed parts palette, but it’s still a lot of freedom. I immediately built myself a little LEGO spaceship, because old habits die hard.

I knew software like BrickLink Studio existed, but this was the first time I sat down and tried to use one. The parts palette was disorienting, because it was completely unlike how I work with LEGO in the real world. I’m used to pawing through my bin of parts looking for the one I want, not selecting parts from a menu organized under an unfamiliar taxonomy. I wanted my little spaceship to have maneuvering thrusters, something I add to almost all of my LEGO space creations, but they seemed to be absent from the approved list. (UPDATE: A few days later I found it listed under “3963 Brick, Modified 1 x 1 with 3 Loudspeakers / Space Positioning Rockets.”) The strangest omission seems to be wheels. I see a lot of parts for automobiles, including car doors and windshields and even fender arches. But the only wheels I found in the approved list are steering wheels. I doubt they would include so many different fender arches without wheels to put under them, but I can’t find a single ground vehicle wheel in the palette! Oversight, puzzling intentional choice, or my own blindness? I lean towards the last, but for now it’s just one more reason for me to stick with spaceships.

My little LEGO spaceship, alongside many other LEGO microgame Creative Mods exercises (but not all, since the list is still growing) was integrated into my variant of the LEGO microgame and uploaded as “Desert Dusk Demo.” The first time I uploaded, I closed the window and panicked because I hadn’t copied down the URL and didn’t know how to find it again. Eventually I figured out that everything I upload to Unity Play is visible at https://play.unity.com/discover/mygames.

But since the legal terms of LEGO microgame assets are restricted to that site, I have to do something else for my learn-and-share creation for ART.HAPPENS. There were a few more steps I had to take there before I had my exhibit Bouncy Bouncy Lights.

Notes on Unity LEGO Microgame Tutorial

To help Unity beginners get their bearings inside a tremendously complex and powerful tool, Unity published small tutorials called microgames. Each of them represents a particular game genre, with the recently released LEGO microgame as the default option. Since I love LEGO, I saw no reason to deviate from this default. These microgame tutorials are implemented as Unity project templates that we can launch from Unity’s Hub launcher; they’re just filled out with far more content than the typical Unity empty project template.

Once a Unity project was created with the LEGO microgame template (and after we accepted all the legal conditions of using these LEGO digital assets) we see the complex Unity interface. Well aware of how intimidating it may look to a beginner, the tutorial darkened the majority of options and highlighted just the one we need for each step in the tutorial. Which got me wondering: the presence of these tutorial microgames implies the Unity Editor UI itself can be scripted and controlled. How is that done? But that’s not my goal today, so I set that observation aside.

The LEGO microgame starts with the basics: how to save our progress and how to play test the game in its current state. The very first change is adjusting a single variable, our character’s movement speed, and test its results. We are completely on rails at this point: the Unity Editor is locked off so I couldn’t change any other character variable, and I couldn’t even proceed unless I changed the character speed to exactly the prescribed value. This is a good way to make sure beginners don’t inadvertently change something, since we’d have no idea how to fix it yet!

Following chapters of the tutorial gradually open up the editor, allowing us to use more and more editor options and giving us gradually more latitude to change the microgame as we liked. We are introduced to the concept of “assets” which are pieces we use to assemble our game. In an ideal world they snap together like LEGO pieces, and in the case of building this microgame occasionally they actually do represent LEGO pieces.

Aside from in-game objects, the LEGO microgame also allows us to define and change in-game behavior using “Behaviour Bricks”: assets that look just like any other LEGO brick in the game, except they are linked to Unity code behind the scenes, giving them more functionality than a static plastic brick. I appreciated how this makes game development super easy, as the most literal implementation of “object-oriented programming” I have ever seen. However, I was conscious of the fact that these behavior bricks are limited to the LEGO microgame environment. Anyone who wishes to venture beyond will have to learn entirely different ways to implement Unity behavior, and these training wheels will be of limited help.

The final chapter of this LEGO microgame tutorial ended with walking us through how to build and publish our project to Unity Play, their hosting service for people to upload their Unity projects. I followed those steps to publish my own LEGO microgame, but what’s online now isn’t just the tutorial. It also included what they called “Creative Mods” for a microgame.

Unity Tutorial LEGO Microgame

Once I made the decision to try learning Unity again, it was time to revisit Unity’s learning resources. This is one aspect I appreciate about Unity: they have continuously worked to lower their barrier to entry. Complete beginners are started on tutorials that walk us through building microgames, which are prebuilt Unity projects that show many of the basic elements of a game. Several different microgames are available, each representing a different game genre, so a beginner can choose whichever one appeals to them.

But first an ambitious Unity student has to install Unity itself. Right now Unity releases are named by year, much like other software such as Ubuntu. Today, the microgame tutorials tell beginners to install version 2019.4 but do not explain why. I was curious why they tell people to install a version that is approaching two years old, so I did a little digging. The answer is that Unity designates specific versions as LTS (Long Term Support) releases. Unity LTS is intended to be a stable and reliable version, with the best library compatibility and the most complete product documentation. More recent releases may have shiny new features, but a beginner wouldn’t need them, and it makes sense to start with the latest LTS. Which, as of this writing, is 2019.4.

I vaguely recall running through one of these microgame exercises on an earlier attempt at Unity. I chose the karting microgame because I had always liked driving games. Gran Turismo on Sony PlayStation (the originals in both cases, before either got numbers) was what drew me into console gaming. But I ran out of steam on the karting microgame and those lessons did not stick. Since I’m effectively starting from scratch, I might as well start with a new microgame, and the newest hotness released just a few months ago is the LEGO microgame. Representing third-person view games like Tomb Raider and, well, the LEGO video games we can buy right now!

I don’t know what kind of business arrangement behind the scenes made it possible to have digital LEGO resources in our Unity projects, but I am thankful it exists. And since Unity doesn’t own the rights to these assets, the EULA for starting a LEGO microgame is far longer than for the other microgames using generic game assets. I was not surprised to find clauses forbidding use of these assets in commercial projects, but I was mildly surprised that we are only allowed to host them on Unity’s project hosting site. We can’t even host them on our own sites elsewhere. But the most unexpected clause in the EULA is that all LEGO creations depicted in our microgames must be buildable with real LEGO bricks. We are not allowed to invent LEGO bricks that do not exist in real life. I don’t find that restriction onerous, just surprising, though it made sense in hindsight. I’m not planning to invent an implausible LEGO brick in my own tutorial run, so I should be fine.

Checking In on Unity 3D

Deciding to participate in ART.HAPPENS is my latest motivation to look at Unity 3D, something I’ve done several times before. My first look was almost five years ago, and my most recent look was about a year and a half ago in the context of machine learning. Unity is a tremendously powerful tool and I’ve gone through a few beginner tutorials, but I never got as far as building my own Unity project. Will that finally change this time?

My previous look at Unity was motivated by an interest in getting into the exciting world of machine learning, specifically the field of reinforcement learning. That line of investigation did not get very far, but since most machine learning tools are focused on Linux, there was the question of Unity’s Linux support. Not just to build a game (which is supported) but also to run the Unity editor itself on Linux. My investigation was right around the time Unity Editor for Linux entered beta with an expected release in 2020, but that has been pushed to 2021.

For my current motivation, it’s not as important to run the editor on Linux. I can just as easily create something fun and interactive by running Unity on Windows. Which led to the next question: could I output something that would work inside an <iframe> hosted within Gather, the virtual space for ART.HAPPENS? On paper the answer is yes. Unity has had the ability to render content using WebGL for a while, and their code has matured alongside browser support for WebGL. But even better is the development (and even more importantly, browser adoption) of WebAssembly for running code in a browser. This results in Unity titles that are faster to download and execute than the previous approach of compiling Unity projects to JavaScript. These advancements are far more encouraging than what Unity competitor Unreal Engine has done, which was to boot HTML5 support out of core to a community project. Quite a sharp contrast to Unity’s continued effort to make web output a first-class citizen among all of its platforms, and this gives me the confidence to proceed and dive into the latest Unity tutorial for beginners: LEGO!

ART.HAPPENS Motivates Return to Unity 3D

I’ve been talking about rovers on this blog for several weeks nonstop. I thought it would be fun to have a micro Sawppy rover up and running in time for Perseverance landing on February 18th, but I don’t think I’ll make that self-imposed deadline. I have discovered I cannot sustain “all rovers all the time” and need a break from rover work. I’m not abandoning the micro rover project, I just need to switch to another project for a while as a change of pace.

I was invited to participate in ART.HAPPENS, a community art show. My first instinct was to say “I’m not an artist!” but I was gently corrected. This is not the fancy schmancy elitist art world; it is the world of people having fun and sharing their work. Yes, some of the people present are bona fide artists, but I was assured anyone who wants to share something done for the sheer joy of creating can join in the fun.

OK then, I can probably take a stab at it. Most of my projects are done to accomplish a specific purpose or task, so it’s a rare break to not worry about meeting objectives and build something for fun. My first line of thought was to build a follow-up to Glow Flow, something visually pleasing and interactive built out of 3D printed parts and illuminated by colorful LEDs controlled with a Pixelblaze. It’s been on my to-do list to explore more ideas on how else to use a Pixelblaze.

Since we’re in the middle of a pandemic, this art show is a virtual affair. I learned that people will be sharing photos and videos of their projects in a virtual meeting space called Gather. Chosen partially because the platform was built to be friendly to all computer skill levels, Gather tries to eliminate the friction of digital gatherings.

I poked my head into Gather and saw an aesthetic that reminded me of old Apple //e video games that used a top-down tiled view. For those old games, it was a necessity due to the limited computing power and memory of an old Apple computer. And those same traits are helpful here to build a system with minimal hardware requirements.

Sharing photos and videos of something like Glow Flow would be fun, but wouldn’t be the full experience. Glow Flow looked good, but the real fun comes from handling it with our own hands. I was willing to make some compromises given the reality of the world today, until I noticed that individual projects would be shared as web content hosted in an <iframe>. That changed the equation: if I have control of the content inside an <iframe>, I can build something interactive for this show after all.

I briefly looked at a few things that might have been interesting, like three.js and A-Frame. But as I read the documentation for those platforms, my enthusiasm was dampened by the shortcomings I came across. For example, building experiences incorporating physics simulation seems to be a big can of worms on those platforms. Eventually I decided: screw it, if I’m going to do this, I’m going to go big. It’s time to revisit Unity 3D.

Sewing Machine at CRASHspace Wearables Wednesdays

I brought a “naked” sewing machine to the February 2020 edition of Wearables Wednesdays. Wearables Wednesdays is a regularly occurring monthly meetup at CRASHspace LA, a makerspace in Culver City whose membership includes a lot of people I love to chat and hack with. But Culver City is a nontrivial drive from my usual range. So as much as I would love to frequently drop by and hang out, in reality I only visit at most once a month.

The sewing machine belongs to Emily who received it as a gift from its previous owner. That owner retired the machine due to malfunction and didn’t care to have it repaired. At our most recent Disassembly Academy, one of the teams worked through the puzzle of nondestructively removing its outer plastic enclosure. There were several very deviously hidden fasteners holding large plastic pieces in place.

Puzzling through all the interlocked mechanisms consumed most of the evening. Towards the end, Emily soldered on a power cable (liberated from another appliance present at the event) to run its motor, which was the state in which I brought it to Wearables Wednesdays.

This event was focused on wearables, so everyone had some level of experience with a sewing machine. It was also an audience with experience and interest in mechanical design, so it was a perfect crowd for poking around a sewing machine’s guts.

When the outer enclosure was removed, a broken-off partial gear fell out. The rest of the gear was found to be part of the mechanism for selecting a sewing pattern. At the end of Disassembly Academy, our hypothesis was that the machine had been retired because this broken gear left it unable to change patterns.

Further exploration at CRASHspace has updated the hypothesis: there is indeed a problem in pattern selection, but probably not because of this broken gear. We can see the large mechanical cam mechanism that serves as read-only memory for patterns, and we can see the follower mechanism that can read one of several patterns encoded on that cam. However, pushing on the internal parts of the mechanism, we couldn’t get the follower to move to a different track.

New hypothesis: There is a problem in the pattern mechanism but it’s not the gear. The pattern selection knob was turned forcefully to try to push past the problem, but that force broke the little gear. It was a victim and not the root cause.

Exploratory adventures of this sewing machine will continue at some future point. In the meantime, we have a comparison reference from a friend who owns a sewing machine that predated fancy pattern features.

MatterHackers 3D Printing And Space Event

Even though Santa Monica is technically in the same greater LA metropolitan area as my usual cruising range, the infamous LA traffic requires a pretty significant effort for me to attend events in that area. One such event worth the effort was the “3D Printing and Space” event hosted by MatterHackers, Ultimaker, and Spaceport LA.

Like the previous MatterHackers event I attended, there was a nominal main event that is only part of the picture. Just as interesting and valuable is the time to mingle and chat with people and learn about their novel applications of 3D printing. Sometimes there is a show-and-tell area for people to bring in their projects, but it wasn’t clear from the event publicity materials whether there would be one at this event. I decided to travel to Santa Monica via public transit, which meant Sawppy couldn’t come with me. That was just as well, since the exhibit area was minimal and mostly occupied by items brought by members of the speaking panel.

I started off on the wrong foot by mistaking Matthew Napoli of Made in Space for someone else. Thankfully he was gracious, and I learned his company built and operates the 3D printer on board the International Space Station. It was tremendously novel news a few years ago, and the company has continued to evolve the technology and widen its applications. Just for novelty’s sake I tried printing that wrench on my Monoprice Mini some time ago, with very poor results. Fortunately the Made in Space printer on board the ISS is a significantly more precise machine, and Matthew Napoli brought a ground-printed counterpart for us to play with. It was, indeed, far superior to what I had printed at home. A question he had to answer several times throughout the night was whether FDM 3D printing in space still requires support material, which we use to hold melted filament up against gravity. The answer is that (1) their testing found that even without gravity, filament extruded from the nozzle has momentum that needs to be accounted for, and (2) Made in Space designs their “production” parts to not require support material when printed either on Earth or in space.

On an adjacent table were several 3D printed mounting brackets brought by Christine Gebara. Each of them had identical mounting points, but drastically different structural members connecting them. Their shapes appeared to have been dictated by the numerical evolution algorithms becoming available under several names; Autodesk calls theirs “generative design”. Learning how to best take advantage of such structures is something Christine Gebara confirmed is under active development at JPL.

Kevin Zagorski of Virgin Orbit brought something I didn’t recognize beyond the fact it had bolt patterns and fittings to connect to other things. During the discussion he explained it was part of a test rocket engine. While the auxiliary connecting pieces are either commodity parts or conventionally machined, the central, somewhat tubular structure was 3D printed on a metal sintering(?) printer. 3D printing allowed them to fabricate a precise interior profile for the structure, and the carbon deposits inside are a testament to the fact this piece was test-fired. He also described a development I was previously unaware of: they are using machines that have both additive and subtractive tooling. This means they can build part of a metal structure, move in with cutters or grinders to obtain a desired surface finish on the interior of that structure, then proceed to build the remaining parts. This gets them the best of both worlds: geometries that would be difficult to make by machining alone, but with interior surface finishes that would be difficult to achieve with 3D printing alone. Sadly he believes these machines serve a very narrow and demanding niche, so this capability is unlikely to propagate to consumer machines.

I didn’t know about Spaceport L.A. until this event, but I had been dimly aware of a cluster of “New Space” companies in the area. Southern California has been a hotbed of aerospace engineering for as long as that has been a field of engineering, though there have been some painful periods of transition such as severe industry downsizing at the end of the Cold War following collapse of the Soviet Union. But with SpaceX serving as the poster child for a new generation of space companies, a new community is forming and Spaceport L.A. wants to be the community hub for everyone in the area.

But even though some portray “Old Space” companies as dinosaurs doomed to extinction, in reality they are full of smart engineers who have no intention of being left behind. Representative of that was Andrew Kwas from Northrop Grumman and the entourage he brought with him. He said several times that the young Northrop Grumman engineers in his group will take the company into the future. It was fun to speak with a few of them as they had set up shop at one of the tables, presenting pieces from their 3D printing tests and research. One of them (I wish I remembered her name) gave me my first insight into support materials for laser sintering metal 3D printing. I thought that, since these parts were formed out of a bed of metal powder, they would not need support materials. It turns out I was wrong: support materials are still required, both for mechanical hold and for thermal dissipation. I don’t know if I’ll ever have the chance to design for laser sintering printing, but that was a valuable first lesson.

And last but not least, I got to talk to Kitty Yeung about her projects that express a love of space through 3D printing. It’s a little different from the other speakers present, as she’s not dealing with spaceflight hardware, but such projects are an important part of the greater space enthusiast community. In between all the esoteric space hardware, it’s great to see projects that are immediately relatable to the hobbyists present.

I look forward to the next MatterHackers public event.

Sparklecon 2020 Day 2: Arduino VGAX

Unlike the first day of Sparklecon 2020, I had no obligations on the second day so I was a lot more relaxed and took advantage of the opportunity to chat and socialize with others. I brought Sawppy back for day two and the cute little rover made more friends. I hope that even if they don’t decide to build their own rover, Sawppy’s new friends might pass along information to someone who would.

I also brought some stuff to tinker at the facilities made available by NUCC. Give me a table, a power strip, and WiFi and I can get a lot of work done. And having projects in progress is always a great icebreaker for fellow hardware hackers to come up and ask what I’m doing.

Last night I was surprised to learn that one of the lighting panels at NUCC is actually the backlight of an old computer LCD monitor. The LCD is gone, leaving the brilliant white background illuminating part of the room. That motivated me to dust off the giant 30-inch monitor I had with a bizarre failure mode that made it useless as a computer monitor. I wasn’t quite willing to modify it destructively just yet, but I did want to explore the idea of using the monitor as a lighting panel. By preserving the LCD layer, I could illuminate things selectively without worrying too much about the pixel accuracy problems that made it useless as a monitor.

The next decision was the hardest: what hardware platform to use? I brought two flavors of Arduino Nano, two flavors of Teensy, and a Raspberry Pi. There were solutions for ESP32 as well, but I didn’t bring my dev board. I decided to start at the bottom of the ladder and started searching for Arduino libraries that generate VGA signals.

I found VGAX, which can pump out a very low resolution VGA signal of 160 x 80 pixels. The color capability is also constrained, limited to a few solid colors that reminded me of old PC CGA graphics. Perhaps they share similar root causes!
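Out of curiosity about what driving it looks like, here is a minimal sketch along the lines of the VGAX examples. This is written from memory of the library’s documented API (the VGAX class with begin(), clear(), and putpixel(), plus its VGAX_WIDTH/VGAX_HEIGHT constants), so consult the library’s README for the exact pin wiring and current signatures before trying it:

```cpp
// Minimal VGAX demo, assuming the library's documented API:
// VGAX::begin() starts sync/pixel signal generation on fixed pins,
// VGAX::clear() fills the framebuffer, VGAX::putpixel() plots one
// of the few available 2-bit colors. Resolution comes from the
// library's own VGAX_WIDTH/VGAX_HEIGHT constants.
#include <VGAX.h>

VGAX vga;

void setup() {
  vga.begin();   // take over timers to generate HSYNC/VSYNC + pixels
  vga.clear(0);  // black background
}

void loop() {
  // Draw a diagonal line across the drawable area in color 3.
  for (int i = 0; i < VGAX_HEIGHT && i < VGAX_WIDTH; i++) {
    vga.putpixel(i, i, 3);
  }
}
```

The pin numbers and timer usage are hard-coded inside the library, which is part of why it is so picky about which boards it supports.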

To connect my Arduino Nano to my monitor, I needed to sacrifice a VGA cable and cut it in half to expose its wires. Fortunately NUCC had a literal bucketful of them and I put one to use on this project. An electrical testing meter helped me find the right wires to use, and we were in business.

Arduino VGAX breadboard

The results were impressive in the sense that a humble 8-bit microcontroller could produce a color VGA signal, but not very useful in the sense that this particular library cannot generate full-screen video; only part of the screen was filled. I thought I might have done something wrong, but the FAQ covered “How do I center the picture,” so this was completely expected.

I would prefer to use the whole screen in my project, so my search for signal generation must continue elsewhere. But seeing VGAX up and running started gears turning in Emily’s brain. She had a few project ideas that might involve VGA. Today’s work gave a few more data points on technical feasibility, so some of those ideas might get dusted off in the near future. Stay tuned. In the meantime, I’ll continue my VGA exploration with a Teensy microcontroller.

Sparklecon 2020: Sawppy’s First Day

I brought Sawppy to Sparklecon VII because I’m telling the story of Sawppy’s life so far. It’s also an environment where a lot of people would appreciate the little miniature Mars rover running amongst them.

Sparklecon 2020 2 Sawppy near battlebot arena

Part of it was because a battlebot competition was held at Sparklecon, with many teams participating. I’m not entirely sure what the age range of the participants was, because some of the youngest may just have been siblings dragged along for the ride, and the adults may have been supervising parents. While Sawppy is not built for combat, some of the participants still had enough of a general interest in robotics to take a closer look at Sawppy.

Sparklecon 2020 3 Barb video hosting survey

The first talk I attended was Barb relaying her story of investigating video hosting. The beginning of 2020 ushered in some very disruptive changes in YouTube policies on how they treat “For Kids” videos. But as Barb explained, this is less about swear words in videos and more about Google tracking. Many YouTube content authors, including Barb, were unhappy with the changes, so Barb started looking elsewhere.

Sparklecon 2020 4 Sawppy talk

The next talk I was present for was my own, as I presented Sawppy’s story. Much of the new material in this edition was the addition of pictures and stories of rovers built by other people around the country and around the world. Plus we recorded a cool climbing capability demonstration:

Sparklecon 2020 5 Emily annoying things

Emily gave a version of the talk she gave at Supercon. Even though some of us were at Supercon, not all of us were able to make it to her talk. And she brought different visual aids this time around, so even people who were at the Supercon talk had new things to play with.

Sparklecon 2020 6 8 inch floppy drive

After we gave our talks, the weight was off our shoulders and we started exploring the rest of the con. During some conversation, Dual-D of NUCC dug up an old-school eight-inch floppy drive. Here I am failing to insert a 3.5″ floppy disk into that gargantuan device.

Sparklecon 2020 7 sand table above

Last year after Supercon I saw photographs of a sand table and was sad that I missed it. This year I made sure to scour all locations so I would find it if it was present. I found it in the display area of the Plasmatorium, drawing “SPARKLE CON” in the sand.

Sparklecon 2020 8 sand table below

Here’s the mechanism below – two stepper motors with belts control the works.

Sparklecon 2020 9 tesla coil winding on lathe

There are a full-sized manual (not CNC) lathe and mill at the 23b shop, but I didn’t get to see them run last year. This year we got to see a Tesla coil winding being built on the lathe.

For last year’s Sparklecon Day 2 writeup, I took a picture of a rather disturbing Barbie doll head transplanted on top of a baseball trophy. And I hereby present this year’s disturbing transplant.

Sparklecon 2020 WTF

Sawppy has no idea what to do about this… thing.

Watching Operation Of Electron Microscope Live Was Surprisingly Interesting

It’s always amazing to see what people bring to the Hackaday Superconference. I think the audience would have appreciated my project Sawppy, but I didn’t bring my rover to Supercon for two reasons. First, Sawppy is somewhat unwieldy and bulky, and second, I expected to be pretty busy as part of event staff helping out with badge logistics.

The second reason held true throughout the weekend, but I was put to shame on the first front because Adam McCombs (Twitter @nanographs) brought a scanning electron microscope. I never thought they were very portable and I was right, but that didn’t stop Adam! It occupied what little open space there was in the DesignLab shop area. I’ve seen SEM imagery and thought it might be fun to take a closer look, but what I didn’t realize was how cool it was to watch one in operation.

I never got time at the operator console, but I watched others turn the knobs at their disposal. I had not known how many different parameters were adjustable to highlight different features on the sample. When we see a published picture generated from a SEM, an operator has already adjusted these knobs to the appropriate settings. Seeing less-than-practiced operators adjust them live and experiment to see what works was mesmerizing.

I was also surprised at how immediately feedback was visible. It was explained to me that the whole machine is a very analog process. The path from the electron beam striking the sample to the picture on the operator console CRT has no digital frame buffers or processing inserting delay. Every once in a while an image was recorded to the adjacent laptop, and that process consumed several seconds, but the knob-twiddling is effectively instantaneous on the CRT, as are interactions with the sample. I saw some small specks of dust dance around and initially thought it was due to air movement, but then I learned the sample is held in a vacuum. What was moving the dust? The electron beam!

My mind evaluates this technology from the perspective of an optical camera, and from that perspective the available range of magnification is astounding, traversing several orders of magnitude with a single twist of a knob. I saw no indication that a SEM has any equivalent of focus or depth of field limitations: everything in the image is always razor sharp. I was not surprised to see panning across the sample, but I was surprised to see that tilt was an option as well, to view some items from different perspectives.

Watching a SEM in operation was not something I knew I needed to see until I saw it. The pictures afterwards are a great reminder, but no match for the live experience. The opportunity doesn’t come often, but if one is available I highly recommend it.

 

Sawppy at DTLA Maker Faire 2019

Sawppy returned to the downtown Los Angeles Mini Maker Faire for 2019 as a roaming exhibit. This is a change from last year, when Sawppy was part of a rover-themed booth with other JPL Open Source Rovers. Sadly we were missing representation from the JPL Open Source Rover project; none of the three rovers from last year were present this year.

The Los Angeles Maker Faire has grown even more this year and spilled into the street, specifically 5th Street adjacent to the library, which was shut down for the event to make room for an additional row of exhibits. Many of the larger booths were out here, including a robot combat arena and a few car projects like the Eggscape Eggsperience.

There was a forecast of rain, which dampened things literally and otherwise. Fortunately Sawppy was prepared for rain with a raincoat developed for Maker Faire San Mateo earlier in the year, so the light rain was not a problem.

I had fun showing Sawppy to interested attendees, but it was also an opportunity to chat with other like-minded exhibitors. I started trying to strike up conversations with people as soon as I got in line to check in as a maker. It turns out I was behind a member of the Air Quality Management District’s Air Quality Sensor Performance Evaluation Center. They were at Maker Faire to tell people about the availability of low-cost air quality sensors, both for AQMD’s own purposes and as something that could be fun for makers to tinker with. They brought a few sensors for show, and I asked if Sawppy could act as a mobile air quality sensor for a day… and they said yes!

Even though no JPL OSR builds were present, Sawppy was not the only rover there, but most of the others were static 3D-printed models. Probably from here. The one I found actually interesting was a motorized version that was done as an example application of the 3DoT board by Humans for Robots.

It was a fun day of adventure for Sawppy, topped off with a shout-out from Make!

Freebie Supercon SAO from Zio.cc

Digging through some old piles, I found this advertisement freebie given out at Supercon 2018. (This was handed out by one of the attendees and not part of the conference goody bag.) The board already has all the surface-mount pieces; I just need to solder the two through-hole components: the LED and the SAO header. It should be a short soldering project, so I might as well give it a shot.

 

With writing on both sides, I realized it wasn’t obvious which side each component should be soldered to. Well, I wasn’t going to use it as a badge SAO anyway, so it didn’t really matter. I chose arbitrary directions.

zio.cc Supercon 2018 SAO 40 connector

I’m not familiar with this “Qwiic” connector. It looks like something these guys are trying to promote as an interconnect for an ecosystem of components. (Qwiic is actually SparkFun’s I2C connector standard, adopted here by Zio.) I guess they saw Seeed Studio’s Grove connectors and decided they had a better idea? This little giveaway didn’t exactly entice me to dive into their system, but it did let me know it existed and to look it over. I guess mission accomplished for this little freebie giveaway.

 

I used my bench power supply to deliver 3.3 volts to the input pins. The LED lit up and that’s when I learned it was a fast color-changing LED. The lens is frosted instead of clear like the ones I’ve been using for fun, but the same basic idea.

It lights, it’s fine.

Sawppy Attends MatterHackers Modern Creators Night

MatterHackers is a local supplier for 3D printing. They also carry other products catering to the same hobbyist-grade audience for small-scale fabrication, such as laser cutters and CNC engravers. I’ve bought much of my supplies — including the PETG filament used to print the current iteration of Sawppy — from MatterHackers.

I’ve met some of the people of MatterHackers at Yuri’s Night, and been extended an invitation to visit their headquarters, where there’s a showroom area. But I had yet to take them up on that invitation because, while MatterHackers is within driving distance, it is a nontrivial drive accounting for LA traffic. Going to that area is basically a day trip, and it kept not happening.

But when Sawppy was invited to be at their Modern Creators Night event, that was enough motivation for me to pack up and take that trip. At the event, I learned it was a scaled-up version of an occasional meetup MatterHackers used to hold at their main office, where there was no longer enough space. Their business has grown, as has the size of the crowds. Hence – a new event at a new venue with a new format.

Sawppy arrived and started roaming the exhibit area as rovers tend to do. But when it came time for the speakers to present, Sawppy was given a table as a static exhibit. I made some signage I could tape to the table, to explain Sawppy when I wasn’t present to tell the story.

Sawppy on small MatterHackers display table

There were a lot of 3D prints on exhibit by other makers, and one of them decided to loan their little printed model of Curiosity to keep Sawppy company during speaker presentations. I was happily surprised when I saw this little guy, and didn’t meet its owner until later. I regret to say I’ve already forgotten her name, but her generous and appropriate loan is very much appreciated!

Sawppys little friend

A Day At CRASH Space LA

Visiting CRASH Space LA has been on my to-do list ever since I was introduced to Barb and Jay at Maker Faire 2018. We’ve seen each other at numerous events since then, and it was pretty ridiculous that I saw them more often in San Mateo than in our home town. The problem is that they are on the far side of the basin. Going to Culver City through the infamous LA traffic means a visit is not an “I’ll just stop by” affair; it is a day-trip kind of expedition! Finally Emily and I visited during their “Mega Take Apart & Swap Meet” day this month.

I brought some things but Emily brought more, and they were more interesting. She tore into some sort of retired dental surgery tool with components indicating high voltage operation. We’re not sure why a dentist would need high voltage in our mouths, and we didn’t much care. (Or chose not to think about it.) What’s inside is far more interesting.

I dug into another box Emily had brought. It was some sort of power supply with all the appearance of being a one-off homebuilt project.

After the take-apart event Emily went off to visit friends living in that part of town. I stuck around CRASH Space for their Video Dim Sum event where I learned there are a lot of very odd things available on YouTube. I expected this, so it was a 100% success on that front. However my taste rarely aligned with the people who submitted videos so my overall entertainment-to-time ratio was pretty poor. I did learn some interesting things that I would not have otherwise, so it was still a fun thing to try at least once.

Out of all the videos that were shown, just one was memorable and compelling enough for me to go find and rewatch. Here it is, my personal winner of Video Dim Sum:

Pasadena Chalk Festival 2019

This past weekend was spent looking over artists working intently at Paseo Colorado for the Pasadena Chalk Festival 2019. I feel it is to chalk artists what badge hacking at Supercon is to electronics people. Since I never graduated beyond the kindergarten stage of chalk art, I learned about a surprising variety of tools and techniques for applying chalk to concrete. As someone who loves to learn about the behind-the-scenes of every creation, it was fun to revisit certain favorite pieces to see them progress through the weekend.

There were many original works, but most of my attention was focused on recreations of animated characters and scenes I’d already seen. A notable departure from this pattern was a large mural depicting space exploration, including my favorite Mars rover Curiosity:

Monsters, Inc. characters by Jazlyn Jacobo:

Kiki’s Delivery Service:

Aladdin’s Genie and Carpet play a game of chess. Drawn by Jen:

A scene from Toy Story 4 teaser, drawn in front of the theater which will be playing the movie next weekend. Drawn by Gus Moran:

The Lion King’s Simba and Mufasa by Kathleen Sanders. This was quite fitting since it was also Father’s Day:

Grandfather and grandson from Coco feature in this highly detailed composition by Patty Gonzalez:

Other works I liked:

This artist, drawing a chalk portrait of Luke Skywalker as an X-Wing pilot, brought along a 3D prop in the form of a full-sized R2-D2.

Chalk festival R2D2

The most novel piece was by C.Nick in the Animation Alley. Here I expected to find artists working with animated characters… I was delighted to find an actual animated chalk drawing.

Chalk festival C Nick tinkerbell
