Remaining To-Do For My Next Unity 3D Adventure

I enjoyed my Unity 3D adventure this time around, starting from the LEGO microgame tutorials through to the Essentials pathway and finally venturing out and learning pieces at my own pace in my own order. My result for this Unity 3D session was Bouncy Bouncy Lights and while I acknowledge it is a beginner effort, it was more than I had accomplished on any of my past adventures in Unity 3D. Unfortunately, once again I find myself at a pause point, without a specific motivation to do more with Unity. But there are a few items still on the list of things that might be interesting to explore for the future.

The biggest gap I see in my Unity skill is creating my own unique assets. Unity supports 2D and 3D creations, but I don’t have art skills in either field. I’ve dabbled in Inkscape enough that I might be able to build some rudimentary things if I need to, and for 3D meshes I could import STL so I could apply my 3D printing design CAD skills. But the real answer is Blender or similar 3D geometry creation software, and that’s an entirely different learning curve to climb.

Combing through Unity documentation, I learned of a “world building” tool called ProBuilder. I’m not entirely sure where it fits in the greater scheme of things, but I can see it has tools to manipulate meshes and even create them from scratch. It doesn’t claim to be a good tool for doing so, but supposedly it’s a good tool for whipping up quick mockups and placeholders. Most of the information about ProBuilder is focused on UV mapping, but I didn’t know that at the start. All the ProBuilder documentation assumes I already knew what UV meant, and all I could tell is that UV didn’t mean ultraviolet in this context. Fortunately, searching for UV in the context of 3D graphics led me to the Wikipedia article on UV mapping. There is a dearth of written documentation for ProBuilder; what little I found all points to a YouTube playlist. Maybe I’ll find the time to sit through it later.

I skimmed through the Unity Essentials sections on audio because Bouncy Bouncy Lights was to be silent, so audio is still on the to-do list. And like 2D/3D asset creation, I’m neither a musician nor a sound engineer. But if I ever come across motivation to climb this learning curve, I know where to go to pick up where I left off. I know I have a lot to learn, since even my meager audio experimentation already produced one lesson: AudioSource.Play stops any prior occurrence of the sound. If I want instances of the same sound to overlap, I have to use AudioSource.PlayOneShot.
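That lesson can be sketched as a tiny script (the component and clip names here are hypothetical, assuming a standard AudioSource set up in the Inspector):

```csharp
using UnityEngine;

// Sketch of the lesson above: Play() restarts the source's current clip,
// while PlayOneShot() lets multiple instances of a sound overlap.
public class BounceSound : MonoBehaviour
{
    public AudioSource source;   // assigned in the Inspector
    public AudioClip bounceClip; // hypothetical clip asset

    void OnCollisionEnter(Collision collision)
    {
        // source.Play() would cut off a bounce sound still in progress;
        // PlayOneShot() mixes this instance on top of any already playing.
        source.PlayOneShot(bounceClip);
    }
}
```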

Incorporating video is an interesting way to make Unity scenes more dynamic, without adding complexity to the scene or overloading the physics or animation engine. There’s a Unity Learn tutorial about this topic, but I found that video assets are not incorporated in WebGL builds. The documentation said video files must be hosted independently for playback by WebGL, which adds to the hosting complications if I want to go down that route.

WebGL
The Video Clip Importer is not used for WebGL game builds. You must use the Video Player component’s URL option.
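The documentation excerpt above translates into something like the following sketch, where the URL is a placeholder for wherever the video file ends up hosted:

```csharp
using UnityEngine;
using UnityEngine.Video;

// Sketch of the WebGL-friendly approach: instead of an imported Video Clip
// asset, point the Video Player component at an independently hosted URL.
public class WebVideo : MonoBehaviour
{
    void Start()
    {
        var player = GetComponent<VideoPlayer>();
        player.source = VideoSource.Url;
        player.url = "https://example.com/my-clip.mp4"; // placeholder host
        player.Play();
    }
}
```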

And finally, I should set aside time to learn about shaders. Unity’s default shader is effective, but it has become quite recognizable, and there are jokes about the “Unity Look” of games that never modified default shader properties. I personally have no problem with this, as long as the gameplay is good. (I highly recommend the Overcooked game series, built in Unity and sporting the look.) But I am curious about how to make a game look distinctive, and shaders are the best tool to do so. I found a short Unity Learn tutorial, but it doesn’t cover very much before dumping readers into the Writing Shaders section of the manual. I was also dismayed to learn that we don’t get IntelliSense or similar helpfulness in Visual Studio when writing shader files. This is going to be a course of study all on its own, and again I await good motivation to go climb that learning curve.

I enjoyed this session of Unity 3D adventure, and I really loved that I got far enough this time to build my own thing. I’ve summarized this adventure in my talk to ART.HAPPENS, hoping that others might find my experience informative in video form in addition to written form on this blog. I’ve only barely scratched the surface of Unity. There’s a lot more to learn, but that’ll be left to future Unity adventures because I’m returning to rover work.

Venturing Beyond Unity Essentials Pathway

To help beginners learn how to create something simple from scratch, Unity Learn set up the Essentials pathway, which I followed. Building from an empty scene taught me a lot of basic tasks that were already done for us in the LEGO microgame tutorial template, enough that I felt ready to start building my own project for ART.HAPPENS. It was a learning exercise, running into one barrier after another, but I felt confident I knew the vocabulary to search for answers on my own.

Exercises in the Essentials pathway got me started on the Unity 3D physics engine, with information about setting up colliders and physics materials. Building off the rolling ball exercise, I created a big plane for balls to bounce around on and increased the bounciness of both ball and plane. The first draft was a disaster: unlike real life, it is trivial to build a perfectly flat plane in a digital world, so the balls kept bouncing in the same place forever. I had to introduce a tilt to make the bounces more interesting.
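The same bounciness settings I made in the editor can also be applied from code; here is a minimal sketch, assuming a ball with a standard collider already attached:

```csharp
using UnityEngine;

// Sketch: crank up bounciness on a ball's collider from code. The same
// values can be set on a Physic Material asset in the editor instead.
public class MakeBouncy : MonoBehaviour
{
    void Start()
    {
        var material = new PhysicMaterial("Bouncy")
        {
            bounciness = 0.95f,
            // Use the larger bounciness of the two touching surfaces,
            // so a bouncy ball stays bouncy even on a less bouncy plane.
            bounceCombine = PhysicMaterialCombine.Maximum
        };
        GetComponent<Collider>().material = material;
    }
}
```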

But while bouncing balls look fun (title image) they weren’t quite good enough. I thought adding a light source might help but that still wasn’t interesting enough. Switching from ball to cube gave me a clearly illuminated surface with falloff in brightness, which I thought looked more interesting than a highlight point on a ball. However, cubes don’t roll and would stop on the plane. For a while I was torn: cubes look better but spheres move better. Which way should I go? Then a stroke of realization: this is a digital world and I can change the rules if I want. So I used a cube model for visuals, but attached a sphere model for physics collisions. Now I have objects that look like cubes but bounce like balls. Something nearly impossible in the real world but trivial in the digital world.
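The look-like-a-cube, bounce-like-a-ball trick can be sketched as follows; this is one way to wire it up, not necessarily how Bouncy Bouncy Lights does it internally:

```csharp
using UnityEngine;

// Sketch of the hybrid object: keep the cube mesh for visuals, but swap
// its box collider for a sphere collider so the physics engine treats
// it as a ball that can roll and bounce freely.
public class CubeBall : MonoBehaviour
{
    void Start()
    {
        // Remove the box collider a cube primitive comes with...
        Destroy(GetComponent<BoxCollider>());
        // ...and collide as a sphere instead.
        gameObject.AddComponent<SphereCollider>();
        gameObject.AddComponent<Rigidbody>(); // let physics take over
    }
}
```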

To make these lights show up better, I wanted a dark environment. This was a multi-step procedure. First I did the obvious: delete the default light source that came with the 3D project template. Then I had to look up environment settings to turn off the “Skybox”. That still wasn’t dark, until I edited camera settings to change the default color to black. Once everything went black I noticed the cubes weren’t immediately discernible as cubes anymore, so I turned the lights back up… but decided it was more fun to start dark and turned the lights back off.
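I made these changes through editor settings, but the same steps can be expressed in a script, which doubles as a summary of the procedure:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch of the darkening steps described above, done from code instead
// of the editor: no skybox, no ambient light, black camera background.
public class DarkRoom : MonoBehaviour
{
    void Start()
    {
        RenderSettings.skybox = null;                  // turn off the skybox
        RenderSettings.ambientMode = AmbientMode.Flat; // flat ambient color...
        RenderSettings.ambientLight = Color.black;     // ...set to black

        var mainCamera = Camera.main;
        mainCamera.clearFlags = CameraClearFlags.SolidColor;
        mainCamera.backgroundColor = Color.black;      // default color to black
    }
}
```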

I wanted to add user interactivity but realized the LEGO microgame used an entirely different input system than standard Unity, and nothing on the Essentials pathway taught me about user input. Searching around on Unity Learn, I got very confused by contradictory information until I eventually figured out there are two Unity user input systems: the “Input Manager”, which is the legacy system, and its candidate replacement, the “Input System Package”, which is intended to solve problems with the old system. Since I had no investment in the old system, I decided to try the new one. Unfortunately, even though there’s a Unity Learn session, I still found it frustrating, as did others. I got far enough to add interactivity to Bouncy Bouncy Lights, but it wasn’t fun. I’m not even sure I should be using it yet, seeing how none of the microgames did. Now that I know enough to know what to look for, I could see that the LEGO microgame used the old input system. Either way, there’s more climbing of the learning curve ahead. [UPDATE: After I wrote this post, but before I published it, Unity released another tutorial for the new input system. Judging by this demo, Bouncy Bouncy Lights is using it incorrectly.]
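For reference, the simplest way I know of to poll the new Input System Package looks something like the sketch below. This is direct device polling; the package’s fuller (and recommended) approach uses actions and bindings, which is likely what the newer tutorial demonstrates:

```csharp
using UnityEngine;
using UnityEngine.InputSystem; // the Input System Package, not the legacy Input Manager

// Sketch: polling the keyboard directly through the new Input System.
public class TiltControl : MonoBehaviour
{
    void Update()
    {
        var keyboard = Keyboard.current;
        if (keyboard == null) return; // no keyboard attached

        if (keyboard.leftArrowKey.isPressed)
            transform.Rotate(0f, 0f, 20f * Time.deltaTime);  // tilt one way
        if (keyboard.rightArrowKey.isPressed)
            transform.Rotate(0f, 0f, -20f * Time.deltaTime); // tilt the other

        if (keyboard.spaceKey.wasPressedThisFrame)
            Debug.Log("spawn an extra light here"); // placeholder action
    }
}
```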

The next to-do item was to add the title and interactivity instructions. After frustration with exploring a new input system, I went back to the LEGO microgame and looked up exactly how they presented their text. I learned it was a system called TextMesh Pro, and thankfully it had a Unity Learn section plus a PDF manual installed as part of the asset download. Following those instructions, it was straightforward to put up some text using the default font. After my input system frustration, I didn’t want to get much more adventurous than that.
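Setting text through TextMesh Pro from a script is about this simple (the text and size values are illustrative; placing a TextMesh Pro object in the editor and typing into it works just as well):

```csharp
using TMPro;
using UnityEngine;

// Sketch: updating a TextMesh Pro text object from code, assuming a
// TextMeshPro component already exists on this GameObject.
public class TitleCard : MonoBehaviour
{
    void Start()
    {
        var text = GetComponent<TextMeshPro>();
        text.text = "Bouncy Bouncy Lights"; // title shown on the card
        text.fontSize = 12f;                // illustrative size
    }
}
```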

I had debated when to present the interactivity instructions. Ideally I would present them just as the audience gets oriented, recognizes the default setup, and possibly starts getting bored and ready to move on, so I can give them interactivity to keep their attention. But I have no idea when that would be. When I read the requirement that the title of the piece should be in the presentation, I added that as a title card before showing the bouncing lights. And once I added a title card, it was easy to add another card with the instructions to be shown before the bouncing lights. The final twist was the realization that I shouldn’t present them as static cards that fade out: since I already had all these physical interactions in place, the cards are presented as falling, bouncing objects in their own right.

The art submission instructions said to put in my name and a way for people to reach me, so I put my name and newscrewdriver.com at the bottom of the screen using TextMesh Pro. Then it occurred to me the URL should be a clickable link, which led me down the path of finding out how a Unity WebGL title can interact with the web browser. There seemed to be several different deprecated ways to do it, but they all point to the current recommended approach, and now my URL is clickable! For fun, I added a little spotlight effect when the mouse cursor is over the URL.
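One simple way to wire up a clickable link is sketched below. This is not necessarily the recommended approach mentioned above, just the most compact one I know of; in WebGL builds, pop-up blockers can interfere with Application.OpenURL, which is part of why browser-side plugin approaches get discussed:

```csharp
using UnityEngine;

// Sketch: detect a click on an object (requires a collider on it) and
// ask the browser or OS to open a URL.
public class ClickableUrl : MonoBehaviour
{
    public string url = "https://newscrewdriver.com";

    void OnMouseUpAsButton() // fires when a click is pressed and released on this collider
    {
        Application.OpenURL(url);
    }
}
```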

The final touch is to modify the presentation HTML to suit the Gather virtual space used by ART.HAPPENS. By default Unity WebGL build generates an index.html file that puts the project inside a fixed-size box. Outside that box is the title and a button to go full screen. I didn’t want the full screen option for presenting this work in Gather, but I wanted to fill my <iframe> instead of a little box within it. My CSS layout skills are still pretty weak and I couldn’t figure it out on my own, but I found this forum thread which taught me to replace the <body> tag with the following:

  <body>
      <div class="webgl-content" style="width:100%; height:100%">
        <div id="unityContainer" style="width:100%; height:100%">
        </div>
      </div>
  </body>

I don’t understand why we need to put 100% styles on two elements before it works, but hopefully I will understand whenever I get around to my long-overdue study session on CSS layout. The final results of my project can be viewed at my GitHub Pages hosting location. That is a satisfying result, but there is a lot more of Unity to learn.

Notes on Unity Essentials Pathway

As far as Unity 3D creations go, my Bouncy Bouncy Lights project is pretty simple, as expected of a beginner’s learning project. My Unity (re)learning session started with their LEGO microgame tutorial, but I didn’t want to submit a LEGO-derived Unity project for ART.HAPPENS. (And it might not have been legal under the LEGO EULA anyway.) So after completing the LEGO microgame tutorial and its suggested Creative Mods exercises, I still had more to learn.

The good news is that Unity Learn has no shortage of instruction materials; the bad news is that a beginner gets lost on where to start. To help with this, they’ve recently (or at least since the last time I investigated Unity) rolled out the concept of “Pathways”, which organize a set of lessons targeted at a particular audience. People looking for something after completing their microgame tutorial are sent towards the Unity Essentials Pathway.

Before throwing people into the deep pool that is Unity Editor, the Essentials Pathway starts by setting us up with a lot of background information in the form of video interview clips with industry professionals who use Unity. I prefer to read instead of watching videos, but I wanted to hear these words of wisdom so I sat through them. I loved that they allocated time to assure beginners that they’re not alone if they find Unity Editor intimidating at first glance. The best part was the person who claimed their first experience was taking one look, saying “Um, no,” closing Unity, and not returning for several weeks.

Other interviews covered the history of Unity, how it enabled creation of real-time interactive content, and how the tool evolved alongside the industry. There was also information for people interested in building a new career using Unity, introducing terminology and even common job titles that can be used in queries on sites like LinkedIn. I felt this section offered more applicable advice for this job field than I ever received in college for mine. I was mildly amused and surprised to see Unity classes ended with a quiz to make sure I understood everything.

After this background we are finally set loose on Unity Editor, starting from scratch. Well, from an empty 3D project template, which is as close to scratch as I cared to get: the template has a camera and a light source but not much else, unlike the microgames which came already filled with assets and code. This is what I wanted to see: how to start from geometry primitives and work my way up, pulling from the Unity Asset Store as needed for useful prebuilt pieces. One of the exercises was to make a ball roll down a contraption of our own design (title image), and I paid special attention to this interaction. The Unity physics engine was the main reason I chose to study Unity instead of three.js or A-Frame, and it became the core of Bouncy Bouncy Lights.

I’ve had a lot of experience writing C# code, so I was able to quickly breeze through the C# scripting portions of Unity Essentials. But I’m not sure this is enough to get a non-coder up and running on Unity scripting. Perhaps Unity decided they’re not a coding boot camp and didn’t bother to start at the beginning. People who have never coded before will need to go elsewhere before coming back to Unity scripting, and a few pointers to such resources would be nice.

I skimmed through a few sections that I decided were unimportant for my ART.HAPPENS project. Sound was one of them: very important for an immersive gaming experience, but my project would be silent because the Gather virtual space has a video chatting component and I didn’t want my sounds to interfere with people talking. Another area I quickly skimmed through was using Unity for 2D games, which is not my goal this time, but perhaps I’ll return to it later.

And finally, there was information pointing us to Unity Connect and setting up a profile. At first glance it looked like Unity tried to set up a social network for Unity users, but it is shutting down, with portions redistributed to other parts of the Unity network. I had no investment there so I was unaffected, but it made me curious how often Unity shuts things down. Hopefully not as often as Google, which has become infamous for doing so.

I now have a basic grasp on this incredibly capable tool, and it’s time to start venturing beyond guided paths.

Bouncy Bouncy Lights

My motivation for going through Unity’s LEGO microgame tutorial (plus associated exercises) was to learn Unity Editor in the hopes of building something for ART.HAPPENS, a community virtual art show. I didn’t expect to build anything significant with my meager skills, but I made something with the skill I have. It definitely fit with the theme of everyone sharing works that they had fun with, and learned from. I arrived at something I felt was a visually interesting interactive experience which I titled Bouncy Bouncy Lights and, if selected, should be part of the exhibition opening today. If it was not selected, or if the show has concluded and its site taken down, my project will remain available at my own GitHub Pages hosting location.

There are still a few traces of my original idea, which was to build a follow-up to Glow Flow: something colorful with Pixelblaze-controlled LED lights. But I decided to move from the physical to the digital domain, so now I have random brightly colored lights in a dark room, each reflecting off an associated cube. By default there isn’t enough light for the viewer to immediately see the whole cube, just the illuminated face. I want them to observe the colorful lights moving around for a bit before they recognize what’s happening, prompting the delight of discovery.

Interactivity comes in two forms. Arrow keys change the angle of the platform, which changes the direction of the bouncing cubes. There is a default time interval for new falling cubes; I chose it so that there will always be a few lights on screen, but not so many as to make the cubes obvious. The user can also press the space bar to add lights faster than the default interval. If the space bar is held down, the extra lights will add enough illumination to make the cubes obvious, and they’ll frequently collide with each other. I limited it to a certain rate because the aesthetics change if too many lights all jump in. Thankfully I don’t have to worry about things like ensuring sufficient voltage supply for lights when working in the digital world, but too many lights in the digital world add up to white, washing out the individual colors to a pastel shade. And too many cubes interfere with bouncing, and we get an avalanche of cubes trying to get out of each other’s way. It’s not the look I want for the project, but I left in a way to do it as an Easter egg. Maybe people will enjoy bringing it up once in a while for laughs.
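The spawn pacing described above can be sketched as a timer plus a rate cap. Names and numbers here are illustrative, not the actual project values, and this sketch polls the legacy Input Manager purely to keep it short:

```csharp
using UnityEngine;

// Sketch: a default interval keeps a few lights on screen, and holding
// the space bar adds more, but never faster than a minimum gap.
public class LightSpawner : MonoBehaviour
{
    public GameObject lightCubePrefab;  // hypothetical prefab
    public float defaultInterval = 2f;  // seconds between automatic spawns
    public float minSpawnGap = 0.15f;   // cap on how fast space can spawn

    private float nextAutoSpawn;
    private float nextAllowedSpawn;

    void Update()
    {
        if (Time.time >= nextAutoSpawn)
        {
            Spawn();
            nextAutoSpawn = Time.time + defaultInterval;
        }
        else if (Input.GetKey(KeyCode.Space) && Time.time >= nextAllowedSpawn)
        {
            Spawn(); // extra lights, rate-limited by minSpawnGap
        }
    }

    void Spawn()
    {
        Instantiate(lightCubePrefab, transform.position, Random.rotation);
        nextAllowedSpawn = Time.time + minSpawnGap;
    }
}
```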

I’m happy with how Bouncy Bouncy Lights turned out, but I’m even happier with it as a motivation for my journey learning how to work with a blank Unity canvas.

Notes on Unity LEGO Microgame Creative Mods

Once a Unity 3D beginner completes the tightly-scripted microgame tutorial, they are directed towards a collection of “Creative Mods”. These suggested exercises build on top of what we created in the scripted tutorial, except now individual tasks are more loosely described and we are encouraged to introduce our own variations. We are also allowed to experiment freely, as the Unity Editor is no longer partially locked down to keep us from going astray. The upside of complete freedom is balanced by the downside of easily shooting ourselves in the foot. But now we know enough not to do that, or to fix it if we do. (In theory.)

Each of the Unity introductory microgames has its own list of suggested modifications, and since I had just completed the LEGO microgame, I went through the LEGO list. I was mildly surprised to see this list grow while I was in the middle of working through it: as of this writing, new suggested activities are still being added. Some of these weren’t actually activities at all, such as one entirely focused on a PDF (apparently created from PowerPoint) serving as a manual for the list of available LEGO Behaviour Bricks. But most of the others introduce something new and interesting.

In addition to the LEGO themed Unity assets from the initial microgame tutorial, others exist for us to import and use in our LEGO microgame projects. There was a Christmas-themed set with Santa Claus (causing me to run through the Visual Studio 2019 installer again from Unity Hub to get Unity integration), a set resembling LEGO City except it’s on a tropical island, a set for LEGO castles, and my personal favorite: LEGO Space. Most of my personal LEGO collection was from their space theme, and I was happy to see a lot of my old friends available for play in the digital world.

When I noticed the list of activities grow while I was working on them, it gave me the feeling this was a work in progress. That feeling continued when I imported some of these asset collections and fired up their example scenes. Not all of them worked correctly; the problems mostly centered around how LEGO pieces attach to each other, especially the Behaviour Bricks. Models detach and break apart at unexpected points. Sometimes I could fix it by using the Unity Editor to detach and re-attach bricks, but not always. This brick attachment system is not a standard Unity Editor feature but an extension built for the LEGO microgame theme, and I guess there are still some bugs to be ironed out.

The most exciting part of the tutorial was an opportunity to go beyond the LEGO prefab assets they gave us and build our own LEGO creations for use in Unity games. A separate “Build your own Enemy” tutorial gave us instructions on how to build with LEGO piece by piece within Unity Editor, but that’s cumbersome compared to using dedicated LEGO design software like BrickLink Studio and exporting the results to Unity. We don’t get to use arbitrary LEGO pieces, we have to stay within a prescribed parts palette, but it’s still a lot of freedom. I immediately built myself a little LEGO spaceship because old habits die hard.

I knew software like BrickLink Studio existed, but this was the first time I sat down and tried to use one. The parts palette was disorienting, because it was completely unlike how I work with LEGO in the real world. I’m used to pawing through my bin of parts looking for the one I want, not selecting parts from a menu organized under an unfamiliar taxonomy. I wanted my little spaceship to have maneuvering thrusters, something I add to almost all of my LEGO space creations, but they seemed to be absent from the approved list. (UPDATE: A few days later I found it listed under “3963 Brick, Modified 1 x 1 with 3 Loudspeakers / Space Positioning Rockets”.) The strangest omission seems to be wheels. I see a lot of parts for automobiles, including car doors and windshields and even fender arches. But the only wheels I found in the approved list are steering wheels. I doubt they would include so many different fender arches without wheels to put under them, but I can’t find a single ground vehicle wheel in the palette! Oversight, puzzling intentional choice, or my own blindness? I lean towards the last, but for now it’s just one more reason for me to stick with spaceships.

My little LEGO spaceship, alongside many other LEGO microgame Creative Mods exercises (but not all, since the list is still growing), was integrated into my variant of the LEGO microgame and uploaded as “Desert Dusk Demo”. The first time I uploaded, I closed the window and panicked because I hadn’t copied down the URL and didn’t know how to find it again. Eventually I figured out that everything I uploaded to Unity Play is visible at https://play.unity.com/discover/mygames.

But since the legal terms of LEGO microgame assets are restricted to that site, I have to do something else for my learn-and-share creation for ART.HAPPENS. There were a few more steps I had to take there before I had my exhibit Bouncy Bouncy Lights.

Notes on Unity LEGO Microgame Tutorial

To help Unity beginners get their bearings inside a tremendously complex and powerful tool, Unity published small tutorials called microgames. Each of them represents a particular game genre, with the recently released LEGO microgame as the default option. Since I love LEGO, I saw no reason to deviate from this default. These microgame tutorials are implemented as Unity project templates that we can launch from Unity’s Hub launcher; they’re just filled out with far more content than the typical empty Unity project template.

Once a Unity project was created with the LEGO microgame template (and after we accepted all the legal conditions of using these LEGO digital assets) we see the complex Unity interface. Well aware of how intimidating it may look to a beginner, the tutorial darkened the majority of options and highlighted just the one we needed for that step in the tutorial. This got me wondering: the presence of these tutorial microgames implies the Unity Editor UI itself can be scripted and controlled, so how is that done? But that’s not my goal today, so I set that observation aside.

The LEGO microgame starts with the basics: how to save our progress and how to play test the game in its current state. The very first change is adjusting a single variable, our character’s movement speed, and test its results. We are completely on rails at this point: the Unity Editor is locked off so I couldn’t change any other character variable, and I couldn’t even proceed unless I changed the character speed to exactly the prescribed value. This is a good way to make sure beginners don’t inadvertently change something, since we’d have no idea how to fix it yet!

Following chapters of the tutorial gradually open up the editor, allowing us to use more and more editor options and giving us gradually more latitude to change the microgame as we liked. We are introduced to the concept of “assets” which are pieces we use to assemble our game. In an ideal world they snap together like LEGO pieces, and in the case of building this microgame occasionally they actually do represent LEGO pieces.

Aside from in-game objects, the LEGO microgame also allows us to define and change in-game behavior using “Behaviour Bricks”: assets that look just like any other LEGO brick in game, except they are linked to Unity code behind the scenes, giving them more functionality than a static plastic brick. I appreciated how it makes game development super easy, as the most literal implementation of “object-oriented programming” I have ever seen. However, I was conscious of the fact that these behavior bricks are limited to the LEGO microgame environment. Anyone who wishes to venture beyond it will have to learn entirely different ways to implement Unity behavior, and these training wheels will be of limited help.

The final chapter of this LEGO microgame tutorial ended with walking us through how to build and publish our project to Unity Play, their hosting service for people to upload their Unity projects. I followed those steps to publish my own LEGO microgame, but what’s online now isn’t just the tutorial. It also included what they called “Creative Mods” for a microgame.

Unity Tutorial LEGO Microgame

Once I made the decision to try learning Unity again, it was time to revisit Unity’s learning resources. This was one aspect that I appreciated about Unity: they have continuously worked to lower their barrier to entry. Complete beginners are started on tutorials that walk us through building microgames, which are prebuilt Unity projects that show many of the basic elements of a game. Several different microgames are available, each representing a different game genre, so a beginner can choose whichever one that appeals to them.

But first an ambitious Unity student has to install Unity itself. Right now Unity releases are named by year, much like Ubuntu. Today the microgame tutorials tell beginners to install version 2019.4, but they do not explain why. I was curious why they tell people to install a version that is approaching two years old, so I did a little digging. The answer is that Unity designates specific versions as LTS (Long Term Support) releases. Unity LTS is intended to be a stable and reliable version, with the best library compatibility and the most complete product documentation. More recent releases may have shiny new features, but a beginner wouldn’t need them, and it makes sense to start with the latest LTS. Which, as of this writing, is 2019.4.

I vaguely recall running through one of these microgame exercises on an earlier attempt at Unity. I chose the karting microgame because I had always liked driving games. Gran Turismo on Sony PlayStation (the originals in both cases, before either got numbers) was what drew me into console gaming. But I ran out of steam on the karting microgame and those lessons did not stick. Since I’m effectively starting from scratch, I might as well start with a new microgame, and the newest hotness released just a few months ago is the LEGO microgame. Representing third-person view games like Tomb Raider and, well, the LEGO video games we can buy right now!

I don’t know what kind of business arrangement behind the scenes made it possible to have digital LEGO resources in our Unity projects, but I am thankful it exists. And since Unity doesn’t own the rights to these assets, the EULA for starting a LEGO microgame is far longer than for the other microgames using generic game assets. I was not surprised to find clauses forbidding use of these assets in commercial projects, but I was mildly surprised that we are only allowed to host them on Unity’s project hosting site; we can’t host them on our own sites elsewhere. But the most unexpected clause in the EULA is that all LEGO creations depicted in our microgames must be creatable with real LEGO bricks. We are not allowed to invent LEGO bricks that do not exist in real life. I don’t find that restriction onerous, just surprising, though it made sense in hindsight. I’m not planning to invent an implausible LEGO brick in my own tutorial run, so I should be fine.

Checking In on Unity 3D

Deciding to participate in ART.HAPPENS is my latest motivation to look at Unity 3D, something I’ve done several times before. My first look was almost five years ago, and my most recent look was about a year and a half ago in the context of machine learning. Unity is a tremendously powerful tool and I’ve gone through a few beginner tutorials, but I never got as far as building my own Unity project. Will that finally change this time?

My previous look at Unity was motivated by an interest in getting into the exciting world of machine learning, specifically in the field of reinforcement learning. That line of investigation did not get very far, but as most machine learning tools are focused on Linux there was the question of Unity’s Linux support. Not just to build a game (which is supported) but also to run the Unity editor itself on Linux. My investigation was right around the time Unity Editor for Linux entered beta with expectation for release in 2020, but that has been pushed to 2021.

For my current motivation, it’s not as important to run the editor on Linux. I can just as easily create something fun and interactive by running Unity on Windows. Which led to the next question: could I output something that can work inside an <iframe> hosted within Gather, the virtual space for ART.HAPPENS? On paper the answer is yes. Unity has had the ability to render content using WebGL for a while, and their code has matured alongside browser support for WebGL. But even better is the development (and even more importantly, browser adoption) of WebAssembly for running code in a browser. This results in Unity titles that are faster to download and to execute than the previous approach of compiling Unity projects to JavaScript. These advancements are far more encouraging than what Unity competitor Unreal Engine has done, which was to boot HTML5 support out of core to a community project. Quite a sharp contrast to Unity’s continued effort to make web output a first class citizen among all of its platforms, and this gives me the confidence to proceed and dive in to the latest Unity tutorial for beginners: LEGO!

ART.HAPPENS Motivates Return to Unity 3D

I’ve been talking about rovers on this blog for several weeks nonstop. I thought it would be fun to have a micro Sawppy rover up and running in time for Perseverance landing on February 18th, but I don’t think I’ll make that self-imposed deadline. I have discovered I cannot sustain “all rovers all the time” and need a break from rover work. I’m not abandoning the micro rover project, I just need to switch to another project for a while as a change of pace.

I was invited to participate in ART.HAPPENS, a community art show. My first instinct was to say “I’m not an artist!” but I was gently corrected. This is not the fancy schmancy elitist art world; it is a world of people having fun and sharing their work. Yes, some of the people present are bona fide artists, but I was assured anyone who wants to share something made for the sheer joy of creating can join in the fun.

OK then, I can probably take a stab at it. Most of my projects are done to accomplish a specific purpose or task, so it’s a rare treat to set aside objectives and build something just for fun. My first line of thought was to build a follow-up to Glow Flow: something visually pleasing and interactive, built out of 3D printed parts and illuminated by colorful LEDs controlled with a Pixelblaze. It’s been on my to-do list to explore more ideas for using a Pixelblaze.

Since we’re in the middle of a pandemic, this art show is a virtual affair. I learned that people would be sharing photos and videos of their projects in a virtual meeting space called Gather. Chosen partially because the platform was built to be friendly to all computer skill levels, Gather tries to eliminate the friction of digital gatherings.

I poked my head into Gather and saw an aesthetic that reminded me of old Apple //e video games that used a top-down tiled view. For those old games, it was a necessity due to the limited computing power and memory of an old Apple computer. And those same traits are helpful here to build a system with minimal hardware requirements.

Sharing photos and videos of something like Glow Flow would be fun, but wouldn’t be the full experience. Glow Flow looked good, but the real fun came from handling it with our own hands. I was willing to make some compromises given the reality of the world today, until I noticed that individual projects would be shared as web content hosted in an <iframe>. That changes the equation: if I have control of the content inside an <iframe>, I can build something interactive after all.

I briefly looked at a few things that might have been interesting, like three.js and A-Frame. But as I read documentation for those platforms, my enthusiasm was dampened by the shortcomings I came across. For example, building experiences incorporating physics simulation seems to be a big can of worms on those platforms. Eventually I decided: screw it, if I’m going to do this, I’m going to go big. It’s time to revisit Unity 3D.

Micro Sawppy Beta 3 Differential Link

Several design changes in Micro Sawppy Beta 3 (MSB3) allowed me to experiment with different suspension rocker designs: the multi-piece rocker with deployment pivot and the single-piece rocker for straightforward assembly. One of those enabling changes was a rework of the link between the rocker and the differential, continuing the experiment started with MSB1 and MSB2 of using 3D printed living hinges for this critical linkage.

Enabling a single-piece rocker was not an explicit goal of this rework; that was just a convenient side effect I took advantage of. The real focus of this iteration was to make the living hinge itself a small disposable component that can be easily replaced. Because micro Sawppy is targeted at an audience with entry-level printers, I can’t assume rover builders can print with flexible filament like TPU. So if I want to use a living hinge in my design, I have to account for the fact that beginner-friendly 3D printing materials like PLA will fatigue and break with some regularity.

If the living hinges of MSB1 and MSB2 should break, it would mean reprinting and replacing some very large parts. In contrast, the MSB3 hinge is a tiny part that clips into the larger surrounding rigid parts. Furthermore, it is the same part at all four joints: we only need to keep one style on hand in replacement inventory, which can be used to fix any of the four joints that might break.

An associated benefit is that we can print the hinge with different settings to find the best tradeoff between flexibility and durability. In its print orientation on the 3D printer bed, it is easy to scale the design’s height and test how a particular batch of filament behaves. I printed these at heights varying from 2mm to 6mm and tried the 2mm first. It would be the most flexible, and I wanted to see how long it would last.

The answer: less than 30 seconds! Oh well, at least now I know. For this particular type of material (MatterHackers Build Series PLA) 2mm was too fragile. 6mm was overly rigid and interfered with proper rocker-bogie operation. 4mm seems to be sufficiently durable but it was still too stiff to let the rocker-bogie operate as smoothly as I like. Even though these data points weren’t terribly encouraging, I’m glad MSB3 made it really easy to adjust this particular parameter for experimentation.

But at this point I’m suffering from rover fatigue and need a break. I’ll return to micro Sawppy later (UPDATE: it is later) but right now I’m going to go play with Unity 3D for about a week.

Micro Sawppy Beta 3 Suspension Rocker With Deploy Pivot

One of the main project goals of Micro Sawppy Beta 3 (MSB3) is making a 3D-printed rover design that is easier to build than Sawppy V1. Reducing the number of parts is one way to support that goal, and that’s why making the suspension rocker as a single 3D printed piece was a good thing to try and something to keep as a strong candidate for the final design.

But I’m a tinkerer, and it’s hard to resist the temptation of doing more, especially for features that have a good chance of being genuinely useful. The Rocker Deploy Pivot (RDP) is one such feature: a hinge on the suspension rocker that lets Curiosity and Perseverance fold up for their trip to Mars. I thought replicating the RDP on my own rover would make it more portable, in addition to the nerd cred factor. If so, that might be a worthwhile gain in exchange for increased parts count and assembly complexity.

Since the MSB1 and MSB2 suspension rockers had to be multi-piece designs anyway, I started exploring the idea of building an RDP, but it was never functional on those rovers. MSB3 is the first iteration that can practically fold up for transport. Collapsing the suspension like this does not change the length or width by much, but it cuts height by almost half. In this form, MSB3 can conceivably be small enough to be carried in a backpack.

Like many of the mechanisms newly designed for MSB3, it is probably bulkier than it needs to be. It added two more M3 screws in addition to the screw holding the rocker bearing. This is plenty strong, so I have margin to cut back on the structure, maybe eliminating one of those extra screws and moving the remaining screw closer to the pivot point. Both changes would help reduce bulk and reduce load on its link to the differential.

Micro Sawppy Beta 3 Suspension Rocker (Single Piece)

The suspension bogie of my rover Micro Sawppy Beta 3 (MSB3) attaches to its rocker assembly. Just for the sake of experiment, I designed and printed a single-piece rover suspension rocker. Rovers MSB1 and MSB2 had a complex multi-piece rocker design because their many mounting points were not coplanar and I couldn’t figure out how to align everything along 3D printing layers for maximum strength.

I redesigned the steering servo and rotational bearing mounting mechanisms for MSB3. As a result, it was almost possible to get everything lined up. The lone exception was that the front corner was still not easily aligned with the rest, but it could be a long sloping shape that spreads its load across a large surface area of print layers. It still won’t be as strong as if everything were lined up on the same layer, but it should not be an immediate structural disaster. It will, however, require printing with supports and accepting the rough surface that results.

I could help hide this poor surface finish by aligning the print so it is on the inner surface of the suspension member. It’s not entirely out of sight, however, since the front corner wheels stick out the front and thus this surface is always visible.

Every 3D printing slicer handles supports slightly differently. This particular test piece was sliced with MatterControl on default PLA settings and auto-generated supports. The flat portions (near steering servo and joint) had a rough surface but at least that section separated cleanly. The angled section (between steering joint and rocker pivot bearing) was a mess. That section of support did not separate at all, the surface visible here is a combination of forcibly tearing plastic apart and cutting the remainder free with a blade. I would not call this a complete success. Nevertheless the result does appear structurally sound. I would be confident using it in a rover design as there’s enough surface area across the diagonal section to hold together well, except under gross abuse.

This experiment proved that a single-piece suspension rocker is feasible. Practical, even. But I was not content to leave it at that and built a multi-piece suspension rocker in an effort to improve rover portability.

Micro Sawppy Beta 3 Suspension Bogie

A quick strength test of an SG90 micro servo found that it was pretty fragile, but before I worry too much about mitigating that problem I wanted to get more real-world runtime on Micro Sawppy Beta 3 (MSB3) to see if such work is even necessary. To get that real-world runtime, I should focus on continuing design work on the rest of the rover. Next stop: the suspension bogie.

Most of the changes to the MSB3 bogie reflect the switch to TT gearmotors to drive the middle wheel. Compared to those changes, the work to accommodate the new steering joint and servo mount was relatively minor. The final change was mechanically simple but has a big impact: following the precedent set by the steering bearings of MSB3, the bogie bearing mounts were also turned around. Instead of an M3 fastener running through the center of the bearings as a rotational axle, the M3 fastener now bolts down the center securely, static relative to the bogie, and the motion happens around the outside.

Again the major advantage is making this rover much easier to build and maintain. We no longer have to worry about the amount of torque used, as it no longer affects smoothness of bearing motion. A rover builder can torque down these M3 fasteners to their heart’s content and it won’t affect rover behavior.

As part of this change, the C-shaped bracket that prevents bogie over-rotation is no longer a simple clip-on piece of plastic as it was on MSB1 and MSB2. It is now a structurally important part of the rocker-bogie joint, supporting the weight of the outer bogie bearing. There’s a good chance this bracket on MSB3 is bulkier than necessary, but I wanted to verify the idea works before slimming it down. It’s already taken way more effort than I thought it would to get to this point.

I need to sit down and simplify the bracket’s geometry. Its shape is dictated by the bogie’s allowable rotational range of motion, and I think that math got away from me: there are far too many numbers and guide lines in the CAD sketch. It took several small test samples to work through this geometry, and my mechanical intuition says there should be an easier way. Fortunately the bracket is small and quick to print, making iterations fast. Hopefully that mythical “easier way” will materialize once I dedicate some time to stew on the problem. For right now, I’m content to use this design for attachment to MSB3’s suspension rocker, starting with the single-piece version.

SG90 Micro Servo Strength Test

The weakest point in my Sawppy V1 rover was its shaft couplers, but that bug also became an accidental feature: when the system was abused, being the weakest link meant they were sacrificial elements that broke before anything more expensive would. I am now designing the servo steering mechanism for Micro Sawppy Beta 3 (MSB3) and wanted to know: do I also have an (accidental) abuse-absorbing feature on the little rover?

This was fairly easy to test and, thanks to the low cost of SG90 plastic-gear micro servos, I was willing to sacrifice one for this knowledge. I put together an MSB3 prototype steering assembly and wired the servo to power and a control signal telling it to hold straight center. If I wanted to be scientific I would use a torque wrench or similar instrument to quantify the forces as I test, but I didn’t have the proper equipment so I just used my hand.

The answer is no, nothing else in this steering assembly for MSB3 will step up and absorb abuse on behalf of the steering servos. It took surprisingly little force from my wrist before I heard something snap. I disassembled the gearbox to see not one but two failures: A tooth is bent (but still attached) on the final output gear, and another tooth has broken off from the adjacent meshing gear.

Since I didn’t use an instrument to quantify the breaking force, I don’t have objective numbers to post here. Subjectively I felt the breaking point was beyond what I would reasonably expect from a little rover in normal roaming, which is good for rovers rolling along minding their own business. But I want micro Sawppy to be something a teacher can introduce to their class, and I didn’t think the breaking point was beyond the capabilities of curious/destructive children, and so the fragility worries me.

I’m sure MG90 metal gear servos would fare better, but would they be strong enough? I’ve already sacrificed a few MG90s to experimentation and I’ll want to set up proper instrumentation before I sacrifice another. And obviously an MG90 is more expensive than an SG90. Is it better to use affordable SG90 servos and replace them as needed? Or is it better to go straight to MG90 for higher durability? Answering this question definitively would require real-world usage data: see how often SG90s require replacement, and whether the replacement cost is greater or less than the up-front cost increase of using MG90 servos. In order to obtain this real-world usage data, I’ll have to keep working on the rest of the rover.

Micro Sawppy Beta 3 Steering Trim

Getting a pair of bearings aligned with the wheel contact patch is a feature on all Sawppy rovers, so that in itself wasn’t new. But Micro Sawppy Beta 3 (MSB3) did try a new way to mount bearings, and I’m optimistic it’ll make future Sawppy designs easier to build. On top of that steering joint, I’ve also modified the servo attachment for another feature I wanted to put on my rover designs: mechanical steering trim adjustment.

Sawppy V1 had a software-based trim adjustment system, where the center point of each steering servo could be adjusted by modifying a value in a text file. I thought it was something that I could set once and forget, but I was wrong. The potentiometers used to sense position inside a servo motor drift over time (and temperature, and humidity, and phase of the moon…) so in reality steering trim had to be adjusted fairly frequently.
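To illustrate what a software-based trim scheme looks like, here is a minimal sketch in Python. This is not Sawppy V1's actual code; the offset values, function names, and pulse-width conventions are all hypothetical stand-ins for the kind of numbers that lived in that text file.

```python
# Hypothetical per-servo trim offsets in degrees. These are the values
# that had to be re-tuned whenever the servo potentiometers drifted.
STEERING_TRIM = {
    "front_left": 2.5,
    "front_right": -1.0,
    "rear_left": 0.0,
    "rear_right": 3.0,
}

def trimmed_angle(servo_name, steering_angle):
    """Apply trim so a commanded angle of 0 actually steers straight."""
    return steering_angle + STEERING_TRIM[servo_name]

def servo_pulse_us(angle, center_us=1500, us_per_degree=10):
    """Convert a trimmed angle to a hobby-servo pulse width in
    microseconds; 1500us is the conventional center position."""
    return center_us + angle * us_per_degree
```

The downside is visible right in the sketch: every time a potentiometer drifts, someone has to edit those numbers and redeploy, which is exactly the chore the mechanical trim arc on MSB3 is meant to replace.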

For MSB3 the servo horn bundled with a micro servo is mounted on a 3D-printed piece with an arched hook in the front. I could then fasten it to my steering assembly with a single screw. Steering trim adjustment becomes a matter of loosening that screw, sliding to a different point within that arc, and tightening the screw back down.

Another advantage of this design is that, unlike MSB1 and MSB2, the servo is freed from handling any structural loads. It is now responsible solely for steering, just as the servos were on Sawppy V1. It is a feature I’m glad I could bring to a smaller scale. I thought about going one step further on ease of assembly and tried a few clip-on servo mounting brackets for tool-less assembly and replacement. (Pictured.) But even though these little servos exert little torque, there is enough to distort 3D-printed plastic and affect steering accuracy, so I returned to the concept of a screw-down servo bracket.

But the experience did remind me of one thing about Sawppy V1: I didn’t like using heat-set inserts on shaft couplings because they would slip and break. However, that slippage and breakage did have an advantage when Sawppy is stressed beyond design limits. When a child decided to break my rover, the coupler broke before that abuse was transmitted into the servo. What would happen to this little rover servo?

Micro Sawppy Beta 3 Steering Bearings

The third revision of my little rover prototypes, Micro Sawppy Beta 3 (MSB3), uses TT gearmotors and matching wheels for the six-wheel-drive aspect of its rover suspension. Using commodity components solves a lot of problems, but now I have to integrate them. The starting point is a rectangular bracket that bolts onto the two top mounting points of a TT gearbox and angles over the wheel to a hole for a bearing, aligned with where the wheel touches the ground (the “contact patch”). This alignment is necessary for the wheel to pivot in place. If the rotational axis of the bearing is not aligned, any rotation would drag the wheel across an arc instead of pivoting on the axis of rotation.

Sawppy V1 had the same steering axis alignment requirement, and I used 8mm shafts running through the center of 608 bearings. For MSB3 I’ve turned that design around: instead of transmitting the rotational force through the center of the bearing, now the bearing center remains static while steering forces are transmitted around it. M3 fasteners bolt a pair of bearings to the front end of the suspension rocker, one top and one bottom. The steering mechanism takes the general shape of a C holding above and below this pair of bearings.

MSB1 and MSB2 ran an M3 screw through the center of the bearings in their suspension members, but that used the M3 fastener as a rotational axle, much like how Sawppy V1 used 8mm shafts. This meant the smoothness of the rotation was sensitive to how tightly the fastener was torqued down. Too loose, and it rattles. Too tight, and the fastener causes the bearings to bind up, which defeats the purpose of using ball bearings to begin with! I had a few other ideas on how to address these problems, but I decided to try this one first: inverting the roles is less dependent on 3D printer precision and hence easier to build.

Part of making micro Sawppy easier to build is avoiding similar-looking parts that are not interchangeable. For MSB3 I designed the two parts of this C-shaped steering assembly so they can be used on all four corners. That is to say, a builder will print four copies of the same design, unlike Sawppy V1, which had distinct front-left, front-right, rear-left, and rear-right corner steering components that people might (and did) inadvertently swap, causing confusion. That was something I tried to mitigate by stamping reference numbers on parts, but it’s better to design so the user doesn’t have to squint to make out part numbers at all.

To be usable in multiple orientations, the part is symmetric front-to-back as well as side-to-side. But the provisions allowing attachment in multiple directions also added unnecessary bulk, and I haven’t made up my mind whether the tradeoff is worthwhile. Clever designers know how to design parts so their correct orientation is unambiguous; I might want to tackle that as a challenge instead. In the meantime I’ll leave this as-is and proceed to steering servo installation and associated adjustment.

Micro Sawppy Beta 3 Wheel

The major motivation to build Micro Sawppy Beta 3 (MSB3), leaving MSB2 behind, was the increasing complexity of MSB2 wheels; I wanted a rover that is simple to build. Switching to commodity TT gearmotors and their associated wheels for the rover’s six wheels goes a long way. By using a gearbox that is already designed to support and drive a matching wheel, we eliminate custom modification and any worries about the robustness of said modifications.

When I looked inside a TT gearbox I didn’t see any ball bearings, but I did see support structure that should suffice for a rover at this scale. At the very minimum, the rover would be as robust as every other robot design that uses these wheels.

But like everything in design and engineering, there are tradeoffs. The biggest disappointment (and why I was reluctant to use these things to begin with) is the poor ground clearance. Comparing MSB3 against MSB2 and MSB1, we see ground clearance has degraded with each iteration. There were good reasons to take each of these steps, but it’s not the direction I wanted to go, even with feedback that this feature isn’t ranked highly on many other people’s rover builds. Fortunately(?) I think this is as far as I have to compromise on this topic; at the moment I don’t know of anything that might further degrade clearance for future rovers.

Another problem is that these wheels don’t much resemble real Mars rover wheels. I was really proud of Sawppy’s wheel design and how I was able to scale it down for MSB1 and MSB2, but now I’m leaving them behind. While there’s the possibility of replacing these generic wheels with 3D printed rover wheels, it would only be cosmetic. Given the geometry here, I don’t see a good way to restore functional ground clearance. But at least these wheels come with rubber tires, which was one of the biggest non-Mars-authentic feature requests for Sawppy.

A TT gearbox offers three attachment points: two near the motor and one at the opposite end. While it would be most robust to use all three points, for MSB3 I decided on simplicity and used only the two close to the motor. The mounting bracket is angled to meet the steering axis, which is aligned with the middle of the wheel so the wheel can pivot in place.

Micro Sawppy Beta 3

Gaining a basic understanding of how to use the L298N module to control DC motors was an important confidence booster for modifying my little rover plans: switching the rover’s six-wheel drive from micro servos modified for continuous rotation to TT gearmotors that are designed from the start to drive wheels. They even come with one of the most frequent Sawppy requests: soft rubber wheels that should have better traction on earthly surfaces. Steering the little rover’s four corner wheels remains the duty of micro servos, which would be doing their designed job as rotational position actuators.

The first rover chassis to emerge from this new plan is Micro Sawppy Beta 3 (MSB3) shown here next to its predecessors MSB1 and MSB2. Changing the wheel drive mechanism has a huge impact on the overall design, as this family portrait shows. A new rocker-bogie suspension implementation is the focus here, relegating the body back to a simple box as it was on MSB1.

The next few posts will cover some of the highlights of mechanical design changes for MSB3. Changing the wheel drive motor required a completely new steering knuckle design for the corner wheels. Which became an opportunity to design a multi-piece mechanism that would allow me to quickly adjust steering trim at mechanical level. Increasing the number of parts does make the mechanism more complex, but I am optimistic it would actually prove to be easier to build. MSB3 also represented a new way to use 623 bearings that hopefully eliminates the sensitivity to tightening torque. And the revamped rocker features an improved adaptation of the real rover’s rocker deploy pivot allowing the rover to fold up for transport. And finally, the differential linkage is a further evolution of using 3D printing to avoid having to scour remote control hobby catalogues for an adjustable suspension turnbuckle.

This tour of MSB3 starts at the same place as its predecessors, from the wheels. Then we’ll work our way up.

Circuit Schematic of Generic L298N Driver Board

As a learning exercise, I decided to generate my own documentation for commodity L298N motor driver modules available wherever hobbyist electronics are sold. The first step was to catalogue all the components mounted on board, and now I analyze the circuit board layout to see how they are connected together. And as much as I like to do things digitally, for projects like this I really appreciate the flexibility and immediacy of a sheet of paper to scribble on.

I could probably do it with the actual device in hand and a blank sheet of paper, but this time around I decided to create my own visual guide. I took photos of the front and back, as square-on as I could manage, scaled them to the same size, and printed them side by side on a sheet of paper, leaving room for notes. Once printed, I folded the paper in half while holding it up to a light source so I could line up the front and the back. Then I started following copper traces and scribbling my notes.

Fortunately this was a relatively simple circuit that mostly followed datasheet recommendations. I quickly confirmed the eight diodes were present to dump excess power into the +12V and GND planes. The two electrolytic capacitors are there for the +12V and +5V power planes respectively. IN1 through IN4 and OUT1 through OUT4 are straightforward direct routes. I also confirmed the optional current sensing resistors were absent; those pins were tied directly to ground. Furthermore, there was no provision to make adding current sensing resistors easy. People who want to perform current sensing are probably better off using another module.

A few traces were buried under components so their paths had to be teased out by probing with a continuity meter. The jumpers on ENA and ENB do indeed tie them high to the +5V power plane. The third jumper enables/disables the onboard 78M05 regulator. When the jumper is in place, it connects the +12V power plane to the input pin of the 78M05, which can then supply 500mA of current to the +5V plane. Since the L298 itself draws less than 100mA, the remaining capacity can be tapped via the +5V screw terminal to perhaps drive a microcontroller. When the jumper is removed, the regulator input is disconnected from the +12V plane and the +5V screw terminal becomes an input port to accept external power. The LED and its current-limiting resistor are connected to the +5V plane and will illuminate when +5V power is present.

Aside from the silkscreened text proclaiming +12V, I found nothing to limit motor power supply to +12V. As far as I can tell it can be anywhere from 7 to 35V when using the onboard 78M05 regulator. If the regulator jumper is removed and L298N is running on external logic power, the lower limit is dictated by the L298N which can function with as low as 4V. The upper limit of a L298N is 45V with peaks of 50V, but the capacitors and 78M05 used on this module are listed with 35V maximums. Personally I’m unlikely to use anything higher than two 12V lead-acid batteries in series, which would be 28.8V fully charged and comfortably under that limit.
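The limits above can be summarized as a quick sanity-check sketch. This is just my own illustration of the numbers discussed; the function and constant names are hypothetical, though the values come from the respective datasheets and the module's capacitor markings.

```python
# Motor supply voltage limits for this generic L298N module, as
# worked out above. The module's practical ceiling is set by the
# 35V-rated capacitors and 78M05, not the L298N's own 45V limit.
REGULATOR_MIN_V = 7.0   # 78M05 needs at least 7V input
L298N_MIN_V = 4.0       # L298N itself can run as low as 4V
MODULE_MAX_V = 35.0     # capacitor and 78M05 rating

def supply_voltage_ok(volts, using_onboard_regulator=True):
    """Check a motor supply voltage against the module's limits."""
    low = REGULATOR_MIN_V if using_onboard_regulator else L298N_MIN_V
    return low <= volts <= MODULE_MAX_V

# Two 12V lead-acid batteries in series, fully charged at ~14.4V each:
two_lead_acid = 2 * 14.4  # 28.8V, comfortably under the 35V ceiling
```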

As a part of this self-assigned exercise, I also practiced drawing a schematic using the electronics design component of Autodesk Fusion 360. I think I’ve captured all of the information above, though I’m sure this schematic violates a bunch of conventions and makes electrical engineers’ eyes twitch. (I’ve had to read software code written by electrical engineers, so I have some idea what the mirror image is like.) And while I try to put lots of comments into my software source code, I haven’t learned how to best document a schematic. Until I learn more about that world, this blog post represents my best effort for this round.

Armed with this knowledge, I felt confident enough to embark on designing a micro rover to use TT gearbox with its DC motor, leading to Micro Sawppy Beta 3.

Components of Generic L298N Motor Driver Module

The internet is a great resource, but sometimes I want the experience of doing something on my own. This is why after buying a batch of generic L298N motor driver modules, I decided to sit down and understand what I have on hand instead of just downloading someone else’s documentation.

The main attraction with the big heat sink is the L298 itself, specifically the L298N variant. Flanking it, four on each side, are small modules labelled “M7”. Since the datasheet said an array of four diodes each are required for A and B sides, seeing eight of something on the board makes them candidates for those diodes. A search for “M7 Diode” indicates they are 1N4007 diodes.

A single rectangular package is etched with the ST logo and the designation 78M05. Its datasheet describes it as a voltage regulator delivering 5V at a nominal 500mA. Input voltage can be up to 35V, but must be at least 7V. Two cylindrical assemblies are likely electrolytic capacitors; their markings indicate 220uF at up to 35V, matching the maximum limit of the 78M05. The L298 datasheet requires a capacitor between motor voltage supply and ground, and another between logic voltage supply and ground, so that fits.

Two blue screw terminal blocks on either side are motor output connections. They are labeled OUT1 and OUT2 on one side and OUT3 and OUT4 on the other, designations straight from the L298N datasheet. Also straight from the datasheet are control signals IN1, IN2, IN3, and IN4. There are jumpers on ENA and ENB. My hypothesis is that they are tied to +5V to stay in the enabled state by default, allowing motor direction control: full speed forward, full speed reverse, and brake stop. If an application wants control over the enabled state, we can remove the jumpers and drive the enable lines for PWM speed control.
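That hypothesized behavior matches the L298's standard truth table. The sketch below encodes it for one channel (IN1/IN2 with ENA); the pin names follow the module's silkscreen, but the function itself is purely illustrative and not part of any motor driver library.

```python
def channel_state(ena, in1, in2):
    """Motor behavior for one L298N channel per its truth table."""
    if not ena:
        return "coast"    # outputs disabled, motor freewheels
    if in1 and not in2:
        return "forward"
    if in2 and not in1:
        return "reverse"
    return "brake"        # inputs equal (both high or both low)
```

With the ENA jumper in place (tied to +5V), `ena` is always true, leaving only forward, reverse, and brake. Removing the jumper and driving ENA with a PWM signal alternates between the chosen state and coast, which is how speed control works on these modules.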

The third screw terminal block has three posts labeled +5V, GND, and +12V. GND is obvious enough, and given the presence of 78M05, my first guess is that the +5V terminal gives the option of tapping its output to drive our microcontroller. But it is also possible it is a +5V input to bypass the 78M05. There is a jumper nearby to disconnect something, possibly the 78M05? A small surface mount LED and accompanying current limiting resistor probably indicate power on one of the power rails. Finally the +12V label is mysterious, since everything I see with a voltage limit can go up to +35V and I see no reason to constrain it to +12V.

Looking over the list of expected components, I am left with two missing: there are no candidates for the two current sense resistors. Time to trace through this circuit and see if I can find them.