No Further Unity Projects

I’ve just learned some very valuable electronics lessons, helped along by KiCad, the free open-source electronics design software. It’s a very large suite of tools, but having a specific need in front of me (capturing reverse-engineered schematics of a circuit board) helped me stay focused and get to the “know enough to do what I need to do” point quickly. I’ve also been learning FreeCAD, another piece of free open-source software, but I haven’t reached that point with it yet. And now I’m adding another piece of open-source software to the “learn enough to do what I need” list: Godot Engine.

Godot is an alternative to Unity 3D, both offering a game development environment and a runtime engine. Unity has lots of learning resources online. It adopts new development paradigms like data-oriented programming, new tools like machine learning, and supports new platforms like virtual reality. It also used to have very beginner-friendly terms and conditions, letting aspiring indie game developer hobbyists play around for free and letting small startups launch. The original pitch was “share your success”: only after a game studio became successful would Unity start requiring payment. Unfortunately, Unity as a business is changing for the worse.

Recently there’s been an uproar in the game development industry as Unity announced new pricing policies to go into effect at the start of 2024. While price increases happen across the economy on everything we buy, this particular case was deeply antagonizing.

  1. Instead of collecting royalties on successful games, Unity would levy a fee on every game installation, regardless of whether that installation generates any revenue. This means, for example, successful royalty-paying game studios would be charged for every installation of a free demo, whether it turns into a sale or not.
  2. Even though it doesn’t take effect until next year, the new policy would apply retroactively. Games currently in development will be held to different terms than those their projects started with. Even worse, it applies to games that have already been released!
  3. Before the announcement, these changes were previewed with a few game studios to get their feedback. After receiving some very negative feedback, Unity went ahead anyway.
  4. The worst part: Unity pulled this stunt once before in 2019 and got flamed for it. They walked back those changes and promised “we heard you” and won’t do it again. Now in 2023, they did it again.

Why is this happening? Money, of course! Unity went public in 2020, which meant a management structure incentivized to “maximize shareholder value”. And the most obvious way to do that was to squeeze game developers for as much as they will tolerate. The proposed 2019 changes were originally intended to improve Unity’s financial outlook pre-IPO but backfired. And now it is obvious Unity’s management has failed to learn the lesson.

As of this writing, Unity is on a damage-control footing, walking back their announcement. Again. Will it work a second time? I don’t know. It hasn’t escaped people’s notice that the same management mindset that drove headfirst into this train wreck is still in charge. [Update: the CEO has resigned, but the board of directors and senior management are still there.] Notably absent from the current retraction is any legally binding obligation preventing them from trying yet again after this current storm blows over.

So “fool me once”, and all that. Unity’s largest competitor is Unreal Engine, whose licensing terms aren’t as generous, but which also lacks a history of such underhanded changes to said terms. Unreal will likely pick up Unity customers who need a mature toolset with leading-edge performance and quality. For those without such requirements, like small indie game studios and aspiring game developer hobbyists, maybe none of these Unity changes affect us today. But we should all be deeply concerned that Unity’s free tier may gradually become crippled in the future, if not disappear entirely. Thus, alternatives like Godot Engine deserve a look.

Mid 2022 Snapshot of Unity DOTS Transition

As a curious hobbyist, I can learn Unity at my own pace, choosing topics that interest me. More importantly, I have the luxury of pausing when I’m more interested in learning something else. No game development shipping deadline to meet, just a few Unity projects for fun here and there. This meant I first learned about Unity DOTS at the end of 2021, and I had to catch up on what has happened since. Since Unity’s DOTS transition is still in progress, the information on this page will quickly become outdated.

My sources are Unity blog posts tagged with DOTS, plus two YouTube playlists:

  1. Unite Copenhagen 2019 – DOTS. This playlist of 17 videos came from Unity’s own conference in 2019, where they invited people to start sinking their teeth into DOTS. There was a lot of future-looking discussion about goals and aims, but there were enough tools and support infrastructure to start experimentation. (As opposed to Unite LA 2018, which was more of a DOTS introduction, as fewer tools were available for testing.)
  2. Unity at GDC 2022. This playlist of 17 videos spanned Unity presentations at the Game Developers Conference earlier this year. Not all of the videos on this list involve DOTS, but those that did gave us an update on how things have progressed (or not progressed) since 2019.

Given that information, my understanding today is this: Unity DOTS adoption aims to improve performance of Unity titles on modern hardware while still preserving Unity’s flexibility, existing codebase, and friendliness to users. Especially beginners!

“Improved performance” is usually shown off by demonstrating huge complex scenes, but at the core it aims to better align Unity runtime code with how modern multicore CPUs go about their work. Yes, this would allow huge and complex scenes previously impossible, but it also means reducing power and resources consumed to deliver scenes built today. This is especially important for those publishing titles for battery-powered cell phones and tablets.

But that is runtime code. At design time, Unity will stay with the current GameObject-oriented design paradigm. All those tutorials out there will still work, and all the components on the Unity Marketplace can still be used. Paraphrasing a presenter: “Design time stays people-friendly, DOTS changes the runtime to become computer-friendly.” Key to this duality is a procedure, formerly called “conversion” and since renamed “baking”, that translates a set of GameObject data for use via Entities and Components. GameObject code is converted into Systems that execute on said data. These Systems ideally work in units that can be compiled to native code with the Burst compiler and scheduled by the Jobs system for distribution across all available CPU cores. But if that’s too big of a leap to make in one conversion, Unity aims to support intermediate levels of functionality so developers can adopt DOTS piecemeal as they are ready, and do so in places that make sense.
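
To make that concrete, here is a rough sketch of what post-baking runtime code looks like, written against the Entities 0.x API that was current around this time. The component and system below are my own invented example, not taken from Unity’s samples.

```csharp
using Unity.Entities;
using Unity.Transforms;

// Plain data: what used to be fields on a MonoBehaviour becomes a component struct.
public struct FallSpeed : IComponentData
{
    public float Value;
}

// Logic lives in a System instead of per-GameObject Update() calls.
public partial class FallSystem : SystemBase
{
    protected override void OnUpdate()
    {
        float dt = Time.DeltaTime;

        // This lambda is compiled to native code by Burst (on by default) and
        // ScheduleParallel() hands it to the job system to spread across CPU cores.
        Entities
            .ForEach((ref Translation translation, in FallSpeed speed) =>
            {
                translation.Value.y -= speed.Value * dt;
            })
            .ScheduleParallel();
    }
}
```

The “baking” step is what produces entities carrying this kind of component data from the GameObjects authored in the editor.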

Of course, it is possible to dive into the deep end of the pool and directly create Entities/Components/Systems. However, the Unity editor is not (yet?) focused on supporting that workflow; current work is focused on helping the user base make this “baking” transition over the next few years. Which means certain Unity initiatives for a fully DOTS era may be put on hold.

Learning DOTS, the biggest mental hurdle for me has been around Entities. Conceptually, Unity GameObjects are already composed of components. Though the actual code differs between the two schools of components, it was a close-enough concept for me to understand. It was similarly easy for me to comprehend that executable code logic moves into Systems that work on those components. From there, it was easy for me to conclude that GameObjects simply become Entities, but that is wrong. (Or at least, it hinders maximizing the potential of DOTS.) I’m still struggling with this myself and I hope for an “A-ha!” moment at some point in the future.

Notes on “Hardspace: Shipbreaker” Release

Just before 2021 ended I bought the game Hardspace:Shipbreaker in an incomplete early-access state. I had a lot of fun despite its flaws. In May, the game exited early-access to become an official release, followed quickly by a 1.1 release. This post documents a few observations from an enthusiastic player.

The best news: many annoying bugs were fixed! A few examples:

  • Temperature Control Units no longer invisibly attach ship exterior to interior.
  • Waste Disposal Units are no longer glued to adjacent plates.
  • Armor plates can now be separated from the adjacent hull plate, so the armor goes to the barge for recycling while the hull plate goes to the processor.

Sadly, not all of my annoyance points were fixed. The worst one is “same button for picking up a part and pushing it away”. That is still the case, and I still occasionally blast parts off into space when I intended to grab them, which means I have to waste time chasing them down.

The most charming new feature is the variation in ship interiors. The 0.7 release had variations in exterior livery corresponding to the fictional companies that owned and used these ships, but the interiors had been generically identical. Now there are a few cosmetic variations, and I was most amused by the green carpet in old passenger liners. It gave me a real fun 70s vibe in a futuristic spaceship.

The most useful new feature is the ability to save partial ship salvage progress. Version 0.7 lacked this feature and it meant once we started a ship, we were committed to keeping the game running until we were done. (Either by playing through multiple shifts in one sitting or leaving the computer on and running the game until we could return.) Saving ship progress allows us to save and quit the game and return to our partially complete ship later. This feature noticeably lengthens game load and save times, but I think it is a worthwhile tradeoff.

In version 0.7, the single-player campaign plotline only went to an Act II cliffhanger. It now has an Act III conclusion, but that did not make the plot more appealing to me. The antagonist went too far and entered the realm of annoying caricature. Note I did not say “unrealistic”, because there definitely exist people who climb into positions of power in order to abuse others. I’ve had to deal with that in real workplaces and didn’t enjoy having it in my fictional workplace. I was also disappointed with the storybook depiction of unionization; real-life union-busting is far more brutal. Though I don’t particularly need to experience that in my entertainment, either. But aside from imposing some pauses in the shipbreaking action, the single-player plotline does not impact the core game loop of taking ships apart. Lastly: the “little old space truck” side quest now ties into the conclusion, because getting it fixed up is your ticket out of that hellhole.

Based on earlier information, the development team should now be focused on releasing this title for game consoles. I’ve been playing it with a game controller on my PC and found it an acceptable tradeoff, with its own upsides and downsides relative to keyboard-and-mouse. I hope it does well on consoles; I want to see more puzzle-solving teardown games on the market.

But the reason I started playing this game at all was because I had been learning about Unity game engine’s new Data Oriented Technology Stack (DOTS) and wanted to see an application of it in action. As much as I enjoyed the game, I hadn’t forgotten the educational side of my project.

Unity Without OpenGL ES 2.0 Is All Pink on Fire TV Stick

I dug up my disassembled Fire TV Stick (second generation) and it is now back up and running again. This was an entertaining diversion in its own right, but during the update and Android Studio onboarding process, I kept thinking: why might I want to do this beyond “to see if I could”? This was the same question I asked myself when I investigated the Roku Independent Developer’s Kit just before taking apart some Roku devices. For a home tinkerer, what advantage did they have over a roughly comparable Raspberry Pi Zero? I didn’t have a good answer for Roku, but I have a good answer for Fire TV: because it is an Android device, and Android is a target platform for Unity. Raspberry Pi and Roku IDK, in comparison, are not.

I don’t know if this will be useful for me personally, but at the very least I could try installing my existing Unity project Bouncy Bouncy Lights on the device. Loading up Unity Hub, I saw that Unity had recently released 2021 LTS so I thought I might as well upgrade my project before installing Unity Android target platform tools. Since Bouncy Bouncy Lights was a very simple Unity project, there were no problems upgrading. Then I could build my *.apk file which I could install on my Fire TV just like introductory Android Studio projects. There were no error messages upon installation, but upon launch I got a warning: “Your device does not match the hardware requirements of this application.” What’s the requirement? I didn’t know yet, but I got a hint when I chose to continue anyway: everything on screen rendered a uniform shade of pink.

Going online for answers, I found many different problems and solutions for Unity rendering everything pink. I understand pink-ness is a symptom of something going wrong in the Unity graphics rendering pipeline, and it is a symptom that can have many different causes. Without a single known solution, further experimentation and diagnosis were required.

Most of the problems (and solutions) are in the Unity “Edit”/”Project Settings…”/”Player”/”Other Settings” menu. This Unity forum thread with the same “hardware requirements” error message suggests ensuring “Auto Graphics API” is checked (it was) and setting “Rendering Path” to Linear (no effect). This person’s blog post was also dealing with a Fire TV, and their solution was checking “Auto Graphics API”, which I was already doing. But what if I uncheck that box? What does this menu option do (or not do)?

Unchecking that box unveils a list of two graphics APIs: Vulkan and OpenGLES3. Hmm, I think I see the problem here. The Fire TV Stick second generation hardware specification page says it only supports OpenGL ES 2.0. Digging further into Unity documentation, I found that OpenGL ES 2.0 support is deprecated and not included by default, but we can add it to a project if we need it. Clicking the plus sign let me add it as a graphics API for use in my Unity app.
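
For reference, the same change can be scripted instead of clicked through the menus. This is my own editor-script sketch of the equivalent settings (the menu item name is invented for illustration; the file would need to live in an Editor folder):

```csharp
using UnityEditor;
using UnityEngine.Rendering;

public static class AndroidGraphicsApiSetup
{
    [MenuItem("Tools/Add OpenGL ES 2.0 Fallback")]
    public static void AddGles2Fallback()
    {
        // Equivalent of unchecking "Auto Graphics API" for the Android target.
        PlayerSettings.SetUseDefaultGraphicsAPIs(BuildTarget.Android, false);

        // Order matters: Unity tries these in sequence, so ES 2.0 acts as the fallback.
        PlayerSettings.SetGraphicsAPIs(BuildTarget.Android, new[]
        {
            GraphicsDeviceType.Vulkan,
            GraphicsDeviceType.OpenGLES3,
            GraphicsDeviceType.OpenGLES2
        });
    }
}
```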

Once OpenGL ES 2.0 is included in the project as a fallback graphics API, I could rebuild the *.apk file and install the updated version.

I got colors back! It is no longer all pink, and the cubes that are supposed to be pink now look pink. So the cubes look fine, but all color has disappeared from the platform. It was supposed to have splotches of color cast by the randomly colored lights attached to each block.

Instead of showing different colors, it has apparently averaged to a uniform gray. I guess this is where an older graphics API gets tripped up and why we want newer APIs for best results. But at least it is better than a screen full of pink, even if the solution in my case was to uncheck “Auto Graphics API”. The opposite of what other people have said online! Ah well.

Notes on “Hardspace: Shipbreaker” 0.7

I have spent entirely too much time playing Hardspace: Shipbreaker, but it’s been very enjoyable time spent. As of this writing, it is a Steam Early Access title and still in development. The build I’ve been playing is V.0.7.0.217552, dated December 20th, 2021. (Only a few days before I bought it on Steam.) The developers have announced their goal to take it out of Early Access and formally release it in Spring 2022. Comments below from my experience do not necessarily reflect the final product.

The game can be played in career mode, where ship teardowns are accompanied by a storyline campaign. My 0.7 build only went up to Act 2; the formal release should have an Act 3. Personally, I did not find the story compelling. This fictional universe places the player as an indentured servant toiling for an uncaring mega-corporation, and that’s depressing. It’s too close to the real world of capitalism run amok.

Career mode has several difficulty settings. I started with the easiest, “Open Shift”, which removes the stress of managing consumables like my spacesuit oxygen. It also removes the fifteen-minute time limit of a “shift”. After I moved up to “Standard” difficulty, the oxygen limit was indeed stressful. But I actually started appreciating the fifteen-minute timer because it encourages me to take a break from this game.

Whatever the game mode (career, free play, or competitive race), the core game is puzzle-solving: how to take apart a spaceship quickly and efficiently to maximize revenue. My workspace is a dockyard in Earth orbit, and each job is to take apart a ship and sort its pieces into one of three recycle bins:

  1. Barge: equipment kept intact. Examples: flight terminal computers, temperature control units, power cells, reactors.
  2. Processor: high value materials. Examples: exterior hull plates, structural members.
  3. Furnace: remainder of materials. Example: interior trim.

We don’t need to aim at these recycle bins particularly carefully, as they have an attraction field to suck in nearby objects. Unfortunately, these force fields are also happy to pull in objects we didn’t intend to deposit. Occasionally an object would fall just right between the bins and they would steal from each other!

I haven’t decided if the hungry processors/furnaces are a bug or an intended challenge of the game. There are arguments to be made either way. However, the physics engine in the game exhibits behaviors that are definitely bugs. Personally, what catches me off guard the most are small events with outsized effects. The most easily reproducible artifact comes from interacting with a large ship fragment. Our tractor beam can’t move a hull segment several thousand kilograms in mass. But if we use the same tractor beam to pick up a small 10-kilogram component and rub it against the side of the hull segment, the hull segment starts moving.

Another characteristic of the physics engine is that everything has infinite tensile strength. As long as there is a connection, no matter how small, the entire assembly remains rigid. It means when we try to cut the ship in half, each half weighing tens of thousands of kilograms, we could overlook one tiny thing holding it all together. My most frustrating experience was a piece of fabric trim. A bolt of load-bearing fabric holding the ship together!

But at least that’s something I can look for and see connected onscreen. Even more frustrating are bugs where ship parts are held together by objects that are visibly apart on screen. Like a Temperature Control Unit that doesn’t look attached to an exterior hull plate, yet the plate won’t come free until the TCU is removed from its interior mount, at which point both the TCU and the hull are free to move. Or the waste disposal unit that rudely juts out beyond its allotted square.

Since the game is under active development, I see indications of game mechanics that were not available to me. It’s not clear to me whether these are mechanisms that used to exist and were removed, or features promised and yet to come. Example: there were multiple mentions of using coolant to put out fires, and I could collect coolant canisters, but I don’t see how I can apply coolant to things on fire. Another example: there are hints that our cutter capability can be upgraded, but I encountered no upgrade opportunity and had to resort to demolition charges. (Absent an upgrade, it’s not possible to cut directly into the hull as depicted by the game art.) We also have a side quest to fix up a little space truck, but right now nothing happens when the quest is completed.

The ships being dismantled come in one of several types, so we know roughly what to expect. However, each ship includes randomized variations, so no two ships are dismantled in exactly the same way. This randomization is occasionally hilarious. For example, sometimes the room adjacent to the reactor has a window and computers to resemble a reactor control room. But sometimes the room is set up like crew quarters with chairs and beds. It must be interesting to serve on board that ship, bunking down next to a window looking out at a big reactor and its radioactive warning symbols.

There are a few user interface annoyances. The “F” key is used to pick up certain items in game. But the same key is also used to fire a repulsion field to push items away. Depending on the mood of the game engine, sometimes I press “F” to pick up an item only to blast it away instead, and then I have to chase it down.

But these are all fixable problems and I look forward to the official version 1.0 release. In the meantime I’m still having lots of fun playing in version 0.7. And maybe down the line the developers will have the bandwidth to explore putting this game in virtual reality.

Spaceship Teardowns in “Hardspace: Shipbreaker”

While studying Unity’s upcoming Data-Oriented Technology Stack (DOTS), I browsed various resources on the Unity landing page for this technology preview. Several game studios have already started using DOTS in their titles and Unity showcased a few of them. One of the case studies is Hardspace: Shipbreaker, and it has consumed all of my free time (and then some).

I decided to look into this game because the name and visuals were vaguely familiar. After playing a while I remembered I first saw it on Scott Manley’s YouTube channel. He made that episode soon after the game became available on Steam. But the game has changed a lot in the past year, as it is an “Early Access Game” still undergoing development. (Windows only for now, with the goal of eventually releasing on Xbox and PlayStation consoles.) I assume a lot of bugs have been stamped out in the past year, as it has been mostly smooth sailing in my play. It is tremendously fun even in its current incomplete state.

Hardspace: Shipbreaker was the subject of an episode of Unity’s “Behind the Game” podcast. Many aspects of developing this game were covered, and towards the end the developers touched on how DOTS helped them solve some of their performance problems. As covered in the episode, the nature of the game means they couldn’t use many of the tried-and-true performance tricks. Light sources move around, so they couldn’t pre-render lights and shadows. The ships break apart in unpredictable ways (especially when things start going wrong), so there can be a wide variation in shapes and sizes of objects in the play area.

I love teardowns and taking things apart. I love science fiction. This game is a fictional world where we play a character that tears down spaceships for a living. It would be a stretch to call this game “realistic” but it does have its own set of realism-motivated rules. As players, we learn to work within the constraints set by these rules and devise plans to tear apart these retired ships. Do it safely so we don’t die. And do it fast because time is money!

This is a novel puzzle-solving game and I’m having a great time! If “Spaceship teardown puzzle game” sounds like fun, you’ll like it too. Highly recommended.

[Title image from Hardspace: Shipbreaker web site]

Unity-Python Communication for ML-Agents: Good, Bad, and Ugly

I’ve only just learned that Unity DOTS exists and it seems like something interesting to learn as an approach for utilizing resources on modern multicore computers. But based on what I’ve learned so far, adopting DOTS by itself won’t necessarily solve the biggest bottleneck in Unity ML-Agents as per this forum thread: the communication between Unity and Python.

Which is unfortunate, because this mechanism is also a huge strength of the system. Unity is a native code executable with modules written in C# and compiled, while deep learning neural network frameworks like TensorFlow and PyTorch run under an interpreted Python environment. The easiest and most cross-platform-friendly way for these two types of software to interact is via network ports, even though the data is merely looped back to the same computer and not sent over a network.

A documented communication protocol allows ML-Agents components to evolve independently, as long as they conform to the same protocol. This is why they were able to change the default deep learning framework from TensorFlow to PyTorch between ML-Agents versions 1.0 and 2.0 without breaking backwards compatibility. (They did it in Release 10, in case that’s important.) Developers who prefer TensorFlow could continue using it; they are not forced to switch to PyTorch as long as everyone speaks the same protocol.

Functional, capable, flexible. What’s not to love? Well, apparently “performance”. I don’t know the details for Unity ML-Agents bottlenecks but I do know “fast” for a network protocol is a snail’s pace compared to high performance inter-process communications mechanisms such as shared memory.

To work around the bottleneck, the current recommendations are to manually stack things up in parallel. Starting at the individual agent level: multiple agents can train in parallel, if the environment supports it. This explains why the 3D Ball Balancing example scene has twelve agents. If the environment doesn’t support it, we can manually copy the same training environment several times in the scene. We can see this in the Crawler example scene, which has ten boxes, one for each crawler. Beyond that, we now have the capability to run multiple Unity instances in parallel.

All of these feel… suboptimal. The ML-Agents team is aware of the problem and working on solutions but have nothing to announce yet. I look forward to seeing their solution. In the meantime, learning about DOTS has sucked up all of my time. No, not learning… I got sucked into Hardspace:Shipbreaker, a Unity game built with DOTS.

Unity DOTS = Data Oriented Technology Stack

Looking over resources for the Unity ML-Agents toolkit for reinforcement learning AI algorithms, I’ve come across multiple discussion threads about how it has difficulties scaling up to take advantage of modern multicore computers. This is not just an ML-Agents challenge, this is a Unity-wide challenge. Arguably it’s even a software development industry-wide challenge. When CPUs stopped getting faster clock rates and started gaining more cores, games had problems taking advantage of them. Historically, while a game engine is running, there is one CPU core running at max. The remaining cores may help out with a few auxiliary tasks but mostly sit idle. This is why gamers have been focused on single-core performance in CPU benchmarks.

Having multiple CPUs running in parallel isn’t new, nor are the challenges of leveraging that power in full. From established software toolkits to leading edge theoretical research papers, there are many different approaches out there. Reading these Unity forum posts, I learned that Unity is working on a big revamp under the umbrella of DOTS: Data-Oriented Technology Stack.

I came across the DOTS acronym several times without understanding what it was. But after it came up in the context of better physics simulation and a request for ML-Agents to adopt DOTS, I knew I couldn’t ignore the acronym anymore.

I don’t know a whole lot yet, but I’ve got the distinct impression that working under DOTS will require a mental shift in programming. There were multiple references to Unity developers saying it took some time for the concepts to “click”, so I expect some head-scratching ahead. Here are some resources I plan to use to get oriented:

DOTS is Unity’s implementation of Data-oriented Design, a more generalized set of concepts that helps write code that will run well on modern machines with many cores and multiple levels of memory caches. An online eBook for Data-oriented Design is available, which might be good to read so I can see if I want to adopt these concepts in my own programming projects outside of Unity.
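
My rough understanding of the core idea, sketched as a toy C# example of my own (not taken from the eBook): organize data by how it is accessed rather than by which object owns it, so hot loops read tightly packed arrays that CPU caches handle well.

```csharp
using System;

public static class DataLayoutDemo
{
    // Object-oriented habit: an array of structs. A loop that only needs positions
    // still drags Health through the cache with every element it reads.
    public struct EnemyAoS
    {
        public float X, Y, Z;
        public float Health;
    }

    // Data-oriented layout: one tightly packed array per field. A movement loop
    // streams through exactly the bytes it needs, which modern caches reward.
    public sealed class EnemiesSoA
    {
        public readonly float[] X, Y, Z, Health;

        public EnemiesSoA(int count)
        {
            X = new float[count];
            Y = new float[count];
            Z = new float[count];
            Health = new float[count];
        }

        public void MoveRight(float dx)
        {
            for (int i = 0; i < X.Length; i++)
                X[i] += dx;   // touches only the X array
        }
    }

    public static void Main()
    {
        var enemies = new EnemiesSoA(100_000);
        enemies.MoveRight(1.5f);
        Console.WriteLine(enemies.X[0]);
    }
}
```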

And just to bring this full circle: it looks like the ML-Agents team has already started DOTS work as well. However it’s not clear to me how DOTS will (or will not) help with the current gating performance factor: Unity environment’s communication with PyTorch (formerly TensorFlow) running in a Python environment.

Miscellaneous Gems from ML-Agents Resources

I browsed through ML-Agents GitHub issues and forums looking for an explanation of why there hasn’t been a release in half a year, and I came up empty-handed on an answer. But the time was not all wasted, since I found a few other scattered tidbits that might be useful in the future.

The simulation time scale used in most examples is 20, and is the default used by the mlagents-learn script. If the script is not used, the simulation runs in real time and will feel very slow. The tradeoff here is accuracy of physics simulation, as per the comment “If you go too fast, the physics gets kind of wonky, and sometimes objects/agents will go through each other.”
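
In Unity terms, this maps to the engine’s global Time.timeScale. Below is my own convenience sketch (not part of ML-Agents) for running a scene at a comparable speed when mlagents-learn isn’t driving the session:

```csharp
using UnityEngine;

public class SpeedUpSimulation : MonoBehaviour
{
    // 20 matches the value mentioned above; push it higher at your own physics risk.
    [SerializeField] float timeScale = 20f;

    void Awake()
    {
        // mlagents-learn normally sets the engine time scale during training;
        // this script only matters when running the scene on its own.
        Time.timeScale = timeScale;
    }
}
```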

The official “Hummingbird” tutorial on Unity Learn targets the LTS build of Unity with ML-Agents version 1. It looks like the goal is to update it to work with a newer release of ML-Agents, and instructions to get a preview have been posted.

Issue #4129 is pretty old but there is a lot of detail here about why Unity ML-Agents doesn’t necessarily benefit from GPU accelerated neural network training. Since then, ML-Agents has switched from TensorFlow to PyTorch but many of the points might still apply.

But never mind the GPU, ML-Agents can’t even make full use of multicore CPUs like this person’s 16-core Threadripper. The Unity person who responded explained this is on the list of things to improve but there’s nothing to show yet. In the meantime, there are workarounds.

Also on the subject of utilizing parallel hardware, here’s a request for Unity to use Google’s Brax physics engine. Based on my experience I was certain the answer was “No” (the physics engine isn’t something that can “just” be switched around) but this question was valuable for two reasons. One, it led me to look up and learn about Google Brax, a physics engine for reinforcement learning that runs on Google TPUs and CUDA GPUs. And second, Unity actually is investing in the work towards a faster physics engine “for the DOTS platform.” Um… what’s DOTS? Time for some reading.

Browsing ML-Agents Resources: GitHub and Forums

I had taken a quick look back through the evolution of Unity’s ML-Agents from the initial prerelease announcement to the present day. The features have been good, but looking at the dates highlighted a conspicuous gap: there have been no releases or announcements in the second half of 2021. This is notable because ML-Agents had been on a rapid release schedule, with versions coming out every few weeks, since the original prerelease announcement. But things had come to a screeching halt — why?

[Update: Release 19 became available on 2022/1/14, but I have yet to find information on why there was a long delay between 18 and 19.]

In the absence of official announcements, I went poking around through other publicly available information. The first resource was a brief look at the GitHub commit history for the ML-Agents repository. We’ve been warned the “main” branch can be unstable and I thought that meant it was the active development branch. This is apparently false, as I found hints of additional working branches that are out of public view. For example, the original commit for “Deterministic actions” mentioned “release 19” even though the latest release is currently 18. These mentions of “release 19” had to be removed as merge #5637. In the description, it mentioned the branch “release_19_docs” which is not visible to us.

With development progress partially obscured from view, I moved on to looking at currently open GitHub issues. When doing a first glance at an unfamiliar repository, I first look at the most recent items (the default view) and then I sort by “Most Commented” to see the most popular items. Nothing caught my eye as a likely reason. However, it did eliminate the possibility that Unity had killed the project, because team members are still responding to issues. So that’s comforting.

The next resource was the Unity Forum for ML-Agents. Scrolling through threads posted between June and now, I saw multiple references to memory leaks. (“Unity ML 2.0.0 Memory leak” “Training is more and more slow” “Memory leak using imitation learning“) Returning to the GitHub issues list, I saw #5458 “Memory Leak” is currently open for investigation. However, there’s no comment one way or another on whether this bug (or another) is holding up release 19, so my quest came up empty-handed. Still, I did get the idea that if I run into a memory leak with release 18, I can try downgrading to release 17 to see if that helps. Plus I came across a bunch of other interesting pieces of information.

Notes on ML-Agents Development History (Part 2: Version 1.0 to Present)

Looking back at Unity blog posts and GitHub release notes, we can see ML-Agent’s evolution during the prerelease beta phase. From initial announcement leading up to an official version 1.0, they added many features promised in the original announcement, and made big architectural changes like how brains fit in the object hierarchy of a Unity project.

On 2020/5/12, ML-Agents reached an official version 1.0, with a package organization that is covered by version compatibility guarantees going forward in the future. This guarantee is significant, because it means users can have better confidence their own projects will function. It also means more work for Unity because any future large-scale architectural changes will have to be made in a compatible way.

Another change in ML-Agents development is that they’re no longer writing a Unity blog post for every release. I had thought this merely reflected a slower, more deliberate development pace with fewer changes to announce, but looking over the release notes I still see plenty of significant changes. Given this, I suspect the lower blog traffic reflects a change in customer communication priorities inside the Unity organization. Perhaps they’ve moved on to YouTube videos or something? If so, that would be a shame, as I prefer the written word.

In any case, ML-Agents GitHub repository release notes made it clear development continued rapidly:

  • Release 2 (2020/5/20) has minor fixes and is the basis of the current “Verified” build.
  • Release 3 (2020/6/10)
  • Release 4 (2020/7/15) added parameter randomization
  • Release 5 (2020/7/31)
  • Release 6 (2020/8/17) updated version requirements: Python now 3.6.1 and NumPy now 1.19.0 in sync with TensorFlow
  • Release 7 (2020/9/21) IActuator abstract classes for generic action spaces. Initial PyTorch implementation.
  • Release 8 (2020/10/14)
  • Release 9 (2020/11/3)
  • Release 10 (2020/11/19) Match3 environment (ML-Agents play Bejeweled!) and PyTorch is now the default.
  • Release 11 (2020/12/21)
  • Release 12 (2020/12/22)

The above releases were summarized in the ML-Agents 2020 End of Year recap blog post, and development continued through 2021:

  • Release 13 (2021/2/24) TensorFlow removed. (--torch-device=cpu to tell PyTorch to use CPU for training. This will be useful later.)
  • Release 14 (2021/3/8)
  • Release 15 (2021/3/17) BufferSensor for agents to observe variable number of entities. MultiAgentGroup interface for training multiple different agents simultaneously, and MA-POCA trainer for them.
  • Release 16 (2021/4/13)
  • Release 17 (2021/4/27): Minimum Unity version raised to 2019.4. API breaking changes. Multiple behaviors via HyperNetworks.
  • Release 18 (2021/6/9): Added colab notebooks.

The version and API changes for release 17 smelled like preparation for a new version, which was confirmed by a blog post talking about training complex cooperative behaviors. This is all very exciting stuff, but I noticed development activity then came to a screeching halt. After years of releases every few weeks (sometimes multiple times in a single month), there hasn’t been anything in the second half of 2021. I don’t know why, but I poked around a bit to see if I could find clues.

[Update: Release 19 became available on 2022/1/14.]

Notes on ML-Agents Development History (Part 1: Up to Version 1.0)

I’ve just installed and tested basic functionality of Unity ML-Agents Release 18. And just before that, I did the same with Release 2 which is also referred to as “Verified 1.0.8”. I was surprised at the changes visible just between these releases. This made me curious about how this package evolved, and I went looking for information from its past.

Most releases were announced on the Unity blog, but some just had GitHub release notes. Here is a compilation of links alongside a few highlights that caught my eye; follow these links for a complete list of changes:

2017/6/26: The earliest public information I could find was Unity announcing their intent to join in AI research and applications. Annoyingly, some of the linked blog posts have since disappeared, apparently in some sort of migration of their blog hosting system. For example the “second part of this blog series” link now leads to a 404 error.

2017/9/18: The ML-Agents Toolkit officially kicks off with version 0.1, describing a general architecture that I’m sure has since evolved and a long list of ambitious ideas they wanted to support. Many of them did come to be! Though of course not all of them, and some have since disappeared.

2017/12/8: Version 0.2 introduced curriculum learning, and launched a community challenge to motivate people to play with the toolkit.

2018/3/15: Version 0.3 introduced imitation learning, multi-brain training, and an optional poll model. Recurrent Neural Networks came in as part of a “Memory-Enhanced Agents” umbrella.

2018/6/18: Version 0.4 allowed training using the Unity editor, no longer requiring a compiled executable. An Udacity nanodegree was introduced, though sadly that’s too rich for my blood. More training environments were added, one (Pyramids) specifically demonstrates the “Curiosity” capability. Curiosity got its own blog post.

2018/9/11: Version 0.5 added a Gym interface and replicated a few environments from OpenAI Gym. It also expanded the capability to enable/disable discrete actions, though it’s not clear if that was related to OpenAI Gym.

2018/12/17: Version 0.6 is an architectural revamp changing how ml-agents AI brains fit in the Unity object hierarchy. Introduced “demonstration recorder” for off-line imitation learning. Is that still around?

2019/3/1: Version 0.7 is another big infrastructure change, switching runtime neural network inference from external TensorFlowSharp to Unity’s own Inference Engine (a.k.a. Barracuda) to support more Unity runtime platforms.

2019/4/15: Version 0.8 infrastructure change allows multiple Unity simulations to run in parallel on a single machine. Strangely, this is the recommended approach to take advantage of machines with many processing cores. (Later research found that Unity is working to improve multicore performance across the board, not just ml-agents, with something called DOTS.)

2019/8/1: Version 0.9 (release notes) is the first of two releases focused on throughput and efficiency.

2019/9/30: Version 0.10 finished what 0.9 started. Improving sample throughput (asynchronous environments) and sample efficiency via GAIL (0.9) and SAC (0.10) algorithms.

2019/11/4: Version 0.11 (release notes) changed again the brain’s place in Unity object hierarchy.

2019/12/2: Version 0.12 (release notes) moved from TensorFlow 1 to 2 via the TF1 compatibility interfaces. It appears this work was never finished; ml-agents moved to PyTorch instead of completing the TF2 migration.

2020/1/8: Version 0.13 (release notes)

2020/2/28: Version 0.14 added the ability to train via adversarial self-play. Includes a short history of learning from self-play.

2020/3/6: Not a version, but this is when ml-agents got serious enough to get a course up on Unity Learn (Hummingbirds) as well as an “AI for Beginners” course on Unity Learn Premium.

2020/3/18: Version 0.15 (release notes) wrapped up a lot of housekeeping in preparation for 1.0 release.

Development focus for ml-agents changed to more refinement after 1.0 release, along with corresponding reduction in blog announcements.

Notes on Installing Unity ML-Agents (Release 18)

I’m dipping my toes into playing with deep reinforcement learning via Unity’s ML-Agents package. I made my first run with the safest most mature option “Verified Package 1.0.8”, which mapped to Release 2 by the ML-Agents repository versioning scheme. No problems were encountered during installation and I was able to run the 3D balancing ball project in the Getting Started guide. From there I could either explore Release 2 further or try a more adventurous release. I chose the latter and proceeded to install ML-Agents Release 18.

Doing this experiment on the same machine meant I had to keep the two installations separate. Unity Hub is already well suited to keeping distinct versions isolated so they can run in parallel (Unity 2019.4.25f1 for release 18), though there’s a potential point of conflict if the Unity editors required different versions of Visual Studio Community Edition for editing code. On the Python side, Anaconda is well suited to keeping Python environments separate. Since files are referenced by directory, though, I cloned the ml-agents GitHub repository separately for each release instead of trying to switch back and forth within the same directory.

I very much appreciated Unity’s project documentation, as my installation and Getting Started process went just as smoothly this time. I didn’t expect to notice much difference between releases 2 and 18, but even just in installation and Getting Started I saw they’ve made changes. The biggest one that caught my eye is that ml-agents switched from TensorFlow to PyTorch somewhere in that span. There are other smaller changes; the most welcome one to me is a much more comprehensive collection of example configurations in the release_18 config/ subdirectory. Release 2 had only a handful of files, while release 18 has a far larger directory tree to give people (like me) more than one starting point.

I’m not quite sure where to go from here, but given how well documented ml-agents appears to be, I thought it would be interesting to take a quick look back to see where they’ve been.

Notes on Installing Unity ML-Agents (Release 2)

I thought it would be fun to play with reinforcement learning via Unity ML-Agents. The official product landing page sends us to the ml-agents repository on GitHub. And just like every other repository, it’s always a good idea to look over the README.md to understand their branch organization. Especially before we start cloning anything.

And indeed, the README includes a handy chart of releases. As of this writing there are eight releases listed plus main which is labeled as unstable. I’m glad I didn’t blindly clone main! Of the eight stable releases, six are named “Release 13” to “Release 18” inclusive. The final two are named “Verified Package 1.0.7” and “Verified Package 1.0.8”.

The “Verified” label is a guarantee of safety in the lifecycle of Unity packages. Therefore the most recent release with the highest guarantee of functionality is “Verified Package 1.0.8”. In Unity’s world, these verified packages are good enough for commercial production use. If our needs aren’t quite that rigorous, we can use the builds labeled “Release.” These numbers are explained in the ML-Agents versioning page, and it’s something we can play with if we aren’t shouldering the weight of commercial Unity production.

I think I’m fine to play with more recent “Release” builds, but I wanted to start with the build carrying the strongest guarantee, to make sure I can at least get that working. That meant cloning the build labeled “Verified Package 1.0.8”, which maps to “Release 2.”

In order to open the Unity project that is part of this release, I wanted to get the version of Unity that exactly matched the version number in ProjectVersion.txt: 2018.4.17f1. When I tried to install Unity 2018 in Unity Hub, it offered me 2018.4.36f1 because that was the most recent supported version. To match versions exactly, I had to click the download archive link and look for 2018.4.17 under 2018 builds. (It was released February 11th, 2020.) Once found, I could click the “Unity Hub” button to prompt Unity Hub to install that build on my machine.

While Unity installed, I cloned the repository tagged release_2 and installed corresponding Python binaries. I encountered no problems following installation directions for this release, though there were slight modifications as I used Anaconda Individual to manage my Python virtual environments. I had the option of installing the locally cloned versions of the ml-agents-envs and ml-agents packages and I did so. I noticed that the installation had TensorFlow in CPU-only mode, but running without GPU acceleration is perfectly OK for a starting point.

Once Unity Editor 2018.4.17 was installed, I used it to open the Project directory of my cloned release_2 repository. It opened without errors. I proceeded to the Getting Started Guide for this release and verified I had basic functionality, both running the pretrained 3D Balance Ball model and training a model of my own. The training was pretty quick: it took just under 8 minutes on the Core i5-7300HQ CPU of my Dell 7577 laptop plugged in to an AC power adapter.

Encouraged by this success, I proceeded to try Release 18 as well.

Switching Back to Unity ML-Agents

It was quite enlightening for me to read Deep Reinforcement Learning Doesn’t Work Yet. And to be honest, a little depressing as well. I was vaguely aware of the challenges involved but only in a general sense. Just small tidbits here and there over the past few years, as I looked at this field with interest. Now that I finally got around to looking at reinforcement learning in more detail, I realized that it was overly optimistic of me to expect all major problems to have been solved by now.

My original motivation for getting into reinforcement learning was to make my Sawppy an autonomous rover. Based on what I’ve learned so far, my original hope for Sawppy intelligence via reinforcement learning is extremely ambitious and still quite far away. If I want to do some deep RL projects more likely to succeed in the near term, I probably shouldn’t put them on a real physical rover. In all likelihood, whatever can be accomplished on a real robot using deep reinforcement learning could be done faster and more easily with some other AI technique.

It would certainly be nice if some aspect of Sawppy intelligence will eventually result in a research project that can contribute to the state of the art. But I’m not so arrogant as to assume I can accomplish that feat and certainly not as my first project in reinforcement learning. I’ll aim for something simple as my starting point. Got to crawl before I can walk, and all that.

Transferring reinforcement learning from a simulator to work in the real world is still a lot to tackle. So I’m going to look at a simulated world and stay within that simulated world while I learn the ropes. And before I can realistically think about contributing to algorithm advancements, I should get familiar with applying existing implementations of reinforcement learning. All of these new priorities turned my attention back to the game world of Unity ML-Agents.

Unity Machine Learning Agents Almost Within My Reach

While poking around Google’s Machine Learning Crash Course, I found that they have released a TensorFlow library for building agents with deep reinforcement learning. This might be fun but I don’t know enough about the field to make use of that library yet. It also reminded me to take another look at game engine Unity 3D’s development in this area. A lot has happened!

I first took a quick glance at Unity ML-Agents more than two years ago. At the time, the project was still an experimental thing for Unity and a lot was still in flux. Since I didn’t know much about working in Unity or in reinforcement learning, that was too many variables in flux for my taste. A year later, Unity ML-Agents reached an official version 1.0, though it was still technically a preview technology. Not long after that, it became a “verified” package for use with the Unity 2020.3 LTS build, signifying a mature tool. As part of being a verified package for use with Unity LTS, ML-Agents got some nice things like an official Unity technology landing page, and a few pieces of curriculum were posted to Unity Learn to help people get started.

The primary focus of Unity ML-Agents is for creating agents in the virtual world of a Unity game. Not necessarily for real-world robots which is where my interests lie. This is an important caveat because the Unity physics engine is not an accurate representation of the real world, and reinforcement learning agents are notorious for exploiting flaws in virtual engines to do “impossible” things. But that’s no reason to give up on Unity, which can still be a useful tool for robotics research. These caveats are just some tradeoffs amongst many more to keep in mind.

During this time that Unity evolved their ML-Agents library, I’ve occasionally dabbled in Unity with projects like Bouncy Bouncy Lights. I’m not bold enough to call myself a Unity developer yet, but I’m no longer completely overwhelmed by the Unity editor user interface as I once was. I haven’t done much more in Unity because I haven’t felt particularly motivated to make games. But ML-Agents? That looks like pretty good motivation for me to put serious effort into understanding reinforcement learning.

Remaining To-Do For My Next Unity 3D Adventure

I enjoyed my Unity 3D adventure this time around, starting from the LEGO microgame tutorials through to the Essentials pathway and finally venturing out and learning pieces at my own pace in my own order. My result for this Unity 3D session was Bouncy Bouncy Lights and while I acknowledge it is a beginner effort, it was more than I had accomplished on any of my past adventures in Unity 3D. Unfortunately, once again I find myself at a pause point, without a specific motivation to do more with Unity. But there are a few items still on the list of things that might be interesting to explore for the future.

The biggest gap I see in my Unity skill is creating my own unique assets. Unity supports 2D and 3D creations, but I don’t have art skills in either field. I’ve dabbled in Inkscape enough that I might be able to build some rudimentary things if I need to, and for 3D meshes I could import STL so I could apply my 3D printing design CAD skills. But the real answer is Blender or similar 3D geometry creation software, and that’s an entirely different learning curve to climb.

Combing through Unity documentation, I learned of a “world building” tool called ProBuilder. I’m not entirely sure exactly where it fits in the greater scheme of things, but I can see it has tools to manipulate meshes and even create them from scratch. It doesn’t claim to be a good tool for doing so, but supposedly it’s a good tool for whipping up quick mockups and placeholders. Most of the information about ProBuilder is focused on UV mapping, but I didn’t know what that was at the start. All the ProBuilder documentation assumes I already knew what UV meant, and all I could tell is that UV didn’t mean ultraviolet in this context. Fortunately, searching for UV in the context of 3D graphics led me to the Wikipedia article on UV mapping. There is a dearth of written documentation for ProBuilder; what little I found all points to a YouTube playlist. Maybe I’ll find the time to sit through it later.

I skimmed through the Unity Essentials sections on audio because Bouncy Bouncy Lights was to be silent, so audio is still on the to-do list. And like 2D/3D asset creation, I’m neither a musician nor a sound engineer. But if I ever come across motivation to climb this learning curve, I know where to go to pick up where I left off. I know I have a lot to learn, since my meager audio experimentation already produced one lesson: AudioSource.Play stops any prior occurrence of the sound. If I want instances of the same sound to overlap each other, I have to use AudioSource.PlayOneShot.
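
A minimal sketch of that lesson in code (the class and field names are my own):

```csharp
using UnityEngine;

// Play() restarts the clip on this AudioSource, cutting off whatever was already
// playing, while PlayOneShot() layers a new instance so rapid sounds can overlap.
public class BounceSound : MonoBehaviour
{
    [SerializeField] AudioClip bounceClip;
    AudioSource source;

    void Awake()
    {
        source = GetComponent<AudioSource>();
    }

    void OnCollisionEnter(Collision collision)
    {
        source.PlayOneShot(bounceClip);   // overlapping bounces all get heard
        // source.Play();                 // would restart and cut off the previous bounce
    }
}
```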

Incorporating video is an interesting way to make Unity scenes more dynamic, without adding complexity to the scene or overloading the physics or animation engine. There’s a Unity Learn tutorial about this topic, but I found that video assets are not incorporated in WebGL builds. The documentation said video files must be hosted independently for playback by WebGL, which adds to the hosting complications if I want to go down that route.

WebGL
The Video Clip Importer is not used for WebGL game builds. You must use the Video Player component’s URL option.
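
A sketch of what that looks like in a script, assuming the video file is hosted somewhere reachable (the URL below is a placeholder):

```csharp
using UnityEngine;
using UnityEngine.Video;

// For WebGL builds the clip can't be bundled as an imported asset, so the
// Video Player streams from a URL instead of a Video Clip.
public class PlayHostedVideo : MonoBehaviour
{
    [SerializeField] string videoUrl = "https://example.com/clip.mp4";

    void Start()
    {
        var player = gameObject.AddComponent<VideoPlayer>();
        player.source = VideoSource.Url;              // URL mode, not an imported clip
        player.url = videoUrl;
        player.renderMode = VideoRenderMode.CameraFarPlane;
        player.targetCamera = Camera.main;            // draw behind the scene
        player.Play();
    }
}
```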

And finally, I should set aside time to learn about shaders. Unity’s default shader is effective, but it has become quite recognizable, and there are jokes about the “Unity Look” of games that never modified the default shader properties. I personally have no problem with this, as long as the gameplay is good. (I highly recommend the Overcooked game series, which was built in Unity and has the look.) But I am curious about how to make a game look distinctive, and shaders are the best tool to do so. I found a short Unity Learn tutorial, but it doesn’t cover very much before dumping readers into the Writing Shaders section of the manual. I was also dismayed to learn that we don’t have IntelliSense or similar helpfulness in Visual Studio when writing shader files. This is going to be a course of study all on its own, and again I await good motivation to go climb that learning curve.

I enjoyed this session of Unity 3D adventure, and I really loved that I got far enough this time to build my own thing. I’ve summarized this adventure in my talk to ART.HAPPENS, hoping that others might find my experience informative in video form in addition to written form on this blog. I’ve only barely scratched the surface of Unity. There’s a lot more to learn, but that’ll be left to future Unity adventures because I’m returning to rover work.

Venturing Beyond Unity Essentials Pathway

To help beginners learn how to create something simple from scratch, Unity Learn set up the Essentials pathway, which I followed. Building from an empty scene taught me a lot of basic tasks that had already been done for us in the LEGO microgame tutorial template. Enough that I felt confident to start building my own project for ART.HAPPENS. It was a learning exercise, running into one barrier after another, but I felt confident I knew the vocabulary to search for answers on my own.

Exercises in the Essentials pathway got me started on the Unity 3D physics engine, with information about setting up colliders and physics materials. Building off the rolling ball exercise, I created a big plane for balls to bounce around on and increased the bounciness of both ball and plane. The first draft was a disaster: unlike real life, it is trivial to build a perfectly flat plane in a digital world, so the balls kept bouncing in the same place forever. I had to introduce a tilt to make the bounces more interesting.
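
The bounciness itself is just a physic material on the colliders. In the editor it is an asset assigned to the ball and the plane; the sketch below creates one in code purely to show the relevant properties (the values are illustrative, not the ones I actually used):

```csharp
using UnityEngine;

public class MakeBouncy : MonoBehaviour
{
    void Awake()
    {
        // A shared high-bounciness material; attach this script to both ball and plane.
        var bouncy = new PhysicMaterial("Bouncy")
        {
            bounciness = 0.95f,
            bounceCombine = PhysicMaterialCombine.Maximum   // keep bounces from damping out
        };
        GetComponent<Collider>().material = bouncy;
    }
}
```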

But while bouncing balls look fun (title image) they weren’t quite good enough. I thought adding a light source might help but that still wasn’t interesting enough. Switching from ball to cube gave me a clearly illuminated surface with falloff in brightness, which I thought looked more interesting than a highlight point on a ball. However, cubes don’t roll and would stop on the plane. For a while I was torn: cubes look better but spheres move better. Which way should I go? Then a stroke of realization: this is a digital world and I can change the rules if I want. So I used a cube model for visuals, but attached a sphere model for physics collisions. Now I have objects that look like cubes but bounce like balls. Something nearly impossible in the real world but trivial in the digital world.
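
A sketch of the “looks like a cube, bounces like a ball” idea (my own illustration, not necessarily how Bouncy Bouncy Lights is actually wired up):

```csharp
using UnityEngine;

public static class CubeLightFactory
{
    public static GameObject Create(Vector3 position)
    {
        // A primitive cube comes with a BoxCollider; swap it for a SphereCollider so
        // the renderer shows a cube while the physics engine sees a ball.
        var cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
        cube.transform.position = position;
        Object.Destroy(cube.GetComponent<BoxCollider>());
        cube.AddComponent<SphereCollider>();
        cube.AddComponent<Rigidbody>();
        return cube;
    }
}
```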

To make these lights show up better, I wanted a dark environment. This was a multi-step procedure. First I did the obvious: delete the default light source that came with the 3D project template. Then I had to look up environment settings to turn off the “Skybox”. That still wasn’t dark, until I edited camera settings to change the default background color to black. Once everything went black, I noticed the cubes weren’t immediately discernible as cubes anymore, so I turned the lights back up… but then decided it was more fun to start dark and turned the lights back off.
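
Expressed as code, my approximation of those editor steps looks something like this (the actual project made these changes through editor settings, not a script):

```csharp
using UnityEngine;

public class DarkRoom : MonoBehaviour
{
    void Start()
    {
        // Solid black camera background instead of the skybox.
        var cam = Camera.main;
        cam.clearFlags = CameraClearFlags.SolidColor;
        cam.backgroundColor = Color.black;

        // Remove the skybox material and kill ambient light so only the cubes glow.
        RenderSettings.skybox = null;
        RenderSettings.ambientMode = UnityEngine.Rendering.AmbientMode.Flat;
        RenderSettings.ambientLight = Color.black;
    }
}
```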

I wanted to add user interactivity but realized the LEGO microgame used an entirely different input system than standard Unity, and nothing on the Essentials path taught me about user input. Searching around on Unity Learn, I got very confused by contradictory information until I eventually figured out there are two Unity user input systems. There’s the “Input Manager”, which is the legacy system, and its candidate replacement, the “Input System Package”, which is intended to solve problems with the old system. Since I had no investment in the old system, I decided to try the new one. Unfortunately, even though there’s a Unity Learn session, I still found it frustrating, as did others. I got far enough to add interactivity to Bouncy Bouncy Lights, but it wasn’t fun. I’m not even sure I should be using it yet, seeing how none of the microgames did. Now that I know enough to know what to look for, I could see that the LEGO microgame used the old input system. Either way, there’s more climbing of the learning curve ahead. [UPDATE: After I wrote this post, but before I published it, Unity released another tutorial for the new Input System. Judging by that demo, Bouncy Bouncy Lights is using the Input System incorrectly.]
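
To illustrate the difference, here is a minimal side-by-side sketch of polling the space bar in each system. This is not code from my project, and mixing both APIs in one project requires the “Active Input Handling” player setting to be set to “Both”:

  using UnityEngine;
  using UnityEngine.InputSystem;   // the "Input System Package"

  // Illustrative comparison of the two Unity input systems.
  public class InputComparison : MonoBehaviour
  {
      void Update()
      {
          // Legacy "Input Manager": polled through the static UnityEngine.Input class.
          if (Input.GetKeyDown(KeyCode.Space))
          {
              Debug.Log("Space pressed (legacy Input Manager)");
          }

          // New "Input System Package": devices are objects you query directly,
          // or, more idiomatically, you bind Input Actions to callbacks.
          if (Keyboard.current != null && Keyboard.current.spaceKey.wasPressedThisFrame)
          {
              Debug.Log("Space pressed (Input System package)");
          }
      }
  }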

The next to-do item was to add the title and interactivity instructions. After frustration with exploring a new input system, I went back to LEGO microgame and looked up exactly how they presented their text. I learned it was a system called TextMesh Pro and thankfully it had a Unity Learn section and a PDF manual was installed as part of the asset download. Following those instructions it was straightforward to put up some text using the default font. After my input system frustration, I didn’t want to get much more adventurous than that.
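
For reference, driving a TextMesh Pro label from a script is only a few lines. This sketch assumes a hypothetical component with the label assigned in the Inspector; it is not how my title card is actually wired up:

  using TMPro;
  using UnityEngine;

  // Illustrative sketch: set the text and size of a TextMesh Pro label at startup.
  public class TitleCard : MonoBehaviour
  {
      [SerializeField] private TMP_Text label;   // works for both UI and 3D TextMesh Pro text

      void Start()
      {
          label.text = "Bouncy Bouncy Lights";
          label.fontSize = 72;
      }
  }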

I had debated when to present the interactivity instructions. Ideally I would present them just as the audience gets oriented and recognizes the default setup, possibly starting to get bored and ready to move on, so I can give them interactivity to keep their attention. But I have no idea when that would be. When I read the requirement that the title of the piece should be in the presentation, I added that as a title card shown before the bouncing lights. And once I added a title card, it was easy to add another card with the instructions, also shown before the bouncing lights. The final twist was the realization that I shouldn’t present them as static cards that fade out: since I already had all these physical interactions in place, the cards are presented as falling, bouncing objects in their own right.

The art submission instructions said to put in my name and a way for people to reach me, so I put my name and newscrewdriver.com at the bottom of the screen using TextMesh Pro. Then it occurred to me the URL should be a clickable link, which led me down the path of finding out how a Unity WebGL title can interact with the web browser. There seemed to be several different deprecated ways to do it, but they all pointed to the current recommended approach, and now my URL is clickable! For fun, I added a little spotlight effect when the mouse cursor is over the URL.
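
I won’t go into browser interop details here, but for the simple case of opening a URL in a new tab, a generic sketch looks something like the following. This is not necessarily the exact mechanism my project ended up using, and the component (which needs a Collider on the same GameObject for OnMouseDown to fire) is illustrative:

  using UnityEngine;

  // Illustrative clickable-link component; requires a Collider on the same GameObject.
  public class ClickableUrl : MonoBehaviour
  {
      [SerializeField] private string url = "https://newscrewdriver.com";

      void OnMouseDown()
      {
          // On WebGL builds this opens the URL in a new browser tab.
          // Browsers generally allow it because it is triggered by a user click.
          Application.OpenURL(url);
      }
  }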

The final touch was to modify the presentation HTML to suit the Gather virtual space used by ART.HAPPENS. By default, a Unity WebGL build generates an index.html file that puts the project inside a fixed-size box. Outside that box is the title and a button to go full screen. I didn’t want the full screen option for presenting this work in Gather, but I wanted to fill my <iframe> instead of a little box within it. My CSS layout skills are still pretty weak and I couldn’t figure it out on my own, but I found this forum thread which taught me to replace the <body> tag with the following:

  <body>
      <div class="webgl-content" style="width:100%; height:100%">
        <div id="unityContainer" style="width:100%; height:100%">
        </div>
      </div>
  </body>

I don’t understand why we need to put 100% styles on two elements before it works, but hopefully I will understand whenever I get around to my long-overdue study session on CSS layout. The final result of my project can be viewed at my GitHub Pages hosting location, which is satisfying, but there is a lot more of Unity to learn.

Notes on Unity Essentials Pathway

As far as Unity 3D creations go, my Bouncy Bouncy Lights project is pretty simple, as expected of a beginner’s learning project. My Unity (re)learning session started with their LEGO microgame tutorial, but I didn’t want to submit a LEGO-derived Unity project for ART.HAPPENS. (And it might not have been legal under the LEGO EULA anyway.) So after completing the LEGO microgame tutorial and its suggested Creative Mods exercises, I still had more to learn.

The good news is that Unity Learn has no shortage of instruction materials; the bad news is that a beginner gets lost on where to start. To help with this, they’ve recently (or at least since the last time I investigated Unity) rolled out the concept of “Pathways”, which organize a set of lessons targeted at a particular audience. People looking for something after completing their microgame tutorial are sent to the Unity Essentials Pathway.

Before throwing people into the deep pool that is Unity Editor, the Essentials Pathway starts by setting us up with a lot of background information in the form of video interview clips with industry professionals using Unity. I preferred to read instead of watching videos, but I wanted to hear these words of wisdom so I sat through them. I loved that they allocated time to assure beginners that they’re not alone if they find Unity Editor intimidating at first glance. The best part was the person who said their first experience was taking one look, saying “Um, no.”, closing Unity, and not returning for several weeks.

Other interviews covered the history of Unity: how it enabled creation of real-time interactive content, and how the tool evolved alongside the industry. There was also information for people who are interested in building a new career using Unity, introducing terminology and even common job titles that can be used to query sites like LinkedIn. I felt this section offered more applicable advice for this job field than I ever received in college for mine. I was mildly amused and surprised to see Unity classes ended with a quiz to make sure I understood everything.

After this background we are finally set loose on Unity Editor, starting from scratch. Well, an empty 3D project template, which is as close to scratch as I cared to get. The template has a camera and a light source but not much else, unlike the microgames which are already filled with assets and code. This is what I wanted to see: how do I start from geometry primitives and work my way up, pulling from the Unity Asset Store as needed for useful prebuilt pieces? One of the exercises was to make a ball roll down a contraption of our own design (title image), and I paid special attention to this interaction. The Unity physics engine was the main reason I chose to study Unity instead of three.js or A-Frame, and it became the core of Bouncy Bouncy Lights.

I’ve had a lot of experience writing C# code, so I was able to quickly breeze through the C# scripting portions of Unity Essentials. But I’m not sure this is enough to get a non-coder up and running on Unity scripting. Perhaps Unity decided they’re not a coding boot camp and didn’t bother to start at the beginning. People who have never coded before will need to go elsewhere before coming back to Unity scripting, and a few pointers would be nice.

I skimmed through a few sections that I decided were unimportant for my ART.HAPPENS project. Sound was one of them: very important for an immersive gaming experience, but my project will be silent because the Gather virtual space has a video chatting component and I didn’t want my sounds to interfere with people talking. Another area I quickly skimmed through was using Unity for 2D games, which is not my goal this time but perhaps I’ll return to later.

And finally, there was information pointing us to Unity Connect and setting up a profile. At first glance it looked like Unity had tried to set up a social network for Unity users, but it is shutting down, with portions redistributed to other parts of the Unity network. I had no investment here so I was unaffected, but it made me curious how often Unity shuts things down. Hopefully not as often as Google, which has become infamous for doing so.

I now have a basic grasp on this incredibly capable tool, and it’s time to start venturing beyond guided paths.

Bouncy Bouncy Lights

My motivation for going through Unity’s LEGO microgame tutorial (plus associated exercises) was to learn Unity Editor in the hopes of building something for ART.HAPPENS, a community virtual art show. I didn’t expect to build anything significant with my meager skills, but I made something with the skill I have. It definitely fit the theme of everyone sharing works that they had fun with and learned from. I arrived at something I felt was a visually interesting interactive experience, which I titled Bouncy Bouncy Lights and, if selected, should be part of the exhibition opening today. If it is not selected, or if the show has concluded and its site has been taken down, my project will remain available at my own GitHub Pages hosting location.

There are still a few traces of my original idea, which was to build a follow-up to Glow Flow: something colorful with Pixelblaze-controlled LED lights. But I decided to move from the physical to the digital domain, so now I have random brightly colored lights in a dark room, each reflecting off an associated cube. By default there isn’t enough light for the viewer to immediately see the whole cube, just the illuminated face. I want them to observe the colorful lights moving around for a bit before they recognize what’s happening, prompting the delight of discovery.

Interactivity comes in two forms: arrow keys change the angle of the platform, which changes the direction of the bouncing cubes. There is a default time interval for new falling cubes. I chose it so that there’ll always be a few lights on screen, but not so many as to make the cubes obvious. The user can also press the space bar to add lights faster than the default interval. If the space bar is held down, the extra lights will add enough illumination to make the cubes obvious, and they’ll frequently collide with each other. I limited it to a certain rate because the aesthetics change if too many lights all jump in. Thankfully I don’t have to worry about things like ensuring sufficient voltage supply for lights when working in the digital world, but too many lights in the digital world add up to white, washing out the individual colors to a pastel shade. And too many cubes interfere with bouncing, and we get an avalanche of cubes trying to get out of each other’s way. It’s not the look I want for the project, but I left in a way to do it as an Easter egg. Maybe people would enjoy bringing it up once in a while for laughs.
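
For the curious, here is a rough sketch of how an interval-plus-rate-limit spawner like this could be structured. It uses legacy Input Manager calls for brevity (my actual project went through the Input System package), and every name and number is a placeholder rather than the real value from Bouncy Bouncy Lights:

  using UnityEngine;

  // Illustrative spawner: drops a new light-cube on a default interval,
  // and lets the space bar add extras up to a capped rate.
  public class LightCubeSpawner : MonoBehaviour
  {
      [SerializeField] private GameObject lightCubePrefab;
      [SerializeField] private float defaultInterval = 2.0f;   // seconds between automatic drops
      [SerializeField] private float minManualInterval = 0.2f; // rate limit for space bar spam

      private float nextAutoSpawn;
      private float nextManualSpawn;

      void Update()
      {
          if (Time.time >= nextAutoSpawn)
          {
              Spawn();
              nextAutoSpawn = Time.time + defaultInterval;
          }

          // Holding space adds lights faster, but never faster than the manual rate limit,
          // to avoid washing the scene out to white and piling up an avalanche of cubes.
          if (Input.GetKey(KeyCode.Space) && Time.time >= nextManualSpawn)
          {
              Spawn();
              nextManualSpawn = Time.time + minManualInterval;
          }
      }

      void Spawn()
      {
          // Drop a new cube from the spawner's position with a random orientation and light color.
          var cube = Instantiate(lightCubePrefab, transform.position, Random.rotation);
          var light = cube.GetComponentInChildren<Light>();
          if (light != null)
          {
              light.color = Random.ColorHSV(0f, 1f, 1f, 1f, 1f, 1f);
          }
      }
  }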

I’m happy with how Bouncy Bouncy Lights turned out, but I’m even happier with it as a motivation for my journey learning how to work with a blank Unity canvas.