Unity Without OpenGL ES 2.0 Is All Pink on Fire TV Stick

I dug up my disassembled Fire TV Stick (second generation) and it is now back up and running again. This was an entertaining diversion in its own right, but during the update and Android Studio onboarding process, I kept thinking: why might I want to do this beyond “to see if I could”? This was the same question I asked myself when I investigated the Roku Independent Developer’s Kit just before taking apart some Roku devices. For a home tinkerer, what advantage did they have over a roughly comparable Raspberry Pi Zero? I didn’t have a good answer for Roku, but I have a good answer for Fire TV: because it is an Android device, and Android is a target platform for Unity. Raspberry Pi and Roku IDK, in comparison, are not.

I don’t know if this will be useful for me personally, but at the very least I could try installing my existing Unity project Bouncy Bouncy Lights on the device. Loading up Unity Hub, I saw that Unity had recently released 2021 LTS so I thought I might as well upgrade my project before installing Unity Android target platform tools. Since Bouncy Bouncy Lights was a very simple Unity project, there were no problems upgrading. Then I could build my *.apk file which I could install on my Fire TV just like introductory Android Studio projects. There were no error messages upon installation, but upon launch I got a warning: “Your device does not match the hardware requirements of this application.” What’s the requirement? I didn’t know yet, but I got a hint when I chose to continue anyway: everything on screen rendered a uniform shade of pink.

Going online for answers, I found many different problems and solutions for Unity rendering everything in pink. I understand pinkness is a symptom of something going wrong in the Unity graphics rendering pipeline, and it is a symptom that can have many different causes. Without a single universal solution, further experimentation and diagnosis were required.

Most of the problems (and solutions) are in the Unity “Edit”/“Project Settings…”/“Player”/“Other Settings” menu. This Unity forum thread with the same “hardware requirements” error message suggests making sure “Auto Graphics API” is checked (it was) and setting “Rendering Path” to Linear (no effect). This person’s blog post was also dealing with a Fire TV, and their solution was checking “Auto Graphics API”, which I had already done. But what if I uncheck that box? What does this menu option do (or not do)?

Unchecking that box unveils a list of two graphics APIs: Vulkan and OpenGLES3. Hmm, I think I see the problem here. The Fire TV Stick second generation hardware specification page says it only supports OpenGL ES 2.0. Digging further into Unity documentation, I found that OpenGL ES 2.0 support is deprecated and not included by default, but we can add it to a project if we need it. Clicking the plus sign allowed me to add it as a graphics API for use in my Unity app.

Once OpenGL ES 2.0 is included in the project as a fallback graphics API, I could rebuild the *.apk file and install the updated version.

I got colors back! It is no longer all pink, and the cubes that are supposed to be pink actually look pink. So the cubes look fine, but all color has disappeared from the platform. It was supposed to have splotches of color cast by the randomly colored lights attached to each block.

Instead of showing those different colors, the platform has apparently averaged them into a uniform gray. I guess this is where an older graphics API gets tripped up, and why we want newer APIs for best results. But at least it is better than a screen full of pink, even if the solution in my case was to uncheck “Auto Graphics API”, the opposite of what other people have said online. Ah well.

Move Calculation Off Microcontroller to Node-RED

After I added MQTT for distributing data, I wanted to change how calculations were done. Using a microcontroller to read a voltage requires doing math somewhere along the line. The ADC (analog-to-digital converter) peripheral on a microcontroller returns an integer value suitable for hardware registers, and we have to convert that to a floating-point voltage value that makes sense to me. In my first draft ESP8266 voltage measurement node, getting this conversion right was an iterative process:

  • Take an ADC reading and convert it to a voltage using a conversion multiplier.
  • Compare against the voltage reading from my multimeter.
  • Calculate a better conversion factor.
  • Reflash the ESP8266 with an Arduino sketch that includes the new conversion factor.
  • Repeat.

The ESP8266 ADC is pretty noisy, with probable contributions from other factors like temperature variations. So there is no single right conversion factor; it varies over time. The best I can hope for is a pretty-close average tradeoff. While looking for that value, the loop of recalculating and uploading got to be pretty repetitious. I want to move that conversion work off of the Arduino so it can be more easily refined and updated.
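Wherever it ends up running, the conversion itself is simple arithmetic: scale the raw ADC count by a factor, then nudge that factor until it agrees with the multimeter. Here is a rough sketch of that calibration step in Python; the starting factor and the readings are made-up numbers, not my actual circuit.

# Convert a raw ESP8266 ADC count (10-bit, 0-1023) to volts with a single
# multiplier, then refine that multiplier against a multimeter reading.
# The starting factor and the sample readings below are hypothetical.
def adc_to_volts(raw_adc, factor):
    return raw_adc * factor

conversion_factor = 15.0 / 1023.0    # initial guess: roughly 15V at full scale
raw_adc = 824                        # example ADC reading
multimeter_volts = 12.61             # what the meter actually showed

calculated = adc_to_volts(raw_adc, conversion_factor)
# Scale the factor by the ratio of measured truth to calculated estimate.
conversion_factor *= multimeter_volts / calculated
print(adc_to_volts(raw_adc, conversion_factor))   # now agrees with the meter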

One option is to move that work to the data consumption side. This means logging raw ADC values into InfluxDB, and whoever queries that data is responsible for conversion. This preserves the original unmodified measurement data, allowing consumers to be smart about dealing with jitter and such. I like that idea but I'm not ready to dive into that sort of data analysis just yet.

To address both of these points, I pulled Node-RED into the mix. I’ve played with this flow-based programming tool before, and I think my current project aligns well with the strengths of Node-RED. The voltage conversion, specifically, is a type of data transformation people do so often in Node-RED that there is a standard node, Range, for this purpose. Performing voltage conversion in a Range node means I can fine-tune the conversion and update it by clicking “Deploy”, which is much less cumbersome than recompiling and uploading an Arduino sketch.

Node-RED also allows me to carry both the original and converted data through the flow. I use a Change node to save the original ADC value to another property before using a Range node to convert the ADC value to a voltage, giving me a Node-RED message with both original and converted data. Next I need to put that into the database, so I searched the public Node-RED library for “InfluxDB” and decided to try node-red-contrib-stackhero-influxdb-v2 first since it explicitly supports version 2 of InfluxDB. I’m storing the raw ADC values even though I’m not doing anything with them yet. The idea is to keep them around so in the future I can explore voltage conversion on the data consumption side.

To test this new infrastructure design using MQTT and Node-RED, I’ll pull an ESP32 development board out of my pile of parts.


Here is my Node-RED function to package data in the format expected by the node-red-contrib-stackhero-influxdb-v2 InfluxDB write node. Input data: msg.raw_ADC is the original ADC value, and msg.payload is the voltage value converted by the Range node:

// Package voltage and raw ADC values into the structure expected by the
// node-red-contrib-stackhero-influxdb-v2 write node.
var fields = {V: msg.payload, ADC: msg.raw_ADC};                // field values for this data point
var tags = {source: 'batt_monitor_02', location: 'lead_acid'};  // tags identifying where it came from
var point = {measurement: 'voltage',
      tags: tags,
      fields: fields};
var root = {data: [point]};   // the write node expects an array of points under "data"
msg.payload = root;
return msg;

My simple docker-compose.yml for running Node-RED:

version: "3.8"

services:
  nodered:
    image: nodered/node-red:latest
    restart: unless-stopped
    ports:
      - 1880:1880
    volumes:
      - ./data:/data

Routing Data Reports Through MQTT

Once a read-only Grafana dashboard was up and running, I had end-to-end data flow from voltage measurement to a graph of those measurements over time. This was a good first draft; from here I can pick and choose where to start refining the system. First thought: it was cool that an ESP8266 Arduino could log data straight to an InfluxDB2 server, but I don’t think that is the best way to go.

InfluxDB is a great database to track historical data, but sometimes I want just the most recent measurement. I don’t want to have to spin up a full InfluxDB client library and perform a query just to retrieve a single data point. That would be a ton of unnecessary overhead! Even worse, due to the overhead, not everything has an InfluxDB client library and I don’t want to be limited to the subset that does. And finally, tracking historical data is only one aspect of the system, at some point I want to take action based on data and measurements. InfluxDB doesn’t help at all for that.

To improve on these fronts, I’m going to add an MQTT broker to my home network in the form of a Docker container running Eclipse Mosquitto. MQTT is a simple publish/subscribe system. The voltage measuring node I’ve built out of an ESP8266 is a publisher, and InfluxDB is a subscriber for that data. If I want the most recent measurement, I can subscribe to the same data source and see it at the same time InfluxDB does.
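As a sanity check of that idea, the subscriber side can be just a few lines with the Python paho-mqtt package. This is only a sketch written against paho-mqtt 1.x; the broker hostname and topic below are placeholders, not necessarily what my ESP8266 actually publishes to.

# Minimal MQTT subscriber: print every message published to one topic.
# Written against paho-mqtt 1.x; broker hostname and topic are hypothetical.
import paho.mqtt.client as mqtt

def on_message(client, userdata, message):
    # message.payload is raw bytes; here it would be the latest voltage reading
    print(message.topic, message.payload.decode())

client = mqtt.Client()
client.on_message = on_message
client.connect("mqtt-broker.local", 1883)
client.subscribe("home/battery/voltage")
client.loop_forever()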

I’ve read the Hackaday MQTT primer, and I understand it is a popular tool for these types of projects. It’s been on my to-do list ever since and this is the test project for playing with it. Putting MQTT into this system lets me start small with a single publisher and a single subscriber. If I expand on this system, it is easy to add more to my home MQTT network.

As a popular and lightweight protocol, MQTT enjoys a large support ecosystem. While not everything has an InfluxDB client library, almost everything has an MQTT client library, including ESP8266 Arduino, and InfluxDB itself in the form of a Telegraf plugin. But after looking over that page, I understand it is designed for direct consumption and has few (no?) options for data transformation. This is where Node-RED enters the discussion.


My simple docker-compose.yml for running Mosquitto:

version: "3.8"

services:
  mqtt-broker:
    image: eclipse-mosquitto:latest
    restart: unless-stopped
    ports:
      - 1883:1883
      - 9001:9001
    volumes:
      - ./config:/mosquitto/config
      - ./data:/mosquitto/data
      - ./log:/mosquitto/log

Using Grafana Despite Chronograf Integration

I’ve learned some valuable lessons making an ESP8266 Arduino log data into InfluxDB, giving me ideas on things to try for future iterations. But for the moment I’m getting data and I want to start playing with it. This means diving into Chronograf, the visualization charting component of InfluxDB.

In order to get data around the clock, I’ve changed my plans for monitoring solar panel voltage and connected the datalogging ESP8266 to the lead-acid battery array instead. This allows me to continue refining and experimenting with data at night, when the solar panel generates no power. It also means the ESP8266’s unnecessarily high power consumption is now draining the battery, but an ESP8266 at full power still consumes only a small percentage of what my lead-acid battery array can deliver, so I’m postponing that problem.

Chronograf was pretty easy to get up and running: I queried on the tags of my voltage measurement and plotted the logged voltage values over time. This is without getting distracted by all the nifty toys Chronograf has to offer. Getting a basic graph allowed me to explore how to present this data in some sort of dashboard, and here I ran into a problem. There doesn’t seem to be a way within Influx to present a Chronograf chart in a read-only manner. I found no access control on the data visualization dashboard, nor could I find access restriction options at an Influx users level. Not that I could create new users from the UI:

Additional users cannot be created in the InfluxDB UI.

A search for more information online directed me to the Chronograf GitHub repository, where there is a reference to a Chronograf “Viewer” role. Unfortunately, that issue is several years old, and I think this feature got renamed sometime in the past few years. Today a search for “viewer role” on InfluxDB Chronograf documentation comes up empty.

The only access control I’ve found in InfluxDB is via API tokens, and I don’t know how that helps me when logged in to use Chronograf. The only way I know to utilize API tokens is from outside the system, which means firing up a separate Docker container running another visualization charting software package: Grafana. Then I could add InfluxDB as a data source with a read-only API token so a Grafana dashboard has no way to modify InfluxDB data. This feels very clumsy and I’m probably making a beginner’s mistake somewhere, but it gives me peace of mind to leave Grafana displaying on a screen without worry about my InfluxDB data integrity. This lets me see the data so I know the system is working end-to-end as I go back and rework how data is communicated.
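Grafana handles the token for me once it is configured as a data source, but the same idea is easy to see from any client outside the system. Here is a rough sketch using the influxdb-client Python package; the URL, token, org, and bucket names are placeholders. With a read-only token, queries like this work while any attempt to write with that token would be rejected.

# Query InfluxDB 2.x from outside the system using a read-only API token.
# URL, token, org, and bucket names below are hypothetical placeholders.
from influxdb_client import InfluxDBClient

client = InfluxDBClient(url="http://influxdb.local:8086",
                        token="READ_ONLY_TOKEN_HERE",
                        org="home")
tables = client.query_api().query(
    'from(bucket: "telemetry")'
    ' |> range(start: -1h)'
    ' |> filter(fn: (r) => r._measurement == "voltage")')
for table in tables:
    for record in table.records:
        print(record.get_time(), record.get_value())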

For reference here is my very simple docker-compose.yml for my Grafana instance.

version: "3.8"

services:
  server:
    image: grafana/grafana:latest
    restart: unless-stopped
    ports:
      - 3000:3000
    volumes:
      - ./data:/var/lib/grafana

Notes on “Hardspace: Shipbreaker” 0.7

I have spent entirely too much time playing Hardspace: Shipbreaker, but it’s been very enjoyable time spent. As of this writing, it is a Steam Early Access title and still in development. The build I’ve been playing is V.0.7.0.217552 dated December 20th, 2021. (Only a few days before I bought it on Steam.) The developers have announced their goal to take it out of Early Access and formally release in Spring 2022. Comments below from my experience do not necessarily reflect the final product.

The game can be played in career mode, where ship teardowns are accompanied by a storyline campaign. My 0.7 build only went up to act 2; the formal release should have an act 3. Personally, I did not find the story compelling. This fictional universe places the player as an indentured servant toiling for an uncaring mega-corporation, and that’s depressing. It’s too close to the real world of capitalism run amok.

Career mode has several difficulty settings. I started with the easiest, “Open Shift”, which removes the stress of managing consumables like my spacesuit oxygen. It also removes the time limit of a “shift”, which is fifteen minutes. After I moved up to “Standard” difficulty, the oxygen limit was indeed stressful. But I actually started appreciating the fifteen-minute shift timer, because it encourages me to take a break from this game.

Whatever the game mode (career, free play, or competitive race), the core game is puzzle-solving: how to take apart a spaceship quickly and efficiently to maximize revenue. My workspace is a dockyard in Earth orbit, and each job involves taking apart a ship and sorting its pieces into one of three recycle bins:

  1. Barge: equipment kept intact. Examples: flight terminal computers, temperature control units, power cells, reactors.
  2. Processor: high value materials. Examples: exterior hull plates, structural members.
  3. Furnace: remainder of materials. Example: interior trim.

We don’t need to aim at these recycle bins particularly carefully, as they have an attraction field to suck in nearby objects. Unfortunately, these force fields are also happy to pull in objects we didn’t intend to deposit. Occasionally an object would fall just right between the bins and they would steal from each other!

I haven’t decided if the hungry processors/furnaces are a bug or an intended challenge of the game. There are arguments to be made either way. However, the physics engine in the game exhibits behaviors that are definitely bugs. Personally, what catches me off guard the most are small events with outsized effects. The most easily reproducible artifact comes from interacting with a large ship fragment. Our tractor beam can’t move a hull segment several thousand kilograms in mass. But if we use the same tractor beam to pick up a small 10 kilogram component and rub it against the side of the hull segment, the hull segment starts moving.

Another characteristic of the physics engine is that everything has infinite tensile strength. As long as there is a connection, no matter how small, the entire assembly remains rigid. It means when we try to cut the ship in half, each half weighing tens of thousands of kilograms, we could overlook one tiny thing holding it all together. My most frustrating experience was a piece of fabric trim: a bolt of load-bearing fabric holding the ship together!

But at least that’s something I can look for and see connected onscreen. Even more frustrating are bugs where ship parts are held together by objects that are visibly apart on screen. Like a Temperature Control Unit that doesn’t look attached to an exterior hull plate, yet the plate won’t budge until the TCU is removed from its interior mount, at which point both the TCU and the hull plate are free to move. Or the waste disposal unit that rudely juts out beyond its allotted square.

Since the game is under active development, I see indications of game mechanics that were not available to me. It’s not clear to me if these are mechanisms that used to exist and were removed, or if they are promised and yet to come. Example: there were multiple mentions of using coolant to put out fires, and I could collect coolant canisters, but I don’t see how I can apply coolant to things on fire. Another example: there are hints that our cutter capability can be upgraded, but I encountered no upgrade opportunity and had to resort to demolition charges. (Absent an upgrade, it’s not possible to cut directly into the hull as depicted by game art.) We also have a side quest to fix up a little space truck, but right now nothing happens when the quest is completed.

The ships being dismantled are one of several types, so we know roughly what to expect. However, each ship includes randomized variations, so no two ships are dismantled in exactly the same way. This randomization is occasionally hilarious. For example, sometimes the room adjacent to the reactor has a window and computers to resemble a reactor control room. But sometimes the room is set up like crew quarters with chairs and beds. It must be interesting to serve on board that ship, bunking down next to a window looking onto a big reactor and its radioactive warning symbols.

There are a few user interface annoyances. The “F” key is used to pick up certain items in game. But the same key is also used to fire a repulsion field to push items away. Depending on the mood of the game engine, sometimes I press “F” to pick up an item only to blast it away instead and I have to chase it down.

But these are all fixable problems and I look forward to the official version 1.0 release. In the meantime I’m still having lots of fun playing in version 0.7. And maybe down the line the developers will have the bandwidth to explore putting this game in virtual reality.

Spaceship Teardowns in “Hardspace: Shipbreaker”

While studying Unity’s upcoming Data-Oriented Technology Stack (DOTS) I browsed various resources on the Unity landing page for this technology preview. Several game studios have already started using DOTS in their titles and Unity showcased a few of them. One of the case studies is Hardspace:Shipbreaker, and it has consumed all of my free time (and then some.)

I decided to look into this game because the name and visuals were vaguely familiar. After playing a while, I remembered I first saw it on Scott Manley’s YouTube channel. He made that episode soon after the game became available on Steam. But the game has changed a lot in the past year, as it is an “Early Access Game” that is still undergoing development. (Windows only for now, with the goal of eventually reaching Xbox and PlayStation consoles.) I assume a lot of bugs have been stamped out in the past year, as it has been mostly smooth sailing in my play. It is tremendously fun even in its current incomplete state.

Hardspace: Shipbreaker was the subject of an episode of Unity’s “Behind the Game” podcast. Many aspects of developing this game were covered, and towards the end the developers touched on how DOTS helped them solve some of their performance problems. As covered in the episode, the nature of the game means they couldn’t use many of the tried-and-true performance tricks. Light sources move around, so they couldn’t pre-render lights and shadows. And because the ships break apart in unpredictable ways (especially when things start going wrong), there can be a wide variation in the shapes and sizes of objects in the play area.

I love teardowns and taking things apart. I love science fiction. This game is a fictional world where we play a character that tears down spaceships for a living. It would be a stretch to call this game “realistic” but it does have its own set of realism-motivated rules. As players, we learn to work within the constraints set by these rules and devise plans to tear apart these retired ships. Do it safely so we don’t die. And do it fast because time is money!

This is a novel puzzle-solving game and I’m having a great time! If “Spaceship teardown puzzle game” sounds like fun, you’ll like it too. Highly recommended.

[Title image from Hardspace: Shipbreaker web site]

Unity-Python Communication for ML-Agents: Good, Bad, and Ugly

I’ve only just learned that Unity DOTS exists and it seems like something interesting to learn as an approach for utilizing resources on modern multicore computers. But based on what I’ve learned so far, adopting DOTS by itself won’t necessarily solve the biggest bottleneck in Unity ML-Agents as per this forum thread: the communication between Unity and Python.

Which is unfortunate, because this mechanism is also a huge strength of the system. Unity is a native code executable with modules written in C# and compiled, while deep learning neural network frameworks like TensorFlow and PyTorch run in an interpreted Python environment. The easiest and most cross-platform-friendly way for these two types of software to interact is via network ports, even though the data is merely looped back to the same computer and not sent over a network.
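The Python half of that link is what the ML-Agents low-level API exposes. A minimal sketch of opening the connection follows; port 5004 is the documented default for connecting to the Unity editor, and the rest of a real training loop is omitted.

# Minimal sketch of the Python side of the Unity/Python link in ML-Agents.
# With file_name=None, this waits for the Unity editor to press Play and
# connect over a local network port (5004 by default for the editor).
from mlagents_envs.environment import UnityEnvironment

env = UnityEnvironment(file_name=None)
env.reset()
behavior_name = list(env.behavior_specs)[0]            # behavior defined in the Unity scene
decision_steps, terminal_steps = env.get_steps(behavior_name)
print(behavior_name, len(decision_steps), "agents requesting decisions")
env.close()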

Because the communication protocol is documented, ML-Agents components can evolve independently as long as they conform to the same protocol. This is why they were able to change the default deep learning framework from TensorFlow to PyTorch between ML-Agents versions 1.0 and 2.0 without breaking backwards compatibility. (They did it in Release 10, in case that’s important.) Developers who prefer TensorFlow could continue using it; they are not forced to switch to PyTorch as long as everyone talks the same language.

Functional, capable, flexible. What’s not to love? Well, apparently “performance”. I don’t know the details for Unity ML-Agents bottlenecks but I do know “fast” for a network protocol is a snail’s pace compared to high performance inter-process communications mechanisms such as shared memory.

To work around the bottleneck, the current recommendations are to manually stack things up in parallel. Starting at the individual agent level: multiple agents can train in parallel, if the environment supports it. This explains why the 3D Ball Balancing example scene has twelve agents. If the environment doesn’t support it, we can manually copy the same training environment several times in the scene. We can see this in the Crawler example scene, which has ten boxes, one for each crawler. Beyond that, we now have the capability to run multiple Unity instances in parallel.

All of these feel… suboptimal. The ML-Agents team is aware of the problem and working on solutions but have nothing to announce yet. I look forward to seeing their solution. In the meantime, learning about DOTS has sucked up all of my time. No, not learning… I got sucked into Hardspace:Shipbreaker, a Unity game built with DOTS.

Unity DOTS = Data Oriented Technology Stack

Looking over resources for the Unity ML-Agents toolkit for reinforcement learning AI algorithms, I’ve come across multiple discussion threads about how it has difficulties scaling up to take advantage of modern multicore computers. This is not just an ML-Agents challenge, this is a Unity-wide challenge. Arguably it is even a software development industry-wide challenge. When CPUs stopped getting faster clock rates and started gaining more cores, games had problems taking advantage of them. Historically, while a game engine is running, one CPU core runs at max while the remaining cores may help out with a few auxiliary tasks but mostly sit idle. This is why gamers have focused on single-core performance in CPU benchmarks.

Having multiple CPUs running in parallel isn’t new, nor are the challenges of leveraging that power in full. From established software toolkits to leading edge theoretical research papers, there are many different approaches out there. Reading these Unity forum posts, I learned that Unity is working on a big revamp under the umbrella of DOTS: Data-Oriented Technology Stack.

I came across the DOTS acronym several times without understanding what it was. But after it came up in the context of better physics simulation and a request for ML-Agents to adopt DOTS, I knew I couldn’t ignore the acronym anymore.

I don’t know a whole lot yet, but I’ve got the distinct impression that working under DOTS will require a mental shift in programming. There were multiple references to Unity developers saying it took some time for the concepts to “click”, so I expect some head-scratching ahead. Here are some resources I plan to use to get oriented:

DOTS is Unity’s implementation of Data-oriented Design, a more generalized set of concepts that helps write code that will run well on modern machines with many cores and multiple levels of memory caches. An online eBook for Data-oriented Design is available, which might be good to read so I can see if I want to adopt these concepts in my own programming projects outside of Unity.

And just to bring this full circle: it looks like the ML-Agents team has already started DOTS work as well. However it’s not clear to me how DOTS will (or will not) help with the current gating performance factor: Unity environment’s communication with PyTorch (formerly TensorFlow) running in a Python environment.

Miscellaneous Gems from ML-Agents Resources

I browsed through ML-Agents GitHub issues and forums looking for an explanation of why there hasn’t been a release in half a year, and I came up empty-handed on an answer. But the time was not all wasted, since I found a few other scattered tidbits that might be useful for the future.

The simulation time scale used in most examples is 20, and is the default used by the mlagents-learn script. If the script is not used, the simulation runs in real time and will feel very slow. The tradeoff here is accuracy of physics simulation, as per the comment “If you go too fast, the physics gets kind of wonky, and sometimes objects/agents will go through each other.”
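If I ever drive an environment from the Python low-level API instead of mlagents-learn, it looks like the same time scale can be requested through a side channel. A rough sketch, which I have not tested myself:

# Rough sketch: set the Unity simulation time scale from Python through the
# engine configuration side channel, mimicking mlagents-learn's default of 20.
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.side_channel.engine_configuration_channel import EngineConfigurationChannel

channel = EngineConfigurationChannel()
env = UnityEnvironment(file_name=None, side_channels=[channel])
channel.set_configuration_parameters(time_scale=20.0)   # 20x real time, like the examples
env.reset()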

The official “Hummingbird” tutorial on Unity Learn targets the LTS build of Unity with ML-Agents version 1. It looks like the goal is to update it to work with a new release of ML-Agents, and instructions to get a preview have been posted.

Issue #4129 is pretty old but there is a lot of detail here about why Unity ML-Agents doesn’t necessarily benefit from GPU accelerated neural network training. Since then, ML-Agents has switched from TensorFlow to PyTorch but many of the points might still apply.

But never mind the GPU, ML-Agents can’t even make full use of multicore CPUs like this person’s 16-core Threadripper. The Unity person who responded explained this is on the list of things to improve but there’s nothing to show yet. In the meantime, there are workarounds.

Also on the subject of utilizing parallel hardware, here’s a request for Unity to use Google’s Brax physics engine. Based on my experience I was certain the answer was “No” (the physics engine isn’t something that can “just” be switched around), but this question was valuable for two reasons. First, it led me to look up and learn about Google Brax, a physics engine for reinforcement learning that runs on Google TPUs and CUDA GPUs. Second, Unity actually is investing in work towards a faster physics engine “for the DOTS platform.” Um… what’s DOTS? Time for some reading.

Browsing ML-Agents Resources: GitHub and Forums

I had taken a quick look back through the evolution of Unity’s ML-Agents from its initial prerelease announcement to the present day. The features have been good, but looking at the dates highlighted a conspicuous gap: there have been no releases or announcements in the second half of 2021. This is notable because ML-Agents had been on a rapid release schedule, with versions coming out every few weeks, since the original prerelease announcement. But things had come to a screeching halt. Why?

[Update: Release 19 became available on 2022/1/14, but I have yet to find information on why there was a long delay between 18 and 19.]

In the absence of official announcements, I went poking around through other publicly available information. The first resource was a brief look at the GitHub commit history for the ML-Agents repository. We’ve been warned the “main” branch can be unstable, and I thought that meant it was the active development branch. This is apparently false, as I found hints of additional working branches that are out of public view. For example, the original commit for “Deterministic actions” mentioned “release 19” even though the latest release is currently 18. These mentions of “release 19” had to be removed in merge #5637, whose description mentioned the branch “release_19_docs”, which is not visible to us.

With development progress partially obscured from view, I moved on to looking at currently open GitHub issues. When doing a first glance at an unfamiliar repository, I first look at the most recent items (the default view) and then I sort by “Most Commented” to see the most popular items. Nothing caught my eye as a likely reason. However, it did eliminate the possibility that Unity had killed the project, because team members are still responding to issues. So that’s comforting.

The next resource was to browse the Unity forum for ML-Agents. Scrolling through threads between June and now, I saw multiple references to memory leaks. (“Unity ML 2.0.0 Memory leak”, “Training is more and more slow”, “Memory leak using imitation learning”) Returning to the issues list, I saw #5458 “Memory Leak” is currently open for investigation. However, there’s no comment one way or another on whether this bug (or another) is holding up release 19, so my quest came up empty-handed. Still, I did get the idea that if I run into a memory leak with release 18, I can try downgrading to release 17 to see if that helps. Plus I came across a bunch of other interesting pieces of information.

Notes on ML-Agents Development History (Part 2: Version 1.0 to Present)

Looking back at Unity blog posts and GitHub release notes, we can see ML-Agents’ evolution during the prerelease beta phase. From the initial announcement leading up to an official version 1.0, they added many features promised in the original announcement and made big architectural changes, like how brains fit in the object hierarchy of a Unity project.

On 2020/5/12, ML-Agents reached an official version 1.0, with a package organization that is covered by version compatibility guarantees going forward. This guarantee is significant, because it means users can have better confidence their own projects will keep functioning. It also means more work for Unity, because any future large-scale architectural changes will have to be made in a compatible way.

Another change in ML-Agents development is that they’re no longer writing a Unity blog post for every release. I had thought this merely reflected a slower, more deliberate development pace with fewer changes to announce, but looking over release notes I still see plenty of significant changes. Given this, I suspect the lower blog traffic reflects a change in customer communication priorities inside the Unity organization. Perhaps they’ve moved on to YouTube videos or something? If so, that would be a shame, as I prefer the written word.

In any case, ML-Agents GitHub repository release notes made it clear development continued rapidly:

  • Release 2 (2020/5/20) has minor fixes and is the current “Verified” build.
  • Release 3 (2020/6/10)
  • Release 4 (2020/7/15) added parameter randomization
  • Release 5 (2020/7/31)
  • Release 6 (2020/8/17) updated version requirements: Python now 3.6.1 and NumPy now 1.19.0 in sync with TensorFlow
  • Release 7 (2020/9/21) IActuator abstract classes for generic action spaces. Initial PyTorch implementation.
  • Release 8 (2020/10/14)
  • Release 9 (2020/11/3)
  • Release 10 (2020/11/19) Match3 environment (ML-Agents play Bejeweled!) and PyTorch is now the default.
  • Release 11 (2020/12/21)
  • Release 12 (2020/12/22)

The above releases were summarized in the ML-Agents 2020 End of Year recap blog post, and development continued through 2021:

  • Release 13 (2021/2/24) TensorFlow removed. (--torch-device=cpu to tell PyTorch to use CPU for training. This will be useful later.)
  • Release 14 (2021/3/8)
  • Release 15 (2021/3/17) BufferSensor for agents to observe variable number of entities. MultiAgentGroup interface for training multiple different agents simultaneously, and MA-POCA trainer for them.
  • Release 16 (2021/4/13)
  • Release 17 (2021/4/27): Minimum Unity up to 2019.4. API breaking changes. Multiple behaviors via HyperNetworks
  • Release 18 (2021/6/9): Added colab notebooks.

The version and API changes for release 17 smelled like preparation for a new version, which was confirmed by a blog post talking about training complex cooperative behaviors. This is all very exciting stuff, but I noticed development activity then came to a screeching halt. After years of releases every few weeks (sometimes multiple times in a single month), there hasn’t been anything in the second half of 2021. I don’t know why, but I poked around a bit to see if I could find clues.

[Update: Release 19 became available on 2022/1/14.]

Notes on ML-Agents Development History (Part 1: Up to Version 1.0)

I’ve just installed and tested basic functionality of Unity ML-Agents Release 18. And just before that, I did the same with Release 2 which is also referred to as “Verified 1.0.8”. I was surprised at the changes visible just between these releases. This made me curious about how this package evolved, and I went looking for information from its past.

Most of them were announced on the Unity blog, but some just had GitHub release notes. Here is a compilation of links alongside a few highlights that caught my eye; follow these links for a complete list of changes:

2017/6/26: The earliest public information I could find was Unity announcing their intent to join in AI research and applications. Annoyingly, some of the linked blog posts have since disappeared, apparently in some sort of migration of their blog hosting system. For example the “second part of this blog series” link now leads to a 404 error.

2017/9/18: The ML-Agents Toolkit officially kicks off with version 0.1, describing a general architecture that I’m sure has since evolved and a long list of ambitious ideas they wanted to support. Many of them did come to be! Though of course not all of them, and some have since disappeared.

2017/12/8: Version 0.2 introduced curriculum learning, and launched a community challenge to motivate people to play with the toolkit.

2018/3/15: Version 0.3 introduced imitation learning, multi-brain training, and an optional poll model. Recurrent Neural Networks came in as part of a “Memory-Enhanced Agents” umbrella.

2018/6/18: Version 0.4 allowed training using the Unity editor, no longer requiring a compiled executable. An Udacity nanodegree was introduced, though sadly that’s too rich for my blood. More training environments were added, one (Pyramids) specifically demonstrates the “Curiosity” capability. Curiosity got its own blog post.

2018/9/11: Version 0.5 added a Gym interface and replicating a few environments from OpenAI Gym. Also expanded capability to enable/disable discrete actions, but not clear if it was related to OpenAI Gym.

2018/12/17: Version 0.6 is an architectural revamp changing how ml-agents AI brains fit in the Unity object hierarchy. Introduced “demonstration recorder” for off-line imitation learning. Is that still around?

2019/3/1: Version 0.7 is another big infrastructure change, switching runtime neural network inference from external TensorFlowSharp to Unity’s own Inference Engine (a.k.a. Barracuda) to support more Unity runtime platforms.

2019/4/15: Version 0.8 infrastructure change allows multiple Unity simulations to run in parallel on a single machine. Strangely, this is the recommended approach to take advantage of machines with many processing cores. (Later research found that Unity is working to improve multicore performance across the board, not just ml-agents, with something called DOTS.)

2019/8/1: Version 0.9 (release notes) is the first of two releases focused on throughput and efficiency.

2019/9/30: Version 0.10 finished what 0.9 started. Improving sample throughput (asynchronous environments) and sample efficiency via GAIL (0.9) and SAC (0.10) algorithms.

2019/11/4: Version 0.11 (release notes) changed again the brain’s place in Unity object hierarchy.

2019/12/2: Version 0.12 (release notes) moved from TensorFlow 1 to 2 via the TF1 compatible interfaces. It appears this work was never finished, ml-agents moved to PyTorch instead of finishing TF2 migration.

2020/1/8: Version 0.13 (release notes)

2020/2/28: Version 0.14 now has ability to train via adversarial self-play. Includes a short history of learning from self-play.

2020/3/6: Not a version, but this is when ml-agents got serious enough to get a course up on Unity Learn (Hummingbirds) as well an “AI for Beginners” course on Unity Learn Premium.

2020/3/18: Version 0.15 (release notes) wrapped up a lot of housekeeping in preparation for 1.0 release.

Development focus for ml-agents changed to more refinement after 1.0 release, along with corresponding reduction in blog announcements.

Notes on Installing Unity ML-Agents (Release 18)

I’m dipping my toes into playing with deep reinforcement learning via Unity’s ML-Agents package. I made my first run with the safest, most mature option, “Verified Package 1.0.8”, which maps to Release 2 in the ML-Agents repository versioning scheme. No problems were encountered during installation, and I was able to run the 3D balancing ball project in the Getting Started guide. From there I could either explore Release 2 further or try a more adventurous release. I chose the latter and proceeded to install ML-Agents Release 18.

Doing this experiment on the same machine meant I had to keep the two installations separate. Unity Hub is already well suited to keeping distinct versions isolated so they can run in parallel (Unity 2019.4.25f1 for release 18), though there’s a potential point of conflict if the Unity editors require different versions of Visual Studio Community Edition for editing code. On the Python side, Anaconda is well suited to keeping Python environments separate. Since files are referenced by directory, though, I cloned the ml-agents GitHub repository separately for each release instead of trying to switch back and forth within the same directory.

I very much appreciated Unity’s project documentation, as my installation and Getting Started process went just as smoothly this time. I didn’t expect to notice much difference between release 2 and 18, but even just in installation and Getting Started I saw they’ve made changes. The biggest one that caught my eye is that ml-agents switched from TensorFlow to PyTorch sometime between those two releases. There are other, smaller changes; the most welcome one to me is a much more comprehensive collection of example configurations in the release_18 /config/ subdirectory. Release 2 had only a handful of files, while release 18 has a far larger directory tree to let people (like me) have more than one starting point.

I’m not quite sure where to go from here, but given how well documented ml-agents appears to be, I thought it would be interesting to take a quick look back to see where they’ve been.

Notes on Installing Unity ML-Agents (Release 2)

I thought it would be fun to play with reinforcement learning via Unity ML-Agents. The official product landing page sends us to the ml-agents repository on GitHub. And just like every other repository, it’s always a good idea to look over the README.md to understand their branch organization. Especially before we start cloning anything.

And indeed, the README includes a handy chart of releases. As of this writing there are eight releases listed plus main which is labeled as unstable. I’m glad I didn’t blindly clone main! Of the eight stable releases, six are named “Release 13” to “Release 18” inclusive. The final two are named “Verified Package 1.0.7” and “Verified Package 1.0.8”.

The “Verified” label is a guarantee of safety in the lifecycle of Unity packages. Therefore the most recent release with the highest guarantee of functionality is “Verified Package 1.0.8”. In Unity’s world, these verified packages are good enough for commercial production use. If our needs aren’t quite that rigorous, we can use the builds labeled “Release.” These numbers are explained in the ML-Agents versioning page, and it’s something we can play with if we aren’t shouldering the weight of commercial Unity production.

I think I’m fine playing with more recent “Release” builds, but I wanted to start with the build carrying the strongest guarantees to make sure I can at least get that working. That meant cloning the build labeled “Verified Package 1.0.8”, which maps to “Release 2.”

In order to open the Unity project that is a part of this release, I wanted to get the version of Unity that exactly matched the version number in ProjectVersion.txt: 2018.4.17f1. When I tried to install Unity 2018 in Unity Hub, it offered me 2018.4.36f1, because that was the most recent supported version. To match versions exactly, I had to click the download archive link and look for 2018.4.17 under 2018 builds. (It was released February 11th, 2020.) Once found, I could click the “Unity Hub” button to prompt Unity Hub to install the build on my machine.

While Unity installed, I cloned the repository tagged release_2 and installed corresponding Python binaries. I encountered no problems following installation directions for this release, though there were slight modifications as I used Anaconda Individual to manage my Python virtual environments. I had the option of installing the locally cloned versions of the ml-agents-envs and ml-agents packages and I did so. I noticed that the installation had TensorFlow in CPU-only mode, but running without GPU acceleration is perfectly OK for a starting point.

Once Unity Editor 2018.4.17 was installed, I used it to open the Project directory of my cloned release_2 repository. It opened without errors. I proceeded to the Getting Started Guide for this release and verified I had basic functionality, both running the pretrained 3D Balance Ball model and training a model of my own. The training was pretty quick: it took just under 8 minutes on the Core i5-7300HQ CPU of my Dell 7577 laptop plugged into an AC power adapter.

Encouraged by this success, I proceeded to try Release 18 as well.

Switching Back to Unity ML-Agents

It was quite enlightening for me to read Deep Reinforcement Learning Doesn’t Work Yet. And to be honest, a little depressing as well. I was vaguely aware of the challenges involved but only in a general sense. Just small tidbits here and there over the past few years, as I looked at this field with interest. Now that I finally got around to looking at reinforcement learning in more detail, I realized that it was overly optimistic of me to expect all major problems to have been solved by now.

My original motivation for getting into reinforcement learning was to make my Sawppy an autonomous rover. Based on what I’ve learned so far, my original hope for Sawppy intelligence via reinforcement learning is extremely ambitious and still quite far away. If I want to do some deep RL projects more likely to succeed in the near term, I probably shouldn’t put them on a real physical rover. In all likelihood, whatever can be accomplished on a real robot using deep reinforcement learning today could be done faster and more easily with some other AI technique.

It would certainly be nice if some aspect of Sawppy intelligence will eventually result in a research project that can contribute to the state of the art. But I’m not so arrogant as to assume I can accomplish that feat and certainly not as my first project in reinforcement learning. I’ll aim for something simple as my starting point. Got to crawl before I can walk, and all that.

Transferring reinforcement learning from a simulator to work in the real world is still a lot to tackle. So I’m going to look at a simulated world and stay within that simulated world while I learn the ropes. And before I can realistically think about contributing to algorithm advancements, I should get familiar with applying existing implementations of reinforcement learning. All of these new priorities turned my attention back to the game world of Unity ML-Agents.

Today I Learned: MuJoCo Is Now Free To Use

I’ve contemplated going through OpenAI’s guide Spinning Up in Deep RL. It’s one of many resources OpenAI has made available, and it builds upon the OpenAI Gym system of environments for training deep reinforcement learning agents. They range from very simple text-based environments, to 2D Atari games, to full 3D environments built with MuJoCo, whose documentation explains that the name is shorthand for the type of interactions it simulates: “Multi-Joint dynamics with Contact”.
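All of those environments, MuJoCo-based or not, share the same small Gym interface, which is part of the appeal. Here is a minimal sketch of that interaction loop in Python using CartPole, since it requires no MuJoCo installation; a real agent would choose actions instead of sampling them randomly.

# Minimal OpenAI Gym interaction loop using the classic (pre-0.26) API.
# CartPole is used here because it does not require MuJoCo.
import gym

env = gym.make("CartPole-v1")
observation = env.reset()
total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()                   # random agent, just to show the loop
    observation, reward, done, info = env.step(action)
    total_reward += reward
env.close()
print("episode reward:", total_reward)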

I’ve seen MuJoCo mentioned in various research contexts, and I’ve inferred it is a better physics simulation than something that we would find in, say, a game engine like Unity. No simulation engine is perfect, they each make different tradeoffs, and it sounds like AI researchers (or at least those at OpenAI) believe MuJoCo to be the best one to use for training deep reinforcement learning agents with the best chance of being applicable to the real world.

The problem is that, when I looked at OpenAI Gym the first time, MuJoCo was expensive. This time around, I visited the MuJoCo page hoping they’d launched a more affordable licensing tier, and there I got the news: sometime in the past two years (I didn’t see a date stamp) DeepMind acquired MuJoCo and intends to release it as free open source software.

DeepMind was itself acquired by Google and, when the collection of companies was reorganized, it became one of several companies under the parent company Alphabet. At a practical level, it means DeepMind has indirect access to Google money for buying things like MuJoCo. There’s lots of flowery wordsmithing about how opening up MuJoCo will advance research; what I care about is the fact that everyone (including myself) can now use MuJoCo without worrying about the licensing fees it previously required. This is a great thing.

At the moment MuJoCo is only available as compiled binaries, which is fine enough by me. Eventually it is promised to be fully open-sourced at a GitHub repository set up for the purpose. The README of the repository made one thing very clear:

This is not an officially supported Google product.

I interpret this to mean I’ll be on my own to figure things out without Google technical support. Is that a bad thing? I won’t know until I dive in and find out.

Unity Machine Learning Agents Almost Within My Reach

While poking around Google’s Machine Learning Crash Course, I found that they have released a TensorFlow library for building agents with deep reinforcement learning. This might be fun but I don’t know enough about the field to make use of that library yet. It also reminded me to take another look at game engine Unity 3D’s development in this area. A lot has happened!

I first took a quick glance at Unity ML-Agents more than two years ago. At the time, the project was still an experimental thing for Unity and a lot was still in flux. Since I didn’t know much about working in Unity or in reinforcement learning, that was too many variables in flux for my taste. A year later, Unity ML-Agents reached an official version 1.0, but it was still technically a preview technology. Not long after that, it became a “verified” package for use with the Unity 2020.3 LTS build, signifying a mature tool. As part of being a verified package for use with Unity LTS, ML-Agents got some nice things, like an official Unity technology landing page and a few pieces of curriculum posted to Unity Learn to help people get started.

The primary focus of Unity ML-Agents is for creating agents in the virtual world of a Unity game. Not necessarily for real-world robots which is where my interests lie. This is an important caveat because the Unity physics engine is not an accurate representation of the real world, and reinforcement learning agents are notorious for exploiting flaws in virtual engines to do “impossible” things. But that’s no reason to give up on Unity, which can still be a useful tool for robotics research. These caveats are just some tradeoffs amongst many more to keep in mind.

During the time Unity evolved their ML-Agents library, I occasionally dabbled in Unity with projects like Bouncy Bouncy Lights. I’m not bold enough to call myself a Unity developer yet, but I’m no longer completely overwhelmed by the Unity editor user interface as I once was. I haven’t done much more in Unity because I haven’t felt particularly motivated to make games. But ML-Agents? That looks like pretty good motivation for me to put serious effort into understanding reinforcement learning.

Manual Control Square Peg in a ROS Round Hole

I have converted my test code reading a joystick via the ESP32 ADC peripheral to generate ROS-like data messages as output. There were some hiccups along the way, but it’s good enough to proceed. The next step in the pipeline is to interpret those joystick commands in a little Sawppy rover context and generate a ROS-like robot chassis velocity command (cmd_vel) as output. As I started tackling the math, I realized I’m going to face a recurring theme: I have some specific concepts around manual control of a little Sawppy rover, but those concepts aren’t a good fit for ROS conventions like REP103.

The first concept is velocity. The ROS command for linear velocity is specified in meters per second, and there’s no good way to say “go as fast as you can go”. A value in meters per second will never be accurate on an open-loop DC motor system like micro Sawppy, for multiple reasons. The first and most obvious one is battery power: full speed will be faster on a fully charged battery. The next problem is robot geometry. When traveling in an arc, Sawppy’s top speed is constrained by the top speed of the outer-most wheel. As the arc tightens, that outer-most wheel running as fast as it can would still constrain the rover to a lower top speed. So Sawppy’s top speed varies based on other conditions in the system.

The second concept is rotation. The ROS command for angular velocity is specified in radians per second, and there’s no good way to say “pivot about the rover’s inner wheel” because the center of the turn is dictated by the combination of linear and angular velocity. Thus the center of the turn is a result of (instead of a part of) the command. In my experience, people have a hard time working with Sawppy’s turning radius when the center of turn is inside the rover’s footprint. People have an easy time with turns that resemble the cars we see on the roads every day. People also have no problem with Sawppy pirouetting around its center axis. But in between those realms, people get confused pretty quickly. When I created the wired controller for Sawppy, I divided the control space into two regimes: one mode where Sawppy pivots in place, and another mode where Sawppy’s turning radius is constrained to be no tighter than pivoting on one of the middle wheels. Deliberately constraining Sawppy’s maneuverability to a subset of its capabilities was a worthwhile tradeoff to reduce user confusion.

But the whole reason for ROS is autonomy, and autonomous robots have no problem dealing with all degrees of freedom in robot movement, so there’s no reason to block off a subset of capability. However, that also means if I’m structuring the manual joystick control system to follow ROS conventions, I have no easy way to block off that subset for human friendliness. It is certainly possible to do so with lots of trigonometry, but it always makes my head hurt. ROS works in radians and I have yet to develop a good intuitive sense for thinking about angles in radians; all of my mental geometry has been done in degrees.
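At least the core relationship is simple: for a cmd_vel-style command, the turn radius is just linear velocity divided by angular velocity. Here is a rough sketch of the translation I have in mind between a human-friendly “speed plus turn radius” command and cmd_vel-style values; the speed and radius numbers are placeholders, not actual Sawppy limits.

# Rough sketch: translate a human-friendly "speed plus turn radius" command
# into cmd_vel-style linear (m/s) and angular (rad/s) velocities, and back.
# The example numbers are placeholders, not actual Sawppy limits.
import math

def to_cmd_vel(speed_m_s, turn_radius_m):
    angular_rad_s = speed_m_s / turn_radius_m   # angular velocity = linear / radius
    return speed_m_s, angular_rad_s

def turn_radius(linear_m_s, angular_rad_s):
    return linear_m_s / angular_rad_s           # center of turn falls out of the command

linear, angular = to_cmd_vel(0.3, 1.5)          # 0.3 m/s along a 1.5 m radius arc
print(angular, "rad/s =", math.degrees(angular), "deg/s")
print("turn radius:", turn_radius(linear, angular), "m")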

These are but two of the problems on the road ahead for Sawppy in its first draft as a manual remote-controlled vehicle. I hold out hope that this up-front pain will make Sawppy software work easier in the future as I adapt ROS nodes to give Sawppy autonomy. But it does mean making today’s manual control Sawppy a square peg trying to fit into a round hole, with lots of imprecision that will fall to a “best effort” basis instead of rigidly complying with the expectations of ROS. But I’ll stay with it to the best of my abilities, which means revisiting rover geometry math in this new context.

Remaining To-Do For My Next Unity 3D Adventure

I enjoyed my Unity 3D adventure this time around, starting from the LEGO microgame tutorials through to the Essentials pathway and finally venturing out and learning pieces at my own pace in my own order. My result for this Unity 3D session was Bouncy Bouncy Lights and while I acknowledge it is a beginner effort, it was more than I had accomplished on any of my past adventures in Unity 3D. Unfortunately, once again I find myself at a pause point, without a specific motivation to do more with Unity. But there are a few items still on the list of things that might be interesting to explore for the future.

The biggest gap I see in my Unity skill is creating my own unique assets. Unity supports 2D and 3D creations, but I don’t have art skills in either field. I’ve dabbled in Inkscape enough that I might be able to build some rudimentary things if I need to, and for 3D meshes I could import STL so I could apply my 3D printing design CAD skills. But the real answer is Blender or similar 3D geometry creation software, and that’s an entirely different learning curve to climb.

Combing through Unity documentation, I learned of a “world building” tool called ProBuilder. I’m not entirely sure where it fits in the greater scheme of things, but I can see it has tools to manipulate meshes and even create them from scratch. It doesn’t claim to be a good tool for doing so, but supposedly it’s a good tool for whipping up quick mockups and placeholders. Most of the information about ProBuilder focuses on UV mapping, which meant little to me at the start. All the ProBuilder documentation assumes I already know what UV means, and all I could tell was that UV didn’t mean ultraviolet in this context. Fortunately, searching for UV in the context of 3D graphics turned up this Wikipedia article on UV mapping. There is a dearth of written documentation for ProBuilder; what little I found all points to a YouTube playlist. Maybe I’ll find the time to sit through it later.

I skimmed through the Unity Essentials sections on audio because Bouncy Bouncy Lights was to be silent, so audio is still on the to-do list. And like 2D/3D asset creation, I’m neither a musician nor a sound engineer. But if I ever come across motivation to climb this learning curve, I know where to pick up where I left off. I know I have a lot to learn, since even meager audio experimentation already produced one lesson: AudioSource.Play stops any prior playback of the sound on that source. If I want instances of the same sound to overlap each other, I have to use AudioSource.PlayOneShot.

Incorporating video is an interesting way to make Unity scenes more dynamic, without adding complexity to the scene or overloading the physics or animation engine. There’s a Unity Learn tutorial about this topic, but I found that video assets are not incorporated in WebGL builds. The documentation said video files must be hosted independently for playback by WebGL, which adds to the hosting complications if I want to go down that route.

WebGL
The Video Clip Importer is not used for WebGL game builds. You must use the Video Player component’s URL option.

And finally, I should set aside time to learn about shaders. Unity’s default shader is effective, but it has become quite recognizable, and there are jokes about the “Unity Look” of games that never modified default shader properties. I personally have no problem with this, as long as the gameplay is good. (I highly recommend the Overcooked game series, which is built in Unity and has the look.) But I am curious about how to make a game look distinctive, and shaders are the best tool to do so. I found a short Unity Learn tutorial, but it doesn’t cover very much before dumping readers into the Writing Shaders section of the manual. I was also dismayed to learn that we don’t have IntelliSense or similar helpfulness in Visual Studio when writing shader files. This is going to be a course of study all on its own, and again I await good motivation to go climb that learning curve.

I enjoyed this session of Unity 3D adventure, and I really loved that I got far enough this time to build my own thing. I’ve summarized this adventure in my talk to ART.HAPPENS, hoping that others might find my experience informative in video form in addition to written form on this blog. I’ve only barely scratched the surface of Unity. There’s a lot more to learn, but that’ll be left to future Unity adventures because I’m returning to rover work.

Venturing Beyond Unity Essentials Pathway

To help beginners learn how to create something simple from scratch, Unity Learn set up the Essentials pathway, which I followed. Building from an empty scene taught me a lot of basic tasks that had already been done for us in the LEGO microgame tutorial template, enough that I felt confident starting to build my own project for ART.HAPPENS. It was a learning exercise, running into one barrier after another, but I felt I knew the vocabulary to search for answers on my own.

Exercises in the Essentials pathway got me started on the Unity 3D physics engine, with information about setting up colliders and physics materials. Building off the rolling-ball exercise, I created a big plane for balls to bounce around on and increased the bounciness for both ball and plane (see the sketch below). The first draft was a disaster: unlike real life, it is trivial to build a perfectly flat plane in a digital world, so the balls kept bouncing in the same spot forever. I had to introduce a tilt to make the bounces more interesting.
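
I actually set this up through physic material assets in the editor rather than code, but the same settings can be sketched in a script to show which knobs are involved (the class name is just illustrative):

  using UnityEngine;

  public class MakeBouncy : MonoBehaviour
  {
      void Start()
      {
          // A very bouncy physic material; "Maximum" combine keeps the bounce
          // lively even when the other surface is less bouncy.
          PhysicMaterial bouncy = new PhysicMaterial("Bouncy");
          bouncy.bounciness = 1f;
          bouncy.bounceCombine = PhysicMaterialCombine.Maximum;

          // Apply it to whatever collider this script is attached to.
          GetComponent<Collider>().material = bouncy;
      }
  }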

But while bouncing balls look fun (title image), they weren’t quite good enough. I thought adding a light source might help, but that still wasn’t interesting enough. Switching from ball to cube gave me a clearly illuminated surface with falloff in brightness, which I thought looked more interesting than a highlight point on a ball. However, cubes don’t roll and would stop on the plane. For a while I was torn: cubes look better but spheres move better. Which way should I go? Then came a stroke of realization: this is a digital world and I can change the rules if I want. So I used a cube mesh for visuals but attached a sphere collider for physics. Now I have objects that look like cubes but bounce like balls, something nearly impossible in the real world but trivial in the digital one.
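
In the editor this was just a cube whose Box Collider got swapped for a Sphere Collider, but a small script can sketch the same idea (this is not the exact code in Bouncy Bouncy Lights):

  using UnityEngine;

  public class CubeThatBouncesLikeABall : MonoBehaviour
  {
      void Awake()
      {
          // Keep the cube mesh for visuals, but give physics a sphere:
          // remove the default Box Collider and add a Sphere Collider.
          BoxCollider box = GetComponent<BoxCollider>();
          if (box != null)
          {
              Destroy(box);
          }
          gameObject.AddComponent<SphereCollider>();

          // A Rigidbody lets the physics engine bounce it around.
          if (GetComponent<Rigidbody>() == null)
          {
              gameObject.AddComponent<Rigidbody>();
          }
      }
  }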

To make these lights show up better, I wanted a dark environment, which turned out to be a multi-step procedure. First I did the obvious: delete the default light source that came with the 3D project template. Then I had to look up environment settings to turn off the skybox. That still wasn’t dark until I edited the camera settings to change the default background color to black. Once everything went black I noticed the cubes weren’t immediately discernible as cubes anymore, so I turned the lights back up… but then decided it was more fun to start dark and turned them back off.
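
All of those were editor settings changes rather than code, but for my own future reference, roughly the same settings expressed in a script would look like this (a sketch, not what the project actually uses):

  using UnityEngine;
  using UnityEngine.Rendering;

  public class DarkenScene : MonoBehaviour
  {
      void Start()
      {
          // Replace the skybox with a solid black camera background.
          Camera.main.clearFlags = CameraClearFlags.SolidColor;
          Camera.main.backgroundColor = Color.black;

          // Remove skybox-driven ambient light so the scene stays dark
          // until the colored lights come back on.
          RenderSettings.skybox = null;
          RenderSettings.ambientMode = AmbientMode.Flat;
          RenderSettings.ambientLight = Color.black;
      }
  }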

I wanted to add user interactivity, but realized the LEGO microgame used an entirely different input system than standard Unity, and nothing on the Essentials path taught me about user input. Searching around on Unity Learn, I got very confused by contradictory information until I eventually figured out there are two Unity user input systems: the “Input Manager”, which is the legacy system, and its candidate replacement, the “Input System Package”, which is intended to solve problems with the old system. Since I had no investment in the old system, I decided to try the new one. Unfortunately, even though there’s a Unity Learn session, I still found it frustrating, as did others. I got far enough to add interactivity to Bouncy Bouncy Lights, but it wasn’t fun. I’m not even sure I should be using it yet, seeing how none of the microgames did; now that I know what to look for, I can see the LEGO microgame used the old input system. Either way, there’s more climbing of the learning curve ahead. [UPDATE: After I wrote this post, but before I published it, Unity released another tutorial for the new Input System. Judging by that demo, Bouncy Bouncy Lights is using it incorrectly.]
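
For my own notes, the most bare-bones way I found to poll the new Input System is direct device polling, which skips the InputAction asset workflow the tutorials push. This requires the Input System package to be installed and enabled in Player settings, and it is only a sketch; as the update above suggests, I clearly haven’t mastered the proper way to use the system yet:

  using UnityEngine;
  using UnityEngine.InputSystem;   // the Input System package

  public class PollNewInput : MonoBehaviour
  {
      void Update()
      {
          // Direct device polling: the simplest (if least flexible) way to
          // read the new Input System. Null checks matter because a device
          // may not exist, e.g. no keyboard on a phone.
          if (Keyboard.current != null && Keyboard.current.spaceKey.wasPressedThisFrame)
          {
              Debug.Log("Space pressed");
          }
          if (Mouse.current != null && Mouse.current.leftButton.wasPressedThisFrame)
          {
              Debug.Log("Click at " + Mouse.current.position.ReadValue());
          }
      }
  }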

The next to-do item was to add the title and interactivity instructions. After my frustration exploring a new input system, I went back to the LEGO microgame and looked up exactly how it presented its text. I learned it used a system called TextMesh Pro, which thankfully has a Unity Learn section as well as a PDF manual installed as part of the asset download. Following those instructions, it was straightforward to put up some text using the default font. After the input system frustration, I didn’t want to get much more adventurous than that.
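
For the record, driving that text from a script is equally simple. A sketch, assuming a TextMesh Pro text object already placed in the scene and dragged into the field in the Inspector:

  using TMPro;
  using UnityEngine;

  public class SetTitleText : MonoBehaviour
  {
      // Drag the TextMesh Pro text object here in the Inspector.
      public TMP_Text title;

      void Start()
      {
          title.text = "Bouncy Bouncy Lights";
      }
  }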

I had debated when to present the interactivity instructions. Ideally I would present them just as the audience gets oriented, recognizes the default setup, and possibly starts getting bored and ready to move on, so the interactivity arrives in time to keep their attention. But I have no idea when that would be. When I read the requirement that the title of the piece should appear in the presentation, I added it as a title card shown before the bouncing lights. And once I had a title card, it was easy to add another card with the instructions, also shown before the bouncing lights. The final twist was the realization that I shouldn’t present them as static cards that fade out: since I already had all these physics interactions in place, the cards are presented as falling, bouncing objects in their own right.

The art submission instructions said to include my name and a way for people to reach me, so I put my name and newscrewdriver.com at the bottom of the screen using TextMesh Pro. Then it occurred to me the URL should be a clickable link, which led me down the path of finding out how a Unity WebGL title can interact with the web browser. There seemed to be several different deprecated ways to do it, but they all pointed to the current recommended approach, and now my URL is clickable! For fun, I added a little spotlight effect when the mouse cursor is over the URL.
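
I won’t pretend the snippet below is exactly what the recommended approach boils down to, but the simplest version I know of is Application.OpenURL() triggered from a click handler. A sketch, assuming the URL text object has a collider so OnMouseDown fires:

  using UnityEngine;

  public class ClickableUrl : MonoBehaviour
  {
      public string url = "https://newscrewdriver.com";

      // OnMouseDown needs a collider on this object to register clicks.
      // On WebGL, the browser's pop-up rules apply, so results may vary.
      void OnMouseDown()
      {
          Application.OpenURL(url);
      }
  }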

The final touch was to modify the presentation HTML to suit the Gather virtual space used by ART.HAPPENS. By default, a Unity WebGL build generates an index.html file that puts the project inside a fixed-size box; outside that box are the title and a button to go full screen. I didn’t want the full-screen option for presenting this work in Gather, and I wanted to fill my <iframe> instead of a little box within it. My CSS layout skills are still pretty weak and I couldn’t figure it out on my own, but I found this forum thread, which taught me to replace the <body> tag with the following:

  <body>
      <div class="webgl-content" style="width:100%; height:100%">
        <div id="unityContainer" style="width:100%; height:100%">
        </div>
      </div>
  </body>

I don’t understand why we need to put 100% styles on two elements before it works, but hopefully I will understand once I get around to my long-overdue study session on CSS layout. The final result of my project can be viewed at my GitHub Pages hosting location. It is a satisfying result, but there is a lot more of Unity left to learn.