Icon Fun with Segoe MDL2

For my recently completed test program, I wanted arrows to indicate motion along the X/Y/Z axes. I also wanted a nice icon for the button that performs homing operations, plus a few other practical icons. Thankfully they are all easily available to UWP applications via Segoe MDL2.

One of the deepest memories from my early exposure to a graphical desktop environment is the novelty of vector fonts. And more specifically, fonts that are filled with fun decorative elements like dingbats. I remember a time when such vector fonts were intellectual property that needed to be purchased like software, so without a business need, I couldn't justify buying my own novelty font.

The first font I remember being freely available, and that didn't quickly disappear, was Webdings, which Microsoft started bundling with Windows sometime around the Windows 98 timeframe. I no longer remember if earlier versions of Windows came bundled with their own novelty fonts, but I have fond memories of spending far too much time scrolling through my Webdings options.

Times have moved on, and so have typefaces and iconography. For their UWP application platform, Microsoft provided an accompanying resource for Fluent Design icons called Segoe MDL2. And again, I had a lot of fun scrolling through that page to see my options.

I was initially surprised to see so many battery icons, but in hindsight it made sense as something important for building UI on portable computing devices. There were several variants of battery styling, including vertical and horizontal orientations in charging, not charging, and battery saver states. And each style had icons for eleven charge levels (0% through 100%, spaced 10% apart). Some of the code point layouts were a little odd. For example, Battery0 (0xE850) through Battery9 (0xE859) were adjacent to each other, but a full Battery10 was some distance away at 0xE83F. I don't know why, but it adds an unnecessary step when converting a battery percentage value to a corresponding icon in Segoe MDL2.
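As an illustration, here is a minimal sketch of that conversion using the code points above; the helper name is mine, not part of any API:

```csharp
// Map a battery percentage (0-100) to a Segoe MDL2 Assets glyph.
// Battery0 (0xE850) through Battery9 (0xE859) are contiguous, but the
// full Battery10 icon lives at 0xE83F -- hence the special case.
static string BatteryGlyph(int percent)
{
    int level = Math.Max(0, Math.Min(100, percent)) / 10; // 0..10
    int codePoint = (level == 10) ? 0xE83F : 0xE850 + level;
    return char.ConvertFromUtf32(codePoint);
}
```

The returned string can then go into a TextBlock whose FontFamily is set to "Segoe MDL2 Assets".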

The one that made me laugh out loud? 0xEBE8, a bug.

First Project Prototype Has Poor Precision

I've been bringing pieces together to build a machine that takes distance measurements visually, with the primary motivation of measuring dimensions of circuit board features. Mechanically, the machine is the three-axis motion control of a retired 3D printer, with a webcam sitting where the print nozzle used to be. It is controlled from a PC attached via USB, running software I wrote as an exercise to learn UWP development. Once I figured out enough of the UWP layout engine, I could put up some user interface controls and take the thing on its first test drive.

Verdict: The idea has promise, but this first draft implementation is a bucket of fail.

For the first test, I taped a Digi-Key PCB ruler onto the Y-axis carriage where the print bed used to be installed. The ruler has clearly labeled dimensions and features representative of components on a circuit board. The first and easiest test is to make sure my movement distances match the ruler distances, and this machine flunked its first test.

I added a little circle in the middle of the camera's field of view to serve as a reference. I placed that circle over the 10 cm mark and commanded a move of 1 cm along the negative X axis. I expected the little circle to sit above the 9 cm mark as a result, but it actually sat at roughly 8.95 cm: a traveled distance of 1.05 cm, roughly 5% longer than commanded.

[Image: camera view after the test move, with the reference circle near the 8.95 cm mark]

My first hypothesis was that this is an effect of the camera's USB cable tugging on the camera as the print carriage moved, changing the viewing angle; it is, after all, merely held by tape on this first draft. So I repeated the experiment along the Y axis, which does not move the camera carriage and would eliminate the flexible tape as a variable. Again I saw a 5-6% overshoot.

When two measurement tools disagree, bring in a third opinion to break the tie. I pulled out my digital caliper, measured the ruler markings, and they matched, indicating the problem is indeed with the printer mechanicals. For whatever reason, this motion control carriage is moving further than commanded. Perhaps the belts had stretched out? Whatever the reason, this behavior could very well be why the printer was retired. I think I can compensate by changing the steps-per-millimeter setting in the printer firmware; all I need is a precise measurement of actual movement.
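The correction itself is simple arithmetic. Here's a sketch with hypothetical numbers, assuming a Marlin-style firmware where M92 sets steps-per-millimeter and M500 saves settings:

```csharp
// Correct steps-per-mm from a measured overshoot (hypothetical numbers).
// If a commanded 10.00 mm move actually travels 10.50 mm, the axis moves
// 5% too far, so steps-per-mm must shrink by the same ratio.
double currentStepsPerMm = 80.0;  // hypothetical current X-axis setting
double commandedMm = 10.0;
double measuredMm = 10.5;
double corrected = currentStepsPerMm * (commandedMm / measuredMm); // ~76.19
string gcode = $"M92 X{corrected:F2}"; // Marlin: set X-axis steps-per-mm
// Follow with "M500" to persist the new value, if the firmware supports it.
```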

Which brings up the other half of my problem: I can only get plus or minus half a millimeter precision with this webcam configuration. I can’t bring the camera any closer to the workpiece, because this webcam’s autofocus fails to focus at such close ranges.

I see two paths forward to address the optical precision shortcomings:

  1. Use another camera, such as a “USB microscope”. Most of the cheap ones are basically the electronics of a webcam paired with optics designed for short distance focus.
  2. Keep the camera but augment the optics with a clip-on macro lens. These are sold to let cell phone cameras focus on objects closer than they normally could.

Either should allow me to measure more accurately and tune the steps-per-millimeter value. While contemplating my options, I went back into my UWP application to play around with a few other features.

Quick Notes on UWP Drawing

Because I wanted to handle keyboard events, I created a UserControl that packages the CaptureElement displaying the camera preview. Doing this allowed an easy solution to another problem I foresaw but didn't immediately know how to solve: how do I draw reference marks over the camera preview? I'd definitely need something to mark the center, maybe additional marks for horizontal/vertical alignment, and, if I'm ambitious, an on-screen ruler to measure distance.

With a UserControl, drawing these things became trivial: I can include graphical drawing elements as peers of the CaptureElement in my UserControl template, and we are off to the races.

Or so I thought. It is more accurate to say I was off to an entirely different set of problems. The first was making marks legible regardless of the camera image. That means I can't just use a bright color, because that would blend into a bright background. Likewise, a dark color would be lost in a dark background. What I need is a combination of high-contrast colors to ensure visibility independent of background characteristics. I had thought: easy! Draw two shapes with different stroke thicknesses. I first drew a rectangle with a thicker stroke, like this blue rectangle:

[Image: blue rectangle drawn with StrokeThickness 24]

I then drew a yellow rectangle with half the stroke thickness, expecting it to sit in the center of the blue stroke. But it didn't! The yellow stroke covered the outer half of the blue stroke, leaving only the inner half, instead of my expected yellow line with blue on either side. Even though this was unexpected, it was still acceptable because it gave me the contrasting colors I wanted.

[Image: yellow rectangle with StrokeThickness 12 drawn over the blue rectangle with StrokeThickness 24]
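For reference, here is a minimal sketch of the overlay idea in UserControl XAML; the names and sizes are mine, chosen for illustration. Later children of a Grid draw on top of earlier ones, so the rectangles sit over the camera preview:

```xaml
<Grid>
  <CaptureElement x:Name="PreviewElement" />
  <!-- Thick blue stroke first, thinner yellow stroke over it, so the
       reference mark stays visible on both bright and dark backgrounds. -->
  <Rectangle Width="200" Height="200" Stroke="Blue" StrokeThickness="24" />
  <Rectangle Width="200" Height="200" Stroke="Yellow" StrokeThickness="12" />
</Grid>
```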

This only barely scratches the surface of UWP drawing capability, but I have what I need for today. I've spent far too much time on UWP keyboard navigation and I'm eager to move forward and make more progress. Drawing a single screen element is fun, but to be useful these elements need to coexist with others, which means layout comes into the picture.

User Interface Taking Control

Once I finally figured out that keyboard events require objects derived from UWP’s Control class, the rest was pretty easy. UWP has a large library of common controls to draw from, but none really fit what I’m trying to present to the user.

What came closest is a ScrollViewer, designed to present information that does not fit on screen, letting the user scroll around the full extents much as my camera on a 3D printer carriage can move around the XY extents of the printer. However, the rendering mechanism is different. ScrollViewer is designed to let me drop in a large piece of content (for example, a very large or high-resolution image) and handle the rest independently. That's not what I have here: in order for scrolling to be mirrored as physical motion of the 3D printer carriage, I need to be involved in the process.

Lacking a suitable fit in the list of stock controls, I proceeded to build a simple custom control (based on the UserControl class) that is a composite of other existing elements, starting with the CaptureElement displaying the webcam preview. And unlike on CaptureElement, the OnKeyDown and OnKeyUp event handlers do get called when defined on a UserControl. We are in business!

Once the handler is called, I have the option to handle the event, in this case translating the directional intent into G-code to be sent to the 3D printer, as in the sketch below. My code's behavior fits under the umbrella of “inner navigation”, where a control takes over keyboard navigation semantics inside its scope.
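Here is a minimal sketch of that idea; SendGCodeAsync and the 1 mm step size are hypothetical stand-ins, and setting Handled keeps the arrow keys from also moving focus:

```csharp
using System.Threading.Tasks;
using Windows.System;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Input;

public sealed partial class CameraControl : UserControl
{
    protected override void OnKeyDown(KeyRoutedEventArgs e)
    {
        switch (e.Key)
        {
            case VirtualKey.Left:  _ = SendGCodeAsync("G91\nG0 X-1"); e.Handled = true; break;
            case VirtualKey.Right: _ = SendGCodeAsync("G91\nG0 X1");  e.Handled = true; break;
            case VirtualKey.Up:    _ = SendGCodeAsync("G91\nG0 Y1");  e.Handled = true; break;
            case VirtualKey.Down:  _ = SendGCodeAsync("G91\nG0 Y-1"); e.Handled = true; break;
            default: base.OnKeyDown(e); break;
        }
    }

    // Hypothetical: writes the command to the printer's serial output stream.
    private Task SendGCodeAsync(string command) => Task.CompletedTask;
}
```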

I also have the ability to define special keys inside my scope, called accelerator ([Control] + [some key]) or access ([Alt] + [some key]) keys. I won't worry about them for this first pass, but they can be very powerful when well designed and a huge productivity booster for power users. They also have a big role in making an application keyboard accessible. Again, while that's a very important topic for retail software, it's one of the details I can afford to push aside for a quick prototype. But it'll be interesting to dive in sometime in the future; it's a whole topic in and of itself. There's literally a book on it!

In the meantime, I have a custom UserControl and I want to draw some of my own graphics on screen.

My Problem Was One Of Control

For my computer-controlled camera project, I thought it would be good to let the user control position via arrow keys on the keyboard. My quick-and-dirty first attempt failed, so I dived into UWP documentation. After spending a lot of time reading about the nuts and bolts of keyboard navigation, I finally found my problem, and it's one of those cases where the answer had been in my face the whole time.

When my key press event handlers failed to trigger, the first page I went to was the Keyboard Events page. This page puts a lot of information front and center about the eligibility requirements to receive keyboard events; here's an excerpt from the page:

For a control to receive input focus, it must be enabled, visible, and have IsTabStop and HitTestVisible property values of true. This is the default state for most controls.

My blindness was reading the word “control” in the general sense of a visual element that the user interacts with, which is why I kept overlooking the lesson it was trying to teach me: if I want keyboard events, I have to use something derived from the UWP Control class. In other words, not “control” in the generic sense but “Control” as a specific proper name in the object hierarchy. I would have been better informed about the distinction if they had capitalized Control, or linked to the page for the formal Control class, or done any number of other things to differentiate it as a specific term rather than a generic word. But for whatever reason they chose not to, and I failed to comprehend the significance of the word again and again. It wasn't until I was on the Keyboard Accessibility page that I saw this requirement clearly and very specifically spelled out:

Only classes that derive from Control support focus and tab navigation.

The CaptureElement control (generic name) used in the UWP webcam example is not derived from Control (proper name), and that's why I had not been receiving keyboard events. Once I finally understood my mistake, it was easy to fix.

Tab and Arrow Keys Getting In Each Other's Way

In a UWP application, we have two major ways of navigating UI controls with the keyboard: a linear path using the Tab key (and Shift+Tab to go backwards), and a two-dimensional system using the four arrow keys. Part of what makes learning UWP keyboard navigation difficult is that these two methods are both active simultaneously, and we have to think about what happens when a user switches between them.

Application authors can control tabbing order by setting TabIndex, which also determines the starting point of keyboard navigation, since initial focus goes to the first element in tab order (the lowest TabIndex). Occasionally the author wants to exclude something from the tabbing order, which can be done by setting IsTabStop to false. I thought that was pretty easy until I started reading about TabFocusNavigation. This is where I'm thankful for the animation illustrating the concept on this page, or else I would have been completely lost.

On the arrow navigation side, XYFocusKeyboardNavigation is how authors can disable arrow-key navigation. But it is far from a simple on/off switch: selectively disabling certain parts of the app can have wildly different effects depending on how subtrees of controls interact. That got pretty confusing, and that's even before we start trying to understand how to explicitly control the behavior of each arrow direction with the XY focus navigation strategies. A sketch of the declarative pieces follows.
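Here's a minimal hypothetical layout combining the properties from the last two paragraphs: TabIndex sets the linear tab order, IsTabStop removes an element from keyboard focus entirely, and XYFocusKeyboardNavigation governs arrow-key navigation within this subtree:

```xaml
<StackPanel XYFocusKeyboardNavigation="Enabled">
  <Button Content="First in tab order"  TabIndex="1" />
  <Button Content="Second in tab order" TabIndex="2" />
  <Button Content="Excluded from keyboard focus" IsTabStop="False" />
</StackPanel>
```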

Even with all these complex options, I was skeptical they could cover all possible scenarios. And judging by the fact that there is an entire page devoted to programmatic focus navigation, I guess they didn't manage. When the UI designer wants something that just can't be declared using the existing mechanisms, the application developer has the option of writing code to wrangle keyboard focus.
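The programmatic route boils down to a couple of calls. A minimal sketch, where myControl is a hypothetical element:

```csharp
using Windows.UI.Xaml;
using Windows.UI.Xaml.Input;

// Move focus as if the user had navigated in a direction:
bool moved = FocusManager.TryMoveFocus(FocusNavigationDirection.Right);

// Or hand focus directly to a specific control:
myControl.Focus(FocusState.Programmatic);
```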

But right now my problem isn’t keyboard navigation behaving differently from what I wanted… the problem is that I don’t see keyboard events at all. My answer was elsewhere: I had no control, in both senses of the word.

Learning UWP Keyboard Navigation

After a quick review of UWP keyboard event basics, I opened up the can of worms that is keyboard navigation. I stumbled into this because I wanted to use arrow keys to move the 3D printer carriage holding a camera, but arrow keys already have roles in an application framework! My first effort to respond to arrow keys was a failure, and I hypothesized that my code conflicted with existing mechanisms for arrow keys. In order to figure out how I can write arrow key handlers that coexist with those mechanisms, I must first learn what they are.

Graphical user interfaces are optimized for pointing devices like a mouse, stylus, or touchscreen. But those aren't always available, so the user needs to be able to move around on screen with the keyboard. Hence the thick book of rules summarized by the UWP focus navigation page. As far as I can tell, it is an effort to bring together all the ways arrow keys have been used to move things around on screen. A lot of these are conventions we've become familiar with without really thinking of them as a rigorous system of rules; many were developed by application authors so things “felt right” for their specific application. It reminds me of the English language: native speakers have an intuitive sense of the rules, but trying to write those rules down is hard. More often than not, the resulting rules make no sense when we just read the words.

And, sadly, I think the exact same thing happened in the world of keyboard navigation. But in a tiny bit of good news, we're not dependent on words alone: this page also has a lot of animated graphics to illustrate how keyboard navigation functions under different scenarios. I can't say it makes intuitive sense, but at least seeing the graphics helps me understand the intent being described. It's especially helpful in scenarios where tab navigation interacts with arrow key navigation.

Reviewing UWP Keyboard Routed Events

I wanted keyboard control of the 3D printer carriage, moving the webcam view by pressing arrow keys. I knew enough about the application framework to know I needed to implement handlers for the KeyDown and KeyUp events, but in my first implementation those handlers were never called.

If at first I don’t succeed, it’s time to go to the manual.

The first stop was the overview for UWP Events and Routed Events, the umbrella of event processing architecture that includes keyboard events. It was a useful review, and I was glad to find I hadn't forgotten anything fundamental or missed anything significantly new.

Next stop was the UI design document for keyboard interactions. Again this was technically a review, but I had forgotten much of this information. Keyboard handling is complicated! It's not just an array of buttons; there are a lot of interactions between keys, and conventions built up over decades of what computer-literate users have come to expect.

When I think of key interactions, my first reaction is to think about the Shift and Control keys. These modifier keys change the meaning of another button: the difference between lowercase 'c', uppercase 'C', and Control+C, which might mean anything from “interrupt command line program” to “copy to clipboard.”
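In UWP event handlers, checking a modifier is a matter of querying key state. A small sketch; the handler wiring is hypothetical:

```csharp
using Windows.System;
using Windows.UI.Core;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Input;

private void OnKeyDownHandler(object sender, KeyRoutedEventArgs e)
{
    // Ask the CoreWindow whether Control is currently held down.
    bool ctrlDown = Window.Current.CoreWindow
        .GetKeyState(VirtualKey.Control)
        .HasFlag(CoreVirtualKeyStates.Down);

    if (ctrlDown && e.Key == VirtualKey.C)
    {
        // Handle Control+C here.
        e.Handled = true;
    }
}
```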

But that’s not important to my current project. My desire to move the camera carriage using arrow keys opened up an entirely different can of worms: keyboard navigation.

Webcam Test with UWP

Once I had an old webcam taped to the carriage of a retired 3D printer, I shifted focus to writing code to coordinate the electronic and mechanical bits. My most recent experiments in application development were in the Microsoft UWP platform, so I’m going to continue that momentum until I find a good reason to switch to something else.

Microsoft’s development documentation site quickly pointed me to an example that implements a simple camera preview control. It looks like the intent here is to put up a low resolution preview so the user can frame the camera prior to taking a high resolution still image. This should suffice as a starting point.

In order to gain access to the camera feed, my application must declare the webcam capability. This shows the user a dialog box saying the application wants to access the camera, with options to approve or deny, and the user must approve before I can get video. Confusingly, that was not enough: even after approving camera access, I still saw errors. It turns out that even though I didn't care about audio, I had to request access to the microphone as well. This seems like a bug, but it's a simple enough workaround in the short term.
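In practice that means both device capabilities go into Package.appxmanifest; this is the standard manifest syntax, with the comment reflecting my experience above:

```xml
<!-- Both capabilities were needed before the camera preview would start,
     even though this application only uses video. -->
<Capabilities>
  <DeviceCapability Name="webcam" />
  <DeviceCapability Name="microphone" />
</Capabilities>
```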

Once that was in place, I got a low resolution video feed from the camera. I don't see any way to adjust parameters of this live video. I would like to shift to a higher resolution, and I'm willing to accept a lower frame rate. I would also like to reduce noise, and I'm willing to accept lower brightness. The closest thing I found to camera options is something called “camera profiles”. For the moment this is a moot point, because when I queried for profiles on this camera, IsVideoProfileSupported returned false.

I imagine there is another code path to obtain a video feed, used by video conferencing and video recording apps. There must be a way to select different resolutions and adjust other parameters, but I have a basic feed now, so I'm content to put that on the TO-DO list and move on.

The next desire was the ability to select a different camera, since laptops usually have a built-in camera and I would attach another via USB. Thanks to this thread on Stack Overflow, I found a way to do so by setting the VideoDeviceId property of MediaCaptureInitializationSettings.
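Pulling the pieces together, a minimal sketch of that initialization (inside an async method; picking cameras[0] is a placeholder for real selection logic):

```csharp
using Windows.Devices.Enumeration;
using Windows.Media.Capture;

var cameras = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);
var camera = cameras[0]; // e.g. choose by DeviceInformation.Name instead

// This static check is what returned false for my webcam:
bool hasProfiles = MediaCapture.IsVideoProfileSupported(camera.Id);

var settings = new MediaCaptureInitializationSettings
{
    VideoDeviceId = camera.Id,
};

var mediaCapture = new MediaCapture();
await mediaCapture.InitializeAsync(settings);
// Assign mediaCapture to CaptureElement.Source, then StartPreviewAsync().
```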

And yay, I have a video feed! Now I want to move the 3D printer carriage by pressing the arrow keys. I created keyboard event handlers KeyDown and KeyUp for my application, but the handlers were never called. My effort to understand this problem became the entry point for a deep rabbit hole into the world of keyboard events in UWP.

[Code for this exploration is public on GitHub.]

Simple Logger Extended With Subset List

One of the ETW features I liked was LoggingLevel. It meant I no longer had to worry about whether something might be too verbose to log, or whether certain important messages might get buried in a lot of usually-unimportant verbose details. By assigning a logging level, developers have the option to filter messages by level during later examination. Unfortunately I got lost in ETW and had to take a step back to my own primitive logger, but that didn't make the usefulness of logging levels go away. In fact, I quickly found that I wanted them as things got complex.

In my first experiment, Hello3DP, I put a big text box on the application to dump data. For the second experiment, PollingComms, I have a much smaller text area so I could put some real UI on the screen. However, the limited area meant verbose messages quickly overflowed it, pushing potentially important information off screen. I still want everything in the log file, but I only need a subset displayed live in the application.

I was motivated to take another stab at ETW but was similarly unsuccessful. In order to resolve my immediate needs I started hacking away at my simple logger. I briefly toyed with the idea of using a small database like SQLite. Microsoft even put in the work for easy SQLite integration in UWP applications. Putting everything into a database would allow me to query by LoggingLevel, but I thought it was overkill for solving the problem at hand.

I ended up adding a separate list of strings. Whenever the logger receives a message, it looks at the level and decides whether it should be added to that subset list as well. By default I limited the subset to 5 entries, at LoggingLevel.Information or higher. This list could be passed to the on-screen text box, notifying me in near real time (within about a second) of what is going wrong in my application.
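A minimal sketch of the idea; the class shape is mine, and only the subset filtering is shown:

```csharp
using System;
using System.Collections.Generic;
using Windows.Foundation.Diagnostics; // LoggingLevel

public class SimpleLogger
{
    private readonly List<string> _fullLog = new List<string>();
    private readonly List<string> _recentImportant = new List<string>();
    private const int SubsetLimit = 5;

    public void Log(string message, LoggingLevel level)
    {
        string line = $"{DateTime.Now:HH:mm:ss.fff} [{level}] {message}";
        _fullLog.Add(line); // everything still goes to the file-bound log

        if (level >= LoggingLevel.Information)
        {
            _recentImportant.Add(line);
            if (_recentImportant.Count > SubsetLimit)
                _recentImportant.RemoveAt(0); // keep only the newest few
        }
    }

    // Joined into one string for display in the on-screen text box.
    public string RecentText => string.Join("\n", _recentImportant);
}
```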

Once again I avoided putting in the effort to learn ETW. I know I can't put it off forever, but this simple hack kicked that can further down the road.

[This kick-the-can exercise is publicly available on GitHub]

Communicating With 3D Printer Added A Twist

I chose my 3D printer as the target for experimenting with UWP serial device communication because (1) it sends out a sequence of text immediately upon connection and (2) it was already sitting on the table. Just trying to read that text was an educational exercise, including a side trip through the world of logging.

The next obvious step was to send a command and read the printer's response. This is where I learned that 3D printers like this MatterHackers Pulse XE behave a little differently from serial devices I've worked with before. RoboClaw motor controllers and serial bus servos like the Dynamixel AX-12 or LewanSoul/Hiwonder LX-16A have one behavior in common: they listen for a command in a known format, then they send a response, also in a known format. This isn't what my 3D printer control board does.

It was only obvious in hindsight: I should have known as soon as I saw the printer send out information upon connection, before receiving any commands. That's not the only time the printer sends unprompted information. Sometimes it sends text about SD card status, or to indicate it is busy processing the previous command. Without a 1:1 mapping between command and response, the logic to read and interpret the printer's responses has to be a little more sophisticated than what I've needed to write for earlier projects.

This is a great opportunity to learn how to structure my code around the async/await pattern. When I had a strict command/response pattern, it was easy to write code that assumed the information I read was a direct response to the command I sent. Now that data may arrive unprompted, the read and write operations have to be separated into their own asynchronous processing loops. When the read loop receives data, it needs to be able to interpret it possibly in the absence of a corresponding command. But if there is a corresponding command, it needs to pair the response with the command sent. That meant I needed a queue of commands awaiting responses, plus logic to decide when to dequeue them and hand a response back to their caller.
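A rough sketch of that queue idea, with hypothetical ReadLineAsync / WriteLineAsync / IsUnprompted / HandleUnprompted helpers standing in for the real serial plumbing:

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;

private readonly ConcurrentQueue<TaskCompletionSource<string>> _pending =
    new ConcurrentQueue<TaskCompletionSource<string>>();

public Task<string> SendCommandAsync(string command)
{
    var tcs = new TaskCompletionSource<string>();
    _pending.Enqueue(tcs);          // remember who is waiting for a reply
    _ = WriteLineAsync(command);    // the write side sends it to the printer
    return tcs.Task;                // caller awaits the paired response
}

private async Task ReadLoopAsync()
{
    while (true)
    {
        string line = await ReadLineAsync();
        if (IsUnprompted(line))          // e.g. "busy", SD card status
            HandleUnprompted(line);      // interpret without a matching command
        else if (_pending.TryDequeue(out var tcs))
            tcs.SetResult(line);         // complete the caller's await
    }
}
```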

Looking at the code's behavior, I can see another possibility to handle: commands that do not expect a corresponding response. Thankfully I haven't had to deal with that combination just yet; what I have on hand is adding enough challenge for this beginner. It's certainly getting confusing enough that I was motivated to extend my logging mechanism to help understand the flow.

[The crude result of this exercise is available publicly on GitHub]

Simple Logging To Text File

Even though I aborted my adventures in Windows ETW logging, I still wanted a logging mechanism to support future experimentation with the Universal Windows Platform. This turned into an educational project in itself, teaching me about other system interfaces of this platform.

Where do I put this log file?

UWP applications are not allowed arbitrary access to the file system, so if I wanted to write out a log file without explicit user interaction, there are only a few select locations available. I found the KnownFolders enumeration, but those were all user data folders, and I didn't want log files clogging up “My Documents” and such. I ended up putting the log file in ApplicationData.TemporaryFolder. This folder is subject to occasional cleanup by the operating system, which is fine for a log file.
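Creating (or reopening) the file there is a one-liner, inside an async method:

```csharp
using Windows.Storage;

StorageFile logFile = await ApplicationData.Current.TemporaryFolder
    .CreateFileAsync("app.log", CreationCollisionOption.OpenIfExists);
```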

When do I open and close this log file?

This required a trip into the world of the UWP application lifecycle. I check if the log file exists and, if not, create and open it, from three places: OnLaunched, OnActivated, and OnResuming. In practice I mostly see OnLaunched. The flip side is OnSuspending, where the application template has already set up a suspension deferral, buying me time to write out and close the log file.
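The suspension side follows the pattern already present in the application template; FlushLogAsync is my hypothetical cleanup routine:

```csharp
using Windows.ApplicationModel;

private async void OnSuspending(object sender, SuspendingEventArgs e)
{
    // The deferral tells the system to wait for our async work
    // before completing suspension.
    var deferral = e.SuspendingOperation.GetDeferral();
    await FlushLogAsync(); // hypothetical: write buffered entries, close file
    deferral.Complete();
}
```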

How do I write data out to this log file?

There is a helpful Getting Started with file input/output document. In it, the standard recommendation is to use the FileIO class. It links to a section in the UWP developer’s guide titled Files, folders, and libraries. The page Create, write, and read a file was helpful for me to see how these differ from classic C file I/O API.

These FileIO classes promise to take care of all the complicated parts, including async/await methods so the application is not blocked on file access. This way the user interface doesn't freeze until a load or save operation completes; it remains responsive while file access is in progress.

But when I used the FileIO API naively, writing upon every log line, I received a constant stream of exceptions. Digging into the exception (actually several levels deep in its chain) told me there was a file access collision problem. It was the page Best practices for writing to files that cleared things up for me: these async FileIO methods create a temporary file for each asynchronous action and copy it over the original file upon success. When I was writing once per line, too many operations were happening in too short a time, and the temporary files collided with each other.

The solution was to write less frequently: buffer up a set of log messages and write the whole batch with each FileIO access, rather than calling once per log entry. Reducing the frequency of write operations resolved my collision issue.
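A minimal sketch of that batching; the class shape is mine, and flushing could be driven by a timer or by the suspension handler above:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Windows.Storage;

private readonly List<string> _buffer = new List<string>();

public void Log(string line) => _buffer.Add(line);

public async Task FlushAsync(StorageFile logFile)
{
    if (_buffer.Count == 0) return;
    var batch = new List<string>(_buffer);
    _buffer.Clear();
    // One FileIO call per batch instead of one per log entry.
    await FileIO.AppendLinesAsync(logFile, batch);
}
```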

[This simple text file logging class is available on GitHub.]

Complexity Of ETW Leaves A Beginner Lost

When experimenting with something new in programming, it's always useful to step through the code in a debugger the first time to see what it does. An unfortunate side effect is execution far slower than normal, which interferes with timing-sensitive operations. An alternative is a logging mechanism that doesn't slow things down (as much), so we can read the logs afterwards to understand the sequence of events.

Windows has something called Event Tracing for Windows (ETW) that has evolved over the decades. This mechanism is implemented in the Windows kernel and offers dynamic control over which events to log. The mechanism itself was built to be lean, impacting system performance as little as possible while logging; the goal is to be so fast and efficient that it barely affects timing-sensitive operations. After all, one of the primary purposes of ETW is to diagnose system performance issues, and it obviously can't be useful if running ETW itself causes severe slowdowns.

ETW infrastructure is exposed to Universal Windows Platform applications via the Windows.Foundation.Diagnostics namespace, with utility classes that sounded simple enough at first glance: we create a logging session, we establish one or more channels within that session, and we log individual activities to a channel.

Trying to see how it works, though, can be overwhelming for a beginner. All I wanted was a timestamp and a text message, and optionally an indicator of the message's importance. The timestamp is automatic in ETW. The text message can be logged via the channel, and I can pass in a LoggingLevel to signify whether it is verbose chatter, an informative message, a warning, an error, or a critical event.
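Put together, the session/channel/event structure described above looks roughly like this; the names are mine, and this belongs inside an async method:

```csharp
using Windows.Foundation.Diagnostics;
using Windows.Storage;

var channel = new LoggingChannel("MyProvider", new LoggingChannelOptions());
var session = new LoggingSession("MySession");
session.AddLoggingChannel(channel);

// Simple text with a level; LogEvent offers structured fields as well.
channel.LogMessage("Hello from ETW", LoggingLevel.Information);

// The session's contents go to an .etl file, readable only by ETW tools:
StorageFile etlFile = await session.SaveToFileAsync(
    ApplicationData.Current.TemporaryFolder, "trace.etl");
```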

In the UWP sample library there is a logging sample application showcasing these logging APIs. The source code looks straightforward, and I was able to compile and run it. The problem came when I tried to read the log: as a consequence of its low-overhead goal and powerful flexibility, the output of ETW is not a simple log file I can browse through. It is a task-specific binary ETL file format that requires its own applications to read. Such tools are part of the Windows Performance Toolkit, but fortunately I didn't have to download and install the whole thing: the Windows Performance Analyzer can be installed by itself from the Windows Store.

I opened up the ETL file generated by the sample app and… got no further. I could get a timeline of the application, and I could unfold a long list of events. But while I could get a timestamp for each event, I couldn't figure out how to retrieve the messages. The sample application called LogEvent with a chunk of “Lorem ipsum” text, and I could not figure out how to retrieve it.

Long term I would love to know how to leverage the ETW infrastructure for my own application development and diagnosis. But after spending too much time unable to perform a very basic logging task, I shelved ETW for later and wrote my own simple logger that outputs to a plain text file.

Unexpected Behavior: Serial Device Read Timeout Only Applies When There’s Data

After playing with the Custom Serial Device Access demonstration application to read a 3D printer's greeting message, I created a blank C# application from the Universal Windows Platform application template in Visual Studio and copy/pasted the minimum bits of code needed to read that same greeting message and display it as text on screen.

The sample application only showed a small selection of the text, but I wanted to read the entire message in my test application. This is where I ran into unexpected behavior. I had set the SerialDevice.ReadTimeout property to various TimeSpan values on the scale of a few seconds. Sometimes I got the timeout behavior I expected, the read returning with some amount of data less than the buffer size. But other times my read operation would hang indefinitely, long past the timeout period.

I thought I had done something wrong with the async/await pattern, causing me to await forever, but I cut the code back to the minimum while still following the precedent of the sample app, and it still happened unpredictably. Examining the data that was returned, it looked like the same greeting message I saw when I connected via the PuTTY serial terminal; nothing indicated a problem.

Eventually I figured out the deciding factor wasn't anything in the data I had read, but the data I had not yet read. Specifically, the hanging behavior occurs when there is no further data at all waiting at the serial port. If there is even just one byte, everything is fine: the platform pulls that byte from the serial port, puts it in my allocated buffer (I experimented with 1 KB, 2 KB, and 4 KB buffer sizes; it didn't matter) and returns to me after the timeout period. But if there are no bytes at all, it hangs waiting.

I suppose this makes some sort of sense; it's just not what I had expected. The documentation for ReadTimeout mentions an underlying Win32 data structure, SERIAL_TIMEOUTS, dictating the behavior. A quick glance through that page failed to find anything corresponding to what I think is happening, which worries me somewhat. Fortunately, there are ways to break out of an await that has waited longer than desired.
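One such escape hatch is attaching a CancellationToken when converting the WinRT operation to a Task. A sketch, inside an async method; reader is a DataReader over SerialDevice.InputStream, and the 5-second limit is arbitrary:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Windows.Storage.Streams;

var cts = new CancellationTokenSource(TimeSpan.FromSeconds(5));
try
{
    uint bytesRead = await reader.LoadAsync(1024).AsTask(cts.Token);
}
catch (TaskCanceledException)
{
    // No data arrived within 5 seconds; treat it as our own timeout.
}
```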

[This Hello3DP programming exercise is publicly available on GitHub]

3D Printer as Serial Communication Test Device

Reading about novel programming patterns and contemplating unexpected hardware platform potential are great… but none of it is real until I sit down and make some code run. Since my motivation centered around controlling external peripherals via serial port, I needed a piece of hardware to experiment against. In the interest of reducing variables, I didn't want to start with one of my own past projects; I expect to run into enough headaches debugging what goes wrong without wondering whether the problem is my code or my project hardware.

So what serial-controlled hardware do I have sitting around at home? The easy answer is the brains of a 3D printer, and the most easily accessible item is my MatterHackers Pulse XE printer, which is conveniently sitting on the same table as the computer.

To get an idea of what I was getting into, I connected to the printer with PuTTY's serial terminal. I saw that a greeting message is sent out over serial as soon as I connect. This is great, because it means I have some data to read immediately upon connecting; no need to worry about sending a command before getting a response.

Moving to the development platform, I loaded up the UWP example program Custom Serial Device Access. After reading the source code to get a rough feel for what the application does, I compiled and ran it. It was able to enumerate the USB serial connection, but when I connected, I did not see the greeting message, even though I used the same parameters as in PuTTY (250000 N 8 1).

As tempting as it might have been to blame the example program and declare it wrong, I thought it more likely that one of the other SerialDevice parameters had to be changed from its default value. Flipping settings one by one to see if they changed behavior, I eventually figured out that I needed to set IsDataTerminalReadyEnabled to true in order to receive data from the Pulse XE. I'm lucky it was only a single boolean I had to change: if multiple values had to be set to a specific combination, there would have been too many possibilities to find by trial and error.
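For reference, here's a sketch of the configuration that worked for me; deviceId comes from device enumeration, omitted here:

```csharp
using Windows.Devices.SerialCommunication;

SerialDevice printer = await SerialDevice.FromIdAsync(deviceId);
printer.BaudRate = 250000;                  // matches the PuTTY settings:
printer.Parity = SerialParity.None;         // 250000 N 8 1
printer.DataBits = 8;
printer.StopBits = SerialStopBitCount.One;
printer.IsDataTerminalReadyEnabled = true;  // the non-default surprise
```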

It's always good to start as simple as possible, because I never know what seemingly basic issue will crop up. IsDataTerminalReadyEnabled wasn't even the only surprise I found; ReadTimeout behavior was also unexpected.

Xbox One Is Part Of Universal Windows Platform

Independent of my interest in learning a new pattern for asynchronous programming, I started reading about Microsoft's Universal Windows Platform (UWP) because I wanted to see their latest take on dynamic layout concepts. This is something web designers are familiar with, since their users might be on anything from a small touchscreen phone to a tablet to a laptop to a desktop computer. But I found that UWP has ambitions to scale across an even wider spectrum: on the low end, they want to cover IoT devices with small (or even no) screens; on the high end, the ambition surpasses desktop computers to reach the big screen TV in a living room connected to an Xbox One.

I'm intrigued by the possibility of writing code to run on my Xbox One game console. At the moment I have no idea what I would do with that potential, but just the possibility adds to my motivation to keep exploring UWP. The part I haven't figured out is how much of the UWP story on the documentation site is real, how much is still coming, and how much consists of aspirations fallen by the wayside. The now-defunct Windows Phone was supposed to be part of this spectrum, placing “UWP on phones” in the “fallen by the wayside” category.

There's more hand-waving than I would like in the design guidelines for scaling UI all the way up to TV sizes. A website spanning the range of phones to computers has a hard enough time reconciling user input via touchscreens versus keyboard and mouse. The UWP range of IoT devices (with possibly no UI) up to an Xbox running UWP on the living room TV (with an Xbox game controller) is a very wide range to cover, and I didn't find as much guidance as I had hoped.

Still, it's a possibility. The most likely way for me to put my UWP education to use is to create some prototypes for interfacing with electronics hardware. I started looking at platforms like UWP from the perspective of machine control. Does that path make sense for an Xbox? Other than the Nintendo R.O.B., there haven't been a lot of console-controlled peripherals. Maybe I can have Sawppy join me in a game?

I looked over the UWP Limitations on Xbox page, and I didn't see USB and serial APIs explicitly listed in the “Doesn't Work” list. They're not on the explicit “Does Work” list either, so investigating this gray limbo zone might be a fun future exercise. For now, though, it's time to start running hardware I know works.

The Very Informative C# Programming Guide for Asynchronous Programming

The problem when looking at a list of “Other Resources” links is that it’s hard to know how useful they are until I actually go in and read them. I started my C# async/await journey on this page, which was only a short conceptual overview. At the bottom is a link to a page titled Asynchronous programming with async and await (C#) which sounded just like the page I was already on. A quick look to confirm it’s indeed a different URL, and I opened it up in a new tab to visit later.

I read several other sections before I got around to revisiting that opened tab. I had expected a short page with a lot of redundant information, but I quickly realized I had been neglecting a gold mine. This page explained the async/await model to me better than any other, using one of my favorite teaching mechanisms: an analogy. The process of preparing a big breakfast was rephrased into asynchronous operations, and that's where it finally clicked.

The most useful clarification was that async doesn't necessarily mean parallel. In the breakfast analogy, multiple parts of the breakfast can be in progress at once even with just one chef in the kitchen doing it all. This broke me out of my previous mental mold, which had been preventing some concepts from sinking in because they didn't make sense in a parallel processing context. This section finally made me understand that asynchronous processing is frequently related to, but at its root independent of, parallel processing.
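The one-chef idea translates directly into code. A tiny sketch with Task.Delay standing in for real work; everything below can run on a single thread:

```csharp
using System;
using System.Threading.Tasks;

static async Task MakeBreakfastAsync()
{
    Task eggs  = Task.Delay(3000); // eggs cooking (stand-in for real work)
    Task bacon = Task.Delay(5000); // bacon started without waiting for eggs

    Console.WriteLine("Both started; the single 'chef' thread is free.");
    await Task.WhenAll(eggs, bacon); // total wait is ~5s, not 8s
    Console.WriteLine("Breakfast ready.");
}
```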

One goal of the async/await pattern was to make it easy for developers to reason about logic flow, but under the hood it involves a lot of complexity to make a computer do what a human would hopefully find more intuitive. And like all programming magic, if things break down, the developer needs to know implementation details to debug it. Such detail was summarized in the “What happens in an async method” chart: simple code annotated with a lot of arrows. It was intimidating at first, but once past the hump it is very helpful. I later came across an expanded version of this diagram explanation on this page with more details.

Once I had that understanding, it was much easier to figure out what to do when something fails to go as planned; cancelling and managing the end of tasks is its own section.

I expected to find some example programs to look at, and I did, but looking at source code is only seeing the destination. It was very helpful to see the journey as well, in an example that takes a synchronous application and converts it step-by-step to asynchronous operation.

And finally, I returned to the asynchronous programming section of the UWP documentation. These topics now make a lot more sense than they did before I sat down to learn TAP. And some of the promises of UWP hold intriguing possibilities.

First Steps Learning Task-based Asynchronous Pattern (TAP)

The upside of choosing to learn a Microsoft-backed technology via their developer documentation site is that there's a lot of information. The downside is that it quickly becomes a flood of too much information, especially for things with a lot of surface area touching a lot of products. We end up with a section from each division, each introducing the concepts from its own perspective. The result is a lot of redundant information, though thankfully, in this case, I found nothing contradictory.

I started from the C# language side via the async keyword, which led me to a quick conceptual overview with a few short examples. Blitzing through this page got me oriented on the general direction, but it wasn’t detailed enough for me to feel I understood. As a starting point it wasn’t bad, but I needed to keep reading.

The next item I found was from the .NET side, and with the title “Async in depth” I was prepared for a long read. It was indeed more in-depth than the previous page, though not quite as in-depth as I had hoped. Still, it corrected a few misconceptions in my head, such as the fact that async behavior does not necessarily mean multiple threads; in I/O-bound operations, there may be no dedicated waiting thread at all. This means that in certain situations, the mechanism is better than the old-school multithreaded techniques I had known, for example by avoiding the buildup of large numbers of threads sitting around blocked and waiting.

I got more details (and more detailed code walkthroughs) when I broadened my focus from the async keyword itself to TAP, or Task-based Asynchronous Pattern, the umbrella under which Microsoft groups not just the async/await pattern but also a bunch of other related mechanisms. This section also has a comparison of .NET asynchronous programming patterns, and TAP can interoperate with them. That is not relevant to me personally, but it's good to see Microsoft's habit of backwards compatibility is alive and well. This will be useful if I get into TAP and then something even newer and shinier comes along and I need to switch. I hope there aren't too many more of those, though, as the interop chart would get a lot more complicated and confusing very quickly.

These sections were informative, and I probably could have figured it out from there if I needed to get rolling on a short schedule. But I didn’t have to. I had the luxury of time, and I found the gold mine that is the C# Programming Guide for Asynchronous Programming.

Async/Await For Responsive Universal Windows Platform Applications

I have decided to sit down and put in the time to learn how to write software for asynchronous events using the relatively new (and certainly new to me) async/await pattern. I had several options for which language and platform to use for my first hands-on experience, and I chose Microsoft's C#/UWP for multiple reasons: it was something I had looked at briefly before; there's a free Community Edition of Visual Studio to explore (and debug) with; and most importantly, I saw lots of documentation available on Microsoft's documentation site (formerly MSDN). While I have no problem learning little things by reading posts on Stack Overflow, for something bigger I wanted a structured approach, and I was familiar with Microsoft's style.

Before I dove in, I was curious why Microsoft saw an incentive to invest in this pattern. It is a significant effort: it meant adding support to the .NET platform, to related languages like C#, and to tools like Visual Studio, accompanied by tutorials, documentation, code libraries, and examples. From what I can tell, their motivation is that the async/await pattern in UWP applications keeps user interfaces responsive.

Responsive user interfaces have been a desire ever since there were user interfaces, because we want the user to get feedback for their actions. But it’s hard to keep everything running when the code is waiting on something to finish, which is why every user has the experience of clicking a button and watching the entire application freeze up until some mysterious thing is done. On the web the problem is both better and worse: better because the web browser itself usually stays responsive so the user can still click on things, worse because the web developer no longer has control and has to do silly things like warn the user not to click “refresh” or “back” buttons on their browser.

Back to the desktop: application platforms have the concept of a “UI thread” where all the user interaction happens, including event handlers for user actions. It is tempting to do all the work inside the event handler, but that work runs on the UI thread. If it takes time, the UI thread is unable to handle other events, leaving the user with no feedback and no way to interact with the application. They are left staring at an hourglass (or spinning “beach ball”, etc.) wondering what's going on.

Until async/await came along, the most common recommendation was to split up the work: keep UI thread event handlers light, doing as little as possible, and offload the time-consuming work somewhere else, commonly another thread. Once the work is done, that worker is supposed to report its results back to the UI. But orchestrating such communication is time-consuming to design, hard to get right in code, and even harder to debug when things go wrong. Between lazy developers who don't bother and honest mistakes by those who try, we still have a lot of users facing unresponsive, frozen apps.

This is where async/await helps UWP and other application platforms: it is a way to design platform interfaces that tell app developers: “You know what, fine. Go ahead and run your code on the UI thread. But use async/await so we can handle the complicated parts for you.” By encouraging the pattern, UWP can offer both a better developer experience and a better end-user experience. Success would lead to increased revenue, hence the incentive to release bountiful documentation for async/await under their umbrella of the Task-based Asynchronous Pattern (TAP).
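In practice it looks something like this minimal sketch; LoadDataAsync and the control names are hypothetical:

```csharp
using Windows.UI.Xaml;

private async void LoadButton_Click(object sender, RoutedEventArgs e)
{
    LoadButton.IsEnabled = false;          // immediate feedback, on the UI thread
    string result = await LoadDataAsync(); // UI thread is released while waiting
    ResultText.Text = result;              // resumes on the UI thread when done
    LoadButton.IsEnabled = true;
}
```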

Window Shopping: Universal Windows Platform Fluent Design

Looking over National Instruments’ Measurement Studio reinforced the possibility that there really isn’t anything particularly special about what I want to do for a computer front-end to control my electronics projects. I am confident that whatever I want to do in such a piece of software, I can put it in a Windows application.

The only question is what kind of trade-offs are involved with different approaches, because there is certainly no shortage of options. There have been many application frameworks over the long history of Windows. I criticised LabWindows for faithfully following the style of an older generation of Windows applications and failing to keep updated since. So if I'm so keen on the latest flashy gizmo, I might as well look over the latest in Windows application development: the Universal Windows Platform.

People not familiar with Microsoft platform branding might get unduly excited about “Universal” in the name, as it would be amazing if Microsoft released a platform that worked across all operating systems. The next word dispels that fantasy: “Universal Windows” just means across multiple Microsoft platforms: PC, Xbox, and HoloLens. UWP was going to cover phones as well, but, well, you know how that went.

Given the reduction in scope and the lack of adoption, some critics are calling UWP a dead end. History will show whether they are right. However that shakes out, I do like the Fluent Design System that launched alongside UWP. As a similar but competing offering to Google's Material Design, I think they both have potential for building some really good user interactivity.

Given the graphical capabilities, I'm not worried about displaying my own data visualizations. But given UWP's intent to be compatible across different Windows hardware platforms, I am worried about my ability to communicate with my own custom-built hardware. If something is difficult to rationalize into a standard API across PC, Xbox, and HoloLens, it might not be supported at all.

Fortunately that worry is unfounded. There is a section of the UWP API for serial communication, which I expect to work with USB-to-serial converters. Surprisingly, it actually goes beyond that: there's also an API for general USB communication, even for devices lacking standard Windows USB support. If this is flexible enough to interface with arbitrary USB hardware beyond USB-to-serial converters, it has a great deal of potential.
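As a quick taste, enumeration on the serial side looks like this, inside an async method:

```csharp
using Windows.Devices.Enumeration;
using Windows.Devices.SerialCommunication;

string selector = SerialDevice.GetDeviceSelector();
var devices = await DeviceInformation.FindAllAsync(selector);
// Each DeviceInformation.Id can then be passed to SerialDevice.FromIdAsync.
```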

The downside, of course, is that UWP is limited to Windows PCs, excluding Apple Macintosh and Linux computers. If the objective is to build a graphically rich and dynamically adaptable user interface across multiple desktop platforms (not just Windows), we have to use something else.