Compass Web App Project Gets Own Repository

Once I got Three.js and the Magnetometer API up and running within the context of an Angular web app, I was satisfied this project was actually going to work, justifying a move out of my collection of NodeJS tests and into its own repository. Another reason for the move is that I wanted to make it easy to use angular-cli-ghpages to deploy my Angular app to GitHub Pages, where it will be served over HTTPS, a requirement for using the sensor API from a web app.

Previously, I would execute such moves with a simple file copy, but that destroys my Git commit history. Not a huge loss for small experiments like this one, but preserving history may be important in the future, so I thought this was a good opportunity to practice. GitHub documentation has a page addressing my scenario: Splitting a subfolder out into a new repository. It points to a non-GitHub utility, git-filter-repo, a large Python script for manipulating Git repositories in various ways; in this case, isolating a particular directory and trimming the rest. I still had to manually move everything from the /compass/ subdirectory into the root, but that's a minor change and Git recognized the files as renamed rather than modified.

The move was seamless except for one detail: a conflict between local development and GitHub Pages deployment in index.html. For GitHub Pages, I needed the line <base href="/compass/">, but for local development I needed <base href="/">. Otherwise the app fails to load because it tries to load resources from the wrong paths, resulting in HTTP 404 Not Found errors. To make them consistent, I can tell my local development server to serve files under a compass subdirectory as well, letting me use <base href="/compass/"> everywhere.

ng serve --serve-path compass

I don’t recall this being a problem in my “Tour of Heroes” tutorial. What did I miss? I believe using --serve-path is merely a workaround without understanding the root cause, but that’s good enough for today. It was more important that GitHub Pages was up and running so I could test across different browsers.
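For reference, I believe the build-and-deploy commands look something like this (a sketch based on angular-cli-ghpages documentation; the dist path depends on the project name in angular.json, so adjust to match):

```shell
# Build with the GitHub Pages base path baked in.
ng build --base-href "/compass/"

# Publish the build output to the gh-pages branch.
npx angular-cli-ghpages --dir=dist/compass
```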


Source code for this project is publicly available on GitHub

Magnetometer Service as RxJS Practice

I’m still struggling to master CSS, but at least I got far enough to put everything I want on screen roughly where I want them, even after the user resizes their window. Spacing and proportion of my layout is still not ideal, but it’s good enough to proceed to the next step: piping data from W3C Generic Sensor API for Magnetometer into my Angular app. My initial experiment was a much simpler affair where I could freely use global variables and functions, but that approach would not scale to my future ambition to execute larger web app projects.

Installation

The Magnetometer API is exposed by the browser rather than by a code library, so I didn’t need to “npm install” any code. However, as it is not part of the core browser API, I did need to install the W3C Generic Sensor API type definitions for the TypeScript compiler. This information is only used at development time.

npm install --save-dev @types/w3c-generic-sensor

After this, the TypeScript compiler still complained that it couldn’t find type information for Magnetometer. After a brief search, I found I also needed to edit tsconfig.app.json and add w3c-generic-sensor to the “types” array under “compilerOptions”.

  "compilerOptions": {
    "types": [
      "w3c-generic-sensor"
    ]
  },

That handles what I had to do, but I’m ignorant as to why I had to do it. I didn’t have to do the same for @types/three when I installed Three.js, why was this different?

Practicing Both RxJS and Service Creation

I’ll need a mechanism to communicate magnetometer data when it is available. When the data is not available, I want to be able to differentiate between the reasons why: either the software API is unable to connect to a hardware sensor, or the software API is not supported at all. The standard Angular architectural choice for such a role is to package it up as a service. Furthermore, magnetometer data is an ongoing stream, which makes it a perfect practice exercise for using RxJS in my magnetometer service. It will distribute magnetometer data as a Subject (a multicast Observable) to all app components that subscribe to it.
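The multicast idea can be captured in a stripped-down sketch (plain TypeScript, not real RxJS; all names here are made up for illustration): every component that subscribes to the service's subject receives each new reading.

```typescript
// A stripped-down sketch of a multicast subject: every subscriber
// registered with it receives each new value pushed through next().
type MagnetometerReading = { x: number; y: number; z: number };

class SimpleSubject<T> {
  private listeners: Array<(value: T) => void> = [];

  subscribe(listener: (value: T) => void): void {
    this.listeners.push(listener);
  }

  next(value: T): void {
    for (const listener of this.listeners) listener(value);
  }
}

// Two hypothetical components both subscribe to one service-owned subject.
const readings = new SimpleSubject<MagnetometerReading>();
const needleLog: number[] = [];
const debugLog: number[] = [];
readings.subscribe((r) => needleLog.push(r.x));
readings.subscribe((r) => debugLog.push(r.x));

readings.next({ x: 25, y: -3, z: 40 }); // both subscribers see this reading
```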

Placeholder Data

Once I had it up and running, I realized everything was perfectly set up for me to generate placeholder data when real magnetometer data is not available. Client code for my magnetometer data service doesn’t have to change anything to receive placeholder data. In practice, this lets me test and polish the rest of my app without requiring a phone with real magnetometer hardware.
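A sketch of what placeholder generation can look like (the field magnitude and function names are my own invention, not the actual project code): synthesize a slowly rotating field so subscribers receive plausible readings.

```typescript
// Hypothetical placeholder generator: sweep a field vector around a
// circle so a compass needle subscribed to it appears to rotate.
type Reading = { x: number; y: number; z: number };

function placeholderReading(step: number, totalSteps = 60): Reading {
  const angle = (2 * Math.PI * step) / totalSteps; // fraction of a full turn
  const microtesla = 40; // plausible magnitude for Earth's field, made up here
  return {
    x: microtesla * Math.cos(angle),
    y: microtesla * Math.sin(angle),
    z: 0,
  };
}

const first = placeholderReading(0);    // field pointing along +x
const quarter = placeholderReading(15); // a quarter turn later, along +y
```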

State and Status Subjects

Happy with how this first use turned out, I converted more of my magnetometer service to use RxJS. The current state of the service (initialized, starting the API, connecting to magnetometer hardware, etc.) was originally just a publicly accessible property, which is how I’ve always written such code in the past. But if any clients want to be notified as soon as the state changes, they either have to poll the state, or I have to write code to register and trigger callbacks, which I rarely put in the effort to do. Now with RxJS in my toolbox, I can use a BehaviorSubject to communicate changes in my service state, making it trivial to expose. And finally, I frequently send stuff to console.log() to communicate status messages; I converted that to a BehaviorSubject as well so I can put that data onscreen. This is extra valuable once my app is running on phone hardware, where I can’t (easily) see debug console output.
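The key behavior is captured by this stripped-down sketch (again not real RxJS, and the state names are mine): a BehaviorSubject-like object remembers its latest value and replays it immediately to every new subscriber, so nobody has to poll.

```typescript
// Sketch of the BehaviorSubject idea: holds a current value, replays it
// to late subscribers, then forwards every subsequent update.
class SimpleBehaviorSubject<T> {
  private listeners: Array<(value: T) => void> = [];

  constructor(private current: T) {}

  subscribe(listener: (value: T) => void): void {
    this.listeners.push(listener);
    listener(this.current); // late subscribers still learn the current state
  }

  next(value: T): void {
    this.current = value;
    for (const listener of this.listeners) listener(value);
  }
}

// Hypothetical service states, loosely matching the ones described above.
const state = new SimpleBehaviorSubject<string>("initialized");
state.next("connecting");

const seen: string[] = [];
state.subscribe((s) => seen.push(s)); // immediately receives "connecting"
state.next("streaming");
```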

RxJS Appreciation

After a very confused introduction to RxJS and a frustrating climb up the learning curve, I’m glad to finally see some payoff for my investment in study time. I’m not yet ready to call myself a fan of RxJS, but I feel I have enough confidence to wield this tool for solving problems. This story is to be continued!

With a successful Angular service distributing data via RxJS, I think this “rewrite magnetometer test in Angular” is actually going to happen. Good enough for it to move into its own code repository.

[UPDATE: After learning more about Angular and what’s new in Angular v16, everything I described in this post has been converted to Angular Signals.]


Source code for this project is publicly available on GitHub

Angular Component Dynamic Resizing

Learning to work within the Angular framework, I had to figure out how to get my content onscreen at the location and size I wanted (close enough, at least). But that’s just the start: what happens when the user resizes the window? That opens a separate can of worms. In my RGB332 color picker web app, the color picker was the only thing onscreen. That global scope meant I could listen to the Window resize event, but listening to a window-level event isn’t necessarily going to work for solving a local element-level problem.

So how does an Angular component listen to an event? I found several approaches.

It’s not clear what tradeoffs are involved in using Renderer versus EventManager. In both cases, we can listen to events on an object that’s not necessarily our component. Perhaps some elements are valid for one API but not the other? Perhaps there’s a prioritization I need to worry about? If I only care about listening to events that apply to my own specific component, things can be slightly easier:

  • @HostListener decorator allows us to attach a component method as the listener callback for an event on a component’s host object. It’s not as limited as it appears at first glance, as events like “window:resize” apparently propagate through the tree, so our handler fires even though it’s not attached to the Window object.
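As a sketch of the decorator usage (Angular component code, shown for shape rather than as a runnable sample; the selector and handler names are my own):

```typescript
import { Component, HostListener } from '@angular/core';

@Component({
  selector: 'app-compass',
  templateUrl: './compass.component.html',
})
export class CompassComponent {
  // "window:resize" reaches this handler even though the component's
  // host element is not the Window object itself.
  @HostListener('window:resize', ['$event'])
  onWindowResize(event: UIEvent): void {
    // react to the new window size here
  }
}
```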

In all of the above cases, we’re listening to a global event (window resize) to solve a local problem (reacting to my element’s change in size). I was glad to see that web standards have evolved to give us a local tool for solving this local problem:

  • ResizeObserver is not something supported by core Angular infrastructure. I could write the code to interact with it myself, but someone has written an Angular module for ResizeObserver. This is part of a larger “Web APIs for Angular” project with several other modules with similar goals: give an “Angular-y” way to leverage standardized APIs.

I tried this new shiny first, but my callback function didn’t fire when I resized the window. I’m not sure if the problem was the API, the Angular module, my usage of it, or that my scenario not lining up with the intended use case. With so many unknowns, I backed off for now. Maybe I’ll revisit this later.

Falling back to @HostListener, I could react to “window:resize”, and that callback did fire when I resized the window. However, the clientWidth/clientHeight size information was unreliable at that moment, and my Three.js object ended up the wrong size to fill its parent <DIV>. I deduced that when “window:resize” fired, the browser had yet to run through full page layout.

With that setback, I fell back to an even cruder method: upon every call to my animation frame callback, I check the <DIV> clientWidth/clientHeight and resize my Three.js renderer if they differ from the existing values. This feels inelegant, but it’ll have to do until I have a better understanding of how ResizeObserver (or an alternative standardized local-scope mechanism) works.
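The per-frame check reduces to a simple comparison. Here is a sketch with stand-ins for the element and the renderer (names are mine, not the project's; the real code would call renderer.setSize() where the counter increments):

```typescript
// Minimal stand-in for the element: only the two properties we read.
type SizedElement = { clientWidth: number; clientHeight: number };

class ResizeTracker {
  private width = 0;
  private height = 0;
  resizeCount = 0; // how many times we actually resized the renderer

  // Called once per animation frame; only does work when the size changed.
  checkResize(host: SizedElement): void {
    if (host.clientWidth !== this.width || host.clientHeight !== this.height) {
      this.width = host.clientWidth;
      this.height = host.clientHeight;
      this.resizeCount += 1; // the real code would call renderer.setSize() here
    }
  }
}

const tracker = new ResizeTracker();
const div: SizedElement = { clientWidth: 800, clientHeight: 600 };
tracker.checkResize(div); // first frame: differs from initial 0x0, resize once
tracker.checkResize(div); // same size: no extra work
div.clientHeight = 400;
tracker.checkResize(div); // size changed: resize again
```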

But that can wait, I have lots to learn with what I have on hand. Starting with RxJS and magnetometer.


Source code for this project is publicly available on GitHub

Angular Component Layout Sizing

For my first attempt at getting Three.js running inside an Angular web app, the target element’s width and height were hard coded to a number of pixels to make things easier. But if I want to make a more polished app, I need to make this element fit properly within an overall application layout. Layout is one of the things Angular defers to standard CSS. Translating layouts I have in my mind to CSS has been an ongoing challenge for me to master so this project will be another practice opportunity.

Right now, I’m using a <DIV> in my markup to host a canvas object generated by Three.js renderer. Once my Angular component has been laid out, I need to get the size of that <DIV> and communicate it to Three.js. A little experimentation with CSS-related properties indicated my <DIV> object’s clientWidth and clientHeight were best fit for the job.

Using clientWidth was straightforward, but clientHeight started out at zero. This is because during layout, the display engine looked at my <DIV> and saw it had no content. The canvas isn’t added until after initial layout, in the AfterViewInit hook of the Angular component lifecycle. I had to write CSS to block out space for this <DIV> during layout despite its lack of content at the time. My first effort was to declare a height using the “vh” unit (viewport height) to stake my claim on a fraction of the screen, but that is not flexible for general layout. A better answer came later with Flexbox. By putting “display: flex;” on my <DIV>’s parent, and “flex-grow: 1;” on the <DIV> itself, I declared that this Three.js canvas should be given all available remaining space. That accomplished my goal and felt like a more generally applicable solution. A reference I found useful during this adventure was the Flexbox guide from CSS-Tricks.com.
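In CSS, that arrangement looks something like this sketch (class names are made up; whether the parent flexes as a row or a column depends on the rest of the layout):

```css
/* Parent lays out its children with Flexbox. */
.app-layout {
  display: flex;
}

/* The <div> hosting the Three.js canvas claims all remaining space. */
.threejs-host {
  flex-grow: 1;
}
```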

It’s still not perfect, though. It is quite possible Flexbox was not the right tool for the job, but I needed this practice to learn a baseline from which I can compare with another tool such as CSS Grid. And of course, getting a layout up on screen is literally just the beginning: what happens when the user resizes their window? Dynamically reacting to resize is its own adventure.


Source code for this project is publicly available on GitHub

Angular + Three.js Hello World

I decided to rewrite my magnetometer test app as an Angular web app. Ideally, I would end up with something more polished. But that is a secondary goal. The primary goal is to use it as a practice exercise for building web apps with Angular. Because there are definitely some adjustments to make when I can’t just use global variables and functions for everything.

My first task is to learn how to use Three.js 3D rendering library from within an Angular web app. I know this is doable from others who have written up their experience, I only have to follow their lead.

Installation

The first step is obvious: install Three.js library itself into my project.

npm install --save three

Now my Angular app could import objects from Three, but it would fail TypeScript compilation because the compiler doesn’t have type information. For that, a separate library needs to be installed. This is only used at development time, so I save it as a development-only dependency.

npm install --save-dev @types/three

HTML DOM Access via @ViewChild

Once installed, I could create an Angular component using Three.js. Inside that component, I could use most of the code from the Three.js introduction “Creating a Scene”. One line I could not use directly is:

document.body.appendChild( renderer.domElement );

Because now I can’t just jam something to the end of my HTML document’s <BODY> element. I need to stay within the bounds of my Angular component. To do so, I name an element in my component template HTML file where I want my Three.js canvas to reside.

[...Component template HTML...]

  <div #threejstarget></div>

[...Component template HTML...]

In the TypeScript code file, I can obtain a reference to this element with the @ViewChild decorator.

  @ViewChild('threejstarget') targetdiv!: ElementRef;

Why the “!” suffix? If we declared the variable “targetdiv” by itself, TypeScript compiler would complain that we risk using a variable that may be null or undefined instead of its declared type. This is because TypeScript compiler doesn’t know @ViewChild will handle that initialization. We use an exclamation mark (!) suffix to silence this specific check on this specific variable without having to turn on the (generally useful) null/undefined checks in TypeScript.
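The same assertion can be seen outside of Angular in a plain TypeScript sketch (names here are made up): the “!” tells the compiler the property will be assigned before use, shifting that responsibility from the compiler to us.

```typescript
// Tiny illustration of the "!" definite-assignment assertion: we promise
// the compiler that `label` is assigned before anything reads it.
class LatePanel {
  label!: string; // without "!", strict mode flags this as unassigned

  init(): void {
    // some framework hook fills it in later, the way @ViewChild does
    this.label = "ready";
  }
}

const panel = new LatePanel();
panel.init();
const text = panel.label.toUpperCase(); // safe only because init() ran first
```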

(On the “To-Do” list: come back later and better understand how @ViewChild relates to similar directives @ViewChildren, @ContentChild, and @ContentChildren.)

Wait until AfterViewInit

But there are limits to @ViewChild’s power. Our ElementRef still starts out null when our component is initialized; @ViewChild cannot give us a reference until the component template view has been created. So we have to wait until the AfterViewInit stage of the Angular component lifecycle before adding the Three.js render canvas into our component view tree.

  ngAfterViewInit() : void {
    this.targetdiv.nativeElement.appendChild( this.renderer.domElement );
  }

An alternative approach is to have <CANVAS> inside our component template, and attach our renderer to that canvas instead of appending a canvas created by renderer.domElement. I don’t yet understand the relevant tradeoffs between these two approaches.

Animation Callback and JavaScript ‘this’

At this point I had a Three.js object onscreen, but it did not animate even though my requestAnimationFrame() callback function was being called. A bit of debugging pointed to my mistaken understanding of how JavaScript handles an object’s “this” reference. My animation callback was getting called in a context where it was missing a “this” reference back to my Angular component, and thus unable to advance the animation sequence.

requestAnimationFrame(this.renderNeedle);

One resolution to this issue (JavaScript is very flexible, there are many other ways) is to declare a callback that has an appropriate “this” reference saved within it for use.

requestAnimationFrame((timestamp)=>this.renderNeedle(timestamp));
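The failure mode can be reproduced in a small self-contained sketch (not the actual compass code; the class and function names are made up). A callback invoked bare, the way requestAnimationFrame invokes it, has no `this`, while an arrow function captures the instance it was written with.

```typescript
class Animator {
  frame = 0;
  renderNeedle(_timestamp: number): number {
    this.frame += 1; // needs `this` to be the Animator instance
    return this.frame;
  }
}

// Stand-in for requestAnimationFrame: invokes the callback with no
// `this` attached, just like the browser does.
function fakeRaf(cb: (t: number) => number): number | undefined {
  try {
    return cb(0);
  } catch {
    return undefined; // the detached method lost its `this` and threw
  }
}

const a = new Animator();
const broken = fakeRaf(a.renderNeedle);            // loses `this`
const working = fakeRaf((t) => a.renderNeedle(t)); // arrow captures `a`
```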

That’s a fairly trivial problem and it was my own fault. There are lots more to learn about animating and Angular from others online, like this writeup about an animated counter.

Increase Size Budget

After that work I had a small Three.js animated object in my Angular application. When I run “ng build” at that point, I would see a warning:

Warning: bundle initial exceeded maximum budget. Budget 500.00 kB was not met by 167.17 kB with a total of 667.17 kB.

An empty Angular application already weighs in at over 200 kB. Once Three.js joined the deployment bundle, that figure ballooned by over 400 kB and exceeded the default warning threshold of 500 kB. This is a sizable increase, but thanks to optimization tools in the Angular build pipeline, it is actually far smaller than what my earlier test app downloaded. That test app itself may be tiny, but it pulled the entire Three.js module from a CDN (content delivery network), and that file, three.module.js, is over a megabyte (~1171 kB) in size. By that logic this is the better of the two options; I just have to adjust the maximumWarning threshold in angular.json accordingly.
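The threshold lives in the “budgets” array of angular.json, under the build configuration. The numbers below are placeholders to illustrate the shape, not a recommendation:

```json
"budgets": [
  {
    "type": "initial",
    "maximumWarning": "1mb",
    "maximumError": "2mb"
  }
]
```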

My first draft used a fixed-size <DIV> as my Three.js target, which was easy but wouldn’t be responsive to different browser situations. For that, I need to learn how to use CSS layout for my target <DIV>.


Source code for this project is publicly available on GitHub

Compass Web App for Angular Practice

I’ve wanted to learn web development for years, and one of my problems was that I lacked the dedication and focus to build my skills before I got distracted by something else. This is a problem because the web development world moves so fast that, by the time I returned, the landscape had drastically changed, with something new and shiny to attract my attention. When I window-shopped Polymer/Lit, I was about to start the cycle again. But then I decided to back off, for a few reasons.

First and most obvious is that I didn’t yet know enough to fully leverage the advantages of web components in general, and Polymer/Lit in particular. They enable small, lightweight, fast web apps, but only if the developer knows how to create a build pipeline to actually make it happen. I have yet to learn how to build optimization stages like tree-shaking and minification. Without these tools, my projects would end up downloading far larger files intended for developer readability (comments, meaningful variable names, etc.) and including components I don’t use. Doing so would defeat the intent of building something small and lightweight.

That is closely related to the next factor: the Angular framework comes ready-made with all of those things I have yet to master. Using Angular command line tools to build a new project boilerplate comes with a build pipeline that minimizes download size. I wasn’t terribly impressed by my first test run of this pipeline, but since I don’t yet know enough to set up my own, I definitely lack the skill to analyze why, and I certainly don’t yet know enough to do better.

And finally, I have already invested some time into learning Angular. There may be some “sunk cost fallacy” involved here but I’ve decided I should get basic proficiency with one framework just so I have a baseline to compare against other frameworks. If I redirect focus to Polymer/Lit, I really don’t know enough to understand its strengths and weaknesses against Angular or any other framework. How would I know if it lived up to “small lightweight fast” if I have nothing to compare it against?

Hence my next project is to redo my magnetometer web app using the Angular framework. Such a simple app wouldn’t need all the power of Angular, but I wanted a simple practice run while things are still fresh in my mind. I thought about converting my AS7341 sensor web app into an Angular app, but those files must be hosted on an ESP32, which has limited space. (Part of the task would be converting to ESPAsyncWebServer, which supports GZip-compressed files.) In comparison, a magnetometer app would be hosted via GitHub Pages (due to the https:// requirement of the sensor API) and would not have such a hard size constraint. Simpler deployment tipped the scales here, so I am going with a compass app, starting with putting Three.js in an Angular boilerplate app.


Source code for this project is publicly available on GitHub

Window Shopping Polymer and Lit

While poking around with browser magnetometer API on Chrome for Android, one of my references was a “Sensor Info” app published by Intel. I was focused on the magnetometer API itself at first, but I mentally noted to come back later to look at the rest of the web app. Now I’m returning for another look, because “Sensor Info” has the visual style of Google’s Material Design and it was far smaller than an Angular project with Angular Material. I wanted to know how it was done.

The easier part of the answer is Material Web, a collection of web components released by Google for web developers to bring Material Design into their applications. “Sensor Info” imported just Button and Icon, with unpacked sizes weighing in at several tens of kilobytes each. Reading the repository README is not terribly confidence-inspiring… technically, Material Web has yet to reach version 1.0 maturity even though Material Design has moved on to its third iteration. Not sure what’s going on there.

Beyond visual glitz, the “Sensor Info” application was built with both Polymer and Lit. (sensors-app.js declares a SensorsApp class which derives from LitElement, and imports a lot of stuff from @polymer.) This confused me because I had thought Lit was a successor to Polymer. As I understand it, the Polymer team plans no further work after version 3 and has taken the lessons learned to start from scratch with Lit. Here’s somebody’s compare-and-contrast writeup I got via Reddit. Now I see “Sensor Info” has references to both projects and, not knowing either Polymer or Lit, I don’t think I’ll have much luck deciphering where one ends and the other begins. Not a good place for a beginner to start.

I know both are built on the evolving (stabilizing?) web components standard, and both promise to be far simpler and more lightweight than frameworks like Angular or React. I like that premise, but such lightweight “non-opinionated” design also means a beginner is left without guidance. “Do whatever you want” is a great freedom, but not helpful when a beginner has no idea what they want yet.

One example is the process of taking the set of web components in use and packaging them together for web app publishing. They expect the developer to use a tool like webpack, but there is no affinity to webpack; a developer can choose any other tool. Great, but I hadn’t figured out webpack myself, nor any alternatives, so this particular freedom was not useful. I got briefly excited when I saw that there are “Starter Kits” already packaged with tooling that is not required (remember, non-opinionated!) but is convenient for starting out. Maybe there’s a sample webpack.config.js! Sadly, I looked over the TypeScript starter kit and found no mention of webpack or any similar tool. Darn. I guess I’ll have to revisit this topic sometime after I learn webpack.

Mermaid.js for Diagrams in GitHub Markdown

This blog is a project diary, where I write down not just the final result, but all the distractions and outright wrong turns taken along the way. I write a much shorter summary (with less noise) for my projects in the README file of their associated GitHub repository. As much as I appreciate markdown, it’s just text, and I have to fire up something else for drawings, diagrams, and illustrations. That becomes its own maintenance headache. It’d be nice to have tools built into GitHub’s markdown for such things.

It turns out I’m not the only one who thought so. I started by looking for a diagram tool to generate images I could link into my README files, preferably one that I might be able to embed into my own web app projects. From there I found Mermaid.js, which looked very promising for future project integration. But that’s not the best part: Mermaid.js already has its fans, including people at GitHub. About a year ago, GitHub added support for Mermaid.js charts within their markdown variant, no graphic editor or separate image upload required.

I found more information on how to use this on the GitHub documentation site, where I saw Mermaid is one of several supported tools. I have yet to need math formulas or geographic mapping in my markdown, but I’ll have to come back for a closer look at the STL support.

As my first Mermaid test, dipping my toes into this pool, I added a little diagram to illustrate the sequence of events in my AS7341 spectral color sensor visualization web app. I started with one of the sample diagrams in their live online editor and edited it to convey my information. I then copied that Mermaid markup into my GitHub README.md file, and the diagram is now part of my project documentation there. Everything went through smoothly, just as expected. Very nice! I’m glad I found this tool, and I foresee adding a lot of Mermaid.js diagrams to my project READMEs in the future, even if I never end up integrating Mermaid.js into my own web app projects.
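To give a feel for the syntax, here is a small sequence diagram in the same spirit (not the actual diagram from my README; the participants and messages are made up):

```mermaid
sequenceDiagram
    participant Browser
    participant ESP32
    Browser->>ESP32: Request sensor page
    ESP32-->>Browser: HTML + JavaScript
    Browser->>ESP32: Poll for spectral data
    ESP32-->>Browser: Sensor readings (JSON)
```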

Webpack First Look Did Not Go Well

I’ve used Three.js in two projects so far to handle 3D graphics, but I’ve been referencing it as a monolithic everything bundle. Three.js documentation told me there was a better way:

When installing from npm, you’ll almost always use some sort of bundling tool to combine all of the packages your project requires into a single JavaScript file. While any modern JavaScript bundler can be used with three.js, the most popular choice is webpack.

— Three.js Installation Guide

In my magnetometer test project, I tried to bring in webpack to optimize three.js but aborted after getting disoriented. Now I’m going to sit down and read through its documentation to get a better feel of what it’s all about. Here are my notes from a first look as a beginner.

I have a minor criticism about their home page. The first link is to their “Getting Started” guide; the second link is to their “Concepts” section. I followed the first link to “Getting Started”, which is the first page in their “Guides” section. At the end of that first page, the “Next” link led to Asset Management, the next guide page. Each guide page linked to the next, and I quickly got into guide pages that used acronyms and terminology that had not yet been explained. Later I realized this was because the terminology was covered in the “Concepts” section. In hindsight, I should not have kept clicking “Next” at the end of the “Getting Started” guide. I should have gone back to “Concepts” to learn the lingo before reading the rest of the guides.

Reading through the guides, I quickly understood that webpack is a very JavaScript-centric system built by people who think JavaScript first. I wanted to learn how to use webpack to integrate my JavaScript code and my HTML markup, but these guide pages didn’t seem to cover my use scenario. Starting right off the bat with the Getting Started guide, they used code to build their markup:

   const element = document.createElement('div');
   element.innerHTML = _.join(['Hello', 'webpack'], ' ');
   document.body.appendChild(element);

Wow. Was that really necessary? What about people who wanted to, you know, write HTML in HTML?

    <body><div>Hello webpack</div></body>

Is that considered quaint and old-fashioned nowadays? I didn’t find anything in “Guides” or “Concepts” discussing such integration. I had thought the section on “HtmlWebpackPlugin” would have my answer, but its example is a chunk of JavaScript code that replaced my index.html (destroying my markup) with a bare-bones index.html that loads the generated JavaScript bundles and contains no markup of mine. How does my own markup figure into this picture? Perhaps the documentation authors felt it was too obvious to write down, but I certainly don’t understand how to do it. I feel like an old man yelling at a cloud: “tell me how to use my HTML!”

I had thought putting my HTML into the output directory was a basic task, but I failed. And I was dismayed to see “Concepts” and “Guides” pages got deeper and more esoteric from there. I understand webpack is very powerful and can solve many problems with modularizing JavaScript code, but it also requires a certain mindset that I have yet to acquire. Webpack’s bare bones generated index.html looks very similar to the generated index.html of an Angular app (which uses both HTML and webpack) so there must be a way to do this. I just don’t know it yet.
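One lead worth noting for future reference: HtmlWebpackPlugin does have a `template` option that starts from an existing HTML file instead of generating a bare-bones one. I haven’t verified it against my scenario, but a config sketch would look something like:

```javascript
// webpack.config.js (untested sketch): start from my own index.html and
// let the plugin inject the generated bundle script tags into it.
const HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = {
  entry: './src/index.js',
  plugins: [
    new HtmlWebpackPlugin({
      template: './src/index.html', // my markup, preserved in the output
    }),
  ],
};
```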

Until I learn what’s going on, for the near future I will use webpack only indirectly: it is already configured and running as part of Angular web app framework’s command line tools suite. I plan to do a few Angular projects to become more familiar with it. And now that I’ve read through webpack concepts, I should recognize pieces of Angular workflow as “Aha, they used webpack to do that.”

Fun with Magnetic Field Viewing Film

It was fun to visualize a magnetic field with an array of retired Android phones, though admittedly it was a gross underutilization of hardware power, more a technical exercise than anything else. There are much cheaper ways to visualize magnetic fields. I learned with iron filings in school way back when, but that got extremely messy. Ferrofluid in a vial of mineral oil is usually much neater, unless the vial leaks. I decided to try another option: magnetic field viewing films.

Generally speaking, such films are a very thin layer of oil sandwiched between top and bottom sheets, suspending magnetic particles within. The cheap ones look like they just use fine iron particles, and we see the field as slightly different shades of gray caused by different orientations of uniformly colored particles. I decided to pay a few extra bucks and go a little fancier: films like the unit I bought (*) have much higher visual contrast.

As far as I can tell, the particles used in these films present different colors depending on the orientation of the magnetic field. When the field is perpendicular to the film, as when the film is directly over one of the magnet’s poles, the film shows green. When the field is parallel, we see yellow. Orientations between those two extremes show different colors within that spectrum. When there’s no magnetic field nearby, we see a muddy yellow-green.

Playing with a Hall switch earlier, I established that this hard drive magnet has one pole on one half and another pole on the other half. Putting the same magnet under this viewing film confirms those findings, and it also confirms this film doesn’t differentiate between north or south poles: they both show as green.

This was the simplest scenario: a small disc salvaged from an iPad cover shows a single pole on the flat face.

Similarly simple is the magnet I salvaged from a Microsoft Arc Touch Mouse.

This unexpected complex field was generated by a magnet I salvaged from a food thermometer. I doubt this pattern was intentional, as it does nothing to enhance its mission of keeping the food thermometer stuck to the side of my refrigerator. I assume this complex pattern was an accidental result of whatever-was-cheapest magnetization process.

The flat shape of this film was a hindrance when viewing the magnetic field of this evaporator fan rotor, getting in the way of the rotor shaft. The rotor is magnetized so that each quarter is a magnetic pole. It’s difficult to interpret from a static picture, but moving the rotor around under the film and seeing the display change interactively makes the relationship more visible. The film is also higher resolution and responds faster than an array of phones.

This disc was one of two that came from a 1998 Toyota Camry’s stock tape deck head unit. I don’t know exactly where it came from because I didn’t perform this particular teardown.

We can see it laid out on the tabletop in this picture, bottom row center next to the white nylon gears.

And finally, the motor that set me on this quest for magnetic viewing film: the rotor from a broken microwave turntable motor. Actually looking at the plastic hub color, I think this is the broken rotor that got replaced by the teardown rotor sometime later.

And since I’m on the topic, I dug up its corresponding coil from that turntable motor teardown. Curious if I would see a magnetic field, I connected it to my benchtop DC power supply and slowly increased the voltage. I could only get up to 36V DC and didn’t notice anything. This coil was designed for 120V AC, so I wasn’t terribly surprised. I’ll have to return to this experiment when I can safely apply higher voltage to this coil.


(*) Disclosure: As an Amazon Associate I earn from qualifying purchases.