Absolutely, and I don't want to claim I'm the first or only one doing font rendering on GPU. There's Slug as you pointed out, Pathfinder and Spinel as Jasper cited, and also interesting experimental work including GLyphy by Behdad and algorithms by Evan Wallace and Will Dobbie, plus a whole series of academic papers including "Massively Parallel Vector Graphics," "Random Access Vector Graphics," and others.
However, I would say that a common thread is that doing this well is hard. There's no straightforward cookbook scheme that people can just implement, and there are always tradeoffs. Slug is used in a number of games (and congrats to Eric for winning those licenses), but not as far as I know in any UI toolkits, and there are reasons for that.
> Slug is used in a number of games (and congrats to Eric for winning those licenses), but not as far as I know in any UI toolkits, and there are reasons for that
Presumably because its antialiasing is crap? But there's nothing inherent to fragment-oriented approaches that prevents you from doing good AA, and they slot nicely into the existing rasterization pipeline (which is why Slug has lower feature-level requirements than Pathfinder). They also permit arbitrary domain transformations (with some caveats, since you still have to compute a bounding box), and given appropriate space partitioning they should not be significantly slower than scanline algorithms.
Also: UI toolkits are not known for being on the leading edge of graphics research. I think fastuidraw demonstrates this rather well. Insofar as there is exciting work happening in industry, it is mainly happening in web browsers; and I would expect Mozilla and Google to devote their efforts to Pathfinder and Skia, respectively.
No, Slug’s technique can handle AA and do it well. The problem with Slug for general-purpose UI frameworks is that it needs to do a lot of pre-processing on its data to do the good job it does.
Slug's AA is only one-dimensional. From the paper (end of section 2):
> Adding and subtracting these fractions from the winding number has the effect of antialiasing in the direction of the rays. Averaging the final coverages calculated for multiple ray directions antialiases with greater isotropy, but at a performance cost. Considering only rays parallel to the coordinate axes is a good compromise, especially when combined with supersampling, as discussed later.
I.e., you don't get a real 2-d coverage result, only an amalgamation of a number of 1-d coverage results; and you must trade off performance against quality. Other approaches do not require that tradeoff.
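To make the 1-d scheme concrete, here's a toy sketch of fractional-winding coverage along axis-aligned rays. Everything is simplified and the names are mine, not Slug's: crossings are given as precomputed (position, winding sign) pairs, whereas the real shader finds them by solving the glyph's Bézier curves per fragment.

```python
def ray_coverage(crossings, center):
    """1-D coverage along one axis-aligned ray through a pixel.

    crossings: list of (position, sign) pairs, one per curve crossing
    of the ray, with sign +1/-1 for the winding direction; positions
    are in pixel units along the ray, and `center` is the coordinate
    of the pixel center on that ray.

    A crossing more than half a pixel behind the center contributes
    nothing, one more than half a pixel ahead contributes its full
    sign, and anything in between contributes fractionally -- that
    fractional part is the antialiasing.
    """
    cov = 0.0
    for pos, sign in crossings:
        cov += sign * max(0.0, min(1.0, 0.5 + (pos - center)))
    return max(0.0, min(1.0, cov))

def two_ray_coverage(x_crossings, y_crossings, cx, cy):
    """Average the horizontal and vertical 1-D results (the
    axes-only compromise the paper describes): better isotropy than a
    single ray, but still not true 2-D area coverage."""
    return 0.5 * (ray_coverage(x_crossings, cx) +
                  ray_coverage(y_crossings, cy))
```

For example, a filled span from x=0 to x=10 (exit crossing of sign +1 at 10, entry of sign -1 at 0, as seen by a ray cast toward +x) gives a pixel centered at x=5 coverage 1.0, and a pixel centered at x=9.75 coverage 0.75: a correctly antialiased vertical edge, but only because the edge happens to be perpendicular to the ray.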
Analytic 2-d coverage can be done more cheaply than n 1-d samples (n is probably in the neighborhood of 4-6), and produces better (mathematically ideal, albeit with uncomfortable caveats) results. (Note that 4-6 samples doesn't mean 4-6x slower, due to space partitioning, buffers, and other fixed costs, as well as locality. And I think Slug takes 2 samples by default as is.)
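For contrast, here's what "mathematically ideal" 2-d coverage means, as a minimal reference sketch (my own simplification, not any particular renderer's code): clip the shape to the pixel square and take the exact area. I use a convex polygon with Sutherland-Hodgman clipping and the shoelace formula; production rasterizers instead accumulate per-edge signed areas (and handle curves), but the coverage they compute is the same exact quantity.

```python
def clip_halfplane(poly, a, b, c):
    """Clip a convex polygon (list of (x, y) vertices, CCW) to the
    half-plane a*x + b*y + c >= 0 (one Sutherland-Hodgman pass)."""
    out = []
    n = len(poly)
    for i in range(n):
        px, py = poly[i]
        qx, qy = poly[(i + 1) % n]
        dp = a * px + b * py + c
        dq = a * qx + b * qy + c
        if dp >= 0:
            out.append((px, py))
        if (dp >= 0) != (dq >= 0):
            t = dp / (dp - dq)  # edge crosses the boundary
            out.append((px + t * (qx - px), py + t * (qy - py)))
    return out

def area(poly):
    """Shoelace formula for polygon area."""
    s = 0.0
    for i in range(len(poly)):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % len(poly)]
        s += x0 * y1 - x1 * y0
    return 0.5 * s

def pixel_coverage(shape, px, py):
    """Exact coverage of the unit pixel [px, px+1] x [py, py+1]
    by the convex polygon `shape` -- no sampling, no ray count to
    trade off against quality."""
    poly = shape
    for a, b, c in [(1, 0, -px), (-1, 0, px + 1),
                    (0, 1, -py), (0, -1, py + 1)]:
        poly = clip_halfplane(poly, a, b, c)
        if not poly:
            return 0.0
    return abs(area(poly))
```

A pixel straddling the edge of a 10x10 square at x=10 (pixel spanning x 9.5..10.5) gets exactly 0.5 coverage regardless of the edge's orientation, which is where this wins over any fixed set of 1-d rays.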
Oh I wasn’t pointing it out as a critical response to it being your thesis. I’m actually very interested to see how it turns out, because I’m digging into this space at the moment.
I’m trying to build a platform-agnostic styling language specifically for UI/UX designers, and it’s leading me down the path of “render everything via WebGPU”.
Is there a way I can follow your progress? Very keen on hearing more about your research if/when it’s ready.
I don't know much about the font space, but enough to know it's a really hard problem, and the Slug team seems to do a really good job.