I’ve recently finished implementing Haphaestus’s rendering engine, which I’ve named Mondrian after a Dutch artist who liked drawing boxes. This is a GPU-accelerated library for drawing the backgrounds & borders CSS allows webdevs to add to their pages. Hardware-accelerated text rendering is handled by Typograffiti.
The hardware acceleration is there to avoid the need for uglier optimizations elsewhere in the Argonaut Stack, & to give myself an excuse to play with GLSL. Other browsers & UI frameworks are adopting this trend too, & it simplifies the performance advice for webdevs: they no longer need to worry about which rendering is hardware-accelerated once everything is!
Mondrian is mostly implemented in GLSL, which proved very straightforward. Most of the graphics operations I needed to perform had corresponding GLSL builtins!
Beyond that, Mondrian serves to pass Haskell datastructures over uniforms to GLSL shaders running across triangle pairs, & to parse those datastructures from CSS properties. OpenGL (with the aid of Typograffiti’s lightweight abstractions) was the perfect level of abstraction for me to work at, though it didn’t have great handling for arrays.
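To give a flavour of that plumbing: the Haskell side largely amounts to flattening records into the plain lists of floats a uniform upload expects. A minimal base-only sketch, with hypothetical types rather than Mondrian’s actual ones:

```haskell
-- Hypothetical stand-in for one of Mondrian’s uniform payloads:
-- the four border widths of an element, in CSS’s clockwise order.
data BorderSides = BorderSides
    { topWidth, rightWidth, bottomWidth, leftWidth :: Float }

-- Flatten the record into the flat list of floats that a
-- glUniform4f-style upload expects.
flattenBorder :: BorderSides -> [Float]
flattenBorder b = [topWidth b, rightWidth b, bottomWidth b, leftWidth b]
```

The real code hands such lists off through Typograffiti’s abstractions, but the ordering convention is the important part: Haskell & GLSL have to agree on it by hand, since nothing checks it for you.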
A testscript allows me to view how different CSS properties are rendered on a single element; this was crucial in ensuring Mondrian works correctly! I have not figured out how to write useful unittests against OpenGL.
Without going out of my way to encode arrays into textures for GLSL to decode, I mostly resorted to splitting every background layer into its own rectangle, with a solid fill behind it & borders on top. All borders for an element are rendered into a single rect to simplify implementing rounded rects in the future.
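That painting order can be sketched like so — a hypothetical simplification of the idea, not Mondrian’s real types. Note that CSS lists background layers topmost-first, so they get reversed to paint bottom-up:

```haskell
-- Hypothetical sketch: the rects drawn for one element, in order.
data Layer = Fill | BgImage String | Borders deriving (Eq, Show)

-- CSS lists background layers topmost-first, so reverse them into
-- bottom-to-top painting order, bracketed by the solid fill
-- underneath & the single all-borders rect on top.
layersToRects :: [String] -> [Layer]
layersToRects images = Fill : map BgImage (reverse images) ++ [Borders]
```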
Also there’s gradient colour stops, which I preprocessed according to the spec & limited to 10 items (that should be enough for anyone?) so they fit in a fixed-size array. I also manually unrolled the loops to work around what appear to be GPU bugs. I don’t know how rare that GPU bug is, but at least Mondrian’s more resilient now!
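That preprocessing is roughly the spec’s position-resolution algorithm: unpositioned first & last stops default to 0 & 1, runs of unpositioned stops are spaced evenly between their positioned neighbours, & no stop may precede an earlier one. A base-only sketch with hypothetical names, not Mondrian’s actual code:

```haskell
-- Hypothetical sketch of CSS colour-stop position fixup,
-- truncated to the 10 stops that fit a fixed-size uniform array.
resolveStops :: [Maybe Float] -> [Float]
resolveStops []    = []
resolveStops stops = take 10 $ monotonic $ spread withEnds
  where
    n = length stops
    -- Unpositioned first & last stops default to 0 & 1.
    withEnds = zipWith anchor [0 :: Int ..] stops
    anchor _ (Just x) = Just x
    anchor 0 _        = Just 0
    anchor i _ | i == n - 1 = Just 1
               | otherwise  = Nothing
    -- Space each run of unpositioned stops evenly between its
    -- positioned neighbours.
    spread (Just a : rest) = a : go a rest
      where
        go _ [] = []
        go prev xs = case span (== Nothing) xs of
            (holes, Just b : more) ->
                let k    = fromIntegral (length holes + 1)
                    step = (b - prev) / k
                in [prev + step * fromIntegral j | j <- [1 .. length holes]]
                   ++ b : go b more
            _ -> []  -- unreachable: the last stop is positioned
    spread _ = []
    -- A stop may never precede an earlier one.
    monotonic = scanl1 max
```

For example, three fully-unpositioned stops resolve to 0, ½, & 1; & a stop positioned before its predecessor gets raised to match it, just as the spec asks.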
Background images, to which I’ll also lower foreground images after layout, are loaded using dependency injection (HURL in the future) & Juicy Pixels. If you want to add support for another image format, see if you can interest the Juicy Pixels maintainers. Though I do have plans to pull in Rasterific SVG, since I consider SVG a vital web image format.
With a bit of effort the images are converted to a colourspace widely supported by OpenGL & the underlying bytes are uploaded to the GPU. I strongly considered using atlas textures, but in the end I decided to parse background-repeat to the GL_TEXTURE_WRAP texture properties. Unfortunately several images I tested against (cartoon cats from the webcomics “Pepper & Carrot” & “Lackadaisy”) fail to render correctly, & I’ve been unable to figure out that bug.
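The correspondence is roughly as follows — my own sketch of the mapping with hypothetical names, where no-repeat additionally relies on a transparent border colour & space/round would need extra shader logic to be fully correct:

```haskell
-- Plain stand-ins for the GL_TEXTURE_WRAP_S/T enum values
-- (hypothetical names; real code hands OpenGL its own constants).
data WrapMode = GlRepeat | GlClampToBorder deriving (Eq, Show)

-- Rough mapping from CSS background-repeat keywords, per axis.
-- no-repeat clamps to a transparent border colour; space & round
-- are approximated here, needing extra shader logic to fully
-- match the spec.
wrapFor :: String -> WrapMode
wrapFor "no-repeat" = GlClampToBorder
wrapFor "space"     = GlClampToBorder
wrapFor _           = GlRepeat  -- repeat & round both tile
```

Parsing the keyword once per axis & letting the texture sampler do the tiling is what lets the shader stay oblivious to background-repeat entirely.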
Colours are parsed with the aid of the “colour” package from Hackage, with a couple of globals the background & border objects make sure to share. Though extracting the parsed results from this library to hand to GLSL was trickier than I expected.
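The GLSL-facing side of that extraction amounts to producing the four normalized floats a vec4 uniform expects. Here’s the shape of the problem sketched without the colour library itself, via a hypothetical helper that decodes #rrggbb hex directly:

```haskell
import Data.Char (digitToInt)

-- Hypothetical helper: decode a #rrggbb colour into the four
-- normalized floats a GLSL vec4 uniform expects (alpha = 1).
hexToVec4 :: String -> [Float]
hexToVec4 ['#', r1, r2, g1, g2, b1, b2] =
    [channel r1 r2, channel g1 g2, channel b1 b2, 1]
  where channel hi lo =
          fromIntegral (digitToInt hi * 16 + digitToInt lo) / 255
hexToVec4 _ = [0, 0, 0, 1]  -- fall back to opaque black
```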
Browser engines (and other UI libraries) which do not fully embrace hardware acceleration have needed to include logic tracking which elements actually need to be rerendered. Worse, they’ve had to introduce logic for moving large chunks of the rendered UI around onscreen to minimize how much scrolling needs to rerender, which often involves GPU acceleration anyways.
GPUs have been commonplace in home computers for a while now, so I feel comfortable relying on them… In the cases where one’s not available, I have my eyes on Pixman.