Ugh, I feel like I've been writing a lot of negative posts lately.
Another issue is that if we ever get hardware-accelerated rendering working, all the display_xxx functions that draw primitives will likely have to go for performance reasons, leaving only the functions for drawing images and text. Direct access to the screen pixel buffer will also have to go.
There is also the issue that my current OpenGL renderer uploads all images (bild_t) to a texture atlas in VRAM during initialization; after that, the atlas cannot be modified. markohs has expressed interest in changing that to some form of streaming texture atlas, especially since a static atlas also makes it impossible to have more than 64k images. My OpenGL renderer has, however, hit a major obstacle.
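A streaming atlas along those lines could pair a simple region allocator with incremental texture uploads. A minimal sketch of the allocator side, assuming a "shelf" packing scheme (the class and names here are my own illustration, not markohs's actual design):

```cpp
#include <vector>

// Minimal "shelf" allocator for a streaming texture atlas: rows (shelves)
// are opened on demand and images are packed left to right within a row.
// Freed space is not reclaimed here; a real implementation would need that.
struct AtlasRegion { int x, y, w, h; };

class ShelfAtlas {
public:
    ShelfAtlas(int width, int height) : width_(width), height_(height) {}

    // Returns true and fills 'out' if the image fits somewhere in the atlas.
    bool allocate(int w, int h, AtlasRegion &out) {
        for (Shelf &s : shelves_) {
            if (h <= s.height && s.cursor + w <= width_) {
                out = { s.cursor, s.y, w, h };
                s.cursor += w;
                return true;
            }
        }
        // No existing shelf fits: open a new one below the current ones.
        int y = shelves_.empty() ? 0 : shelves_.back().y + shelves_.back().height;
        if (y + h > height_ || w > width_) return false;  // atlas is full
        shelves_.push_back({ y, h, w });
        out = { 0, y, w, h };
        return true;
    }

private:
    struct Shelf { int y, height, cursor; };
    int width_, height_;
    std::vector<Shelf> shelves_;
};
```

On the GL side, a successful allocate() would be followed by a glTexSubImage2D call into the already-created atlas texture to stream the new image in, which is what would lift the static 64k-image limit.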
One aspect of this idea can, however, carry over to my hardware-accelerated plans: perhaps windows should render to their own render targets, but only when they have changed. These render targets would then be used as textures when rendering the GUI layer onto the screen each frame. I don't know if that will be more efficient than simply rendering all windows directly to the screen, since switching textures is expensive. These per-window textures would be distinct from the textures for images, though.
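The dirty-flag part of that scheme can be sketched separately from the GL calls. In a real renderer the redraw step would bind the window's framebuffer object and re-record its widgets, and composing the GUI would draw one textured quad per window; everything here is a hypothetical illustration, not code from the actual renderer:

```cpp
#include <vector>

// Each window owns a cached render target (the texture id here is just a
// placeholder for an FBO color attachment). It is re-rendered only when
// marked dirty; composing the GUI each frame then just samples the textures.
struct Window {
    unsigned texture = 0;
    bool dirty = true;   // new windows start dirty so they render once
    int redraws = 0;     // instrumentation for this sketch only
};

// Re-render only the windows whose contents changed since the last frame.
// Returns the number of windows actually redrawn.
int update_render_targets(std::vector<Window> &windows) {
    int redrawn = 0;
    for (Window &w : windows) {
        if (!w.dirty) continue;   // cached texture is still valid
        // ... bind w's framebuffer and draw its widgets here ...
        ++w.redraws;
        w.dirty = false;
        ++redrawn;
    }
    return redrawn;
}
```

Whether this beats drawing every window directly each frame would come down to how often windows change versus the cost of the extra texture binds during composition.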