I sometimes do development work in a Linux virtual machine on my MacBook Pro. At some point in the past, OpenGL performance in this environment was at near-native levels. Recently, though, it seems performance has been bogged down significantly.
Even in GLSandbox, which should be almost entirely GPU bound, I’m seeing frame rates hovering around 10 FPS with CPU usage around 80%. The only conclusion I can draw is that the driver is falling back to some CPU path, perhaps doing pixel format conversions or something else that gets expensive. This is undoubtedly related to the fact that the OpenGL driver in the virtual environment is capped at OpenGL 2.1, while I’m rendering to textures and blitting between FBOs to get the frames on the screen, which relies on functionality from various OpenGL extensions. A quick test with a (presumably) simpler OpenGL rendering app shows good performance, supporting the assumption that rendering directly to the window back buffer might be significantly faster.
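For concreteness, the offscreen path in question looks roughly like this. This is a minimal sketch assuming a 2.1 context with the GL_EXT_framebuffer_object and GL_EXT_framebuffer_blit extensions available; the function and variable names are illustrative, not actual GLSandbox code:

```c
#include <GL/glew.h>

static GLuint fbo, tex;

void init_offscreen(int width, int height)
{
    /* Color texture that the scene is rendered into. */
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    /* FBO with the texture as its color attachment. */
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);
}

void present_frame(int width, int height)
{
    /* Blit the offscreen frame to the window back buffer. This is the
     * kind of step a virtualized driver may handle on the CPU. */
    glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, fbo);
    glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, 0);
    glBlitFramebufferEXT(0, 0, width, height, 0, 0, width, height,
                         GL_COLOR_BUFFER_BIT, GL_NEAREST);
}
```

If the blit (or the texture format it has to convert from) is the slow part, a simpler app that draws straight into the back buffer would sidestep it entirely, which would explain the performance gap.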
I tried fiddling with various buffer format parameters and other settings to see if anything made a difference (roughly the kind of thing sketched below), but to no avail. I also contemplated setting up an OpenGL profiler to pinpoint the slow operations, but the open source options for this seem daunting (at least compared to what Apple offers on OS X). Now, I could continue debugging this and spend hours digging out the underlying reason, but that would be mostly for my own benefit, so that I wouldn’t have to boot up a native Linux environment every now and then. Therefore I’m dropping this line of inquiry and getting back to stabilizing the 1.15 release in general. (Although I must say it irritates me to leave this kind of issue unsolved.)
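For the record, the fiddling mentioned above amounts to little more than swapping formats on the offscreen texture and re-measuring. A hypothetical sketch; the specific formats here are guesses at what might dodge a conversion, not a known fix:

```c
/* Reallocate the offscreen texture with a different internal format
 * and upload format, in case the slow path is a pixel conversion. */
void try_format(GLuint tex, int width, int height,
                GLenum internal_fmt, GLenum pixel_fmt)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, internal_fmt, width, height,
                 0, pixel_fmt, GL_UNSIGNED_BYTE, NULL);
}

/* e.g. GL_BGRA often matches the native layout on little-endian hosts:
 *   try_format(tex, w, h, GL_RGB8, GL_BGRA);
 *   try_format(tex, w, h, GL_RGBA8, GL_BGRA); */
```

In my case none of these combinations moved the frame rate, which is what pushed me toward suspecting the blit itself rather than the texture format.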