Author Topic: Implement DRR, it's worth the effort!  (Read 17636 times)
cheb
Lesser Nub


Cakes 3
Posts: 127



WWW
« on: January 11, 2017, 12:53:49 PM »

Dynamic Resolution Rendering is a powerful mechanism.
I tested it extensively with my game engine (the most sophisticated multi-threaded, background-loading rotating cube there is) and was overjoyed to see it implemented in Dishonored II as a default setting.

DRR is as simple as using render-to-texture to dynamically adjust the render resolution to achieve a desired FPS. It's an incredibly potent technique despite its simplicity. It can also be used to get on-the-fly supersampling if one allows for resolutions *higher* than the screen, which is how OA3 could utilize the extra power of muscle cards like the GTX 1080 and the like. FYI, I almost set my laptop aflame by rendering my rotating cube in 4K. Cheesy
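To illustrate the idea, a minimal C sketch (the helper name and the clamping range are assumptions, not taken from any existing engine): each frame the renderer derives its render-target size from a scale factor, and letting that factor exceed 1.0 is what gives the on-the-fly supersampling.

Code:
/* Minimal sketch of the core DRR idea (made-up helper, not OA code):
   pick a render-target size each frame from a scale factor, where
   1.0 = native resolution and values above 1.0 give supersampling. */
#include <math.h>

typedef struct { int w, h; } RenderSize;

RenderSize drr_pick_render_size(int screen_w, int screen_h, float scale)
{
    /* assumed clamping range: quarter resolution up to 2x supersampling */
    if (scale < 0.25f) scale = 0.25f;
    if (scale > 2.0f)  scale = 2.0f;

    RenderSize s;
    s.w = (int)lroundf(screen_w * scale);
    s.h = (int)lroundf(screen_h * scale);
    return s;
}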

Also, this let me cheat my way out of having a mechanism for video mode switching in my engine  Roll Eyes
Logged

Imma lazy dreamer. I achieved nothing.
Gig
In the year 3000
***

Cakes 45
Posts: 4394


WWW
« Reply #1 on: January 11, 2017, 04:17:56 PM »

I did not know about it.

Trying to search for some information, I found pages like these:
https://software.intel.com/en-us/articles/dynamic-resolution-rendering-sample
http://www.geforce.com/whats-new/articles/dynamic-super-resolution-instantly-improves-your-games-with-4k-quality-graphics

It may be interesting, although I do not see how it could avoid being distracting if the rendering quality quickly drops from Full HD to an SVGA-like resolution when entering a polygon-heavy area. I guess some filters may mask it somewhat, but they would still have limits, wouldn't they?

Could you please link some more documentation about DRR?
Logged

I never want to be aggressive, offensive or ironic with my posts. If you find something offending in my posts, read them again searching for a different mood there. If you still see something bad with them, please ask me infos. I can be wrong at times, but I never want to upset anyone.
fromhell
Administrator
GET A LIFE!
**********

Cakes 35
Posts: 14520



WWW
« Reply #2 on: January 11, 2017, 09:31:47 PM »

While not dynamic, the engine already supports r_virtualMode which is the concept of rendering a low resolution onto a higher one.
Logged

asking when OA3 will be done won't get OA3 done.
Progress of OA3 currently occurs behind closed doors alone

I do not provide technical support either.

new code development on github
cheb
Lesser Nub


Cakes 3
Posts: 127



WWW
« Reply #3 on: January 12, 2017, 04:37:03 AM »

Well, the core principle is it being dynamic. So having a virtual resolution already supported is a good start.

Resolution changes have *surprisingly* little effect when you are looking at a dynamic scene.

In Dishonored 2, the effects could be seen *if* you were standing still, as minor flickering on thin details like railings or open-work towers, and minor flickering of shadow details as the shadow map resolution changes. In motion...? Not noticeable at all.

I'd say that in OA, with its fast-paced nature, many players won't immediately notice the difference between 640x480 and 1600x1200. Meanwhile the technique is a *huge* help on weak cards: let's just say I hated it with a passion when, back in the day, I had to zoom through rocket trails in Quake 3. Those things are framerate murderers, especially on fillrate-bound poor man's cards like the GF2 MX or GF FX 5200.


DRR wipes the floor with this particular problem, especially if you implement it using the "sharp drops, slow rising" principle I came up with.

Quote
Could you please link some more documentation about DRR?
Uh, I'm a man of practice, not words.
In fact, I invented DRR independently, just like I invented multithreading in my old, abandoned shooter for MS-DOS ccool You see, I didn't know it already existed or what it was called; it was only explained to me on the forum that this is called "DRR" after I posted my research results and my test executable, calling it "adaptive MSAA". shifty

You can try watching it on my rotating cube (well, nowadays it's a ball): http://chebmaster.com/downloads/chentrah_test020.zip  shifty F12+Q switches resolution to manual control by the mouse wheel; qf=25 corresponds to rendering at half resolution, qf=400 at double resolution. "ESC" main menu, "F11" fullscreen, "F3" screenshot, "2" game mode with mouse look and an FPS cap of 80, "1" menu mode with an FPS cap of 30. People tell me the loading often hangs on their machines  Lips Sealed Also, you have to erase the sessions folder to switch between the 32-bit and 64-bit executables, otherwise they crash.
Required minimum: OpenGL 2.1 / Windows XP or Wine 1.0
Logged

Imma lazy dreamer. I achieved nothing.
Gig
In the year 3000
***

Cakes 45
Posts: 4394


WWW
« Reply #4 on: January 12, 2017, 04:49:16 AM »

While not dynamic, the engine already supports r_virtualMode which is the concept of rendering a low resolution onto a higher one.
It could be a good starting point for implementing DRR, in case you like the idea...

From what I can guess, this r_virtualMode cvar can be set to -1 = disabled (default) or to a value like 0, 1, 2, 3, 4, etc., which corresponds to an entry of the list that you can get using the "/modelist" command (see also (DO NOT LINK) h t t p s : / / openarena . wikia . com/wiki/Manual/Graphic_options#Resolution). /vid_restart required after the change.
So, the scene is rendered at r_virtualMode resolution and then adapted to r_mode resolution, right?
This may also be very useful for Pelya's Android port, which due to some limit of its SDL-Android "layer" (forgive me for my ignorance about the exact terms) can only use the phone's native resolution: it cannot stretch a lower resolution to fullscreen, so you are forced to render the game in Full HD if you have a Full HD screen, even if your GPU isn't powerful enough for a good framerate at such a resolution.

I just did some tests with a recent engine nightly build, but it doesn't seem to work very well on my system: the virtual screen only takes up a small part of the window (in the lower left corner); it is not stretched to fill the whole window. This happens in both fullscreen and windowed modes.
You can see a couple of screenshots attached here.
Strange1.png: r_virtualmode 2 (512x384), r_mode 5 (960x720)
Strange2.png: r_virtualmode 2 (512x384), r_mode 6 (1024x768)

Environment: Windows 10 64-bit, AMD Radeon HD 5670 with drivers 15.201.1151.1008, dual monitor system (desktop resolution 1280x1024 each, which should be a 5:4 ratio according to this calculator).

After that test, with r_virtualmode set back to -1 (disabled), I also noticed a similar behavior which is not caused by r_virtualmode (it also happens with the old 0.8.8 binaries, which did not have virtualmode support), but maybe the two share a common cause: with some video modes, such as 2 (512x384) or 5 (960x720), in fullscreen mode the screen looks similar to these screenshots (except for the desktop, of course), with the game using only the lower left part of the monitor and leaving the rest of it completely black. Is that expected behavior? Unlike the virtualmode problem, this one does not happen with some other video modes, such as 3 (640x480), 4 (800x600) or 6 (1024x768).

Uh, I'm a man of practice, not words.
In fact, I invented DRR independently, just like I invented multithreading in my old, abandoned shooter for MS-DOS ccool You see, I didn't know it already existed and how it was called, and it was explained to me on the forum that this is called "DRR" when I posted my research results and my test executable calling it "adaptive MSAA". shifty
But then, were you thinking of developing the OA implementation of it yourself on GitHub, or of giving Fromhell "hints" about how to do it in case she would like to do it herself (e.g. a flowchart of the checks to perform to determine the needed resolution...)?

Also, changing resolution changes the size of command console output a lot... that would be very noticeable in-game! I don't know if Fromhell's r_virtualmode at the moment manages that in some way (my tests with that cvar encountered the problems I mentioned above).
« Last Edit: January 12, 2017, 12:09:36 PM by Gig » Logged

I never want to be aggressive, offensive or ironic with my posts. If you find something offending in my posts, read them again searching for a different mood there. If you still see something bad with them, please ask me infos. I can be wrong at times, but I never want to upset anyone.
cheb
Lesser Nub


Cakes 3
Posts: 127



WWW
« Reply #5 on: January 12, 2017, 08:55:07 AM »

Quote
yourself on github
IF I ever manage to break free from the Curse of the Rotating Cube (I've been writing that engine for 15 years now, folks, and I haven't produced anything playable since that DOS shooter of 1997  Lips Sealed), then, of course.
Sharing sources would be a bit pointless because, well, Free Pascal  Roll Eyes

Quote
Also, changing resolution changes the size of command console output a lot...
Well, obviously you have to work at two resolutions in parallel: the GUI at the static screen (or virtual) resolution and the 3D game scene at the dynamic resolution.
The steps are (see the sketch below):
1. Clear the back buffer, or only its side panels (if any) not covered by the 3D viewport.
2. Render the scene into a texture at the dynamic resolution.
3. Render that texture into the back buffer, stretching its used part to the viewport, or downsampling with a shader if you use a higher resolution than the viewport. Up to 2x by 2x oversampling it doesn't need a shader, as linear filtering does the downsampling trick.
4. Render the GUI elements into the back buffer.
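For illustration, a rough C/OpenGL sketch of those four steps, assuming an OpenGL 2.1 context with EXT_framebuffer_object and extension loading already done, identity matrices, and a pre-created FBO with a color texture and depth attachment; scene_fbo, scene_tex and the draw_* callbacks are made-up names, not anything from the OA engine:

Code:
/* Sketch of the four steps above.  Assumes a GL 2.1 + EXT_framebuffer_object
   context, identity modelview/projection matrices, and extension loading
   already done; scene_fbo/scene_tex and the draw_* callbacks are made up. */
#include <GL/gl.h>
#include <GL/glext.h>

extern GLuint scene_fbo;          /* FBO with scene_tex + a depth attachment */
extern GLuint scene_tex;          /* color texture of size fbo_w x fbo_h     */
extern int    fbo_w, fbo_h;
extern void   draw_world(void);   /* 3D scene */
extern void   draw_gui(void);     /* 2D / HUD */

void drr_frame(int screen_w, int screen_h, int dyn_w, int dyn_h)
{
    /* 1. clear the back buffer (needed only if the quad won't cover it all) */
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    glClear(GL_COLOR_BUFFER_BIT);

    /* 2. render the scene into the texture at the dynamic resolution */
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, scene_fbo);
    glViewport(0, 0, dyn_w, dyn_h);
    glScissor(0, 0, dyn_w, dyn_h);
    glEnable(GL_SCISSOR_TEST);                /* don't clear the unused part */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    draw_world();
    glDisable(GL_SCISSOR_TEST);

    /* 3. stretch the used part of the texture over the back buffer */
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    glViewport(0, 0, screen_w, screen_h);
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, scene_tex);
    float u = (float)dyn_w / fbo_w;           /* only the used sub-rectangle */
    float v = (float)dyn_h / fbo_h;
    glBegin(GL_QUADS);        /* fixed pipeline, identity matrices assumed */
      glTexCoord2f(0, 0); glVertex2f(-1, -1);
      glTexCoord2f(u, 0); glVertex2f( 1, -1);
      glTexCoord2f(u, v); glVertex2f( 1,  1);
      glTexCoord2f(0, v); glVertex2f(-1,  1);
    glEnd();

    /* 4. GUI at screen resolution on top */
    draw_gui();
}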

AFAIR I added virtual resolution to Quake 2 to make its GUI look normal-sized in my customized Q2 renderer, instead of ants-crawling-along-the-screen-edges tiny, but that was a long time ago and it used the Quake 2 to Delphi port, so I did it in Pascal anyway  Embarrassed
Well, anyway, I made it so that choosing a resolution like 400x300 made the Q2 engine think that was the screen resolution and scale the GUI accordingly, while actually rendering at the default desktop resolution. Pity I had no idea about wide-screen monitors back then and assumed it's always 4:3  Embarrassed

Quote
e.g. a flowchart of the checks to performs to determine the needed resolution...
It's as ridiculously easy as creating a feedback loop and then calibrating its responsiveness.

First, you declare a load-balancing factor; let's name it the Quality Factor (QF).

You constantly monitor video card performance. I personally found the most useful metric to be the duration of the SwapBuffers() call averaged over the last several frames, so I use it as my main measuring stick, with averaged FPS serving as a sanity check: if FPS drops below the allowed threshold, QF starts dropping rapidly.

Each frame you calculate the desired QF based on the SwapBuffers() duration and the FPS (each has its own formula; choose the lower result). Then you update QF: it should approach the desired value slowly (exponentially), dropping much more readily than rising.

Then you use that QF value to govern everything about the render, from the render target resolution to the LOD switching distance (my ball had LODs, but I locked it to one of the lower ones because it is still rendered using the glBegin... glVertex... glEnd heresy, which throws a wrench into the measurement procedure because the performance of this backward-compatibility stub drops sharply at around 20,000 triangles). I think it's also important to have some hysteresis in the renderer's response to QF changes, but I haven't tested that part extensively yet.
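For illustration, a self-contained C sketch of such a feedback loop; the 60 FPS target, the 40 FPS sanity threshold, the smoothing rates and the exact formulas are made-up assumptions, not the values from the load balancer described above:

Code:
/* Self-contained sketch of the feedback loop described above.  The 60 FPS
   target, the 40 FPS sanity threshold and the smoothing rates are all
   made-up values, not taken from any real load balancer. */
#include <stdio.h>

static float qf = 1.0f;   /* current quality factor, 1.0 = native resolution */

/* swap_ms: SwapBuffers() duration averaged over the last frames
   fps:     averaged frames per second                            */
void drr_update_quality(float swap_ms, float fps)
{
    const float target_ms = 16.6f;   /* aim for ~60 FPS */
    const float min_fps   = 40.0f;   /* sanity-check threshold */

    /* each measurement yields its own desired qf; take the lower one */
    float desired_swap = qf * target_ms / (swap_ms > 0.1f ? swap_ms : 0.1f);
    float desired_fps  = (fps < min_fps) ? qf * (fps / min_fps) : desired_swap;
    float desired = (desired_swap < desired_fps) ? desired_swap : desired_fps;

    if (desired < 0.25f) desired = 0.25f;
    if (desired > 2.0f)  desired = 2.0f;

    /* exponential approach: drop quickly, rise slowly */
    float rate = (desired < qf) ? 0.5f : 0.05f;
    qf += (desired - qf) * rate;
}

int main(void)
{
    /* toy demo: a SwapBuffers spike drags qf down fast, recovery is gradual */
    float samples[][2] = { {16.0f, 62.0f}, {30.0f, 33.0f}, {30.0f, 33.0f},
                           {16.0f, 62.0f}, {16.0f, 62.0f}, {16.0f, 62.0f} };
    for (int i = 0; i < 6; i++) {
        drr_update_quality(samples[i][0], samples[i][1]);
        printf("frame %d: qf = %.2f\n", i, qf);
    }
    return 0;
}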

For my implementation details (I doubt they would be interesting), see my load balancer class in chentrah\modules\chentrah\src\mo_loba.pp in the test .zip provided above. The actual measurement may be scattered across many units and involves such horrors as assembly language blocks to utilize RDTSC, with conditional compilation to use system timers instead on the ARM platform (my engine is now able to crash with an error message on a Raspberry Pi 2).

How to set up your render-to-texture... Well, I suppose it's more a question of balancing the supported hardware range against the amount of hard work.
For my engine, I drew the line at video cards with proper OpenGL 2.1 support, with FBO without extensions. So, for example, that pathetic wuss the GF 7025 hobbles along, while the GF FX line is straight out.

Anything earlier and you have to delve into the hell of p-buffers or early, not-exactly-standard FBO implementations.

If you delve deeper still, I vouch for glCopyTexSubImage2D() to copy a chunk of the back buffer into a texture. It worked surprisingly well at lower resolutions like 512x256, even on a GeForce 2 MX 400 (in fact, my Quake 2 renderer used that to re-create the underwater screen warping available in the software renderer).
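For illustration, a rough C sketch of that pre-FBO fallback path, assuming a GL 1.2+ context and a pre-allocated power-of-two texture; lowres_tex and draw_world() are made-up names:

Code:
/* Sketch of the pre-FBO fallback: render small into the back buffer, copy
   that region into a texture with glCopyTexSubImage2D(), then stretch it.
   Assumes a GL 1.2+ context; lowres_tex is a pre-allocated power-of-two
   texture at least dyn_w x dyn_h in size (names are made up). */
#include <GL/gl.h>

extern GLuint lowres_tex;
extern void   draw_world(void);

void drr_copytex_path(int dyn_w, int dyn_h, int screen_w, int screen_h)
{
    /* render the scene into the lower-left corner of the back buffer */
    glViewport(0, 0, dyn_w, dyn_h);
    glScissor(0, 0, dyn_w, dyn_h);
    glEnable(GL_SCISSOR_TEST);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    draw_world();
    glDisable(GL_SCISSOR_TEST);

    /* copy that corner into the texture; the data stays on the GPU, but on
       very old cards this copy is the expensive part */
    glBindTexture(GL_TEXTURE_2D, lowres_tex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, dyn_w, dyn_h);

    /* then draw it stretched over the full viewport, as in step 3 of the
       FBO version (textured quad with UVs dyn_w/tex_w, dyn_h/tex_h) */
    glViewport(0, 0, screen_w, screen_h);
}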

P.S. I found that creating FBOs is dirt cheap unless they are of a ridiculous size like 6k x 6k. Tested on Intel and NVIDIA but not on AMD.
« Last Edit: January 12, 2017, 08:58:01 AM by cheb » Logged

Imma lazy dreamer. I achieved nothing.
Gig
In the year 3000
***

Cakes 45
Posts: 4394


WWW
« Reply #6 on: January 12, 2017, 09:46:17 AM »

Wow, with that the texture quality really improves a lot, looking at your screenshots!
http://chebmaster.com/q2facelift/ss/q2s_items_before.jpg
http://chebmaster.com/q2facelift/ss/q2s_items_after.jpg
I already had the impression that Q2 textures looked strangely over-blurry... were they always downsampled, then?

Ooops, that's a bit off-topic...
« Last Edit: January 12, 2017, 11:16:50 AM by Gig » Logged

I never want to be aggressive, offensive or ironic with my posts. If you find something offending in my posts, read them again searching for a different mood there. If you still see something bad with them, please ask me infos. I can be wrong at times, but I never want to upset anyone.
Gig
In the year 3000
***

Cakes 45
Posts: 4394


WWW
« Reply #7 on: January 12, 2017, 10:30:05 AM »

... back on topic:
Quote from: Cheb
How to set your render to texture... Well, I suppose it's more a question of balance between supported hardware range and amount of hard work.
For my engine, I drew the line at video cards having proper OpenGL 2.1 support, with FBO without extensions. So, for example, the pathetic wuss of GF 7025 hobbles along while GF FX is straight out.

Everything earlier, and you have to delve into the hell of p-buffers or early not-exactly-standard FBO implementations.
I think one of Fromhell's goals is to make OA3 run on all systems where the original Q3A did run (yes, even Windows 9.x & 3dfx)...

Maybe one could simply not enable the feature by default and write something like "requires OpenGL x.x" in the tooltip of its option in the GUI...
« Last Edit: January 12, 2017, 10:50:44 AM by Gig » Logged

I never want to be aggressive, offensive or ironic with my posts. If you find something offending in my posts, read them again searching for a different mood there. If you still see something bad with them, please ask me infos. I can be wrong at times, but I never want to upset anyone.
cheb
Lesser Nub


Cakes 3
Posts: 127



WWW
« Reply #8 on: January 12, 2017, 12:32:56 PM »

Quote
not enable the feature by default and write something like
Looking at big commercial games, they have a "video settings detection" phase at first launch.
OA3 could go a similar way, activating features by default IF they are supported, or drawing a line, for example, at the availability of OpenGL 2: that version and above gets DRR via FBO, with the default screen resolution used by default, texture quality set to maximum and so on. Everything lower gets the classical treatment, with the default being 640x480, textures not at maximum quality and so forth.
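For illustration, a minimal C sketch of such an "enable only if supported" check, assuming a current OpenGL context; this is not OA's actual startup code, and the version/extension test is just one possible criterion:

Code:
/* Sketch of an "enable DRR only if supported" check at first launch.
   Assumes a current OpenGL context; not OA's actual startup code. */
#include <stdio.h>
#include <string.h>
#include <GL/gl.h>

int drr_supported(void)
{
    const char *version = (const char *)glGetString(GL_VERSION);
    if (!version)
        return 0;

    int major = 0, minor = 0;
    sscanf(version, "%d.%d", &major, &minor);
    if (major >= 3)
        return 1;   /* FBOs are core from OpenGL 3.0 on */

    /* on 1.x / 2.x, require the EXT framebuffer object extension */
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext && strstr(ext, "GL_EXT_framebuffer_object") != NULL;
}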

[off-topic]
I was going to support Win98 / GL 1.4 forever, until I realized how insanely labor-intensive it would be, duplicating everything I do with these awkward technologies. It's like supporting Internet Explorer 6 was in web design (before that cadaver finally died): life goes on, inventing new, much easier ways to do stuff, but you are stuck developing everything twice, for modern systems and for that guy.

So I had to settle for supporting Raspberry Pi 2 for bragging rights  Sad

Quote
were they always downsampled, then?
Well, there was a console command to disable downsampling... that nobody knew about.
Also, my renderer uses Lanczos upsampling, which looks *much* better than even simply enabling the native texture resolution.

[/off-topic]
« Last Edit: January 12, 2017, 12:35:28 PM by cheb » Logged

Imma lazy dreamer. I achieved nothing.
Gig
In the year 3000
***

Cakes 45
Posts: 4394


WWW
« Reply #9 on: January 13, 2017, 06:30:37 AM »

Well, for the moment I say "thank you for the idea", which sounds interesting for both DRR (aiming for more stable FPS in exchange for some quality, when needed) and supersampling (as an anti-aliasing method).

IIRC, recently Fromhell said that for a while she would prefer to work on art (character models) rather than on source code...
... but maybe she may like to get into this stuff sooner or later, and so thank you for your hints about how to do it.

Of course, that would include various aspects to think about, some examples:
- Would it be possible to manage it completely through the engine, or would some gamecode support (e.g. for managing 2D/HUD differently from 3D/world) be required?
- In the latter case, what would happen with existing mods?
- Would it be enabled by using a specific cvar, or something like "r_virtualmode -2"?
- Etc.
Probably there is no point in thinking about such stuff now; I just wrote them down as examples/reminders...

There is not really much I can do about this stuff... for the moment I added it to the "wishlist" on the wiki, just as a small reminder:
(DO NOT LINK) h t t p s : / / openarena . wikia . com/wiki/Wishlist#Engine



@Fromhell:
About the odd results I got while doing tests with current implementation of r_mode and r_virtualmode, I have now done some more tries:
- I disabled the second display from Windows screen settings, but the behavior did not change.
- I tried some r_mode values in fullscreen in the original Q3A; there too, some modes correctly fit the screen while others do not (although the result isn't exactly the same: there, the game screen seems to start from the upper left instead of the lower left corner of the monitor, and one can notice it is "partially extended", because the "unused" part of the screen shows "enlarged" desktop icons instead of black; hence it looks like the monitor resolution is actually lowered to some degree, although the game still does not completely fill it, while looking larger than in windowed mode).
- I tried on another PC, with Windows XP and integrated Intel graphics instead, and the results were pretty much the same as in the previous tests, although with more "flickering" in the "unused" part of the screen.
So, I have not been able to see r_virtualmode working as expected...  Sad
« Last Edit: January 13, 2017, 08:49:13 AM by Gig » Logged

I never want to be aggressive, offensive or ironic with my posts. If you find something offending in my posts, read them again searching for a different mood there. If you still see something bad with them, please ask me infos. I can be wrong at times, but I never want to upset anyone.
fromhell
Administrator
GET A LIFE!
**********

Cakes 35
Posts: 14520



WWW
« Reply #10 on: January 13, 2017, 02:17:30 PM »

because it requires vertex shaders to be enabled.

This won't help performance on modern machines though; the real suffering there is all the software vertex processing going on in the world!  There's no vertex-shader skinning or anything fast like that.  Implementing DRR for performance reasons would be a red herring, and it wouldn't work well with the refdef system.
« Last Edit: January 13, 2017, 02:26:21 PM by fromhell » Logged

asking when OA3 will be done won't get OA3 done.
Progress of OA3 currently occurs behind closed doors alone

I do not provide technical support either.

new code development on github
Gig
In the year 3000
***

Cakes 45
Posts: 4394


WWW
« Reply #11 on: January 13, 2017, 04:11:06 PM »

because it requires vertex shaders to be enabled.
Excuse me, were you answering my problems with r_virtualmode here?
Considering that vertex light mode is usually locked out by videoflags... ooops!
(DO NOT LINK) h t t p s : / / openarena . wikia . com/wiki/Manual/Graphic_options#Lighting

Quote
This won't help performance on modern machines though, [... ] and wouldn't work well with the refdef system.
Here you are back to DRR, right?
I'm sorry, I don't know what the refdef system is.  Embarrassed
Logged

I never want to be aggressive, offensive or ironic with my posts. If you find something offending in my posts, read them again searching for a different mood there. If you still see something bad with them, please ask me infos. I can be wrong at times, but I never want to upset anyone.
cheb
Lesser Nub


Cakes 3
Posts: 127



WWW
« Reply #12 on: January 14, 2017, 10:57:55 PM »

Quote
Implementing DRR for performance reasons would be a red herring
Not exactly true.

While the vertex load stays constant, texel/pixel load varies dramatically. And that is where the main weakness of cheap video cards lies.

While nothing you do would change anything on a GTX 1080, there are lots of mammoth-crap video cards that could benefit from DRR greatly. At a minimum, it relieves the user of the need to hand-pick a suitable video mode. That was my main reason to implement it (that, and the fact that video mode switching on LCD monitors is just meaningless; the proper modern way is to use the desktop resolution and DRR down if it turns out the video card is too feeble for Full HD).

Also, you don't strictly need shaders to use DRR; the fixed pipeline works just fine up to 2x supersampling. The only thing you have to use is an FBO.

[tempting]I'd say the entire mechanism should take no more than 200..500 lines of code and could be enabled-by-default-if-extension-available. giggity[/tempting]

Also, I forget the exact details, but fooling the engine into thinking the screen was a different resolution was pretty easy in Quake 2.

Also, it may not be obvious, but such wasteful supersampling also enhances texturing, like a LOD bias but without the jittering.
« Last Edit: January 14, 2017, 11:28:18 PM by cheb » Logged

Imma lazy dreamer. I achieved nothing.
fromhell
Administrator
GET A LIFE!
**********

Cakes 35
Posts: 14520



WWW
« Reply #13 on: January 15, 2017, 07:18:46 PM »

FWIW Intel HD onboard chips can play OA at 1080p fine (in the hundreds of FPS), and the old video cards I have would struggle with the texture uploading bandwidth for the DRR'd framebuffer.

I really implemented r_virtualMode to debug a 640x480 UI without actually switching to 640x480
Logged

asking when OA3 will be done won't get OA3 done.
Progress of OA3 currently occurs behind closed doors alone

I do not provide technical support either.

new code development on github
Gig
In the year 3000
***

Cakes 45
Posts: 4394


WWW
« Reply #14 on: January 16, 2017, 02:25:39 AM »

[OT]

because it requires vertex shaders to be enabled.
After reading that, I tried with /videoflags 0 and /r_vertexlight 1, but this did not change anything.
Finding it strange that a lighting mode might have something to do with this, I thought about it again and realized you were talking about vertex shaders, hence /r_ext_vertex_shader 1, hence GLSL, which is a completely different thing from vertex lighting. Oooops!  Embarrassed Embarrassed
My fault, of course; however, you could have warned me when I posted this:
Considering that vertex light mode is usually locked out by videoflags... ooops!
(DO NOT LINK) h t t p s : / / openarena . wikia . com/wiki/Manual/Graphic_options#Lighting
I was completely off target!  Smiley

However, I tried r_virtualmode with GLSL activated and... the game crashed.
Attached you can find the relevant part of stderr.log: maybe the feature requires some GLSL shader which was not included in your engine effects assets package (probable... since it's not mentioned in the "content list" of that post).

[/OT]
« Last Edit: January 16, 2017, 07:42:59 AM by Gig » Logged

I never want to be aggressive, offensive or ironic with my posts. If you find something offending in my posts, read them again searching for a different mood there. If you still see something bad with them, please ask me infos. I can be wrong at times, but I never want to upset anyone.
Hitchhiker
Member


Cakes 11
Posts: 181


« Reply #15 on: January 16, 2017, 12:11:31 PM »

Hi y'all, just my two cents... cheb will correct me if I understand this wrongly (I've only recently discovered framebuffer objects - read: a few weeks ago).
How this would work:
 - During engine video initialization the framebuffer objects (FBOs) would be created (i.e. create 8 or more FBOs, where the lowest-resolution FBO would be 32x32 pixels in size, the second one 64x64, and so on up to 2048x2048 or the maximum texture size the graphics card supports).
      - These FBOs would exist for the entire duration of the video setup (until the next vid_restart).
 - During the game loop, check how many milliseconds it took to render the previous video frame and decide whether a lower- or higher-resolution FBO is needed to keep the framerate steady.
      - This decision selects the appropriate FBO as our rendering target (this is really one OpenGL call to make, and a fast one as well); after that, rendering happens as usual without any changes.
 - FBOs are stored in video card memory for their entire lifecycle, and each of them can be used as a texture when drawing primitives by simply using glBind(TEXTURE_2D, FBO_id); - again, one command, and a fast one to perform (no need for transfers from the CPU or similar).
      - As the FBOs are basically textures, the appropriate one is used as the texture when our postprocessing 'quad' is drawn; the UV coordinates for the texture-to-quad mapping would probably be the same for all the FBOs.

It would require OpenGL 1.2 (1.2??) from what I've seen.
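For illustration, a tiny C sketch of the selection step described above, with made-up sizes and thresholds (note that cheb's reply below argues for starting with a single screen-sized FBO whose used area is varied with glViewport, rather than a whole ladder of FBOs):

Code:
/* Sketch of the FBO-ladder selection described above: one FBO per size,
   the level picked from the previous frame's render time.  Sizes and
   thresholds are illustrative only. */
#define FBO_LEVELS 7

static const int fbo_size[FBO_LEVELS] = { 32, 64, 128, 256, 512, 1024, 2048 };

/* returns the texture size to use and updates *level in place */
int drr_pick_fbo_size(float last_frame_ms, float target_ms, int *level)
{
    if (last_frame_ms > target_ms * 1.1f && *level > 0)
        (*level)--;                     /* too slow: step down one level */
    else if (last_frame_ms < target_ms * 0.8f && *level < FBO_LEVELS - 1)
        (*level)++;                     /* plenty of headroom: step back up */
    return fbo_size[*level];
}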
« Last Edit: January 16, 2017, 12:40:31 PM by Hitchhiker » Logged
cheb
Lesser Nub


Cakes 3
Posts: 127



WWW
« Reply #16 on: January 17, 2017, 06:19:48 AM »

Quote
Intel HD onboard chips can play OA on 1080 fine (in the hundreds),
Mine can't (HD3000). At least not always and not if I want to play with guaranteed smoothness.
I had to settle on 1024x768 (also, a nice squarish aspect, "as it was meant to be played").
I never profiled this, but I suspect that the problem is with rocket trails and other effects that give spikes in overdraw.

Quote
and the old video cards I have would struggle with the texture uploading bandwidth for the DRR'd framebuffer.
Hmm, it depends on the definition of "old", then.
It seems my collection of old video cards falls into a different range.
The oldest I have is a GF FX 5200, which has *only* twice the fill rate of a GF 2 MX 400 (reeeeally pathetic) but supports FBO just fine, and having that support means almost zero extra cost; there is no "uploading". You attach a texture to the FBO, render into it, then switch back to the frame buffer, bind that texture and voila.
For older video cards, as I said, there is an uploading cost... I once made a nifty bloom & fake HDR post-processing effect that worked on a GF 2 MX 400: it achieved 75 FPS at 640x480 (250 FPS without it), and it worked via glCopyTexSubImage2D(). See the demo here (it will not work in windowed mode if the window is wider than 1024).
So if, say, you use DRR to protect against overdraw spikes, only activate downsampling during frame rate drops (explosions & other effects galore), and only use render resolutions of 512x384 or below, then DRR will still work and will help, even on a GeForce 2.
DRR is not possible on a Voodoo2, but then I have no idea how to make a renderer that supports that card at all. I loved it and lament the fact that I don't have it anymore, but... yeah, far too non-standard.

Quote
the framebuffer objects (FBOs) would be initialized (i.e. create 8 or more FBOs where lower resolution FBO would be 32x32 pixels in size, second one would be 64x64, etc until 2048x2048 or maximum the graphic card can have
First, 256x256 is the sane lower limit, and I only use it for emergencies (see below).

Second, you ideally begin with *just one* FBO that equals the screen resolution (say, a 640x480 texture). It is very hard to find a card that doesn't support NPOT (non-power-of-2) textures. I only have one, GF FX 5200.

An important detail: either do not issue buffer-clearing commands without glScissor, or clear the buffer by rendering a blank quad; I learned this the hard way. Otherwise you pay a performance cost for clearing the parts not in use.

Third, you create larger FBOs on demand, once your load balancer finally decides that yes, supersampling is viable, because a 2048x2048 FBO is 32 megabytes of video memory. Not exactly much, but still.

If the video card has enough memory, creating a new FBO takes 10 milliseconds or less, which is not really noticeable.
So, you begin with 640x480, then create a second FBO of 1280x960 and then maybe a 2048x1440. More samples than that are... a bit meaningless IMHO.

Things to watch out for:
- Sometimes FBO creation fails for no apparent reason and you have to fall back to the existing smaller one.
- The maximum FBO size reported by OpenGL does NOT guarantee that your video driver can really create one that huge. My GTX 460 was reporting something like 8k x 8k but could fail as early as 6k x 6k. This failure may result in a freeze of 100..150 milliseconds.
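For illustration, a C sketch of on-demand FBO creation with a completeness check and a fallback to the existing smaller FBO, matching the warnings above; it assumes EXT_framebuffer_object with extension loading already done, and all names are made up (the Pascal function quoted further below is cheb's own quirk-test version):

Code:
/* Sketch of on-demand FBO creation with a completeness check and fallback,
   per the warnings above.  Assumes EXT_framebuffer_object with extension
   loading already done; all names are made up. */
#include <stddef.h>
#include <GL/gl.h>
#include <GL/glext.h>

/* returns the new FBO handle, or fallback_fbo if creation fails */
GLuint drr_try_create_fbo(int w, int h, GLuint *out_tex, GLuint fallback_fbo)
{
    GLuint tex, depth, fbo;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenRenderbuffersEXT(1, &depth);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth);
    glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, w, h);

    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                 GL_RENDERBUFFER_EXT, depth);

    GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

    if (status != GL_FRAMEBUFFER_COMPLETE_EXT) {
        /* creation failed "without reason": clean up, keep the smaller FBO */
        glDeleteFramebuffersEXT(1, &fbo);
        glDeleteRenderbuffersEXT(1, &depth);
        glDeleteTextures(1, &tex);
        return fallback_fbo;
    }
    *out_tex = tex;
    return fbo;
}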


Quote
-this above decision selects the appropriate FBO as our rendering target (this is really one OpenGL call to make and a fast one as well) - following this rendering happens as usual without any changes
Yes, and as far as I can see, FBO switching has no measurable cost.
glBindFramebuffer(GL_FRAMEBUFFER_EXT, MyFBOHandle);

Quote
FBOs are during their entire lifecycle stored in video card memory and each of them can be used as a texture when drawing primitives by simply using glBind(TEXTURE_2D, FBO_id); - again, one command and a fast one as well to perform (no need for transfers from CPU or similar)
Exactly.
Just, ahem, do not forget to unbind the FBO first: glBindFramebuffer(GL_FRAMEBUFFER_EXT, 0);

Quote
-as the FBOs are basically textures the appropriate one is used as the texture when our postprocessing 'quad' is drawn, UV coordinates for the texture-to-quad mapping would probably be the same for all the FBOs
No, because you want to use only a part of any given FBO (glViewport() + glScissor()) to achieve smooth resolution manipulation.
But still very simple and straightforward.
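For illustration, a small C sketch of that, assuming the area-based qf from the demo above (qf = 25 is roughly half linear resolution, qf = 400 roughly double) and a current GL context; the function and parameter names are made up:

Code:
/* Sketch of using only part of one FBO: derive the sub-rectangle from an
   area-based quality factor (qf = 25 -> half linear resolution, 400 ->
   double), then reuse the same fraction as the quad's texture coordinates.
   Assumes a current GL context; names are made up. */
#include <math.h>
#include <GL/gl.h>

void drr_apply_qf(float qf, int screen_w, int screen_h,
                  int fbo_w, int fbo_h, float *out_u, float *out_v)
{
    float scale = sqrtf(qf / 100.0f);   /* area factor -> linear factor */
    int dyn_w = (int)(screen_w * scale);
    int dyn_h = (int)(screen_h * scale);
    if (dyn_w > fbo_w) dyn_w = fbo_w;   /* never exceed the FBO texture */
    if (dyn_h > fbo_h) dyn_h = fbo_h;
    if (dyn_w < 1) dyn_w = 1;
    if (dyn_h < 1) dyn_h = 1;

    glViewport(0, 0, dyn_w, dyn_h);     /* render only into this corner */
    glScissor(0, 0, dyn_w, dyn_h);

    *out_u = (float)dyn_w / fbo_w;      /* UVs of the used part of the FBO */
    *out_v = (float)dyn_h / fbo_h;
}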

P.S. Oh, and an FBO isn't its texture. An FBO is to its texture what a GLSL program object is to a fragment shader object, and creation is quite similar: you create a bog-standard texture, you create a special depth buffer of identical dimensions, then you create an FBO and attach both to it as attachments.

See my function (testing for a specific video driver quirk) that goes full circle: creating, binding, deleting:
Code:
  procedure TestOpenGlQuirkNo2;
  //FBO supports GL_RGB8 and GL_RGBA16 but not GL_RGBA8
  //Found on: GeForce FX 5200
  //Detected by: failing to create a 256x256 FBO with a GL_RGBA8 color buffer
    function TryCreateFbo(cbf: GLuint; txt: string): boolean;
    var db, fbo, cb: GLuint;
    begin
      if Mother^.Debug.Verbose then AddLog('  Creating a FBO with %0 color buffer...',[txt]);
      glGenRenderbuffers(1, @db);
      CheckGlError;
      glBindRenderbuffer(GL_RENDERBUFFER_EXT, db);
      CheckGlError;
      glRenderbufferStorage(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 256, 256);
      CheckGlError;
      glGenTextures(1, @cb);
      CheckGlError;
      glEnable(GL_TEXTURE_2D);
      glBindTexture(GL_TEXTURE_2D, cb);
      CheckGlError;
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
      glTexImage2D ( GL_TEXTURE_2D, 0, cbf, 256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, NIL);
      CheckGlError;
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
      CheckGlError;
      glGenFramebuffers(1, @fbo);
      glBindFramebuffer(GL_FRAMEBUFFER_EXT, fbo);
      CheckGlError;
      glBindTexture(GL_TEXTURE_2D, Mother^.GAPI.StubTexture); //bind a safe non-used texture
      glFramebufferTexture2D(
        //GL_DRAW_FRAMEBUFFER
        GL_FRAMEBUFFER_EXT
        , GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, cb, 0);
      CheckGlError;
      glFramebufferRenderbuffer(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, db);
      CheckGlError(true);
      Result:= GL_FRAMEBUFFER_COMPLETE_EXT = glCheckFramebufferStatus(GL_FRAMEBUFFER_EXT);
      glBindFramebuffer(GL_FRAMEBUFFER_EXT, 0);
      CheckGlError;
      glDeleteFramebuffers(1, @fbo);
      CheckGlError;
      glDeleteTextures(1, @cb);
      CheckGlError;
      glDeleteRenderbuffers(1, @db);
      CheckGlError;
      if Mother^.Debug.Verbose then AddLog('  ..result=%0',[Result]);
    end;
  begin
    if OpenGLQuirkFboCantGL_RGBA8 in Mother^.GAPI.QuirksTested then Exit; //already tested
    try
      if Mother^.Debug.Verbose then AddLog('This video card haven''t yet been tested for OpenGL quirk #2. Trying to detect it...');

      if TryCreateFbo(GL_RGBA16, 'GL_RGBA16') and not TryCreateFbo(GL_RGBA8, 'GL_RGBA8')
        then Mother^.GAPI.Quirks+= [OpenGLQuirkFboCantGL_RGBA8];

      if Mother^.Debug.Verbose then AddLog('The quirk is marked as checked. Detected=%0', [OpenGLQuirkFboCantGL_RGBA8 in Mother^.GAPI.Quirks]);
      Mother^.GAPI.QuirksTested+= [OpenGLQuirkFboCantGL_RGBA8];
    except
      Die(RuEn(
        'Сбой при проверке OpenGL на глюк №2 (FBO не может работать с 8-битным цветовым буфером с альфа-каналом)',
        'Failed to test for OpenGL quirk #2 (FBO cannot work with 8-bit RGBA color buffer)'
      ));
    end;
  end;
« Last Edit: January 17, 2017, 06:33:21 AM by cheb » Logged

Imma lazy dreamer. I achieved nothing.
GrosBedo
Member


Cakes 20
Posts: 710


« Reply #17 on: March 11, 2017, 05:07:04 PM »

Great idea, but nowadays OA runs pretty well even on old hardware, so I'm not sure it would be useful right now. Still, it's a really great idea.
Logged