this post was submitted on 10 Dec 2024
73 points (88.4% liked)

PC Gaming

[–] [email protected] 18 points 1 week ago (1 children)

This is great, and I hope this technology can be implemented on older hardware that just barely misses today's high system requirements.

I hope this is not used as a crutch by developers to hide really bad optimization and performance, as they have already been doing with upscalers like FSR/DLSS.

[–] [email protected] 22 points 1 week ago (2 children)

No, I fucking hope not. Older games rendered an actual frame. Modern engines render a noisy, extremely ugly mess, and rely on temporal denoising and frame generation (which is why most modern games only show you scenes with static scenery and a very slow-moving camera).

Just render the damn thing properly in the first place!

[–] [email protected] 4 points 1 week ago (1 children)

Depends on what you want to render. High-FPS requirements combined with motion where the human eye is the bottleneck are a perfect case for interpolation; in such a case the bad frames aren't really seen.

[–] [email protected] 4 points 1 week ago* (last edited 1 week ago) (1 children)

No, it depends on how you want to render it. Older games still had most of today's effects. It's just that everyone is switching to Unreal, whose focus isn't games anymore, and which, imo, looks really bad on anything except a 4090, if that. Nobody is putting in the work for an optimized engine. There is no "one size fits all". They do this to save money in development, not because it's better.

ffs even the noisy image isn't always at native resolution anymore.

[–] [email protected] 3 points 1 week ago (1 children)

Context-aware interpolation with less overhead is a cool technology compared to context-unaware averaging. How that ends up implemented in various engines is a different topic.

[–] [email protected] 7 points 1 week ago

There shouldn't be any averaging! Just render the damn frame!

You can't tell me we could get something like MGSV on previous-gen hardware at 60 fps, and that hardware with 9 times the processing power can only render a lower-resolution, noisy image which is then upscaled and denoised... at 30 fps.

"But raytracing!!!"

If these are the compromises that need to be made just to shoehorn that in, then current hardware isn't really capable of realtime raytracing in the first place.

[–] [email protected] 1 points 1 week ago

I think you are misunderstanding, because I agree with you when the game's minimum hardware requirements are met.

I am saying I hope this technology can be used so that hardware below the minimum requirements could still get decently playable framerates on newer titles, with the obvious drawback of decreased visual quality. I agree that upscaling, particularly TAA and its related effects, should not be used to lower system requirements when the real issue is that the developers didn't design their game well or leaned on ugly effects. But I think this could be useful for old systems, or perhaps integrated graphics chips, depending on how the technology works. That was what I meant; sorry I was not clear enough initially.

[–] [email protected] 16 points 1 week ago (1 children)

I REALLY love how, in this AI friendly article, they're using a picture of Aloy. 5/7 no notes.

[–] [email protected] 2 points 6 days ago

with or without rice?

[–] [email protected] 13 points 1 week ago (1 children)

They're really trying everything except just not bloating their games to shit and actually optimising them.

[–] [email protected] 1 points 3 days ago

They can't control the developers, but they can control their drivers and tech.

[–] [email protected] 12 points 1 week ago* (last edited 1 week ago) (1 children)

The paper includes a chart of average frame-gen times at various resolutions, across the test scenarios where they compared against other frame generation methods.

Here's their new method's frame gen times, averaged across all their scenarios.

540p: 2.34ms

720p: 3.66ms

1080p: 6.62ms

Converted to FPS, assuming constant frametimes, that's about...

540p: 427 FPS

720p: 273 FPS

1080p: 151 FPS

Now let's try extrapolating pixels per frametime to guesstimate an efficiency factor:

540p: 518400 px / 2.34 ms = 221538 px/ms

720p: 921600 px / 3.66 ms = 251803 px/ms

1080p: 2073600 px / 6.62 ms = 313233 px/ms

Plugging pixels vs. efficiency factor into a graphing tool and using a power-curve best fit, you get these efficiency factors for the unlisted resolutions:

1440p: 361423 px/ms

2160p: 443899 px/ms

Which works out to roughly the following frame times:

1440p: 10.20 ms

2160p: 18.69 ms

Or in FPS:

1440p: 98 FPS

2160p: 53 FPS
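For anyone who wants to sanity-check that extrapolation, here's roughly the same fit as a quick Python sketch (a simple log-space power-law fit; my graphing tool may have fit it slightly differently, so the predictions land within a percent or two of the numbers above):

```python
# Rough reproduction of the extrapolation above: fit px/ms as a power law of
# pixel count (eff = a * px^b) from the three published frame-gen times, then
# predict frame times at 1440p and 4K. The fit itself is my guesstimate; only
# the three (resolution, ms) pairs come from the paper.
import numpy as np

px  = np.array([960 * 540, 1280 * 720, 1920 * 1080])  # pixels per frame
ms  = np.array([2.34, 3.66, 6.62])                     # avg frame-gen time (ms) from the paper
eff = px / ms                                          # px per ms "efficiency factor"

# Power-law fit in log space: log(eff) = log(a) + b * log(px)
b, log_a = np.polyfit(np.log(px), np.log(eff), 1)

for name, p in [("1440p", 2560 * 1440), ("2160p", 3840 * 2160)]:
    eff_pred = np.exp(log_a) * p**b    # predicted px/ms at the unlisted resolution
    ft = p / eff_pred                  # predicted frame-gen time (ms)
    print(f"{name}: {eff_pred:.0f} px/ms, {ft:.2f} ms, {1000 / ft:.0f} FPS")
```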

... Now this is all extremely rough math, but the basic takeaway is that frame gen, even this faster and higher-quality frame gen, which doesn't introduce input lag the way DLSS or FSR does, is only worth it if it can generate a frame faster than you could otherwise fully render it normally.

(I want to again stress here this is very rough math, but I am ironically forced to extrapolate performance at higher resolutions, as no such info exists in the paper.)

I.e., if your rig is running 1080p at 240 FPS, 1440p at 120 FPS, or 4K at 60 FPS natively... this frame gen would be pointless.

I... guess if this could actually somehow be implemented at a driver level, as an upgrade to existing hardware, that would be good.

But ... this is GPU tech.

Which, like DLSS, requires extensive AI training sets.

And is apparently proprietary to Intel... so it could only be rolled out on existing or new Intel GPUs (until or unless someone reverse-engineers it for other GPUs), which basically everyone would have to buy new, as Intel only just started making GPUs.

It's not gonna somehow be a driver/chipset upgrade to existing Intel CPUs.

Basically, this seems to be fundamental to Intel's gambit to make its own new GPUs stand out: build GPUs for less cost, with less hardware devoted to G-buffering, and use this frame gen method in lieu of that.

It all depends on the price to performance ratio.

[–] [email protected] 2 points 1 week ago (1 children)

> Now this is all extremely rough math, but the basic takeaway is that frame gen, even this faster and higher-quality frame gen, which doesn't introduce input lag the way DLSS or FSR does, is only worth it if it can generate a frame faster than you could otherwise fully render it normally.

The point of this method is that it takes fewer computations than going through the whole rendering pipeline, so it will always be able to produce a frame faster than performing all the calculations, unless we're at extreme cases like very low resolution, very high FPS, or a very slow GPU.

> I.e., if your rig is running 1080p at 240 FPS, 1440p at 120 FPS, or 4K at 60 FPS natively... this frame gen would be pointless.

Although you did mention these are only rough estimates, it is worth saying that these numbers are only relevant to this specific test and this specific GPU (RTX 4070 Ti). Remember, the time to run a model depends on GPU performance, so a faster GPU will run this model faster. I doubt you will ever run into a situation where you can go through the whole rendering pipeline before this model finishes running, except for the cases I listed above.

> I... guess if this could actually somehow be implemented at a driver level, as an upgrade to existing hardware, that would be good.

It can. This method only needs access to the frames, which can easily be accessed by the OS.

> But ... this is GPU tech.

This can run on whatever you want that can do math (CPU, NPU, GPU); they simply chose a GPU. Plus, it is widely known that CPUs are not as good as GPUs at running models, so it would be useless to run this on a CPU.

> And is apparently proprietary to Intel... so it could only be rolled out on existing or new Intel GPUs (until or unless someone reverse-engineers it for other GPUs), which basically everyone would have to buy new, as Intel only just started making GPUs.

Where did you get this information? This is an academic paper in the public domain. You are not only allowed, but encouraged to reproduce and iterate on the method described in the paper. Also, the experiment didn't even use Intel hardware; it was an NVIDIA GPU and an AMD CPU.

[–] [email protected] 3 points 1 week ago* (last edited 1 week ago)

> The point of this method is that it takes fewer computations than going through the whole rendering pipeline, so it will always be able to produce a frame faster than performing all the calculations, unless we're at extreme cases like very low resolution, very high FPS, or a very slow GPU.

I feel this is a bit of an overstatement; otherwise you'd only render the first frame of a game level and then just use this method to extrapolate every single subsequent frame.

Realistically, the model has to return to actual, fully pipeline-rendered frames from time to time to re-reference itself; otherwise you'd quickly end up with a lot of hallucination/artefacts, kind of an AI version of a shitty video codec that morphs into nonsense when it's only generating partial new frames based on detected change from the previous frame.

It's not at all clear to me, from the paper alone, how frequently, or under what conditions, reference frames are referred back to... After watching the video as well, it seems they are running 24-second, 30 FPS scenes and functionally doubling this to 60 FPS by referring to some number of history frames to extrapolate half of the frames in the completed videos.

So, that would be a 1:1 ratio of extrapolated frames to reference frames.

This doesn't appear to actually be working in a kind of real time, moderated tandem between real time pipeline rendering and frame extrapolation.

It seems to just be running already captured videos as input, and then rendering double FPS videos as output.

...But I could be wrong about that?

I would love it if I missed this in the paper and you could point out where they describe in detail how they balance the ratio of reference frames, or the conditions under which a reference frame is actually referred to... All I'm seeing is basically 'we look at the history buffer.'

> Although you did mention these are only rough estimates, it is worth saying that these numbers are only relevant to this specific test and this specific GPU (RTX 4070 Ti).

That's a good point, I missed that; it's worth mentioning they ran this on a 4070 Ti.

> I doubt you will ever run into a situation where you can go through the whole rendering pipeline before this model finishes running, except for the cases I listed above.

Unfortunately, they don't actually list any baseline frametimes for the normal rendering pipeline. It would have been nice to see that as a sort of 'control' column, where all the scores for the various 'visual difference/error from standard fully rendered frames' metrics are 0 or 100 or whatever; then we could compare how much quality you lose for faster frames, at least on a 4070 Ti.

If you control for a single given GPU then sure, other than edge cases, this method will almost always result in greater FPS for a slight degradation in quality...

...but there's almost no way this method is not proprietary, and thus your choice will be between price-comparing GPUs with their differing rendering capabilities, not something like 'do I turn MSAA to 4x or 16x', available on basically any GPU.

More on that below.

> This can run on whatever you want that can do math (CPU, NPU, GPU); they simply chose a GPU. Plus, it is widely known that CPUs are not as good as GPUs at running models, so it would be useless to run this on a CPU.

Yes, this is why I said this is GPU tech. I did not figure it needed to be stated that, ok, yes, technically you can run it locally on a CPU or NPU or APU, but it's only going to actually run well on something resembling a GPU.

I was aiming at the practical upshot for the average computer user, not a comprehensive breakdown for hardware/software developers and extreme enthusiasts.

> Where did you get this information? This is an academic paper in the public domain. You are not only allowed, but encouraged to reproduce and iterate on the method described in the paper. Also, the experiment didn't even use Intel hardware; it was an NVIDIA GPU and an AMD CPU.

To be fair, when I wrote it originally, I used 'apparently' as a qualifier, indicating lack of 100% certainty.

But uh, why did I assume this?

Because most of the names on the paper list the company they are employed by, there is no freely available source code, and corporate-funded research is generally made proprietary unless explicitly indicated otherwise.

Much research done by universities ends up proprietary as well.

This paper only describes the actual method being used for frame gen in relatively broad strokes; the meat of the paper is devoted to analyzing its comparative utility, not thoroughly discussing and outlining exact opcodes or w/e.

Sure, you could try to implement this method based off of reading this paper, but that's a far cry from 'here's our MIT-licensed alpha driver, go nuts.'

...And, now that you bring it up:

Intel filed what seem to me to be two different patent applications, directly related to this academic publication, almost 9 months before the paper we are discussing came out; 2 of the 3 credited inventors on the patents also have their names on this paper.

This one appears to be focused on the machine learning / frame gen method, the software:

https://patents.justia.com/patent/20240311950

And this one appears to be focused on the physical design of a GPU, the hardware made to leverage the software.

https://patents.justia.com/patent/20240311951

So yeah, looks to me like Intel is certainly aiming at this being proprietary.

I suppose it's technically possible they do not actually get these patents awarded to them, but I find that extremely unlikely.

EDIT: Also, lol, video game journalism professional standards strike again: whoever wrote the article here could have looked this up and added the highly relevant 'Intel is pursuing a patent on this technology' information to their article in maybe a grand total of 15 to 30 extra minutes, but nah, too hard I guess.

[–] [email protected] 10 points 1 week ago* (last edited 1 week ago) (3 children)

Can we please stop with this shit?

The ideal framerate booster was already invented, it's called asynchronous space warp.

Frames are generated by the GPU at whatever rate it can do, and then the latest frame is "updated" using reprojection at the framerate of the display, based on input.

Here is LTT demoing it two years ago.

It blows my mind that we're wasting time with fucking frame generation, when a better way to achieve the same result has been used in VR (where adding latency is a GIANT no-no) for nearly a decade.
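In loop form, the idea is roughly this (a toy sketch I'm making up to illustrate the concept, not actual Oculus/Valve code; the rates and numbers are invented and camera motion is reduced to a 2D pan):

```python
# Toy ASW-style loop: the renderer finishes real frames at its own pace, while
# the display loop reprojects the newest finished frame to the *current* input
# at every refresh. Purely illustrative, all values made up.
import numpy as np

DISPLAY_HZ, RENDER_HZ = 120, 40            # display refresh vs. what the GPU can manage
last_real_frame_cam = np.array([0.0, 0.0]) # camera pose baked into the last real frame
last_real_frame_id = 0

for refresh in range(12):                  # ~0.1 s of display refreshes
    t = refresh / DISPLAY_HZ
    live_cam = np.array([50.0 * t, 0.0])   # live input: the player pans right

    finished = int(t * RENDER_HZ)          # how many real frames are done by now
    if finished > last_real_frame_id:      # a new real frame just landed
        last_real_frame_id = finished
        last_real_frame_cam = live_cam

    warp = live_cam - last_real_frame_cam  # reprojection: shift the old frame to the live pose
    print(f"refresh {refresh:2d}: showing frame {last_real_frame_id}, warped by {warp}")
```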

[–] [email protected] 8 points 1 week ago* (last edited 1 week ago) (1 children)

This is a hilariously bad take for anything that's not VR. Async warping causes frame smearing on detail that is really noticeable when the screens aren't so close that your peripheral blind spots make up for it.

It's an excellent tool in the toolbox, but to pretend that async reprojection "solved" this kind of means you don't understand the problem itself.

Edit: also, the LTT video is very cool as a proof of concept, but it absolutely demonstrates my point regarding smearing. There are also many, MANY cases where a clean frame with legible information would be preferable to a lower-latency but smeared frame.

[–] [email protected] 0 points 1 week ago* (last edited 1 week ago) (1 children)

Thank you for being rude.

I'm not pretending it solves anything other than the job of increasing the perceived responsiveness of a game.

There are a variety of potential ways to fill in the missing peripheral data, or even occluded data, other than simply stretching the edge of the image. Some of which very much overlap with what DLSS and frame generation are doing.

My core argument is simply that it is superior to frame generation. If you're gonna throw in fake frames, reprojection beats interpolation.

Frame generation is completely unfit for purpose, because while it may spit out more frames, it makes games feel LESS responsive, not more.

ASW does the opposite. Both are "hacky" and "fake" but one is clearly superior in terms of the perceived experience.

One lets me feel like the game is running faster, the other makes the game look like it runs faster, while making it feel slower.

This solution by Intel is better, essentially because it works more like ASW than other implementations of frame generation.

[–] [email protected] 1 points 1 week ago (1 children)

Frame reprojection lacks motion data; it's in the name: it is reprojecting the last frame. Frame generation uses the interval between real frames, feeds in vector data, and estimates movement.

If I am trying to follow a ball going across the screen, not moving my mouse, reprojection is flat out worse, because it is reprojecting the last frame, where nothing moved. Frame 1, Frame 1RP, then Frame 2: 1 and 1RP would have the ball in the exact same place. If I move my viewpoint, the perspective will feel correct, viewport edges will blur, and the reprojection will map to the new perspective, which feels better for head tracking in VR. But in terms of information delivery there is no new data, not even a guess; it's still the same frame, just at a different point in space, until the next real frame comes in.

With frame generation, if I am watching this ball again, now it looks more like Frame 1 (real), Frame 1G (estimate), Frame 2 (real). Now frame 1 and frame 1G have different data, and 1G is built on vector data between frames. It's not 100%, but it's an educated guess at where the ball is going between frame 1 and frame 2. If I move my viewpoint, it is not as responsive-feeling as reprojection, but the gained fake middle frame helps with motion tracking in action.
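To put rough numbers on that ball example (a toy sketch with made-up values, everything flattened to 2D screen-space offsets, so ignore occlusion and parallax):

```python
# Where the ball shows up halfway through a frame interval under each technique.
import numpy as np

ball_frame1 = np.array([100.0, 300.0])  # ball's on-screen position at real frame 1
motion_vec  = np.array([40.0, 0.0])     # estimated ball motion per frame interval
camera_pan  = np.array([-16.0, 0.0])    # how far the view shifts over one frame interval

# Frame generation (1G): motion vectors advance the ball half an interval,
# on top of the half-interval scene shift from the camera.
ball_1G = ball_frame1 + 0.5 * motion_vec + 0.5 * camera_pan

# Reprojection (1RP): only the camera delta is applied; the ball has not moved
# within the scene, so it sits exactly where frame 1 left it.
ball_1RP = ball_frame1 + 0.5 * camera_pan

print("1G :", ball_1G)   # the ball itself has moved against the background
print("1RP:", ball_1RP)  # same scene content, just shifted to the new viewpoint
```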

The real answer is to use frame generation with low-latency configurations, and also enable reprojection in the game engine if possible. Then you have the best of both worlds. For VR, the headset is the viewport, so it's handled at a driver level. But for games, the viewport is a detached virtual camera, so the gamedev has to expose this and set up reprojection, or Nvidia and AMD need to build some kind of DLSS/FSR-like hook for devs to utilize.

But if you could do both at once, that would be very cool. You would get the most responsive feel in terms of lag between input and action on screen, while also getting motion updates faster than a full render pass. So yes, Intel's solution is a step in that direction. But ASW is not in itself a solution, especially for high-motion scenes with lots of graphics. There is a reason the demo engine in the LTT video was extremely basic: if you overloaded it with particle effects and heavy rendering like you see in high-end titles, the smearing from reprojection would look awful without rules and bounding on it.

[–] [email protected] 1 points 1 week ago* (last edited 1 week ago) (1 children)

The reprojected frame with the ball in the same spot is still more up to date than a generated frame using interpolation.

With reprojection, every other frame is showing where the ball actually is.

It essentially displays the game world at the framerate it is actually being generated at, with as little latency as possible.

I vastly prefer this. Together with the reduced perceived input latency, this makes motion tracking FAR easier than with frame generation.

With current frame generation, every frame is showing where the ball was two or three frames ago. You never see where it is right now. Due to this, in fast-paced action, hand-eye coordination is slower, more likely to overshoot, etc.

And further-developed reprojection absolutely could account for such things.

[–] [email protected] 1 points 6 days ago (1 children)

Your understanding of frame generation is incorrect.

Again, let's use an absurdly low FPS and a big frame window as an example: 10 ms between frames.

If your frame window is 10 ms, Frame 1 is at 0 ms and Frame 2 at 10 ms. Frame generation is not just interpolation; that is what your new TV does when you activate motion smoothing and soap-opera mode. That is not what framegen is, at all.

In frame generation, the frame generation engine (driver or program) stores a motion vector array. This determines the trend line of how pixels are likely to change. In our example, let's say the motion vectors for the ball indicate large motion in a diagonal direction, and the overall frame indicates low or no motion because the user isn't swinging the camera wildly. The frame generation then uses frame 1 to make an estimate of a frame 1.5, and the ball does actually move in the image thanks to motion vector analysis. The ball moves independently of the scene shift caused by the user's camera, so the user can see the ball itself moving against the background.

So, in frame 1.5, the ball you are seeing, as well as the scene, have actually moved. Now the user can see this motion, and let's say they didn't notice it in frame 1. This means frame 1.5 is a chance for them to react! And their inputs go through sooner, reducing true latency by allowing them to react to in-game stimuli faster. Yes, even if the frame is "faked".

In reprojection, at frame 1.5RP, crucially there is not any new scene data. Reprojection is not using motion vectors; it's using the camera and geometry only. If the user isn't moving the POV at all, for example, then the reprojection just puts the frame where it already was, and the user waits the full 10 ms before the ball appears to move. Even if the camera is moving, reprojection only adjusts the scene angle relative to the camera; the ball is not going to move within the overall scene. Again, consider the ball flying left while the user walks left: the reprojection cannot move the ball left. If anything, if the reprojection is applied to the existing scene geometry, the opposite would occur and the ball may even appear to move right or slow down due to parallax.

Reprojection uses old frame data and moves it like flat cards in 3D space, so the ball in the scene stays in position until frame 2. It can only be affected by the camera motion that drives the reprojection, not by other rendering data, and what the user sees of the ball wouldn't change until 10 ms later. Only the overall flat scene can be reprojected, so tilting or swinging the camera can feel instantly responsive. But until the next render pass, the real motion data, delivered either via motion vectors or frame 2, doesn't reach them in a reprojected frame 1.5.

So again, your understanding of current frame gen is wildly incorrect. What you are describing as reprojection getting better is essentially adding reprojection to framegen: use motion vectors to render the new portion of the frame, and use the reprojection to adjust the overall POV based on camera input. Which, again, works well. Adding reprojection and framegen together is not a bad idea, and reprojection is great for reducing perceived latency (which is why it is essential for avoiding motion sickness in VR). These are two techniques solving different forms of latency issues; combined, they offer far more.
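As a toy follow-up to the ball example above (made-up numbers again, not the paper's actual pipeline), combining them would look roughly like this: extrapolate the ball with its motion vector first, then reproject the result to the freshest camera sample right before display.

```python
# Framegen + reprojection combined, reduced to 2D screen-space offsets.
import numpy as np

ball_frame1   = np.array([100.0, 300.0])  # ball on screen at the last real frame
motion_vec    = np.array([40.0, 0.0])     # estimated ball motion per frame interval
cam_at_render = np.array([0.0, 0.0])      # camera offset when frame 1 was rendered
cam_latest    = np.array([-9.0, 2.0])     # camera offset sampled right before scan-out

# Step 1 (framegen): advance the ball half an interval using its motion vector.
ball_generated = ball_frame1 + 0.5 * motion_vec

# Step 2 (reprojection): shift the generated frame by however much the camera has
# moved since frame 1 was rendered, so mouse/head input still feels instant.
ball_displayed = ball_generated + (cam_latest - cam_at_render)

print(ball_displayed)  # reflects both the ball's own motion and the freshest camera input
```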

[–] [email protected] 1 points 6 days ago* (last edited 6 days ago)

So the article above is straight up wrong? All frame generation is already extrapolation, not interpolation?

I had to look it up because I could have sworn that reprojection can and does use motion vectors to do more than just update the perspective.

AND IT DOES.

You're talking about what VR does as the last step of EVERY rendered frame, which is an extremely simple reprojection to get the frame closer to what it would have been (what Oculus called ATW), had it been rendered instantly (which it obviously can't be). This is also seemingly the extent to which the Unity demo showcased by LTT took it.

What Oculus called ASW, asynchronous space warp, absolutely can and does update the position of the ball, which is why it can be and is used to entirely replace rendering every other frame.

Valve's version of it is a lot simpler, closer to just ATW, and does not use motion vectors when compensating for lost frames. Unlike ASW, their solution was never meant to be used constantly, for every other frame, to enable VR on lesser hardware.

[–] [email protected] 4 points 1 week ago

The second page of the paper explains the shortcomings of warping and hole filling.

[–] [email protected] 4 points 1 week ago (1 children)

As a guy who absolutely hates those "60fps 4K anime fight scene re-renders", I hope to dear god this isn't used in the future.

[–] [email protected] 1 points 1 week ago