“Fake frames” is the nickname for “frame generation.” Nvidia’s version is part of DLSS.
Rather than having the graphics card render 120 frames, you can crank the settings up to where you only get 60, then AI “guesses” what the next frame would show, doubling it to 120 while keeping the higher settings.
This can make things blurry because the AI may guess wrong: every odd frame is real, every even frame is just a guess.
Frame 1: real
Frame 2: guess
Frame 3: real
If the guess for #2 is accurate, everything is fine. If #2 guessed that a target moved left when it actually moved right, then #3 corrects it, and that “blink” is the problem.
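A toy sketch of that real/guess interleaving (hypothetical Python for illustration only; actual frame generation like Nvidia’s uses motion vectors and a neural network, not naive pixel blending):

```python
def interleave(real_frames):
    """Given rendered ("real") frames, insert a guessed frame between
    each pair. A frame here is just a list of pixel brightness values;
    the guess is a naive midpoint blend of its two real neighbors."""
    out = []
    for i, frame in enumerate(real_frames):
        out.append(("real", frame))
        if i + 1 < len(real_frames):
            nxt = real_frames[i + 1]
            # Guessed in-between frame: average each pixel of the two
            # surrounding real frames. If the scene moved unpredictably,
            # this guess is wrong and the next real frame "corrects" it.
            out.append(("guess", [(a + b) // 2 for a, b in zip(frame, nxt)]))
    return out

# 3 real frames in -> 5 frames out: real, guess, real, guess, real
frames = interleave([[0, 0], [10, 20], [20, 40]])
```

With 60 rendered frames in, you get roughly 120 frames out, which is the whole pitch; the catch, as above, is what happens when the blended guess doesn’t match what the next real frame shows.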
The bigger issue is developers relying on that tech so they don’t have to optimize their code. Rather than DLSS being extra oomph, it’s going to be required for “acceptable” performance.
Can someone explain how AI can generate a frame faster than the conventional method?
(that’s part of the grift)
Which part? I mean, even if it isn’t generating the frames well, it’s still doing the work, so that capability is there. What’s the grift?