NightRaven49 · Master Member · Joined May 17, 2019 · Messages: 3,707 · Reaction score: 1,551
> If I made any mistake in these statements, let me know.

Your entire post is a mistake, but I'll add in details after lunch.
> Your entire post is a mistake, but I'll add in details after lunch.

Try me. AMD should have known what to do five years ago.
> The RT uses tensor cores which are capable of AI inferencing (the part where you can deploy the fully-baked AI model to answer new questions).

RT = ray tracing. Ray tracing on Nvidia GPUs doesn't run on the tensor cores; it runs on the dedicated RT cores. Tensor cores are used for AI tasks (such as DLSS), yes, but that is separate from RT.
> Which means the Nvidia GPU is future proofed for the AI PC that Microsoft is preparing to launch for Win12.

Not fully. Nvidia is generally quite stingy on VRAM, which limits the size of AI models that can run on their GPUs. You can already see this limitation with on-prem image generation models (e.g. Stable Diffusion), where the size of the generated image is tied to the size of the VRAM buffer. And if you want more VRAM, guess what, you have to pony up big time for otherwise diminishing returns. I'm not sure if the AI features of Win12 are on-prem, but if the current iteration of Copilot is anything to go by, you don't even need dedicated hardware for it.
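To put rough numbers on the VRAM point: the weights of a locally-run model set a hard floor on the card you need. A minimal back-of-envelope sketch, where the byte-per-parameter figures are standard precision sizes but the model sizes are just illustrative examples:

```python
# Rough VRAM estimate for running an AI model locally: weights dominate,
# and the precision you load them at sets the floor. This ignores
# activations / KV cache, which add more on top.

BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1, "int4": 0.5}

def weights_vram_gb(n_params_billion: float, dtype: str = "fp16") -> float:
    """VRAM needed just to hold the weights, in GB (1 GB = 1e9 bytes)."""
    return n_params_billion * BYTES_PER_PARAM[dtype]

# A 7-billion-parameter model at fp16 already wants ~14 GB for weights
# alone -- past what most consumer cards offer.
for dtype in ("fp16", "int8", "int4"):
    print(f"7B @ {dtype}: ~{weights_vram_gb(7, dtype):.1f} GB")
```

This is why quantised int8/int4 builds are so popular for on-prem use: halving the bytes per weight is far cheaper than buying the next VRAM tier.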
> AMD is the one who's dropping the ball. They totally missed the AI applications. In addition, AMD = buggy. This reputation has stuck for a long time and I don't see any meaningful improvement to date.

Totally not helped by the monopolistic actions of Nvidia, which they are still pulling to this day.
> Try me. AMD should have known what to do five years ago.

AMD does have ROCm as a promising alternative, especially when AI companies are actively trying to leave the walled garden of Nvidia. ASRock even actively promotes AI support on their 7000 series cards, with not unusable results. AMD also has NPUs baked into their latest processors. And if AMD = buggy (an outdated stereotype), then Nvidia = buggy as well, as they both have a similar amount of bugs in their consumer drivers.
> I absolutely dislike AMD. Especially the part where they purposely withheld Threadripper Zen 4 from the market for more than a year.

And they should absolutely be called out on it, but you have to hold Nvidia to the same standard as well.
> RT = ray tracing. Ray tracing on Nvidia GPUs doesn't run on the tensor cores; it runs on the dedicated RT cores. Tensor cores are used for AI tasks (such as DLSS), yes, but that is separate from RT.

Oh... you're right. I just searched and realised RT denoising is not AI-based but mathematically calculated from existing rays. Somehow I got confused between the AI-based DLSS and the RT denoising algorithm, because both are in a way similar to cheating by interpolation.
Ah yes, an Nvidia blog post on why Nvidia is better, very credible. And this is comparing CPUs to GPUs in AI workloads, which is not a fair comparison, as CPUs and GPUs work in fundamentally different ways.
> Not fully. Nvidia is generally quite stingy on VRAM, which limits the size of AI models that can run on their GPUs. …

You're right about the VRAM, it's all about saving cost and making profits for Nvidia.
> The RT uses tensor cores which are capable of AI inferencing (the part where you can deploy the fully-baked AI model to answer new questions). Same as the Intel and Mac NPU.
> Which means the Nvidia GPU is future proofed for the AI PC that Microsoft is preparing to launch for Win12.
> If I made any mistake in these statements, let me know.
> AMD is the one who's dropping the ball. They totally missed the AI applications. In addition, AMD = buggy. This reputation has stuck for a long time and I don't see any meaningful improvement to date.
> Also, if AMD can't do it, it's time for Chinese GPU manufacturers to step into the game to challenge Nvidia's price gouging.

I gladly dispute this claim. I've been switching between AMD and Nvidia graphics solutions and I certainly don't think AMD drivers are buggy. That sounded like the entire experience with an AMD graphics solution is bad, but that is clearly not the case. The point is, there are always software bugs, and I certainly don't feel these bugs destroy the user experience.

> Historically speaking, from my experience, I would agree that AMD drivers had more bugs than Nvidia. But at the same time, my current Nvidia card is the first since the 8500GS, soooooooooo...

I don't disagree that there may seem to be more software bugs with AMD graphics solutions. But generally, the way people have put it across sounds like they are "very buggy", which is factually inaccurate. The other thing people like to compare is DLSS vs FSR image quality, which I find very perplexing. When your GPU doesn't have the ability to run a game smoothly, the last thing I will kick up a fuss about is the quality of the image. So one either drops native resolution, drops graphics settings, or just enables whatever upscaling technology is available. Don't get me wrong, DLSS tends to look better, but the truth is that it does not run on every graphics solution out there. So what good is it if it cannot run on some graphics cards?
> I don't disagree that there may seem to be more software bugs with AMD graphics solutions. …

Upscaling is always the last resort for me.
> Upscaling is always the last resort for me.

Some games I will run at native resolution. For some games, I will upscale and frame cap them so that my GPU doesn't need to work so hard and runs cooler. I am generally not that fussed about how the game looks, unless the upscaled image looks really rubbish or introduces too many severe image quality issues.
> Some games I will run at native resolution. For some games, I will upscale and frame cap them so that my GPU doesn't need to work so hard and runs cooler.

I tweaked CP2077 until I gave up. In the end I disabled RT and just run at 1440p native high without upscaling, with FMF turned on, capped at 72 fps.
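The frame-cap trick above comes down to a per-frame time budget: the cap tells the GPU how many milliseconds it may spend on each frame, and any headroom beyond that is time it can idle and cool down. A minimal sketch (the cap values are just examples):

```python
# A frame cap converts directly into a per-frame time budget:
# budget_ms = 1000 / fps_cap. A GPU that can render the frame faster
# than the budget idles for the remainder, lowering load and heat.

def frame_budget_ms(fps_cap: int) -> float:
    """Milliseconds the GPU may spend per frame at a given cap."""
    return 1000.0 / fps_cap

for cap in (60, 72, 144):
    print(f"{cap} fps cap -> {frame_budget_ms(cap):.2f} ms per frame")
```

A 72 fps cap gives roughly a 13.9 ms budget, noticeably more relaxed than the 6.9 ms a 144 Hz target demands.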
CP2077 is relatively easy to tune and very worth tuning.
If you are having trouble, check out the BenchmarKing optimization guides; there is no need for all that fancy upscaling tech. If you are on Nvidia, you can turn on DLAA to clean things up if you have extra performance to spare after tuning.
Last year I did a comparison and uploaded it here; I added a little twist of my own and the results were excellent.
You do not need a high-end GPU to get good graphics in CP2077.
> I don't disagree that there may seem to be more software bugs with AMD graphics solutions. …

After switching to a Radeon GPU I forgot everything about RTX and DLSS. I happily play at native with HDR, and the colour is surprisingly better too.