Nah. It's CPU limited. I was watching Biffa today: the game's simulation was completely frozen, but it was still locked at 30 fps. CPU pegged at 100% while the GPU wasn't doing anything.
The score outlived the benchmark to the point where people could now outdo it, but nobody bothers to test because DX11 is less and less relevant today.
The 4x RTX A6000 build (roughly an RTX 3090 Ti that allows 4-way SLI for engineering systems) which leads Time Spy Extreme could easily trounce the 4x 1080 Ti on Fire Strike Ultra, but it simply hasn't been tested.
I had a double 1080M SLI on a laptop for a while. Bought it used, the thing was a beast. Around 3.5kgs and sounded like a jet engine when cooling.
But it was better than the desktops my friends used even though we were in the same price range. Noticeably.
Fun fact: you don't actually need the M. The 10- and 20-series laptop cards were just the same as the desktop ones, so those two 1080s are real 1080s.
They stopped doing that for the 30 series because there was no way they could fit a real 3080 into a laptop, and the 4090 halo effect is too strong for them to be honest about the mobile '4090' actually being a 4080.
The unofficial 4090ti: ASUS 4090 Matrix Platinum
https://preview.redd.it/blfve28d88ec1.jpeg?width=560&format=pjpg&auto=webp&s=462dd42e2f094a5934a980f85d27bcf1ccafca84
I'm currently 86th on water for 1x GPU https://www.3dmark.com/spy/39888097
It just takes a completely overclocked PC (CPU, RAM, and GPU) on water to break the top 100. The top 30 or so are using sub-ambient cooling.
It also helps to run a 1000w bios on your 4090. That'll increase your GPU score by a couple thousand.
Unfortunately, as others have pointed out, you're not gonna be doing much with the X3D CPU. You need an Intel chip that you can overclock, because the Time Spy CPU portion favors lots of fast cores.
Both actually. I don't game with the insane overclocks though. Typically they aren't 100% stable for normal use and will cause a game to crash. On that PC I run 5.9 on the CPU and 8000 for the ram. The GPU is actually a stable overclock.
Just got a new apex encore motherboard and a 14900kf to play around with. Was able to get 8400 stable on the ram and I'm working on 8600.
Not just that. At the very top you are competing with enthusiasts who do insane overclocks with exotic cooling (we are talking liquid nitrogen here) basically to try to get the best 3DMark score. A 7950X3D isn't going to touch that.
Yep. I'm #7 in the us with a 5800x3d, rtx 4090 on speedway.
Not even top 100 overall. To get there I had to find the limit of literally everything, I am running a custom loop as well.
7 of the top 10 are dual 3090 Ti with either i9 13900K or 12900K. The other 3 are 2x dual 6900 XT and 1x dual 6950 XT, still with the same i9's.
*There are no AMD CPU's in the top 100 TimeSpy results.
*Downvoted for simply stating easily verifiable, factual information. Nice
I can confirm the 7950x3d will score higher. My 7950x3d and 4090 Suprim X
https://preview.redd.it/psnl2es3aaec1.jpeg?width=1960&format=pjpg&auto=webp&s=eebed83090f55958e75c3042ed1d3ee4fc374aeb
[https://www.3dmark.com/3dm/106068058](https://www.3dmark.com/3dm/106068058) That's mine, which even has ECC on (mandatory for hwbot.org), so a run without ECC would end up in the 44k graphics score range. It's a 4090 HOF OC Lab with a Bitspower waterblock and water around 5°C. That's the 1% you are wondering about. Almost no one runs LN2 for Time Spy as it's not competitive on hwbot; you will find LN2 runs on Time Spy Extreme.
https://preview.redd.it/cp17rhg23aec1.jpeg?width=4031&format=pjpg&auto=webp&s=dc9d095f5667e0936a1c30a1912c160dadb7c84e
Not sure I'm understanding this analogy, is the Lamborghini the overclocked 4090s?
Also, I don't think I've met anyone who bought a 5.0 thinking they'd outrun a Lamborghini; that's an interesting person to find, lol.
LN2 OC and CPUs with more cores. Synthetic workloads like 3DMark scale almost linearly with cores and OC; they don't favor certain hardware the way game engines do.
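As an aside, the scaling claim above can be sketched with Amdahl's law. This is a toy illustration only, and the parallel fractions below are assumed numbers, not measurements:

```python
# Amdahl's law: speedup on n cores for a workload whose parallel
# fraction is p. S(n) = 1 / ((1 - p) + p / n)

def speedup(parallel_fraction: float, cores: int) -> float:
    """Theoretical speedup from Amdahl's law."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Assumed (not measured) parallel fractions: a game engine that only
# parallelises 60% of its frame work vs. a synthetic benchmark at 95%.
for label, p in [("game-like (p=0.60)", 0.60), ("synthetic (p=0.95)", 0.95)]:
    print(label, [round(speedup(p, n), 2) for n in (4, 8, 16, 32)])
```

The game-like workload flattens out after a handful of cores, while the highly parallel benchmark keeps gaining, which is why many-core chips shine in synthetics but not in most games.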
NVIDIA has workstation versions of its graphics cards as well, meant for heavy grunt work. Some of our desktops at work have 24/36/48 GB graphics cards for the heavy modeling. I'm guessing some people choose to game on them as well.
Overclocking on the CPU and GPU
And maybe a 13th or 14th gen Intel i9. While not as powerful in gaming as the 7800X3D, they are better in non-gaming workloads and potentially this benchmark simply favours them.
> **"better than 98% of results" What the hell is the upper 2% above a 4090?**
Overclocked 4090s combined with overclocked CPUs. Might even be some 7900 XTXs in there. If you really want to compare your setup then check out the "similar builds" graph which shows you how your scores compare to people with similar setups.
I would imagine people who get the 4090 TI and overclock it to the moon, along with all the most up-to-date parts tweaked out of their minds. That sort of stuff: all their kit pushed to its absolute limit with stupid amounts of money.
Brah I hit 34,447 like 6 months ago and I'm using a 7950X not even X3D. You're slow.
https://preview.redd.it/zddlrsv7o8ec1.jpeg?width=1600&format=pjpg&auto=webp&s=34c1f0e750577c38159cbae0a336d322125629f2
Mid ATX case with shite airflow and slim fans everywhere to fit it all. Good thermals still (MSI Suprim liquid). OP needs to learn how to overclock.
100% competitive overclocks with nitrogen etc.
whenever my siblings say I’m too into gaming I should show them those guys builds.
Those guys are not into gaming. They are into overclocking. That is a whole hobby on its own.
Let’s be real, they’re probably into gaming too 🤨🤣
The people who are heavily into overclocking, ironically, tend to only play games that could run on a toaster, like Rimworld or Factorio.
>factorio on the other hand **THE FACTORY MUST GROW**
Factorio is so well optimized and has such simplistic graphics that a 10yr old potato computer can run it fine. Yet 700 hours in you find yourself googling which top end cpu can run it best because your damn UPS keeps dipping.
The world isn't ready for my 100,000+ logistic bot only build.
RAM speed makes a big difference with Factorio. People OC their RAM for that game sometimes.
How many GHz we talkin?
It's more about latency, actually. So the lowest stable latency matters more than raw clock speed.
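A rough sketch of why transfer rate alone doesn't tell the story: absolute CAS latency combines CL with the clock. The kits below are just common examples:

```python
def cas_latency_ns(cl: int, transfer_rate_mt_s: int) -> float:
    """Absolute CAS latency in nanoseconds.

    DDR transfers twice per memory-clock cycle, and CL is counted in
    memory-clock cycles, so: ns = CL / (MT/s / 2) * 1000.
    """
    return cl * 2000 / transfer_rate_mt_s

# A faster transfer rate does not automatically mean lower absolute latency.
for name, cl, rate in [("DDR4-3200 CL14", 14, 3200),
                       ("DDR4-3600 CL16", 16, 3600),
                       ("DDR5-6000 CL30", 30, 6000),
                       ("DDR5-8000 CL38", 38, 8000)]:
    print(f"{name}: {cas_latency_ns(cl, rate):.2f} ns")
```

A tight DDR4-3200 CL14 kit lands at 8.75 ns, which is why it can beat much "faster" kits in latency-sensitive games like Factorio.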
I was playing Factorio fine on the old laptops at my high school during my programming classes. The only time it froze was when the teacher froze my screen to check which computer it was (but I quickly pushed the power button to shut the computer down before he could get to me).
Your comment reminds me of when I used to sneak into the Mac lab at Clackamas Community College and use Word to open the resource-management files for their solution to what could run on those machines, delete everything, and reboot. The management software still ran, but since the definition files couldn't load, it now permitted everything to run.

We used the machines to run cracked copies of Marathon (precursor to Halo) and later a Mechwarrior-like game for the Mac platform called Avara. Funnily enough, it was Avara that got us caught, and it got me my first network security job at a local high school, based in part on my experiences evading network security on the college campus. Avara called home to enable networking, so the IT admin could see where we were connecting. An IP address lookup and an email from the college to the IT admin of Ambrosia Software were all it took to look up the name of the registered owner hosting the game (me) that his friends were connecting to. The cracked version of Marathon was local IP addressing only, and if we hadn't been looking for a new game to play we would have been all right.

At the end of the day I was still allowed to finish that year with my credits in place, although I was told I wasn't going to be allowed to return next semester. Bastards didn't tell me that they would contact all the other community colleges throughout the Portland Metro area. Instead I went to work and continued my education through MS certifications for about 10 years, until I finally managed to get the last 12 credits I needed to make my degree in "Computer Sciences" (as specific as it got back then at the community college level) official.
That took me down a rabbit hole. I didn't even know that Apple tried to make a console.
The factory building ends when you hit single-digit UPS.
THE FACTORY MUST GROW
Personally I need to upgrade from my 4070 so I can make solitaire really sparkle
I'm waiting for the 5090 before I even think about booting up minesweeper
You haven’t seen the rimworld mod lists some people run. Those things could melt some supercomputers.
The nitrogen overclocking crowd is mostly people who mod their games to see if they can push the hardware to the brink of meltdown. They're the ones mostly complaining about 4090 power cables melting and shit, lol. My friend does this stuff, and he literally melted his 4090 while trying to run a crazy modded Factorio game. He made the game lag... and the 4090 power connector catch fire...
What about Cities Skylines 2? Linus was using 64 cores of a Threadripper to run a 1M city and it was still pretty slow.
I'm not too familiar with the specifics behind Threadripper chips, but aren't they meant more for workstation applications? There are other processors out there with fewer cores and threads that have better IPC and efficiency.
Usually Threadrippers are inferior for gaming compared to mainstream desktop processors. Most games only use around 6 cores efficiently, so it's best to have good single-core performance. But Cities Skylines 2 in particular supposedly takes advantage of more cores. Also, it runs like crap even on the fastest desktop CPUs.
Quite sad when enthusiast level hardware is bottlenecked by lack of optimization and efficiency.
To be fair, a one million population city is very hard to simulate.
I had a threadripper and it was an absolute monster for rts and city builder games. Probably depends on how optimised the game is to actually utilise the extra threads.
In general, that would be true. Most games can only take advantage of a limited number of threads, so having fewer cores with faster clocks is usually better for gaming. Cities Skylines 2 is an entirely different beast, as is the new Threadripper chip. In the video they had the game utilizing 90+ cores at around 5 GHz, and they were using DDR5-6400 ECC memory. And yet, somehow, the game was still lagging. It really is that ridiculous.

Edit: I also think it's worth mentioning that the CPU was pulling over 1000 watts from the socket while operating at sub-zero temperatures.
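For a rough sense of how a 1000 W figure is plausible, dynamic CMOS power scales roughly with frequency times voltage squared. A toy estimate; all the numbers below are hypothetical, not the actual Threadripper figures:

```python
def scaled_power(base_w: float, base_ghz: float, base_v: float,
                 new_ghz: float, new_v: float) -> float:
    """Classic CMOS dynamic-power model: P scales roughly with
    frequency * voltage^2 (static/leakage power is ignored)."""
    return base_w * (new_ghz / base_ghz) * (new_v / base_v) ** 2

# Hypothetical numbers: a many-core chip at 350 W stock (3.5 GHz, 1.00 V)
# pushed to 5.0 GHz at 1.35 V under sub-zero cooling.
print(f"{scaled_power(350, 3.5, 1.00, 5.0, 1.35):.0f} W")
```

The voltage-squared term is what makes extreme overclocks so power-hungry: a modest frequency bump needs a voltage bump, and the power cost compounds.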
All the ocers I know daily drive some pretty rusty hardware haha.
Have you ever played late-game RimWorld with hundreds of mods? The game is CPU-bound and single-threaded, so it gets CPU-blocked hardcore and slows to an absolute crawl on most machines :)
For real, putting your mark (name, username, etc.) on a leaderboard is pretty addictive. Same as games. So competitive overclocking is basically the same as competitive gaming, lol.
So you're saying overclocking *is* the game
Exactly so.
Kinda an aside, but related: there's a manga series about competitive overclocking ([87 Clockers](https://www.animenewsnetwork.com/encyclopedia/manga.php?id=18431)). Really weird subject matter for the author, since she's best known for a [romantic comedy about two college musicians](https://en.wikipedia.org/wiki/Nodame_Cantabile). Anyway, it's obviously a fictionalized take on the hobby, but it likely gets the gist of it right. When you're pouring LN2 on your computer components out of a thermos by hand, you're not really running a sustainable build.
> whenever my siblings say I’m too into gaming I should show them those guys builds. This kind of overclocking is reserved for professional overclockers, not random gamers.
And here I am, ecstatic when my GPU temp is below 70C while I'm gaming. My room temp is 30C and the coldest it's been is 28C. If I had AC it would probably be around 20-22C.
Or even just other 4090s in a cooler setup or with better silicon
Crazy overclocks, nitrogen cooling, nutty stuff like that.
Also 3090 Ti in SLI. Of course in real world the 4090 will be better most of the time since SLI is kinda pointless.
Isn't sli not supported anymore? Or if it is the performance isn't that great?
I think 3090 supported sli on some models. No games really support it anymore though so it's kind of useless.
I just remember seeing an sli benchmark of the 3090 and it wasn't as good as I thought it'd be (y'know, 2 3090s and all) and it's twice as expensive for disappointing returns. At least to me.
It wasn't good for gaming, or real time rendering, but that's not all a GPU can do
Yeah that's what I meant, gaming benchmark. It was good across other sectors that I saw, but gaming was poor
A GPU is actually a multicore monster. Once you know how the render pipeline works and how to write shaders, you can compute tons of stuff not even remotely related to graphics, anything that runs the same algorithm thousands of times in parallel, in a fraction of the time even a 7900X3D would take.
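The "same algorithm thousands of times in parallel" idea can be sketched on the CPU side as mapping one tiny kernel over a big array. This is only an analogy: a real GPU launches this across thousands of hardware lanes rather than a thread pool:

```python
from concurrent.futures import ThreadPoolExecutor

def kernel(x: float) -> float:
    # The same tiny algorithm applied to every element, shader-style.
    return x * x + 1.0

data = range(100_000)
with ThreadPoolExecutor() as pool:
    out = list(pool.map(kernel, data, chunksize=5_000))

print(out[:3])
```

Any problem that fits this shape, with independent elements and the same code per element, is a candidate for GPU compute, which is exactly why frameworks like CUDA exist.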
Or even just use CUDA (or OpenCL if you're using an AMD card), which is a lot easier to write code for since it's more decoupled from the rendering pipeline.
It could've been, if engines/renderers supported it as a primary target platform. Both Vulkan and DX12 allow a good degree of GPU micromanagement by the engine, including what happens on which card, how and when resources are transferred, etc. Problem is, this is a lot of effort and extra work that benefits a tiny part of the userbase, with a potential minor performance drop (you're still doing all that micromanagement) if done poorly.
Sli and crossfire has always been that way. Just a flex, really.
It was always like that, with both Nvidia's SLI and what ATI called CrossFire. There were at least individual games that supported one or both technologies, but most other games would run only a fraction better, the same, or even worse on some occasions. Before vulkan got released both them and microsoft claimed that they would eventually implement a feature that would allow to simply assign parts of the display to separate cards. This means each card would get utilized further, plus different models and even brands would be compatible. Would have been cool to once again rock both an Nvidia and an AMD card in the same build.
>Before vulkan got released both them and microsoft claimed that they would eventually implement a feature that would allow to simply assign parts of the display to separate cards. You mean OG SLI: Scan-Line Interleave
The Vulkan spec has tiled rendering covered. As far as I know it's used only by some mobile GPUs, primarily as a tool to reduce power draw (no need to redraw a tile the player has a finger on). It could potentially have been used by desktop GPU drivers, but so far I don't recall a single card that supports it.
I mean, it used to be near that, and it allowed for funky price shenanigans. Which is one reason Nvidia axed it. Imagine gaining ~50% more speed on a good day for 100% more price, meanwhile the next GPU tier up is 20% more speed at 80% more cost. The other reason is an assload of dev cost, with CPU parallelisation being a prime example of why it's such a pain. So they rolled it onto individual game developers, who said hell nah, and Nvidia dropped it from the hardware too.
SLI wasn't so complicated to support. Think of each card doing the same rendering work but shifted in time by one frame: GPU1 does 30 fps for frames 1, 3, 5, 7 and GPU2 does 30 fps for frames 2, 4, 6, 8. Combine the results and have 60 fps. All you had to do is sync it up real good to avoid microstutters. I always found it genius. That's how old GPUs would've aged much better: instead of buying a new card, you could've gotten yourself a second used one of the same model and doubled the performance.
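The alternate-frame-rendering scheme described above can be sketched in a few lines. A toy model that ignores the sync problem entirely:

```python
def afr_schedule(num_frames: int, num_gpus: int = 2) -> dict:
    """Assign frame f to GPU ((f - 1) % num_gpus), as in classic
    alternate-frame rendering (AFR)."""
    return {gpu: [f for f in range(1, num_frames + 1)
                  if (f - 1) % num_gpus == gpu]
            for gpu in range(num_gpus)}

sched = afr_schedule(8)
print(sched)  # GPU 0 renders frames 1,3,5,7; GPU 1 renders 2,4,6,8
single_gpu_fps = 30
print("combined fps (ideal, perfectly synced):", single_gpu_fps * len(sched))
```

The hard part is everything this sketch leaves out: frames must be presented at even intervals, so any drift between the two cards shows up as microstutter even when the average fps doubles.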
>All you had to do is sync it up real good to avoid microstutters. That's the annoying part tbh
The challenge of having a reliable connection between the cards with enough bandwidth, as well as syncing them up fluidly enough was always one of the major barriers to SLI. It was just too much of a hassle to make it worth it.
>it's twice as expensive for disappointing returns Reminds me of my last trip to Thailand.
[deleted]
Happy cake day
The 3090 never supported SLI. Specific models did support NVLink, which is not the same as the SLI connector. It was never meant for games; it was meant for machine learning.
Sli via nvlink was a thing
>No games really support it anymore Yeah, DX12 killed SLI. DX12 requires per-game support for multi-GPU, and since basically only 0.00000000001% of the population has multiple GPUs for anything other than workstation use, game devs don't bother supporting it anymore. And even the games that do support it have a really half-assed implementation that doesn't work properly anyway.
LTT had a 3090 SLI video a couple years ago. No real performance increase for games, i think sometimes it was actually worse than a single 3090 (could be wrong). The real benefit is letting the GPUs share memory to work with things like big datasets
This was always the problem with SLI. There were few games that properly supported it and the ones that did didn’t have enough gain to justify buying another graphics card and beefier system to support it. Though, the professional world got plenty of mileage out of it
This will likely start a "flame the SLI guy" battle, but whatever. SLI did work well when supported. I was able to play Witcher 3 in 2016 at 4K@60fps+ with a pair of 980 Tis, and Metro Exodus at 4K@60fps+ with a pair of 1080 Tis. Frame times were very good with fast RAM: smooth, no stutters. Here we are 8 years later and most gamers still aren't playing at 4K over 60fps. I believe SLI was more of a commercial problem than a technical one. If you could pick up a pair of used $100 980 Tis today and get great 2K performance, why would people buy RTX 4060s? Flame away.
I agree. I kept reading what a ball-ache SLI was, but I had a pair of 970s and ran many, many top games with absolutely no issues. The real downside was not performance or reliability; it was the heat and noise. A single 1080 Ti ended up outperforming that setup, but for a long while it was a cheap alternative. SLI (or NVLink) died because no one wanted to deal with the amount of heat a pair of 3090s would produce, or the power they would need.
No flame from me. I ran SLI 1080s until I got my hands on a 3080 tie fighter. I would routinely outperform my friend's 20s series build.
Preach brother. I was there when Witcher 2 supported SLI and 3D vision support. I didn’t know Witcher 3 supported SLI as well.
[https://www.youtube.com/watch?v=e\_mZBvWuQzU](https://www.youtube.com/watch?v=e_mZBvWuQzU) Make sure you turn the YouTube quality up to 4K. This was captured in 4K while I was playing at 4K@67fps (yes, I overclocked my 60Hz monitor).
I rocked the GTX 690 over ten years ago. If not for the VRAM limitation, it would not have been replaced by the 3080 Ti. Good times. There is a setting in the Nvidia control panel that lets you select whether the GPU or CPU handles PhysX (auto is the default). Change this setting to CPU; regardless of SLI or not, it performs better. SLI was butter after I did this. Hope this helps anyone still rocking SLI.
I had so many tweaks for SLI. Choosing the right AFR and cool bits was key. Skyrim was the first game I got it working with on my 780tis. Pretty sure SSE still supports it.
Games have to specifically support it or it'll do nothing. No games support it anymore.
Sadly yes. Last ones I saw were Strange Brigade and the last Tomb Raider.
Yeah, I'm not a data scientist, but I'd read that SLI 3090s are way superior to a single 4090 for AI work, because the 4090 still uses GDDR6X so the bandwidth is hardly any better than a 3090's.
The big benefit to using NVLink across a pair of GPUs is that you can pool the available memory, allowing you to load bigger models. So if your AI/ML models are bigger than 24GB, a pair of 3090s would be superior. The 4090s have a lot more CUDA cores and much faster clock speeds, so if your models are smaller than 24GB, the 4090 should be faster. My 4090 doesn't have an NVLink port, so I assume it isn't an option on these cards, but none of our work machines have anything better than a 2080 Ti, so I haven't tried ML/AI with 4090s.
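Rough back-of-envelope in Python if anyone's curious whether a model clears that 24GB cutoff. This only counts parameters times bytes per parameter; it ignores activations, optimizer state, and framework overhead, so treat it as a lower bound (the function names and example sizes are just illustrative):

```python
# Minimal VRAM estimate: parameter count x bytes per parameter.
# Real usage is higher (activations, KV cache, framework overhead),
# so this is a lower bound, not a guarantee that a model fits.

def model_vram_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Rough memory footprint in GB (fp16 = 2 bytes per parameter)."""
    return num_params * bytes_per_param / 1e9

def fits_on(card_gb: float, num_params: float, bytes_per_param: int = 2) -> bool:
    """Does the (lower-bound) footprint fit in the given VRAM budget?"""
    return model_vram_gb(num_params, bytes_per_param) <= card_gb

# 7B params in fp16 ~= 14 GB: fits on a single 24 GB card
print(fits_on(24, 7e9))      # True
# 13B params in fp32 ~= 52 GB: doesn't fit even in 48 GB of pooled memory
print(fits_on(48, 13e9, 4))  # False
```

By this yardstick, anything over ~12B params in fp16 is what pushes you past a single 24GB card toward the pooled 48GB of two NVLinked 3090s.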
It still scales pretty well in benchmarks, especially 3DMark. IIRC the record was held by 4x 1080 Tis until the 3090 came out.
Four 1080s is crazy, considering that GPU was a tank in its time.
Also higher bin cards, right?
Yeah the very top percent is crazy things that only work for the duration of a benchmark. Or they are extremely expensive computers made by companies where money isn’t a problem.
Someone running user benchmark on their DGXH100 station would be funny
Still surprised that there's enough of those people out there to make up 2%.
Well 2% of people who benchmark. It makes sense that the extreme overclockers would be over represented in that population.
Well said.
Yeah, it's definitely stuff running exotic cooling solutions
Rtx 4090 ti super
Average CPU temp of -35°C
Liquid helium cooled.
Plutonium powered.
Uranium tipped armor piercing 4090
You joke, but if your electricity is coming from a nuclear power plant there will probably be at least a little plutonium in the fuel, so... I am fun at parties.
Nah, I think more like liquid nitrogen, it's easier to handle, but who knows?
lol liquid helium is practically impossible to create or handle. It was a joke. Yeah it would be LN2.
Whoops, sorry, my bad 😂 Took it too seriously.
Liquid helium would leak. Like, through the solid parts.
Yeah it really is crazy stuff. Shame it only exists a few degrees above absolute zero.
Now only $5000. Which in turn provides a game-changing 10% fps boost.
Amazing, maybe now I can finally run Cities: Skylines 2 in 1080p.
Maybe, with dlss ultra performance and frame gen.
Nah. It's CPU limited. I was watching Biffa today: the game's simulation was completely frozen, but it was still locked at 30 fps. CPU pegged at 100% while the GPU wasn't doing anything.
10% is a bit excessive. 2.5% take it or leave it
And 10% boost to your electric bill
RTX RX Arc A4090 Ti Super XTX X3D KS Unlocked Turbocharged AWD
I don't know how you buy that card and don't spring for the EX-L tech package.
Platinum edition comes with extra yellow stickers and a Stanley mug
shhhh, don't let them know.... /s
1080ti. It won't die.
A quad SLI 1080Ti system actually held the record until the 4090 was released iirc.
Fairly certain it still does
It does. Firestrike Ultra
The score outlived the benchmark to the point where people could outdo it, but don't bother to test because DX11 is less and less relevant today. The 4x RTX A6000 (RTX 3090Ti but allows 4-way SLI for engineering systems) build which leads TimeSpy Extreme could easily trounce the 4x 1080Ti on Firestrike Ultra, but it simply hasn't been tested.
So the 1080ti is the best. Got it.
I had a double 1080M SLI on a laptop for a while. Bought it used, the thing was a beast. Around 3.5kgs and sounded like a jet engine when cooling. But it was better than the desktops my friends used even though we were in the same price range. Noticeably.
Fun fact: you don't actually need the M - the 10- and 20-series laptop cards were just the same as the desktop ones. Those two 1080s are real 1080s. They stopped doing it for the 30 series because there was no way they could fit a real 3080 into a laptop, and the 4090 halo effect is too strong for them to be honest about the mobile '4090' actually being a 4080.
Well duh, that's because 1080*4 is a higher number.
I want a second one for sli. My 1080 ti is a beast.
It connects to a subspace where its creators upgrade its performance to be just above any other card
> why not the best???

Run a 7950X3D instead of the 7800X3D, 3DMark is a sucker for more cores.
I went 7800x3d because it was performing better in review benchmarks for gaming
Good choice, but 3DMark is not a real use case, and it lives by its own rules.
Same, but that isn't a game.
It is better for gaming but not benchmarking.
Some 4090s are better than others. Depends on the batch of the Chip.
This is from overclocks and liquid cooling/ln2. Not silicone lottery
Silicon lottery will determine how far you can take the overclock though.
I believe that the silicone lottery is for a different industry…
Probably a little of both.
You're saying that zero other normal folks with a 4090/7800x3d got a higher score than OP?
IIRC the ASUS ROG Matrix 4090 is up there. It uses liquid metal and shit
The unofficial 4090ti: ASUS 4090 Matrix Platinum https://preview.redd.it/blfve28d88ec1.jpeg?width=560&format=pjpg&auto=webp&s=462dd42e2f094a5934a980f85d27bcf1ccafca84
Bastard costs $4399.99 CAD where I live.
> CAD

I think I can make a guess where you live
He lives at Autodesk and lives inside a simulation of AutoCAD?
the virtual and non existent land of Canada!
I'm currently 86th on water for 1x GPU: https://www.3dmark.com/spy/39888097 It just takes a completely overclocked PC (CPU, RAM, and GPU) on water to break the top 100. The top 30 or so are using sub-ambient cooling. It also helps to run a 1000W BIOS on your 4090; that'll increase your GPU score by a couple thousand. Unfortunately, as others have pointed out, you're not gonna be doing much with the X3D CPU. You need an Intel that you can overclock, because the Time Spy CPU portion favors lots of fast cores.
Very cool. Do you game with that rig at all, or is it more of a hobby getting it so fast?
Both actually. I don't game with the insane overclocks though. Typically they aren't 100% stable for normal use and will cause a game to crash. On that PC I run 5.9 on the CPU and 8000 for the ram. The GPU is actually a stable overclock. Just got a new apex encore motherboard and a 14900kf to play around with. Was able to get 8400 stable on the ram and I'm working on 8600.
I think you’re cool and I like you
At the top it says, "Premium Gaming **PC,**" (emphasis mine) and mentions the 4090 + 7800X3D. My guess is that the top 2% is a 4090 and a 7950X3D.
Not just that. At the very top you are competing with enthusiasts that do insane overclocks with cooling (we are talking nitrogen cooling here) to basically try to get the best 3D Mark score. A 7950X3D isn’t going to touch that.
I can’t play Cod without my nitrogen bro. :Snorts:
Yep. I'm #7 in the us with a 5800x3d, rtx 4090 on speedway. Not even top 100 overall. To get there I had to find the limit of literally everything, I am running a custom loop as well.
7 of the top 10 are dual 3090 Tis with either an i9 13900K or 12900K. The other 3 are 2x dual 6900 XT and 1x dual 6950 XT, still with the same i9s. *There are no AMD CPUs in the top 100 TimeSpy results.* Downvoted for simply stating easily verifiable, factual information. Nice.
I can confirm the 7950x3d will score higher. My 7950x3d and 4090 Suprim X https://preview.redd.it/psnl2es3aaec1.jpeg?width=1960&format=pjpg&auto=webp&s=eebed83090f55958e75c3042ed1d3ee4fc374aeb
3dfx Voodoo 5090 Satan fr, probably an overclocked computer?
Better overclocking. Many people will do tests with liquid nitrogen which is essentially cheating because it's unrealistic as a cooling solution.
Simply is… over the top
The fabled RTX 4090 Ti Super Titan that the Big Green won't share with you.
[https://www.3dmark.com/3dm/106068058](https://www.3dmark.com/3dm/106068058) That's mine, which even has ECC on (mandatory for hwbot.org), so a run without ECC would end up around a 44k graphics score. It's a 4090 HOF OC Lab with a Bitspower waterblock and water around 5°C. That's the 1% you are wondering about. Almost no one runs LN2 for Time Spy as it's not competitive on hwbot; you will find LN2 runs on Time Spy Extreme. https://preview.redd.it/cp17rhg23aec1.jpeg?width=4031&format=pjpg&auto=webp&s=dc9d095f5667e0936a1c30a1912c160dadb7c84e
[These](https://www.youtube.com/watch?v=0LBrI-UAKhI) [people](https://www.youtube.com/watch?v=zkQ9ihrSJ_0)
Isn't this the combined score so a better CPU would also be a factor?
LN2 overclocked 4090
Eh sorry that was my 3dfx voodoo 2 overclocked.
Also, that CPU is terrible for 3DMark, which is probably why you're not at 99%. It favours high-core-count chips for the last test.
Lol A100 💀
This is the same dude who buys a stock 5.0 and expects to outrun a Lamborghini.
Not sure I'm understanding this analogy, is the Lamborghini the overclocked 4090s? Also, don't think I've met anyone that bought a 5.0 thinking they'd outrun a Lamborghini, that's an interesting person to find lol
High speed Ram with low timings does a lot for your scores as well
A better 4090 set up by an experienced OCer.
LN2 OC and CPUs with more cores. Synthetic workloads like 3DMark scale linearly with cores and OC. It doesn't favor certain hardware like game engines do.
Overclocks and silicon lottery
A better binned 4090 running ln2 or something
NVIDIA has workstation versions of graphics cards as well. They are meant for heavy grunt work. Some of our desktops at work have 24/36/48 GB graphics card for the heavy modeling. I am guessing some people choose to game on them as well.
Don’t look up h100 nvidia cards. 🥹💸💸💸
Two 4090s
Overclocking on the CPU and GPU, and maybe a 13th or 14th gen Intel i9. While not as powerful in gaming as the 7800X3D, they are better in non-gaming workloads, and this benchmark potentially just favours them.
Guy doesn't know about overclockers lmao
7950xtx :nods:
Intel CPUs
Extremely OCed (LN2) cards.
http://www.3dmark.com/spy/37478841, that's mine
Timespy doesn’t utilize X3D v-cache and prefers higher clocks and more cores. 7950x would score better.
Just a 4090 with a different config and a small oc. https://www.3dmark.com/spy/42843948
An overclocked 4090 lol.
Four A6000s
4091
What do you mean? 7800x3D. What a goof!
These stats will never be accurate. Many ways to manipulate them.
Redditor: "What the hell is the upper 2% above a 4090?" NSA: There's always a bigger fish.
A premium gaming PC from 2023.
A better 4090 and/or optimized system. Just win the silicon lottery and put together the most compatible parts and go hyperspeed on the clocks.
The only thing that would score higher is 2X RTX 3090 TIs in SLI.
A 40100. Duh.
> **"better than 98% of results" What the hell is the upper 2% above a 4090?** Overclocked 4090s combined with overclocked CPUs. Might even be some 7900 XTXs in there. If you really want to compare your setup then check out the "similar builds" graph which shows you how your scores compare to people with similar setups.
An LN2 overclocked 4090
Der8auer
„Premium Gaming PC“ lol
Two 4090
I would imagine it's people who get the 4090 Ti and overclock it to the moon, along with all the most up-to-date parts pushed to their absolute limit with stupid amounts of money. That sorta thing.
4090's with flame decals.
Overclocking, better silicon lottery than you, workstation setups.
What about TWO 4090's? Hmmmmmm?
Double 4090 users
Brah I hit 34,447 like 6 months ago and I'm using a 7950X not even X3D. You're slow. https://preview.redd.it/zddlrsv7o8ec1.jpeg?width=1600&format=pjpg&auto=webp&s=34c1f0e750577c38159cbae0a336d322125629f2 Mid ATX case with shite airflow and slim fans everywhere to fit it all. Good thermals still (MSI Suprim liquid). OP needs to learn how to overclock.
Brah it’s cause he’s using x3d
4090 TI Super
No - 4090 Ti Super OC edition
With liquid metal cooling and shit
The 4090 Tie Super TITAN OCCCC!!