bignick217 wrote: ↑Mon, 9. Sep 19, 11:59
> I'm giving you a fair bit of latitude with a lot of your responses. Some things you get right. Others you missed the mark, but were close enough. This one, you are way off and it needs to be addressed. I think I know my own processor better than you do. First off, I did not "incorrectly" state my processor initially as the R9 1950X. While Threadripper is the colloquially accepted name designation for all "Threadripper" processors, R9 is its actual class designation. In the newest 3rd generation of Ryzen processors (technically "Zen 2"), the R9 designation has been taken up by AM4-class processors that exceed 8 physical cores, but in the first generation of chips, R9 was Threadripper's designation.

Feel free to correct the Wikipedia article then. That was my first stop, and they referred to your CPU as "Ryzen Threadripper 1950X".
https://en.wikipedia.org/wiki/Ryzen
It is possible that the Ryzen 9 class name was dropped from those CPUs around generation 2, when AMD decided to push the Threadripper branding upmarket to compete with Intel HEDT. This would make sense from a cost point of view, since those Threadripper parts cost about the same as the new Ryzen 9 3900X and 3950X, minus the more expensive motherboard.
bignick217 wrote: ↑Mon, 9. Sep 19, 11:59
> Second, are you out of your mind? No, it's not slow for games. Never has been. It is more than capable of handling everything thrown at it with good, high framerates. Especially if you pair it with 3200MHz RAM (and even better with CL14 Samsung B-die RAM modules) like I have, which speeds up the Infinity Fabric and in turn the die-to-die communication speed. What it is not good for is ultra-high framerates, if you're wanting to push past 200fps for use with 200Hz+ monitors at 1080p resolution. But at higher resolutions like 1440p, there is very little difference between the 1950X and other processors of its generation, and at 4K the differences are virtually indistinguishable. It was Intel's IPC and frequency advantage that allowed them to hang on to the ultra-high-framerate lead. A lead they have been quickly losing over successive Ryzen generations. But in case you didn't notice in my original post, I don't game on this CPU at 1080p. I game at 4K. Which means I don't care about ultra-high framerates. If I did, I would be playing competitive games like COD and other FPS games where that matters, and not X4 (a game in which I only want to maintain 60fps).

Except when the game you are playing is CPU bottlenecked, in which case it does make a huge difference. X4 is largely CPU bottlenecked. Any uplift in CPU performance translates directly into higher frame rates, because the CPU cannot generate frames fast enough to keep up with the GPU.
This is a big problem with RTS and strategy games, because they may require much more complex simulation than FPS/TPS-style games. X4 is something of a hybrid of the two, since you can build huge, complex economies and fleets while also flying around shooting things in epic battles. That combination is likely where the performance bottleneck comes from.
3200 MHz RAM is kind of a given now, seeing how all third-gen Ryzen parts support it natively. Using anything less is rather wasteful, given how cheap RAM is up to 3200 MHz. Only beyond that does the price per GB increase steeply, which is likely why AMD chose 3200 MHz as the native speed for third-gen Ryzen. And yes, it does make a difference when gaming, but that is largely due to the reduction in memory latency rather than memory bandwidth; bandwidth only becomes a bottleneck in massively parallel tasks, which few games are.
bignick217 wrote: ↑Mon, 9. Sep 19, 11:59
> On top of that, if you had done your homework before speaking on this topic, you would know that, back then, the 1950X actually outperformed the 1800X, 1700X and 1600X in most games of that generation. Some by a little. Some by a considerable margin. There were only a few games where the 1950X ended up being slower, and that was usually due to an issue between the game and the UMA/NUMA memory configurations, which you could usually fix by simply changing a setting. The 1950X was actually so good while gaming that when the 2nd Gen Ryzen processors released, while the AM4 chips saw a considerable increase in gaming performance, the 2950X only saw a marginal 5-10% improvement that equated to about 5 additional frames per second on average. There were one or two outliers that saw a better gain of about 10 frames, but for the most part it was only a few frames' difference.

I did not raise this because we are now in 2019 with third-gen Ryzen, where even a $200 3600 will easily beat the 1950X in games, let alone a properly working 3900X or 3950X (when released; the current misbehaviour is an admitted AMD bug, fix coming soon).
bignick217 wrote: ↑Mon, 9. Sep 19, 11:59
> And what the hell do you mean that "GHz mean nothing"? GHz and IPC go hand in hand when it comes to determining single-thread (or better said, per-thread) performance. You can't have one without the other when determining performance. But you are right that we have pretty much hit the limit of frequency. And IPC can only get you so far. IPC is all about efficiency, and you can only make something so efficient before you start running out of ideas. That's why you see big IPC improvements in the first few generations of a new architecture, and thereafter each successive generation's "improvements" become smaller and smaller (diminishing returns). Intel has illustrated this concept for years. That's why you're now seeing an explosion of cores and threads in processors. Because it's now a lot easier to add more cores and threads than it is to push frequencies higher. And that's why games are becoming a lot more multithreaded these days, because you can process a lot more instructions on two threads at 4GHz than you can on one thread at 5GHz. And you can do that while using less power and producing less heat, because you're not overtaxing one thread while other threads just sit there doing nothing, wasting resources. The term I heard one developer use is that they now need to start programming wide instead of tall.

Then explain to me how third-gen Ryzen pulled off the supposedly impossible feat of a well-over-10% increase in IPC? Intel has done the same with their 10nm parts as well, except those will only hit desktops around 2021.
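The "wide instead of tall" point quoted above is essentially arithmetic: total throughput is roughly threads × clock × IPC. A minimal sketch with hypothetical numbers, assuming perfect scaling (which real games never achieve):

```python
# Back-of-the-envelope throughput model. Numbers are illustrative, not
# measured; real scaling is reduced by synchronization and shared caches.

def throughput(threads: int, ghz: float, ipc: float) -> float:
    """Instructions retired per second, in billions, assuming ideal scaling."""
    return threads * ghz * ipc

one_fast = throughput(threads=1, ghz=5.0, ipc=1.0)  # one "tall" thread at 5 GHz
two_wide = throughput(threads=2, ghz=4.0, ipc=1.0)  # two "wide" threads at 4 GHz

print(one_fast)  # 5.0 billion instructions/s
print(two_wide)  # 8.0 billion instructions/s, at lower clocks and power
```

The catch, of course, is the "assuming ideal scaling" part: the 8.0 figure is only reachable if the work actually splits cleanly across both threads.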
A Ryzen 3600 will run practically every modern game better than a 2700X, despite having two fewer cores, largely due to its better IPC and therefore better single-thread performance.
Yes, ideally a game would be massively parallel and scale indefinitely with core count. However, no games are, and cores are often left blocked waiting on synchronization or for other cores to complete their tasks. Writing a highly parallel game is not easy at all. If you do not believe me, try it yourself. Keep in mind that such a game has to perform well on 4-thread systems as well as 32-thread systems, as dictated by the target audience.
bignick217 wrote: ↑Mon, 9. Sep 19, 11:59
> Do I think it's easy? No, of course not. But when you have a bunch of people complaining about performance issues (even people using "gaming" CPUs, even though there's technically no such thing as a "gaming" CPU), and everyone keeps saying the problem is with the CPU even though these people have CPUs with threads to spare while only 2 are said to be getting slammed, it's not outside the realm of reason to think that maybe the developers should do something about it and give a couple of those extra unused threads "something" to do. While I've been playing this game, my overall CPU usage barely ever hits 10% (usually sits around 8%). If the problem is as others have said and it's the CPU at fault because 2 threads isn't enough, then please, by all means, use more of my cores. That's what they're there for! I doubt anyone else rocking 6 or 8 cores, or 6, 12 or 16 threads, would have a problem with X4 using more of their processor resources if it meant the game would run better.

Except the people running 4 cores and 4 threads would. Yes, I know people who play X4 on such systems. The added multithreading overhead would degrade their performance, as they stand little to gain from it.
I played X4 on an old i7 920 with 4C/8T. Someone I know played, and is still playing, X4 on a similar-generation i5 with 4C/4T. X4 is more than playable on those. Sure, the frame rate is nowhere near 60 FPS, but one does not need 60 FPS for a game to be playable and fun. After my 10-year-old system died last month, I spent 3 weeks playing Heroes of the Storm at 15-20 FPS on an Intel Core 2 Quad system and did not once think of complaining about the performance to the developers, even though the game was using only 50% of the CPU. When I fired up X4 on my new Ryzen 9 3900X system the other day and got butter-smooth frame rates, I was amazed; however, I still do not regret having played it on my old i7 920 at anywhere from 15 to 40 FPS.
AquilaRossa wrote: ↑Mon, 9. Sep 19, 16:13
> Scoob, are you getting over a week of game days in and still getting that FPS? I am reading about people with 9900K and 1080 Ti caliber systems getting bogged down to 20 FPS when the map is open, or at a station.

That is largely due to a bug causing the AI to spam over 1,000 useless food factories. The NPCs end up with more food factories than most people end up with ships late game.