Performance not impacted by Graphical settings


Imperial Good
Moderator (English)
Posts: 4764
Joined: Fri, 21. Dec 18, 18:23

Re: Performance not impacted by Graphical settings

Post by Imperial Good » Mon, 9. Sep 19, 19:09

bignick217 wrote:
Mon, 9. Sep 19, 11:59
I'm giving you a fair bit of latitude with a lot of your responses. Some things you get right; others you missed the mark but were close enough. This one you are way off on, and it needs to be addressed. I think I know my own processor better than you do. First off, I did not "incorrectly" state my processor initially as the R9 1950X. While Threadripper is the colloquially accepted name for all "Threadripper" processors, R9 is its actual class designation. In the newest 3rd generation of Ryzen processors (technically "Zen 2"), the R9 designation has been taken up by AM4-class processors that exceed 8 physical cores, but in the first generation of chips, R9 was Threadripper's designation.
Feel free to correct the Wikipedia article then. That was my first stop and they referred to your CPU as "Ryzen Threadripper 1950X".
https://en.wikipedia.org/wiki/Ryzen

It is possible that the Ryzen 9 class name was dropped from those CPUs around generation 2, when AMD decided to push the Threadripper branding up-market to compete with Intel HEDT. This would make sense from a cost point of view, since those Threadripper parts cost about as much as the new Ryzen 9 3900X and 3950X, minus the more expensive motherboard.
bignick217 wrote:
Mon, 9. Sep 19, 11:59
Second, are you out of your mind? No, it's not slow for games. It never has been. It is more than capable of handling everything thrown at it with good, high framerates, especially if you pair it with 3200MHz RAM (and even better with CL14 Samsung B-die RAM modules) like I have, which speeds up the Infinity Fabric and in turn the die-to-die communication. What it is not good for is ultra-high framerates, if you want to push past 200fps for 200Hz+ monitors at 1080p. But at higher resolutions like 1440p there is very little difference between the 1950X and other processors of its generation, and at 4K the differences are virtually indistinguishable. It was Intel's IPC and frequency advantage that allowed them to hang on to the ultra-high-framerate lead, a lead they have been quickly losing over successive Ryzen generations. But in case you didn't notice in my original post, I don't game on this CPU at 1080p. I game at 4K, which means I don't care about ultra-high framerates. If I did, I would be playing competitive games like COD and other FPS games where that matters, and not X4 (a game I only want to maintain 60fps in).
Except when the game you are playing is CPU bottlenecked, in which case it does make a huge difference. X4 is largely CPU bottlenecked. Any uplift in CPU performance corresponds directly to a frames-per-second increase, because the CPU cannot prepare frames fast enough to keep up with the GPU.

This is a big problem with RTS and strategy games because they may require much more complex simulation than FPS/TPS style games. X4 is something of a hybrid between the two, since you can build huge, complex economies and fleets while also flying around shooting stuff in fairly epic battles. That combination is likely where the performance bottleneck comes from.
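To make the bottleneck point concrete, here is a minimal sketch (illustrative numbers only, not X4 internals): a frame cannot finish before both the CPU work and the GPU work are done, so the slower of the two sets the frame rate.

#include <algorithm>
#include <cstdio>

// Toy model: the slower of the CPU and GPU pipelines sets the frame rate.
int main() {
    double cpu_ms = 20.0; // hypothetical per-frame simulation cost
    double gpu_ms = 6.0;  // hypothetical per-frame render cost
    double frame_ms = std::max(cpu_ms, gpu_ms);
    std::printf("FPS = %.1f\n", 1000.0 / frame_ms); // 50.0, dictated by the CPU
    // Halving gpu_ms (lighter graphics settings) changes nothing here;
    // halving cpu_ms would double the frame rate.
    return 0;
}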

3200MHz RAM is kind of a given now, seeing how all third-gen Ryzen parts support it natively. Using anything less is rather wasteful, seeing how cheap RAM is up to 3200MHz; only beyond that does the price per GB increase sharply, which is likely why AMD chose 3200MHz as the native speed for third-gen Ryzen. And yes, it does make a difference when gaming, but that is largely due to the reduction in memory latency rather than memory bandwidth, as bandwidth only becomes a bottleneck in massively parallel tasks, which few games are.
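A rough way to see why latency rather than bandwidth dominates (a generic microbenchmark sketch, nothing X4-specific): make every memory access depend on the previous one, so the round-trip time to cache/RAM is all that matters.

#include <chrono>
#include <cstdio>
#include <random>
#include <ratio>
#include <utility>
#include <vector>

// Generic latency microbenchmark: each load depends on the previous one,
// so throughput is limited by memory latency, not bandwidth.
int main() {
    const size_t n = 1 << 24;               // 128 MiB of indices, well past L3
    std::vector<size_t> next(n);
    for (size_t k = 0; k < n; ++k) next[k] = k;
    std::mt19937_64 rng{42};
    for (size_t k = n - 1; k > 0; --k) {    // Sattolo shuffle: one big cycle,
        std::uniform_int_distribution<size_t> d(0, k - 1);
        std::swap(next[k], next[d(rng)]);   // so the chase cannot short-circuit
    }

    size_t i = 0;
    auto t0 = std::chrono::steady_clock::now();
    for (size_t step = 0; step < n; ++step) i = next[i]; // dependent chain
    auto t1 = std::chrono::steady_clock::now();

    double ns = std::chrono::duration<double, std::nano>(t1 - t0).count() / n;
    std::printf("~%.0f ns per dependent load (i=%zu)\n", ns, i);
    return 0;
}

Lower memory latency (tighter RAM timings, bigger caches) shrinks that per-load number directly, which is where most of the gaming benefit comes from.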
bignick217 wrote:
Mon, 9. Sep 19, 11:59
On top of that, if you had done your homework before speaking on this topic, you would know that, back then, the 1950X actually outperformed the 1800X, 1700X and 1600X in gaming in that generation in most games. Some by a little, some by a considerable margin. There were only a few games where the 1950X ended up being slower, and that was usually due to an issue between the game and the UMA/NUMA memory configurations, which you could usually fix by simply changing a setting. The 1950X was actually so good at gaming that when the 2nd-gen Ryzen processors released, while the AM4 chips saw a considerable increase in gaming performance, the 2950X only saw a marginal 5-10% improvement that equated to about 5 additional frames per second on average. There were one or two outliers that saw a better gain of about 10 frames, but for the most part it was only a few frames' difference.
I did not raise this, as we are now in 2019 with third-gen Ryzen, where even a $200 3600 will easily beat the 1950X in games, let alone the 3900X or 3950X (when released) once they work properly (a currently acknowledged AMD bug, with a fix coming soon).
bignick217 wrote:
Mon, 9. Sep 19, 11:59
And what the hell do you mean, "GHz mean nothing"? GHz and IPC go hand in hand when it comes to determining single-thread (or better said, per-thread) performance. You can't have one without the other when determining performance. But you are right that we have pretty much hit the limit of frequency. And IPC can only get you so far. IPC is all about efficiency, and you can only make something so efficient before you start running out of ideas. That's why you see big IPC improvements in the first few generations of a new architecture, and thereafter each successive generation's "improvements" become smaller and smaller (diminishing returns). Intel has illustrated this concept for years. That's why you're now seeing an explosion of cores and threads in processors: it's now a lot easier to add more cores and threads than it is to push frequencies higher. And that's why games are becoming a lot more multithreaded these days, because you can process a lot more instructions on two threads at 4GHz than you can on one thread at 5GHz. And you can do that while using less power and producing less heat, because you're not overtaxing one thread while other threads just sit there doing nothing, wasting resources. The term I heard one developer use is that they now need to start programming wide instead of tall.
Then explain to me how third-gen Ryzen pulled off the impossible feat of a well over 10% increase in IPC? Intel has done the same with their 10nm parts as well, except those will only hit desktops in ~2021.

A Ryzen 3600 will run practically every modern game better than a 2700X despite having 2 fewer cores, largely due to its better IPC and therefore better single-thread performance.
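The arithmetic behind that claim, as a minimal sketch (the IPC and clock figures here are assumptions for illustration, not official numbers): single-thread throughput is roughly IPC times clock speed.

#include <cstdio>

// Illustrative only: single-thread performance ~ IPC * clock.
int main() {
    double old_ipc = 1.00, old_ghz = 4.3; // assumed 2700X-class figures
    double new_ipc = 1.15, new_ghz = 4.2; // assumed 3600-class, ~15% IPC uplift
    std::printf("relative single-thread perf: %.2f\n",
                (new_ipc * new_ghz) / (old_ipc * old_ghz)); // ~1.12
    return 0;
}

Even with a slightly lower clock, the IPC advantage wins out.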

Yes, ideally a game would be massively parallel and scale indefinitely with core count. However, no games are, and cores are often left blocked waiting on synchronization or for other cores to complete their tasks. Writing a game to be highly parallel is not easy at all; if you do not believe me, try it yourself. Keep in mind that such a game also has to perform well on 4-thread systems as well as 32-thread systems, as dictated by the target audience.
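Amdahl's law makes the 4-thread vs 32-thread point concrete. A short sketch (the parallel fraction is an assumption, not measured X4 data): once part of the frame is inherently serial, extra threads flatten out fast.

#include <cstdio>

// Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
// where p is the fraction of the work that parallelizes.
int main() {
    const double p = 0.6; // assume 60% of a frame parallelizes cleanly
    for (int n : {1, 4, 8, 16, 32})
        std::printf("%2d threads -> %.2fx\n", n, 1.0 / ((1.0 - p) + p / n));
    // With p = 0.6, 32 threads give barely 2.4x over 1 thread, while the
    // synchronization overhead is paid on a 4-thread CPU all the same.
    return 0;
}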
bignick217 wrote:
Mon, 9. Sep 19, 11:59
Do I think it's easy? No, of course not. But when you have a bunch of people complaining about performance issues (even people using "gaming" CPUs, even though there's technically no such thing as a "gaming" CPU), and everyone keeps saying the problem is with the CPU even though these people have CPUs with threads to spare while only 2 are said to be getting slammed, it's not outside the realm of reason to think that maybe the developers should do something about it and give a couple of those extra unused threads something to do. While I've been playing this game, my overall CPU usage barely ever hits 10% (it usually sits around 8%). If the problem is as others have said and the CPU is at fault because 2 threads isn't enough, then please, by all means, use more of my cores. That's what they're there for! I doubt anyone else rocking 6 cores, 8 cores, 6 threads, 12 threads or 16 threads would have a problem with X4 using more of their processor resources if it meant the game would run better.
Except the people running 4 cores and 4 threads would. Yes, I know people who play X4 on such systems. The multithreading overhead would degrade their performance, as they stand little to gain from it.

I played X4 on an old i7 920 with 4C/8T. Someone I know played, and is still playing, X4 on a similar-generation i5 with 4C/4T. X4 is more than playable on those. Sure, the frame rate is nowhere near 60FPS, but one does not need 60FPS for a game to be playable and fun. After my 10-year-old system died last month, I spent 3 weeks playing Heroes of the Storm at 15-20 FPS on an Intel Core 2 Quad system and did not once think of complaining to the developers about the performance, even though the game was using only 50% of the CPU. When I fired up X4 on my new Ryzen 9 3900X system the other day and got butter-smooth FPS I was amazed, yet I still do not regret having played it on my old i7 920 system at anywhere from 15 to 40 FPS.
AquilaRossa wrote:
Mon, 9. Sep 19, 16:13
Scoob, are you getting over a week of game days in and still getting that FPS? I am reading about people with 9900K and 1080 Ti calibre systems getting bogged down to 20FPS when the map is open, or at a station.
That is largely due to a bug causing the AI to spam over 1,000 useless food factories. The NPCs end up with more food factories than most people have ships late game.

CBJ
EGOSOFT
Posts: 51974
Joined: Tue, 29. Apr 03, 00:56

Re: Performance not impacted by Graphical settings

Post by CBJ » Mon, 9. Sep 19, 19:13

Can we take the in-depth discussion of CPU architecture somewhere else please?

bignick217
Posts: 95
Joined: Sat, 15. Jan 05, 15:08

Re: Performance not impacted by Graphical settings

Post by bignick217 » Mon, 9. Sep 19, 23:47

Imperial Good wrote:
Mon, 9. Sep 19, 19:09
bignick217 wrote:
Mon, 9. Sep 19, 11:59
On top of that, if you had done your homework before speaking on this topic, you would know that, back then, the 1950X actually outperformed the 1800X, 1700X and 1600X in gaming in that generation in most games. […]
I did not raise this, as we are now in 2019 with third-gen Ryzen, where even a $200 3600 will easily beat the 1950X in games, let alone the 3900X or 3950X (when released) once they work properly (a currently acknowledged AMD bug, with a fix coming soon).
OMG! Stop the presses: a brand-new CPU is faster than the CPUs released last year and earlier! Who'd have thunk it! In all seriousness, man, I think your definition of "slow" is pretty broad. Just because a new processor is faster than an older processor (as it should be) doesn't mean the older processor is automatically "slow". "Not as fast" does not equate to "slow". A 7700K is older than an 8700K too. Does that mean the 7700K is a slow processor for gaming? No, of course not. A 7700K is still a damn good processor for gaming; it's just not as fast as the 8700K. Its bang-for-buck ratio was terrible compared to the 1600X of the time, but it was, and still is, a damn good processor for gaming. Come on, man, the Ryzen 1000 series and Intel 7th-gen processors are only 2 years old, not 10. TWO! Seriously, you either need to get some perspective and learn how to properly assess relative performance, instead of just looking at a bar graph and going "That bar is a little shorter. Wow, that processor is really slow." :gruebel: , or you need to construct your sentences better so you don't come off as some arrogant elitist giving the impression that anyone whose processor doesn't meet your definition of fast (which at this point seems to include only the NEWEST CPUs on the market) just has a "SLOW" processor. Come on, dude, wake up and realize how you are coming across to everyone on here.
Imperial Good wrote:
Mon, 9. Sep 19, 19:09
bignick217 wrote:
Mon, 9. Sep 19, 11:59
And what the hell do you mean, "GHz mean nothing"? GHz and IPC go hand in hand when it comes to determining single-thread (or better said, per-thread) performance. […]
Then explain to me how third-gen Ryzen pulled off the impossible feat of a well over 10% increase in IPC? Intel has done the same with their 10nm parts as well, except those will only hit desktops in ~2021.
It was a combination of things, some small, some major. For one, the node shrink to 7nm definitely helped: it shortens the pathways between circuits and speeds up end-to-end communication, allows you to fit more transistors in the same die area, and reduces the power/voltage required to operate the chip, which in turn reduces the heat produced and frees up headroom for higher attainable frequencies. Reducing memory latencies (not necessarily RAM; cache latencies are just as important) also contributes considerably to overall IPC. Doubling the L3 cache size contributes as well, because it allows the CPU to keep more often-used data closer to the processing cores, ready for when they need it. AMD also decoupled the Infinity Fabric frequency from the memory clock, which had been having a negative impact on overall IPC and performance and causing issues with memory compatibility, or better said, with achievable stable memory settings. Plus there were a lot of other smaller tweaks that I don't know about or that may not be publicly documented, all of which helped increase Ryzen's IPC. And this is just what I remember offhand; I didn't even have to look it up. Believe me when I tell you I've more than done my homework when it comes to Ryzen.

There are also lots of other techniques commonly used to increase IPC in processors, instruction sets being a major one. An instruction set extension tells the processor how to process certain types of workloads more efficiently. A very simplistic way to illustrate this: if asked to process 5+5+5+5+5, instead process it as 5x5. So instead of processing four additions, you only need to process one multiplication. An overly simplistic example, but it gets the point across. Meltdown was caused by one mechanism originally implemented in processors to increase IPC. I can't remember the specifics, but if I recall correctly it had to do with speculative execution (the processor predicting a given action and preparing for it before that action is actually requested) or something related. Can't remember for sure. But because Meltdown was an exploit that preyed on a mechanism specifically designed to increase IPC, that is also why people saw an IPC degradation when it had to be fixed.
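To make the 5x5 example concrete, here is a minimal sketch (generic C++, nothing vendor-specific). Compilers call this strength reduction, and SIMD instruction set extensions push the same principle across data.

#include <cstdio>

// Strength reduction: a chain of additions replaced by one multiply.
int sum_by_adds(int x, int n) {                // n additions in a loop
    int s = 0;
    for (int i = 0; i < n; ++i) s += x;
    return s;
}
int sum_by_mul(int x, int n) { return x * n; } // one multiply

int main() {
    std::printf("%d %d\n", sum_by_adds(5, 5), sum_by_mul(5, 5)); // 25 25
    // SIMD extensions (SSE/AVX) apply the same idea across data: one AVX2
    // add handles eight 32-bit integers where scalar code needs eight adds.
    return 0;
}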

With all that said, just like with anything else, there are always going to be limits to how efficient you can make any one architecture, because you are bound by the overall limitations of the architecture itself. Processors are very complicated, so there are lots of things in lots of areas that can be tweaked to increase IPC, but it's not an infinite pool of possibilities. Eventually you need to design a whole new architecture to overcome the fundamental limitations of the old one, at which point the process starts again: big IPC improvements in the first few generations, then smaller, less impactful increases thereafter. If you really need something to illustrate this point, go play SpaceChem. That game illustrates it beautifully. Find a nice complex level and design a system to solve it. Then go back in and find ways to make your process more efficient: use fewer components, take as few cycles as possible. Keep tweaking until there's literally nothing you can think of to increase your original design's efficiency any further. Then throw it all away and come up with an entirely new design, using everything you learned from the previous one to make an overall better design. If that doesn't make you understand this concept, nothing will. It is literally because of this process that people in the industry use the terms "young" and "mature" to describe architectures and processes.

As for Intel's 10nm.... Ooooh boy, that is a can of worms you really don't want to open. Trust me on that. You'll only be disappointed.

Anyway, I think CBJ is correct, so I'm going to stop going any further down this particular rabbit hole here. Computers are my profession, so I could literally go on like this forever, and it's probably best I don't. As for you, I really do think you need to get some perspective on relative performance when assessing hardware and not be so critical of other people's hardware. Not as fast is not the same thing as slow.

Imperial Good
Moderator (English)
Posts: 4764
Joined: Fri, 21. Dec 18, 18:23

Re: Performance not impacted by Graphical settings

Post by Imperial Good » Tue, 10. Sep 19, 01:19

bignick217 wrote:
Mon, 9. Sep 19, 23:47
Anyway, I think CBJ is correct, so I'm going to stop going any further down this particular rabbit hole here. Computers are my profession, so I could literally go on like this forever, and it's probably best I don't. As for you, I really do think you need to get some perspective on relative performance when assessing hardware and not be so critical of other people's hardware. Not as fast is not the same thing as slow.
If you think he was correct, why did you just reply with a massive post further discussing computer hardware? If I were to answer all of that, it would just escalate the issue further.

I was pointing out that one can significantly increase the performance of X4 by moving from a first-generation Ryzen processor to a newer one like a third-generation Ryzen, or an Intel i7 9700K or i9 9900K. That is what one has to do if the performance is not satisfactory. I am fairly certain the developers are already trying their best to optimize and multithread where possible; however, as someone dealing with computers daily, you are probably already aware of how difficult this is, especially when one has limited resources and pushing out new content and features is important.

If there are particular situations which perform inexplicably badly, then report them as a bug. I did this for the massive performance drop that occurs when one moves an L destroyer inside an unconstructed Xenon defence platform administration module.

Panos
Posts: 848
Joined: Sat, 25. Oct 08, 00:48

Re: Performance not impacted by Graphical settings

Post by Panos » Wed, 2. Oct 19, 02:56

I was playing with the graphics settings tonight, as I don't like 2x MSAA, but the 2x SSAA performance hit was pretty severe.
After half an hour I found that the biggest FPS-crippling setting is SSAO. It doesn't matter whether it is Low/Normal/High; the performance hit is the same.
However, when it is set to OFF, the FPS jumps about 20% at the settings screen.

To show what this does to image quality, here are screenshots with the setting on:
LOW
https://i.imgur.com/Eb9CzEd.jpg

HIGH
https://i.imgur.com/wszivjT.jpg

OFF
https://i.imgur.com/tM92Nz4.jpg

As you can see, with the Low and High SSAO settings the image is exactly the same and the FPS stays at 53. With SSAO OFF, the FPS jumps to 67 in that scene.
Honestly, the performance hit should not remain the same between LOW and HIGH; that suggests a bug, at least in 2.6b4, where SSAO does not scale down.
I will make a report in the bug section and see what response we get.

Next up is Screen Space Reflections (SSR). I cannot see any visual difference between Off and Low; however, the performance impact is significant.
From 53fps, turning SSR OFF while keeping SSAO on High results in 79fps on the settings screen.

Here is the screenshot

https://i.imgur.com/TCVmRrw.jpg
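Converting those FPS readings into frame times shows the real per-frame cost of each effect; a quick sketch using only the numbers above:

#include <cstdio>

// Frame time in ms = 1000 / FPS; the difference is the effect's cost.
int main() {
    double base    = 1000.0 / 53.0; // SSAO High + SSR on: ~18.9 ms
    double no_ssao = 1000.0 / 67.0; // SSAO off:           ~14.9 ms
    double no_ssr  = 1000.0 / 79.0; // SSR off, SSAO High: ~12.7 ms
    std::printf("SSAO ~%.1f ms, SSR ~%.1f ms per frame\n",
                base - no_ssao, base - no_ssr); // SSAO ~3.9 ms, SSR ~6.2 ms
    return 0;
}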
