One thing I know is that NTFS performs quite poorly when it comes to file lookup operations (not file I/O), or at least the Windows driver for it does. This was one of the issues Microsoft had with their native Linux support, since Linux handles file lookup operations an order of magnitude faster and depends on such speed to run efficiently.

Alm888 wrote: ↑Sun, 6. Sep 20, 07:51
The thing is, NTFS is a very old file system from 1993 and Microsoft has never bothered to replace it with something modern.
It should be noted, though, that finding a sane benchmark is not a trivial task. The filesystems' "ecosystems" do not overlap: most researchers focus on either Windows (NTFS/exFAT/FAT32) or Linux (ext4/ReiserFS/Btrfs) clusters, and those who do compare NTFS against ext4 often do so incorrectly (by using NTFS via FUSE in Linux, which absolutely trashes performance).
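As a rough illustration of the lookup-versus-I/O distinction (emphatically not a rigorous benchmark), one can time metadata lookups separately from data reads. This sketch uses a throwaway temp directory and arbitrary file names/counts:

```python
# Rough sketch (not a rigorous benchmark): time metadata lookups (stat)
# separately from data reads, since file lookup and file I/O stress
# different parts of a filesystem. File names and counts are arbitrary.
import os
import tempfile
import time

def bench_lookups(n=2000):
    with tempfile.TemporaryDirectory() as d:
        for i in range(n):  # create n small files to stress directory lookups
            with open(os.path.join(d, f"f{i}.txt"), "w") as f:
                f.write("x")
        start = time.perf_counter()
        for i in range(n):
            os.stat(os.path.join(d, f"f{i}.txt"))  # pure lookup, no data read
        return time.perf_counter() - start

print(f"{bench_lookups():.4f} s for 2000 lookups")
```

Running the same loop on the same hardware under NTFS on Windows versus ext4 on Linux (natively, not via FUSE) is the kind of apples-to-apples comparison that is hard to find published.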
That said, this does not really affect save/load of X4, since that is only a single large file with huge processing time.
The OSes support virtualization, but it is not enabled by default. Enabling it does come with a performance penalty, since a hypervisor is then used. This mode is needed to run things like other virtual machines and is how native Linux compatibility was implemented on Windows 10.

dtpsprt wrote: ↑Sun, 6. Sep 20, 08:20
What's strange about that Imperial? To add insult to injury Microsoft started in Vista the total virtual running of the computer. This means that it creates a "virtual machine" for the player to use with no direct access to the hardware. This means that there is a "delay" for data to be actually transferred to the Hard Disk, the process is done once in the "virtual computer" and once in the real one.
Windows does not work like that.

dtpsprt wrote: ↑Sun, 6. Sep 20, 08:20
I have (very old) news on that. Ever since Win 3.1 Windows retains control of the physical RAM to be used by Microsoft Applications (so they'll seem to be the fastest of all) pushing all other applications in the Hard Disk RAM created on startup (most usually fractured of course).
The OS manages all physical memory, as all modern OSes do (such as Linux and OS X). It then assigns each application running on it a private virtual memory address space. Virtual memory is a long-standing feature of high performance processors that allows the creation of a custom, reprogrammable address space that is not directly related to the physical memory address layout of the system. Virtual memory is managed in units called "memory pages", each representing some uniform amount of memory. The OS is responsible for managing the virtual memory of each application, keeping track of and setting up mappings from virtual memory to physical memory. Since pages are uniform and do not benefit much from being contiguous in physical memory, this makes managing memory and handing it out to applications a lot easier, and is one of the main reasons the technique is used.
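To make the "memory page" unit concrete, here is a small Python sketch. The exact page size is OS- and CPU-dependent (typically 4 KiB on x86-64); the key point is that anonymous memory handed out by the OS is always page-granular and page-aligned:

```python
# Sketch: memory pages are the uniform unit of virtual memory management.
# Anonymous memory handed out by the OS via mmap is page-aligned.
import ctypes
import mmap

PAGE = mmap.PAGESIZE       # typically 4096 bytes on x86-64
buf = mmap.mmap(-1, PAGE)  # one anonymous page from the OS
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))

print(PAGE & (PAGE - 1) == 0)  # True: page size is a power of two
print(addr % PAGE)             # 0: the mapping starts on a page boundary
```

The power-of-two size and alignment are what let the hardware translate virtual addresses to physical ones with simple table lookups.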
Since memory pages can be moved around in physical memory easily, it becomes possible to write memory pages out to some persistent or bulk storage medium to free up physical memory. This is why all modern operating systems use a page file, which can hold memory pages when available physical memory is depleted and so avoid out-of-memory crashes. Bulk storage is of course a lot slower than memory, so a victim-selection algorithm is used to evict infrequently touched pages in preference to frequently used ones, minimizing the performance impact. When an application touches an address that is not currently resident in physical memory because it was paged out, a page fault interrupt is generated; the OS handles it by moving the corresponding memory page back into physical memory, then resumes application execution as if nothing had happened. Since using the page file has an overhead, the OS tries its best to avoid it, mostly filling physical memory first before starting to evict infrequently used memory pages to the page file.
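Page faults are observable from user space. This POSIX-only sketch (Linux/macOS; the counter semantics are best defined on Linux) touches freshly mapped anonymous pages, each first touch triggering a minor ("soft") fault as the OS wires the page to physical memory:

```python
# POSIX-only sketch: the OS reports page fault counts per process.
# Touching freshly mapped anonymous pages for the first time triggers a
# minor ("soft") page fault per page as the OS allocates each one.
import mmap
import resource

PAGE = mmap.PAGESIZE
buf = mmap.mmap(-1, 256 * PAGE)  # 256 pages, not yet backed by physical RAM

before = resource.getrusage(resource.RUSAGE_SELF).ru_minflt
for off in range(0, len(buf), PAGE):
    buf[off] = 1  # first touch of each page -> fault -> OS allocates the page
after = resource.getrusage(resource.RUSAGE_SELF).ru_minflt

print(after - before)  # on Linux, roughly the number of pages touched
```

A major ("hard") fault, by contrast, is the expensive case described above: the page has to come back from the page file or disk before execution can resume.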
This also allows a different kind of I/O: memory-mapped I/O. An application can request that part of its virtual address space be mapped to parts of arbitrary files within the file system, as long as alignment requirements are respected. Since file data becomes memory pages, a feature such as the file cache becomes trivial to implement. Frequently accessed file data can be kept in memory pages, so when an application requests it, either the data is copied directly in bulk to the destination address or, if memory-mapped I/O is used, the memory page is simply remapped into the application's address space without any copying occurring.
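A minimal sketch of memory-mapped file I/O in Python (the path is a throwaway temp file): the file's pages appear directly in the process address space, so reads and writes go through the mapping instead of explicit read()/write() calls copying into user buffers.

```python
# Sketch: reading and writing a file through a memory map. The file's
# pages are mapped into the process address space, so no explicit
# read()/write() copies into user buffers are needed.
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"hello, page cache")
os.close(fd)

with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 0)  # map the whole file into our address space
    first = bytes(m[:5])          # read straight from the mapping
    m[:5] = b"HELLO"              # modify the mapped pages in place
    m.flush()                     # ask the OS to write the dirty pages back
    m.close()

with open(path, "rb") as f:
    final = f.read(5)
os.unlink(path)
print(first, final)  # b'hello' b'HELLO'
```

The flush() call is the same mechanism as the file cache writing dirty pages back to disk; without it the OS would still write the pages out eventually, just on its own schedule.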
Since the kernel, and the drivers running in it, also have to use memory, such memory is handled specially, separate from applications. For example, pages might be locked down and forced to reside in physical memory for correct DMA functionality, such as that used by underlying I/O. This applies to all Windows drivers, third party drivers and the kernel itself. It does not apply to Microsoft applications, which run like all normal applications using standard user space virtual memory. Some Microsoft applications that have associated drivers might use this functionality within the driver part, but the user space application part behaves like any other application. The logical reason for this is that Microsoft designed the OS and its features around supporting applications like their own, so they do not need special treatment; what is provided to every application is good enough. Some Microsoft applications interface with the OS using obscure or non-public APIs, but this relates to OS interaction rather than memory.
All those operations should happen in memory, assuming you installed enough physical memory in your system (8 GB minimum, 16 GB recommended) and are not running a lot of memory hungry background applications (32 GB or more needed for that). The only time I/O should occur is when the file has been completely written, in which case the OS will eventually flush the file cache and write the save file out to disk. Of course, if write-through mode is used for I/O then the file cache is bypassed, but this should only be used when manipulating complex persistent data files that are modified in small parts and prone to corruption if changes are not written out consistently, such as databases. Save files are generated fresh each time, so this is not really a concern; the maximum data loss period is likely only the few seconds after the save completes before the OS writes it out to disk from the file cache.

dtpsprt wrote: ↑Sun, 6. Sep 20, 08:20
So we have one Hard Disk Operation to write the save on HD (virtual machine), second one to compress it (virtual machine again), write it down (finally) on the "actual" Hard Disk. All these operations are done inside the Hard Disk since there is no physical RAM available.
The I/O will likely involve kernel-level copying. Direct read and write-through have very specific memory alignment requirements, and given how small the resulting save files are, avoiding the copy is unlikely to make much of a performance difference. It can even lower performance, since it does not allow the file cache to work efficiently and schedule the data to be written out asynchronously later (a few seconds at most after the file is written).
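The cached-versus-write-through trade-off can be sketched like this (timings are illustrative and entirely machine- and filesystem-dependent):

```python
# Sketch: a plain write() normally lands in the OS file cache and is
# flushed to disk later, while os.fsync() forces write-through to stable
# storage before returning -- the mode only databases really need.
import os
import tempfile
import time

def timed_writes(sync, n=50):
    fd, path = tempfile.mkstemp()
    start = time.perf_counter()
    for _ in range(n):
        os.write(fd, b"x" * 4096)
        if sync:
            os.fsync(fd)  # force the data through the cache to the device
    elapsed = time.perf_counter() - start
    os.close(fd)
    os.unlink(path)
    return elapsed

cached = timed_writes(sync=False)
synced = timed_writes(sync=True)
print(f"cached: {cached:.4f}s, write-through: {synced:.4f}s")
```

On typical hardware the fsync variant is dramatically slower, which is exactly why the OS prefers to batch cache flushes asynchronously rather than write through on every operation.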