MSFN Forum

Windows 7: Possible, Advisable, to Disable the Page File?

Page File Paging File Virtual Memory

23 replies to this topic

#1
Radish

    Newbie

  • Member
  • 28 posts
  • Joined 11-May 15
  • OS: Windows 7 x64

OS = Windows 7 x64 SP1

 

Hi,

 

I intend to get more RAM for my system, basically because I want to use a large ramdisk. When I do the upgrade I'll have 16 GB of RAM: 10 GB of that for a ramdisk and the remaining 6 GB for the system. As I recall, Microsoft used to recommend a page file of 1.5 times the amount of installed RAM. However, that advice was from the year dot, and things have changed now that people routinely install amounts of RAM that would have been unheard of years ago. Certainly for the last few years I've just set the page file to a fixed size equal to the amount of RAM installed and have never noticed any problems with that.

 

So, my questions are:

 

1) With the amount of RAM I intend to install would it be safe to just disable the page file completely?

 

2) If it isn't advisable to completely disable the page file, what would be the recommended minimum fixed size for the page file?

 

3) Would disabling the page file lead to an increase in speed of the system?

 

I think I should add that I don't do things on my system that really stress the system. Nowadays I just use my home computer for mundane stuff. Certainly I don't do high powered gaming.





#2
NoelC

    Software Engineer

  • Member
  • 2,608 posts
  • Joined 08-April 13
  • OS: Windows 8.1 x64

What makes you believe you should disable the page file?  Installing more RAM will likely help, but the best general advice is to just leave the page file alone.  There are some things the system uses it for that are not a result of running short of RAM.

 

Also, what makes you feel you will increase performance by using a RAMDISK?  Your file system already provides you with RAM caching, which becomes quite effective if you have a lot of RAM.  There's a setting you can throw - "Turn off Windows write-cache buffer flushing on the device" - that makes the file system cache fully write-back without waits as well, improving performance of most things that write to the disk.

 

Since in general it sounds like you are striving for increased system responsiveness, may I suggest - if you haven't already done so - migrating your system to use SSD storage.  THAT is the single biggest thing you can do nowadays to increase the responsiveness of a system.

 

-Noel



#3
jaclaz

    The Finder

  • Developer
  • 15,762 posts
  • Joined 23-July 04
  • OS: none specified

The pagefile has two main uses:

a. to "help" in case of high RAM usage (more than available RAM)

b. to make a dump of memory in case of crash

 

Some programs, however (some Adobe ones, for example), will *want* a pagefile present in order to even start.

 

There are three theories that people have been arguing about for years:

1) the pagefile is better left alone and MS (actually the Windows OS) can manage it fine

2) the pagefile makes little sense if there is *enough* RAM, but since some programs need it, the best thing is to have one of FIXED size (NOT system managed), as small as possible.

3) the pagefile makes no sense whatsoever if there is *enough* RAM

 

Since I don't use any program that actually wants a pagefile I have run Windows 2K and XP systems without a pagefile just fine, JFYI:

http://www.msfn.org/...le-at-shutdown/

but it's not something that should be done at home.

 

In any case, making a Dynamic pagefile makes very little (please read as "no") sense.

 

If you believe that you need a full RAM dump on crash, you need to set its fixed size to at least the size of RAM the system has.

Please understand that the people who can actually use the info in a several-GB RAM dump for troubleshooting can be counted on your fingers.

 

The good ol' rule of thumb of 1.5 to 2.5x the amount of RAM makes no sense whatsoever nowadays (it was accurate enough when systems had 128, 256 or 512 MB of RAM, but not with 3 or more GB of RAM).

 

Given that the same modern machine that has *enough* RAM also has *enough* space on the hard disk, a FIXED-size pagefile of 500 MB or 1 GB is more than enough (unless you really, really want to save space on the hard disk); making one the size of the RAM wouldn't do any harm either (beyond taking up a few more GBs), and likewise a "magic formula" like 1.5 x RAM will do the same, only taking up even more space on disk.
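For reference, on Windows 7 a fixed-size pagefile along the lines described above can be set from an elevated command prompt. This is only a sketch: the 1024 MB figure is the example size from the post, and the default C:\pagefile.sys path is assumed.

```shell
:: Turn off system-managed pagefile sizing
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False

:: Pin the pagefile to a FIXED size: initial size = maximum size (here 1024 MB)
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=1024,MaximumSize=1024

:: A reboot is required for the change to take effect
```

The same settings are reachable through the GUI under System Properties > Advanced > Performance > Virtual memory.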

 

Still, given that the machine has *enough* RAM, there will be NO difference whatsoever in performance with *any* of these settings; the pagefile will never actually be used, except in case of a crash.

 

The advice to set it as fixed derives from the fact that in some crash scenarios a dynamic pagefile will expand, possibly overwriting some areas of the disk where data needed for recovery resides (a very remote possibility, but still a possibility), and in any case it will take more time to crash (while you can't do anything about it).

 

JFYI, there are people who believe it is smart to put the pagefile on a RAMdisk (on systems with plenty of RAM), something that, in the words of Mark Russinovich, is ridiculous:

http://www.overclock...e-on-a-ram-disk

 

The new rule of thumb is: try the system; if you never hit the maximum amount of RAM you have, you are good to go; if you consistently go over it, add more RAM. The pagefile settings make no difference in real life if there is *enough* RAM.

 

jaclaz


Edited by jaclaz, 22 June 2015 - 12:07 AM.

  • JodyT likes this

#4
Radish

    Newbie

  • Member
  • 28 posts
  • Joined 11-May 15
  • OS: Windows 7 x64

Thanks for the responses. I did go and read http://www.overclock...e-on-a-ram-disk (yes, all thirty pages of it). God, the range of opinion there is breathtaking. There is everything from don't touch the page file, to only use a fixed-size page file, to completely disable the page file, to put a small page file on a ramdisk, to do away with the page file entirely or move a small one to a ramdisk because you are using an SSD!

 

Having read that, and noting the range of differing opinion, I've decided that I might just as well experiment and see what works out okay for me. There is no way that I want to start involving myself with an SSD - that is well beyond my computing needs, so I might as well just save myself the expense.

 

Thanks again.



#5
JodyT

    Not a Win10 Fan :P

  • Member
  • 318 posts
  • Joined 05-April 11
  • OS: Vista Ultimate x64

I'm with jaclaz on this one.  I tested my Vista and XP x64 installations by simultaneously running the most applications I could possibly want to run at once.  I did it with two 1 GB page files (one on each drive), and without paging at all.  Without paging I still had a hard time going beyond 5 GB of RAM usage.  I loaded up my audio editor with dozens of wave files, opened two browser instances with about 50 tabs each, office apps with four or five docs and spreadsheets, you name it.  Performance was nearly the same.

 

I will say that my workstation still seemed to run faster without paging, but of course I had less free memory.  I'd probably be "safer" with a page file, but I'll never get near the limit.

 

Apparently, paging can make memory use more efficient by placing memory addresses and data pointers in the page file for quick access.  But I figure, if it places that in RAM too, all the better.

 

And I really agree that a memory dump will only be useful to a handful of people, and I don't know any of them.  :P

 



Cheers,

Jody Thornton

(Thornhill, Ontario)

 


#6
Kelsenellenelvian

    WPI Guru

  • Developer
  • 8,759 posts
  • Joined 18-September 03
  • OS: Windows 7 x64

Myself, on any of my machines that have more than six gigs of RAM, I have always set a 1 GB fixed-size page file. Small enough not to be noticed, and it never causes fragmentation.



#7
dencorso

    Iuvat plus qui nihil obstat

  • Supervisor
  • 6,004 posts
  • Joined 07-April 07
  • OS: 98SE

Donator

And I really agree that a memory dump will only be useful to a handful of people, and I don't know any of them.  :P


Well, I've got good news for you: you actually do know two of them, you just didn't know you do! :)
They're MagicAndre1981 and cluberti. I hope you never need their help, but if you ever do, now you know whom to ask. :yes:

#8
NoelC

    Software Engineer

  • Member
  • 2,608 posts
  • Joined 08-April 13
  • OS: Windows 8.1 x64

There is no way that I want to start involving myself with an SSD - that is well beyond my computing needs, so I might as well just save myself the expense.

 

You should re-think that.  What's the downside?

 

SSD storage is cheap nowadays, and the responsiveness of the very same computer running from HDD and SSD is night and day.

 

FYI, I have 6 SSDs in RAID 0 in my main workstation and 2 SSDs in RAID 0 in my small business server.  Once you run a system from SSD storage, with gargantuan I/O speeds and near-zero latency, you begin to understand what the original designers of virtual memory systems were dreaming of.  Such a system just doesn't bog down.  And you'll never, ever be able to stand going back to an HDD-based system.

 

-Noel



#9
jaclaz

    The Finder

  • Developer
  • 15,762 posts
  • Joined 23-July 04
  • OS: none specified

Well, I've got good news for you: you actually do know two of them, you just didn't know you do! :)
They're MagicAndre1981 and cluberti. I hope you never need their help, but if you ever do, now you know whom to ask. :yes:

Yep :thumbup:
... and when they happen to discuss the matter in the same thread, it becomes a useful resource for calculating an appropriate size of the pagefile (fixed, NOT dynamic) if you actually want/need a full dump:
http://www.msfn.org/...nclusive-bsods/
 
 
 

You should re-think that.  What's the downside?

I would guess a few benjamins changing hands :unsure:
 

FYI, I have 6 SSDs in RAID 0 in my main workstation and 2 SSDs in RAID 0 in my small business server.  Once you run a system from SSD storage, with gargantuan I/O speeds and near-zero latency, you begin to understand what the original designers of virtual memory systems were dreaming of.  Such a system just doesn't bog down.  And you'll never, ever be able to stand going back to an HDD-based system.

Just in case   ;):

http://www.imdb.com/...?item=qt0335291

You know, I've always liked that word... 'gargantuan'... so rarely have an opportunity to use it in a sentence. 

 

 

jaclaz


Edited by jaclaz, 23 June 2015 - 03:03 AM.


#10
NoelC

    Software Engineer

  • Member
  • 2,608 posts
  • Joined 08-April 13
  • OS: Windows 8.1 x64

 

I would guess a few benjamins changing hands :unsure:

 

 

Sure, just don't lose sight of the fact that the thread started with "I intend getting more RAM".

 

FYI, I bought 3 120 GB SSDs from eBay last month for $45 each.  They're not exactly worth their weight in gold.

 

People who have not used an SSD-equipped system often don't understand the potential increase in responsiveness.  I perceive that's what this thread is about.

 

But hey, I understand.  People mostly have to learn things for themselves.

 

-Noel



#11
jaclaz

    The Finder

  • Developer
  • 15,762 posts
  • Joined 23-July 04
  • OS: none specified

People who have not used an SSD-equipped system often don't understand the potential increase in responsiveness.  I perceive that's what this thread is about.

 

But hey, I understand.  People mostly have to learn things for themselves.

 

Well, to be fair, people who have not used RAMdisks (which I perceive is what this thread is about :unsure:) often don't understand the potential for a further increase in responsiveness.

 

Of course such a setup has its own drawbacks, unlike the switch from conventional HDDs to SSDs, which is perfectly "transparent"; and surely it is much more costly on a per-GB basis.

 

Should you have some (several) spare C-notes, you can do something really nice with them (JFYI):

http://www.bjorn3d.c...h-capacity-ram/

 

jaclaz



#12
Radish

    Newbie

  • Member
  • 28 posts
  • Joined 11-May 15
  • OS: Windows 7 x64

Just to clear up any misunderstanding, my main interest in using a large ramdisk doesn't really concern performance but wear on the HDD. (Though if I got an increase in performance because of using a ramdisk then fine, I'd be happy with that too.)

 

Mostly I want to use a large ramdisk for downloading into and seeding torrents - which, as far as I understand, is pretty wearing on an HDD. My previous computer, which I had for many years, had multiple HDD failures over that time and, as best as I could work out, they always seemed connected to the partition that I would save torrents to.

 

Years ago I used to obsess about performance, but not now. Computers nowadays are pretty well fast enough to do most things at a fair pace and I'm of an age now that I don't mind waiting a second or two for something to happen - the world isn't going to fall on my head if things don't happen instantaneously. I kind of chuckle now at my previous self and smile at the way that I learned a valuable lesson in my own time. Bliss. :sneaky:

 

I can, though, and do appreciate that computer pros obsess about performance - their jobs rely on it, and without their concern there would be no real improvement over time for us all. So more power to them and their concerns - they do us all good in the long run. :thumbup  Just realise that not everyone needs to share that obsession, and it's possible for learning to have more than one outcome - it depends on where you're at.



#13
jaclaz

    The Finder

  • Developer
  • 15,762 posts
  • Joined 23-July 04
  • OS: none specified

I have one thing to say about speed, which is:

Spoiler

 

As a side note, yet another way: the WHOLE OS and programs on RAMdisks - of course a tadbit slow at power on, but once loaded very, very fast:

http://reboot.pro/to...n-many-ramdisk/

 

jaclaz



#14
NoelC

    Software Engineer

  • Member
  • 2,608 posts
  • Joined 08-April 13
  • OS: Windows 8.1 x64

 

Well, to be fair, people who have not used RAMDISK's (which I perceive is what this thread is about :unsure:) often don't understand the potential for the further increase of the level of responsiveness.

 

 

Heh, that's kind of like the difference between using Nitrous Oxide in an engine and just having an engine that can generate Real Power all day.

 

Radish, SSDs don't have mechanical parts to wear out.  Yes, flash memory does have a limited life, but given a very conservative 1000 write cycles capability per flash block, you'd have to write 250 terabytes to a 250 GB SSD before getting close to wearout.  Most people would take decades to write that much.  Do a bunch of peer to peer networking and you might get that down to 10 years. 
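The back-of-the-envelope endurance figure above can be checked with a few lines of arithmetic. A sketch: the 1,000 effective write cycles per block is the conservative figure from the post, not a datasheet value, and the function names are made up for illustration.

```python
def ssd_write_budget_tb(capacity_gb, cycles=1000):
    """Total writes (in TB) before estimated wearout: capacity x effective write cycles."""
    return capacity_gb * cycles / 1000  # GB -> TB (decimal units)

def years_to_wearout(capacity_gb, tb_written_per_month, cycles=1000):
    """Years until the write budget is exhausted at a steady write rate."""
    return ssd_write_budget_tb(capacity_gb, cycles) / tb_written_per_month / 12

# 250 GB drive: 250 TB write budget, as stated above
print(ssd_write_budget_tb(250))           # 250.0
# 480 GB drive written at 1 TB/month: ~40 years, i.e. wearout around 2055
print(round(years_to_wearout(480, 1.0)))  # 40
```

The 40-year result matches the "projected wearout in 2055" figure quoted later in the thread for a 480 GB drive written at under 1 TB/month.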

 

Show me an HDD that will last that long.

 

As an example, I have 480 GB SSDs that run all the virtual machines I use to test with.  These systems - especially the Win 10 ones - do quite a bit of writing.  So far, per their SMART stats, I've written less than 1 TB of data to them per month.  At this rate projected wearout will be in the year 2055.

 

-Noel



#15
jaclaz

    The Finder

  • Developer
  • 15,762 posts
  • Joined 23-July 04
  • OS: none specified

Heh, that's kind of like the difference between using Nitrous Oxide in an engine and just having an engine that can generate Real Power all day.

Yes and no. The fact that a similar setup has its own drawbacks doesn't mean that it is in any way unreliable; after all, RAM is used much more than storage on any computer and has proved to be a very solid piece of hardware. In my experience it is very rare that a stick of memory goes bad; usually defective memory sticks are either DOA or suffer a very early death (though of course it is possible that they wear out over years of use, I don't think there is actual "wear" or degrading performance as there can be for SSDs). Anyway, to remain OFF topic  :w00t:

Spoiler

While, almost back on topic, the issues with the proposed setup are not that many: you would need a good UPS, and - just like we did in the good ol' times - in the morning you would switch on the PC, go get a cup of coffee, and by the time you are back the OS would be fully loaded. Then you could work all day on the very fast/responsive thingy, and when leaving you would need to wait a few minutes, because during shutdown the changes would need to be committed to permanent storage. With some tricks (thinking of something like an rsync daemon running in the background) the amount of data to be committed may be very little, and the shutdown could be quite fast.

Just for the record, many, many years ago I used to have workstations that at shutdown robocopied changed data files to a network storage for redundancy/backup and it wasn't that bad.
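A shutdown-time sync of the kind described above could be sketched with robocopy. The paths here are hypothetical examples; /MIR mirrors the source tree, so only files changed since the last run are actually copied.

```shell
:: Mirror the ramdisk's data folder to permanent storage; only changed
:: files are copied on each run (retry once, wait 1 second between tries)
robocopy R:\data D:\ramdisk-backup /MIR /R:1 /W:1
```

Hooking this into a shutdown script (e.g. via Group Policy's shutdown-script slot) would make the commit automatic.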

 

jaclaz



#16
JorgeA

    FORMAT B: /V /S

  • MSFN Sponsor
  • 3,815 posts
  • Joined 08-April 10
  • OS: Vista Home Premium x64

I have yet to upgrade a system that came with an HDD to an SSD, so I can't speak on that. FWIW, the most dramatic performance improvements I've seen on various computers have come from (1) using ReadyBoost and (2) adding more RAM.

 

This thread has been educational for me. In this day and age of multi-GB RAM systems, I didn't realize that there was any real point to RAMdisks - with, however, the real danger that whatever valuable stuff you had on the RAMdisk would go *POOF* if and when Windows crashed.

 

--JorgeA



#17
NoelC

    Software Engineer

  • Member
  • 2,608 posts
  • Joined 08-April 13
  • OS: Windows 8.1 x64

An alternative might be to use something like Primo Cache, which separates the file system from disk via a dedicated low level cache and implements (very) lazy writes - though again there's the possibility of loss due to instability, and in my experience the cache subsystem adds a bit of its own instability.

 

With the lazy write process, if a temporary file is created, used, then deleted (which happens quite often), the data never ever makes it to the disk - which does reduce the I/O load quite a bit. 

 

It doesn't, however, appear to push performance up over what you get with the normal NTFS file system cache.

 

-Noel


Edited by NoelC, 24 June 2015 - 11:03 PM.


#18
TELVM

    Advanced Member

  • Member
  • 395 posts
  • Joined 09-February 12
  • OS: Windows 7 x64
...  I did go and read http://www.overclock...e-on-a-ram-disk ... ... Having read that, and noting the range of differing opinion, I've decided that I might just as well experiment and see what works out okay for me ...

 

^ Wise decision.

 

Here's an awesome ramdisk roundup and testing: http://www.overclock...dup-and-testing



#19
jaclaz

    The Finder

  • Developer
  • 15,762 posts
  • Joined 23-July 04
  • OS: none specified

 

Here's an awesome ramdisk roundup and testing: http://www.overclock...dup-and-testing

 

Yep :), though maybe now a tadbit "dated".

 

JFYI there were some news on the topic:

http://reboot.pro/to...ks-even-faster/

 

And, though IMDISK is not among the fastest, through the IMDISK Toolkit:

http://reboot.pro/to...imdisk-toolkit/

the issue of automatically backing up contents at shutdown has recently been solved (among other features: more image formats, etc.)

 

jaclaz



#20
Radish

    Newbie

  • Member
  • 28 posts
  • Joined 11-May 15
  • OS: Windows 7 x64

Radish, SSDs don't have mechanical parts to wear out.  Yes, flash memory does have a limited life, but given a very conservative 1000 write cycles capability per flash block, you'd have to write 250 terabytes to a 250 GB SSD before getting close to wearout.  Most people would take decades to write that much.  Do a bunch of peer to peer networking and you might get that down to 10 years. 

 

Show me an HDD that will last that long.

 

Okay Noel, you've convinced me that SSD might be worth a try. My main reason for taking that tack now is not to do with performance in terms of speed, but you do make valid points in your comparison of SSD to HDD - provided your quoted specs are accurate, of course, and I have some doubts. Where are you getting your figures from? Doubtless SSD manufacturers will be optimistic in their claims at best and indulge in pure fabulation at worst - so I'd take manufacturer's claims with a huge pinch of salt.

 

Nevertheless I'm going for a big ramdisk first, and I'd want to use that even if I did have an SSD. I don't know when I'll get round to the SSD though; I am on a newly bought system and it will take time to iron out the teething problems I'm experiencing (not least of which is fairly often getting a BSOD on shutdown), but when I do I'll doubtless be contacting this forum asking for help in how to set it all up - I'm not a geek so I would definitely need some guidance.

 

One other question though. Are you saying that you're running systems with no HDD in them at all, only SSD?

 

Thanks for the thoughts.


Edited by Radish, 25 June 2015 - 09:47 AM.


#21
NoelC

    Software Engineer

  • Member
  • 2,608 posts
  • Joined 08-April 13
  • OS: Windows 8.1 x64


 

 I'd take manufacturer's claims with a huge pinch of salt.

 

My math is based on the tech inside SSDs, not manufacturer's claims.

 

Flash memory is good for many thousands of write/erase cycles (a good rule-of-thumb number has been 10,000).  Given that there's wear leveling and write amplification (owing to the way the internal controllers work) a figure of 1,000 write cycles for any given logical block on the disk is reasonable, if a bit conservative.  But you're right to be wary of claims.

 

The good news is that some tech reporting sites have taken it upon themselves to actually test the write endurance of various SSDs until they fail.  Google "SSD wear test" or "SSD endurance test".  Maybe throw the word "torture" in there.  You'll find that, for example, in testing, some 240 GB drives will actually run up to near 1 petabyte (1,000 terabytes) of data writes before actually failing.  This shows my 1000 x capacity figure is a decent, even conservative, estimate of expected life for planning purposes.

 

My main workstation has 6 SSDs and 3 HDDs in it, along with two external USB HDDs.  The system boots and runs from the SSD array, backed by a HighPoint 2720 SGL PCIe RAID controller, 24/7.  The HDDs are only there for backup and very low access data, and they stay spun down virtually all the time.

 

My small business server in my office has 3 SSDs and 1 HDD in it, along with one external USB HDD.  Same reasons, same characteristics.  HDDs are only for backup and stay spun down.  The thing boots and runs from the SSD array (RAID 5 in this case) and literally stays cold to the touch.  Total system power draw is about 15 watts when the monitor is sleeping and the machine is idle.

 

I only have HDDs at all because I had them before I got the SSDs.  I have been running essentially off of SSD since April 2012.

 

My one piece of advice:  Don't skimp on the storage capacity.  If you feel you really need 100 GB, opt to get a 256 GB drive.  If you think you need 200 GB, consider getting a 512 GB drive (or better yet, a pair of 256 GB SSDs and set up a RAID 0 array).  SSDs run best when you leave a fair bit of free space (it's called "overprovisioning").  Actually any operating system runs best with a fair bit of free space, so it's a good idea to overprovision for multiple reasons.

 

SSDs actually RAID better than any HDD ever dreamed of, since there's virtually no latency.  You literally add up the performance of the individual drives right up to the point where the other parts of the system can't keep up.

 

My workstation can sustain about 1.6 gigabytes / second low level I/O throughput (yes, I said sustained throughput).  That becomes 3.5 gigabytes / second with caching.  Latency is something like 0.1 milliseconds.  This means that even if I have several really high demand applications (e.g., Photoshop, maybe some VMs, Visual Studio, Subversion, and virtually anything else I can want to use) all running simultaneously I just don't feel a slowdown.

 

By comparison the typical throughput of an HDD is 120 megabytes / second.

 

Try doing something like this with an HDD equipped system.

 

[attached screenshot: PracticalDiskSpeed.png]

 

-Noel

 

 

 

 

P.S., if you want to dabble with the tech and get started for not much green, look on eBay specifically for OCZ Vertex 3 drives.  They're not overly expensive, and are the ones I've found tried and true in real usage (all my drives are OCZ Vertex 3 models).  Just now I saw three different Vertex 3 240 GB drives listed for under a hundred dollars.  These really work.


Edited by NoelC, 25 June 2015 - 10:24 AM.


#22
albator

    Nlite Supporter

  • Member
  • 666 posts
  • Joined 18-August 04
  • OS: Windows 7 x64

I run Windows 7 x64 without a paging file. I have 8 GB of RAM and have never had a problem, including playing Battlefield 4 and other recent games.


NTlite supporter


#23
jaclaz

    The Finder

  • Developer
  • 15,762 posts
  • Joined 23-July 04
  • OS: none specified

And now, if you think that your conventional SATA SSDs are fast enough, it's time to get PCIe ones  :w00t: that can seemingly run circles around them:
http://www.theregist...480gb_pcie_ssd/

These numbers are crazy:
[attached benchmark screenshots: kingston_hyperx_predator_480gb_hhhl_pcie]

 

jaclaz


Edited by jaclaz, Yesterday, 11:53 AM.


#24
NoelC

    Software Engineer

  • Member
  • 2,608 posts
  • Joined 08-April 13
  • OS: Windows 8.1 x64

I have numbers better than those in many categories using an array of "traditional" SATA III SSDs that are now 3 years old; and the ATTO numbers shown, where reads and writes differ markedly, imply that there are problems.

 

That being said, the numbers published above for that Kingston HyperX Predator are a good bit better than mine with regard to accessing tiny data blocks, and THAT's very significant.  High 4K numbers imply low latency.  The lower the better.  Note the comment about it not being NVMe.  That's significant too - it says that the hardware could potentially perform even better.

 

In practice, RAM caching - which Windows provides - makes small I/O numbers less of an issue, though when reading a buttload of tiny (or fragmented) files that are not already in the cache, a low-latency device will really shine.  This will equate to the system feeling more responsive on the first run of applications that haven't been run yet.  I'm imagining 1-second Photoshop cold startup times, for example (that happens for me in 3 seconds).

 

I'd love to see what the timing (in files enumerated per second) doing Properties on the contents of the root folder in drive C: would be on a system with that HyperX Predator serving as the boot volume.  480 GB is too small to be practical, though (says the man with 6 x 480 GB SSDs in his array).

 

-Noel


Edited by NoelC, Yesterday, 02:07 PM.




