HDD performance <-> Allocation unit size
Posted 24 May 2012 - 01:26 PM
In an attempt to find out the relationship between HDD performance and allocation unit size (AUS), I formatted a 1 GB partition with a range of AUSs from 512 bytes to 64 KiB, and for each AUS I ran ATTO Disk Benchmark (32-bit) to determine the read/write performance of that 1 GB partition as a function of the AUS.
Strangely enough, the R/W transfer rate saturated at 120 MB/s for every AUS from 4 KiB to 64 KiB. So whatever the AUS (between 4 KiB and 64 KiB), the HDD transfer rate remained approximately the same, at 120 MB/s. This is not what I expected; I was hoping performance would go through the roof at an AUS of 64 KiB, but apparently that is not the case.
Is there anybody who can confirm such a "flat" relationship between HDD performance and allocation unit size?
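[Editor's note] The flat curve is easy to reproduce outside ATTO: once the I/O size passes a few KiB, per-call overhead becomes negligible and the drive's sustained rate dominates, whatever the size parameter. A minimal Python sketch (file name and sizes are arbitrary choices, not anything ATTO uses):

```python
import os
import tempfile
import time

def write_throughput_mb_s(path, block_size, total_bytes=8 * 1024 * 1024):
    """Write total_bytes in block_size chunks and return the rate in MB/s."""
    buf = b"\x00" * block_size
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_bytes // block_size):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # force the data out to the device
    return total_bytes / (time.perf_counter() - start) / 1e6

if __name__ == "__main__":
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        # Sweep the I/O size: past a few KiB the per-call overhead stops
        # mattering and the curve flattens at the drive's sustained rate.
        for bs in (512, 4096, 65536):
            print(f"{bs:>6} B blocks: {write_throughput_mb_s(path, bs):8.1f} MB/s")
    finally:
        os.remove(path)
```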
Posted 24 May 2012 - 01:58 PM
First, the Rotation Speed is a factor. Second, the Buffer Size is a factor. Third, UDMA is a factor. Fourth, Bus Speed is a factor. Fifth, CPU Speed is a factor.
AUS is not the "leading factor" for I/O Speed. Basically, "it all depends"...
...and more (I'm sure I'll be corrected if wrong).
This post has been edited by submix8c: 24 May 2012 - 01:59 PM
Posted 24 May 2012 - 03:48 PM
1) it would show I'm a cheapskate
2) It was not 'scientifically' tested and I wasn't sure if I made a mistake.
But I still hold the opinion that you can't gain noticeable performance by increasing the AUS, and that such stories are overrated - excluding RAID stripe size and SSDs.
Posted 25 May 2012 - 04:03 AM
The 120 MB/s data transfer rate I measured is consistent with Seagate's specs, where the sustained data transfer rate OD (max) = 125 MB/s. It's just that I was expecting to see the maximum transfer rate only at an AUS of 64 KiB; instead, I see 120 MB/s across the whole range of AUSs. (Note that the 1 GB test partition is not even located on the outermost tracks of the HDD.) Well, OK then. I just wonder why large AUSs were introduced at all if a smaller AUS achieves the same data rate.
From my tests I assume ATTO Disk Benchmark (32-bit) measures the sustained data transfer rate OD (max). I was wondering whether any software/utility exists to measure the maximum burst data rate of the HDD, i.e. using data blocks smaller than the HDD's buffer (32 MB in my HDD's case)?
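[Editor's note] The burst-vs-sustained distinction can be sketched in a few lines: sustained is one long pass, burst is repeated passes over a region smaller than the drive's 32 MB cache, so after the first pass the drive's electronics (not the platters) serve the data. Caveat: over an ordinary file the OS page cache answers the repeats instead; a real tool opens the raw device with OS buffering disabled (on Windows, FILE_FLAG_NO_BUFFERING). `bigfile.bin` below is a placeholder name.

```python
import time

def read_rate_mb_s(path, length, passes=1, chunk=1024 * 1024):
    """Read the first `length` bytes of `path` `passes` times; return MB/s."""
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        for _ in range(passes):
            f.seek(0)
            remaining = length
            while remaining > 0:
                data = f.read(min(chunk, remaining))
                if not data:
                    break
                remaining -= len(data)
    return passes * length / (time.perf_counter() - start) / 1e6

# Sustained: one long pass.  Burst: many passes over a region smaller
# than the drive's 32 MB cache.  (Placeholder file name; a real tool
# would read the raw device unbuffered.)
# sustained = read_rate_mb_s("bigfile.bin", 1024**3)
# burst     = read_rate_mb_s("bigfile.bin", 8 * 1024**2, passes=32)
```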
Posted 25 May 2012 - 05:22 AM
Jaclaz will probably add many other tools that may be a lot better.
Posted 25 May 2012 - 05:58 AM
Posted 25 May 2012 - 09:33 AM
This post has been edited by DiracDeBroglie: 25 May 2012 - 09:34 AM
Posted 25 May 2012 - 09:35 AM
The whole point is that benchmarks are - generally speaking - benchmarks, and they are ONLY useful to compare different settings (or different OSes, or different hardware), BUT the results need to be verified.
In no way are they (or can they be) representative of "real usage".
In other terms, it is perfectly possible that the results of a benchmark (which is an "abstract" exercise of copying data with a given method) seem "almost the same", but in real usage a BIG difference is actually "felt"; or, vice versa, it is perfectly possible that in a benchmark a given setting produces an astoundingly "better" result, but when that setting is applied in "real life" no difference (or very little) is "felt".
I personally find that most of the blah-blah about sector/cluster size and alignment is just "blah-blah", and a setting that gives SOME advantages in a given usage will produce a few disadvantages in another usage (or will have some strings attached anyway).
In some cases some advantages can be found "all round", example:
but quite obviously the actual relevance is only noticeable with inherently "slow" devices.
The only thing that I can say is:
Posted 25 May 2012 - 10:44 AM
I often see some benchmarking sites use Iometer for advanced analysis. For me, it's a bit more than I'd really want to know.
Posted 28 May 2012 - 02:01 AM
In no way are they (or can they be) representative of "real usage".
Hi Jaclaz, how're you doing?
I know that benchmarks can sometimes be only marginally representative of "real world" applications. I'm using benchmarks merely to get an idea of hardware specs, and to check whether the hardware comes close to what manufacturers claim in their marketing material. My system drive is a SATA III (600 MB/s) drive, but I haven't yet seen any indication of that in the benchmarks. I did some tests with HD Tune Pro v5, Iometer, CrystalDiskMark and HDSpeed (all the latest versions, and where possible the 64-bit version). They all give me the same result: a sustained data transfer rate of 120 MB/s, which is fine compared to the specs in Seagate's documentation.
The problem, however, is measuring the burst transfer rate on the drive's SATA link; that measurement should give me at least an idea of the absolute maximum bandwidth of the HDD and the chipset on the motherboard. In HD Tune Pro's |File Benchmark| tab, the top graph shows peaks up to 240 MB/s; I wonder whether that can be considered the burst rate. I have no idea how reliable the graph is.
Then there is also the first tab, |Benchmark|, which gives a burst rate figure in a little [Burst rate] bar on the right side of the pane, along with the access time and the minimum/maximum/average transfer rates. In my test the [Burst rate] figure was lower than the maximum transfer rate, and sometimes even lower than the average transfer rate. Has anyone seen something similar with HD Tune Pro?
Posted 28 May 2012 - 03:37 AM
Maybe, just maybe, this fits (if you have a suitable nvidia chipset):
And now some BAD news: actual instruments are needed to properly measure disk/SATA speed (among many other things). For example:
Please take a seat before reading the price tag
Posted 28 May 2012 - 10:33 AM
With HD Tune Pro v5, tab |Benchmark|, I managed to get a burst of 155 MB/s; the graphical data, on the other hand, shows bursts up to 240 MB/s. However, that is still a long way from the SATA III burst speed.
Maybe I could give it one last try with hdparm, mentioned by *allen2*. It seems to be a Linux utility, but is there any up-to-date version available for Windows 7?
This post has been edited by DiracDeBroglie: 28 May 2012 - 10:44 AM
Posted 28 May 2012 - 11:01 AM
Cannot say, in the sense that Windows 7 and "direct disk" access are "long-time enemies", but yes, there is a hdparm port to Win32 (cannot say about 64-bit); see:
It does work under Windows 7 too, but I really cannot say whether *all* functions will be accessible/working/whatever.
Posted 30 May 2012 - 03:54 AM
I've also found another, maybe interesting, HDD performance tester called HDDScan: see http://hddguru.com/s....01.22-HDDScan/ That app, too, needs to be "Run as administrator". It can also show the disk ID, features and settings, but with this utility as well I couldn't find any feature for measuring the burst transfer rate.
Posted 30 May 2012 - 04:40 AM
Most Linux distros will have hdparm included, BUT (isn't there always a "but") it probably won't get you much nearer to your goal.
You can try parted magic (which is a fairly "light" LiveCD):
and (for NO apparent reason - if not to scare you some more after the "equipment price shock" ):
Posted 30 May 2012 - 08:13 AM
Posted 12 June 2012 - 02:14 PM
Files tend to be only slightly fragmented on a disk, and to read a chunk of one, the UDMA host tells the disk "send me sectors xxx to yyy". How the sectors are distributed among clusters makes zero difference there. Clusters only determine where a file may begin (and sometimes where it is cut when fragmented), and since mechanical disks have no meaningful alignment issues, cluster size is of no importance.
This differs a bit on a CF card or an SSD, which have lines and pages of regular size; there, aligning the clusters on line or page boundaries makes a serious difference. Special cluster sizes related to the line size (or, more difficult, to the page size) allow repeatable alignment. Exception: the X25-E has a 10-way internal RAID 0, so it's insensitive to power-of-two alignment.
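[Editor's note] The alignment point can be put into numbers. The page and erase-block figures below are illustrative assumptions, not the specs of any particular device:

```python
SECTOR = 512              # bytes per logical sector
PAGE = 4096               # illustrative SSD page size
ERASE_BLOCK = 512 * 1024  # illustrative erase-block ("line") size

def is_aligned(first_sector, unit_bytes):
    """True if a partition starting at `first_sector` falls on a boundary."""
    return (first_sector * SECTOR) % unit_bytes == 0

print(is_aligned(63, PAGE))           # False: old CHS-era 63-sector offset
print(is_aligned(2048, PAGE))         # True: 1 MiB offset (Vista/7 default)
print(is_aligned(2048, ERASE_BLOCK))  # True: 1 MiB covers 512 KiB blocks too
```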
So cluster size is not a matter of hardware. It is much more a matter of FAT size (important for performance), MFT size (unimportant), and lost space.
On a mechanical disk you should use clusters of 4 KiB (at least with 512-byte sectors) to allow defragmentation of NTFS volumes.
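[Editor's note] The FAT-size and lost-space trade-off is simple arithmetic; the 100 GiB volume and 100,000-file count below are made-up round numbers for illustration:

```python
GiB = 1024**3
MiB = 1024**2

def fat32_table_bytes(volume_bytes, cluster_bytes):
    """FAT32 keeps one 4-byte entry per cluster, so smaller clusters
    mean a bigger table to scan and cache."""
    return (volume_bytes // cluster_bytes) * 4

def avg_slack_bytes(file_count, cluster_bytes):
    """Each file wastes, on average, about half a cluster at its tail."""
    return file_count * cluster_bytes // 2

for cluster in (4096, 32 * 1024):
    fat = fat32_table_bytes(100 * GiB, cluster) / MiB
    slack = avg_slack_bytes(100_000, cluster) / MiB
    print(f"{cluster // 1024:>2} KiB clusters: "
          f"FAT = {fat:6.1f} MiB, slack = {slack:7.1f} MiB")
```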
Posted 17 June 2012 - 06:29 AM
I've done the same test with a 2 TB external SATA III (6 Gbit/s) drive in a USB 3.0 enclosure, and the results are the same as with my internal 1 TB system HDD: the read/write performance of the 2 TB HDD (write = 105 MB/s; read = 145 MB/s) seems to be independent of the AUS under Win7.
So, from your explanation I infer that large chunks of data, linearly scooped up from the HDD (from sector X to sector Y), are dumped into the RAM area reserved for DMA on the motherboard!? (Correct me if I'm wrong.) In that view it is understandable that HDD performance should be insensitive to the AUS of the filesystem on the HDD.
However, I have a question about how the HDD performance test is implemented in software. I've been using ATTO Disk Benchmark (v2.47), and there are several options such as *Direct I/O*, *I/O Comparison* and *Overlapped I/O*. The *Direct I/O* option was always checked during my tests.
According to the HELP in ATTO Disk Benchmark, *Direct I/O* means that no system buffering or caching is used during the HDD performance test. I assume that by buffering or caching ATTO means the RAM-DMA buffering on the motherboard (I cannot imagine they're talking about the HDD's cache); I tacitly assumed that ATTO Disk Benchmark tested the performance between the motherboard (RAM-DMA) and the HDD, meaning the performance over the SATA III link itself. I've done a performance test on my 2 TB external HDD (and earlier on my 1 TB system HDD too) with *Direct I/O* UNchecked, and the results were stunning: the graphical performance reading in ATTO Disk Benchmark went up to 1600 MB/s, almost three times the maximum SATA III bandwidth!!! Hence I think that by buffering or caching ATTO means the RAM-DMA on the motherboard. Consequently, in order to see any realistic performance output in ATTO, I think the *Direct I/O* option needs to be checked, thereby deactivating any RAM-DMA buffering or caching on the motherboard.
As a result, I'm a bit confused here. If the read/write performance of the HDD is insensitive to the HDD's AUS because of the use of RAM-DMA on the motherboard, then this argument seems to conflict with the assumption that the RAM-DMA is deactivated in ATTO Disk Benchmark by its *Direct I/O* option. I'm sure I got it wrong somehow somewhere, but where exactly did I make a mistake in my assumptions or reasoning?
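[Editor's note] One way to see what unchecking *Direct I/O* measures: a file that was just written is still in the OS file cache, so a buffered re-read reports RAM speed rather than disk speed. A minimal sketch (the scratch-file name is arbitrary; on Windows, ATTO's Direct I/O presumably corresponds to opening the target with OS buffering disabled):

```python
import os
import time

def buffered_read_mb_s(path, chunk=1024 * 1024):
    """Read `path` through the OS file cache and return the rate in MB/s."""
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk):
            pass
    return size / (time.perf_counter() - start) / 1e6

if __name__ == "__main__":
    path = "cache_demo.bin"  # arbitrary scratch-file name
    with open(path, "wb") as f:
        f.write(os.urandom(32 * 1024 * 1024))  # the write primes the OS cache
    try:
        # The read is served from RAM, so the figure can far exceed what the
        # SATA link could carry - the same effect as Direct I/O unchecked.
        print(f"cached read: {buffered_read_mb_s(path):.0f} MB/s")
    finally:
        os.remove(path)
```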
Posted 17 June 2012 - 07:38 AM
I'm not familiar with SSDs, but I'm very much interested in getting deeper into the workings and fine-tuning of SSDs, as I may purchase one in the near future. Hence the question: do you know of any documents, websites, links, references, or other reading that could give me deeper insight into how SSDs are designed and how they work? In particular, I need a better grasp of notions like line size, page size, partition alignment in SSDs, and cluster size (and its difference from allocation unit size in Win7). I would like a deeper insight into SSDs and how they differ from HDDs, especially in the context of optimizing performance.
This post has been edited by DiracDeBroglie: 17 June 2012 - 07:41 AM