HDD performance <-> Allocation unit size

45 replies to this topic

#1
DiracDeBroglie

    Member

  • 104 posts
  • Joined 07-December 11
  • OS: Windows 7 x64
Hi,

In an attempt to find out the relationship between HDD performance and allocation unit size (AUS), I formatted a 1 GB partition with a range of AUSs going from 512 bytes to 64 KiB, and for every AUS I ran the ATTO Disk Benchmark software (32-bit) to determine the read/write performance of that 1 GB partition as a function of the AUS.

Strangely enough, the R/W transfer rate saturated at 120 MB/s for every AUS from 4 KiB to 64 KiB. So whatever the AUS (between 4 KiB and 64 KiB), the HDD transfer rate (performance) remained approximately the same at 120 MB/s. This is not what I expected; I was hoping performance would go through the roof at an AUS of 64 KiB, but apparently that is not the case.
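For reference, this is roughly what such a test boils down to (a minimal sketch, not ATTO's actual code; the drive letter, file size and block sizes below are made-up values, and os.O_BINARY is Windows-only):

import os, time

TEST_FILE = r"E:\atto_like_test.bin"   # assumption: E: is the freshly formatted partition
FILE_SIZE = 256 * 1024 * 1024          # 256 MiB per pass (assumed test size)

def throughput(block_size):
    buf = os.urandom(block_size)
    # write pass: write the file in block_size chunks and time it
    fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | os.O_BINARY)
    t0 = time.perf_counter()
    for _ in range(FILE_SIZE // block_size):
        os.write(fd, buf)
    os.fsync(fd)                       # flush OS buffers so we time the disk, not RAM
    os.close(fd)
    write_mbs = FILE_SIZE / (time.perf_counter() - t0) / 1e6
    # read pass: without "Direct I/O" (FILE_FLAG_NO_BUFFERING) this part may be
    # served from the Windows cache rather than the platters
    fd = os.open(TEST_FILE, os.O_RDONLY | os.O_BINARY)
    t0 = time.perf_counter()
    while os.read(fd, block_size):
        pass
    read_mbs = FILE_SIZE / (time.perf_counter() - t0) / 1e6
    os.close(fd)
    return write_mbs, read_mbs

for bs in (512, 4096, 65536, 1 << 20):
    w, r = throughput(bs)
    print(f"block {bs:>8}: write {w:6.1f} MB/s, read {r:6.1f} MB/s")

Note that the variable being swept here is the transfer (block) size of each I/O request; the AUS of the filesystem only changes how the file's clusters are laid out on the platters, which is one reason the curves can come out flat.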

Is there anybody who can confirm such a "flat" relationship between HDD performance and Allocation unit size??

regards



#2
submix8c

    Inconceivable!

  • Patrons
  • 4,410 posts
  • Joined 14-September 05
  • OS: none specified
BAHAHAHAH!!!! Jaclaz is going to have fun with this one!

First, the Rotation Speed is a factor. Second, the Buffer Size is a factor. Third, UDMA is a factor. Fourth, Bus Speed is a factor. Fifth, CPU Speed is a factor.

AUS is not the "leading factor" for I/O Speed. Basically, "it all depends"...

...and more (I'm sure I'll be corrected if wrong).

Edited by submix8c, 24 May 2012 - 01:59 PM.

Someday the tyrants will be unthroned... Jason "Jay" Chasteen; RIP, bro!



#3
GrofLuigi

    GroupPolicy Tattoo Artist

  • Member
  • 1,365 posts
  • Joined 21-April 05
  • OS: none specified
Before I moved to SSD, I did similar tests and found that even the difference between 512 B and 4096 B was negligible. There is some difference, but nowhere near as big as various documents on the subject suggest. I didn't want to discuss it because:

1) it would show I'm a cheapskate
2) it was not 'scientifically' tested and I wasn't sure if I had made a mistake.

But I still hold the opinion that you can't gain noticeable performance from increasing the AUS, and that such stories are overrated. Excluding RAID stripe size and SSDs.

GL

#4
DiracDeBroglie

    Member

  • 104 posts
  • Joined 07-December 11
  • OS: Windows 7 x64
The HDD I have is a Seagate 1 TB, model ST31000524AS. The specs can be found at http://www.seagate.c.../100636864b.pdf

The 120 MB/s data transfer rate I measured is consistent with Seagate's specs, where the sustained data transfer rate OD (max) = 125 MB/s. It's just that I was expecting to see the maximum data transfer rate only at an AUS of 64 KiB; instead, I see that 120 MB/s for a whole range of AUSs. (Note that the 1 GB test partition is not even located on the outermost tracks of the HDD.) Well, OK, good then. I just wonder why large AUSs were introduced at all if a smaller AUS gives the same data rate?

From my tests I assume the ATTO Disk Benchmark software (32-bit) measures the sustained data transfer rate OD (max). I was wondering whether there exist any software/utilities to measure the maximum burst data rate of the HDD (i.e. using data blocks smaller than the HDD buffer size, which is 32 MB in my HDD's case)?
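As a sanity check on that figure: the sustained (media-limited) rate is just bytes-per-track times revolutions per second. A rough sketch; the bytes-per-track value is an assumption (Seagate doesn't publish it for this model), picked only to show the order of magnitude:

# back-of-envelope sustained rate for a 7200 rpm drive; geometry is assumed
rpm = 7200                        # ST31000524AS spindle speed
bytes_per_outer_track = 1.04e6    # assumption: ~1 MB per outermost track
sustained = bytes_per_outer_track * rpm / 60
print(f"~{sustained / 1e6:.0f} MB/s")   # ~125 MB/s, the order of the OD spec

The burst rate, by contrast, is limited by the interface and the drive's cache rather than the platters, so measuring it needs transfers small enough to stay inside the 32 MB buffer.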

regards
Johan

#5
allen2

    Not really Newbie

  • Member
  • 1,814 posts
  • Joined 13-January 06
There are tools like hdparm on Linux or HD Tach (for Windows). Also, I think the raw performance figures given by Seagate are for a raw drive (i.e. not formatted/partitioned). HD Tach can test an unformatted drive.
Jaclaz will probably add many other tools that may be a lot better.

#6
GrofLuigi

    GroupPolicy Tattoo Artist

  • Member
  • 1,365 posts
  • Joined 21-April 05
  • OS: none specified
HDTune is one of the most used tools to measure mechanical drives. It's used in most of the reviews, so you can quickly compare your results. HDTach is outdated (except maybe for IDE drives) because it's single-threaded and doesn't know about queue depths and other SATA things.

GL

#7
DiracDeBroglie

    Member

  • 104 posts
  • Joined 07-December 11
  • OS: Windows 7 x64
Just did a test with HDTune Pro 5.0 trial version; more specifically, I used the File Benchmark. The *sustained data transfer rate OD (max)* comes very close to the 120 MB/s I measured with ATTO Disk Benchmark. HDTune shows burst rates of up to 240 MB/s, which is still a lot less than the *I/O data-transfer rate (max)* of 600 MB/s (SATA III). I wonder if there are any software tools more effective than HDTune at measuring burst rates.

johan

Edited by DiracDeBroglie, 25 May 2012 - 09:34 AM.


#8
jaclaz

    The Finder

  • Developer
  • 14,852 posts
  • Joined 23-July 04
  • OS: none specified
Jaclaz is actually not having much fun with this. :(

The whole point is that benchmarks are - generally speaking - benchmarks ;) and they are ONLY useful to compare different settings (or different OSes, or different hardware), BUT the results need to be verified.
In no way are they (or can they be) representative of "real usage".
In other terms, it is perfectly possible that the results of a benchmark (which is an "abstract" set of copying data with a given method) seem "almost the same", but in real usage a BIG difference is actually "felt"; or, vice versa, it is perfectly possible that in a benchmark a given setting produces an astoundingly "better" result, but then, when the setting is applied in "real life", no (or very little) difference is "felt".

I personally find that most of the bla-bla about sector/cluster size and alignment is just "bla-bla", and one setting that gives SOME advantages in a given usage will produce a few disadvantages in another usage (or will have some strings attached anyway).

In some cases some advantages can be found "all round", example:
http://www.msfn.org/...n-its-clusters/
http://reboot.pro/16775/
http://reboot.pro/16783/
but quite obviously the actual relevance is only noticeable with implicitly "slow" devices :ph34r:.

The only thing that I can say is ;):
http://www.imdb.com/...es?qt=qt0362962

jaclaz

#9
GrofLuigi

    GroupPolicy Tattoo Artist

  • Member
  • 1,365 posts
  • Joined 21-April 05
  • OS: none specified

I wonder if there are any software tools more effective than HDTune at measuring burst rates.


I often see some benchmarking sites use Iometer for advanced analysis. For me, it's a bit more than I'd really want to know. :wacko:

GL

#10
DiracDeBroglie

    Member

  • 104 posts
  • Joined 07-December 11
  • OS: Windows 7 x64

Jaclaz is actually not having much fun with this. :(
In no way are they (or can they be) representative of "real usage".


Hi Jaclaz, how're you doing?

I know that benchmarks can sometimes be only marginally representative of "real world" applications. I'm using benchmarks merely to get an idea of hardware specs, and to check whether the hardware comes close to what manufacturers claim in their marketing material. My system drive is a SATA III (600 MB/s) drive, but I haven't yet seen any indication of that in the benchmarks. I did some tests with HDTune Pro v5, Iometer, CrystalDiskMark and HDSpeed (all the latest versions, and where possible the 64-bit version). They all give me the same result: a sustained data transfer rate of 120 MB/s, which is OK compared to the specs in Seagate's documentation.

The problem, however, is measuring the burst transfer rate on the SATA link of the drive; that measurement should give me at least an idea of the absolute maximum bandwidth between the HDD and the chipset on the motherboard. With HDTune Pro, tab |File Benchmark|, the top graph shows peaks up to 240 MB/s; I just wonder if that can be considered the burst rate? I have no idea how reliable the graph is. Then there is also the first tab |Benchmark|, which gives a burst rate figure in a little [Burst rate] bar on the right side of the pane, along with the access time and the minimum/maximum/average transfer rates. In my test the [Burst rate] figure was lower than the maximum transfer rate, and sometimes even lower than the average transfer rate. Has anyone seen something similar with HDTune Pro?

johan

#11
jaclaz

    The Finder

  • Developer
  • 14,852 posts
  • Joined 23-July 04
  • OS: none specified

The problem, however, is measuring the burst transfer rate on the SATA link of the drive; that measurement should give me at least an idea of the absolute maximum bandwidth between the HDD and the chipset on the motherboard.

Maybe, just maybe, this fits (if you have a suitable nvidia chipset):
http://www.sevenforu...real-world.html

And now, some BAD :( news: actual instruments are needed to truly measure Disk/SATA speed (among many other things) :ph34r: (example):
http://www.lecroy.co...spx?mseries=343
http://www.lecroy.co...spx?mseries=331
Please take a seat before reading the price tag :w00t:

jaclaz.

#12
DiracDeBroglie

    Member

  • 104 posts
  • Joined 07-December 11
  • OS: Windows 7 x64
Yes, I just had a look at the LeCroy equipment; it turns out to be very costly: $100,000!? That is 4 times the price of a normal car, and ... 100 times my notebook.

With HDTune Pro v5, tab |Benchmark|, I managed to get a burst of 155 MB/s; the graphical data, on the other hand, shows bursts up to 240 MB/s. However, that is still a long way from the SATA III burst speed.

Maybe I could give it one last try with hdparm, suggested by *allen2*. It seems to be a Linux utility, but is there any recent, up-to-date version available for Windows 7?

johan

Edited by DiracDeBroglie, 28 May 2012 - 10:44 AM.


#13
jaclaz

    The Finder

  • Developer
  • 14,852 posts
  • Joined 23-July 04
  • OS: none specified

Maybe I could give it one last try with hdparm, suggested by *allen2*. It seems to be a Linux utility, but is there any recent, up-to-date version available for Windows 7?

Cannot say about it, in the sense that Windows 7 and "direct disk" access are "long time enemies" :ph34r:, but yes, there is a hdparm port to Win32 (cannot say about 64-bit), see:
http://reboot.pro/13601/
http://reboot.pro/13...125#entry119855
http://hdparm-win32.dyndns.org/hdparm/
that does work under Windows 7 too, but really cannot say if *all* functions will be accessible/working/whatever.

jaclaz

#14
DiracDeBroglie

    Member

  • 104 posts
  • Joined 07-December 11
  • OS: Windows 7 x64
I tested the Win32 version of hdparm on Win7; note that you need to run hdparm via "(right click) Run as administrator". A plus point is that it can show the disk ID, features and settings. A minus point is that there is no control over the size of the test file, nor over its block size (at least, I could not find any info about it). Also, I couldn't find any test feature to verify the burst transfer rate over the SATA link of the drive. I just wonder if the most recent Linux version of hdparm is available somewhere on a live CD (running on some Linux kernel)? Maybe the most recent Linux version does more than the Win32 port.

I've also found another, maybe interesting, HDD performance tester called HDDScan: see http://hddguru.com/s....01.22-HDDScan/ That app also needs to be run as administrator. It can likewise show the disk ID, features and settings, but with this utility too I couldn't find any feature for measuring the burst transfer rate.

johan

#15
jaclaz

    The Finder

  • Developer
  • 14,852 posts
  • Joined 23-July 04
  • OS: none specified
Hey, it wasn't me suggesting hdparm as a benchmark tool, was I? :unsure: ;)

Most Linux distros will have hdparm included, BUT (isn't there always a "but") it probably won't get you much nearer to your goal.

You can try parted magic (which is a fairly "light" LiveCD):
http://partedmagic.com/doku.php

JFYI:
http://www.linuxinsi..._your_disk.html
http://www.coker.com.au/bonnie++/

and (for NO apparent reason :w00t: - if not to scare :ph34r: you some more after the "equipment price shock" ;)):
http://etbe.coker.co...ta-performance/

jaclaz

#16
DiracDeBroglie

    Member

  • 104 posts
  • Joined 07-December 11
  • OS: Windows 7 x64
I was under the tacit (and maybe naive) impression that hdparm was a benchmark, but indeed, it isn't, although it can do some HDD performance testing. I think HDTune Pro is the most suitable for benchmarking. For getting the disk ID, features, commands and settings of the HDD, hdparm and HDDScan are OK.
j

#17
pointertovoid

    Advanced Member

  • Member
  • 472 posts
  • Joined 16-January 09
The contiguous read throughput is essentially independent of the cluster size. Your observations are correct.

Files are usually only slightly fragmented on a disk, and to read a chunk of one, the UDMA host tells the disk "send me sectors number xxx to yyy". How the sectors are spread over the clusters makes zero difference there. Clusters only tell where a file shall begin (and sometimes where it is cut, if fragmented), and since mechanical disks have no meaningful alignment issues, clusters have no importance.

This would differ a bit on a CF card or an SSD, which have lines and pages of regular size, and there aligning the clusters on line or page boundaries makes a serious difference. Special cluster sizes, related to the line size (or, more difficult, to the page size), allow a repeatable alignment. Exception: the X25-E has a 10-way internal RAID-0, so it's insensitive to power-of-two alignment.

So cluster size is not a matter of hardware. It is much more a matter of FAT size (important for performance), MFT size (unimportant), and lost space.

On a mechanical disk, you should use clusters of 4 KiB (at least with 512 B sectors) to allow defragmentation of NTFS volumes.
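A small sketch of that address arithmetic may make it concrete (illustrative numbers only; a Win7-style partition start at LBA 2048 is assumed):

# the filesystem turns a cluster run into an LBA range; the ATA command
# only ever carries sector numbers, so the cluster size cancels out
SECTOR = 512
PARTITION_START_LBA = 2048            # assumed 1 MiB-aligned partition start

def cluster_run_to_lba(first_cluster, n_clusters, cluster_size):
    spc = cluster_size // SECTOR      # sectors per cluster
    start = PARTITION_START_LBA + first_cluster * spc
    return start, start + n_clusters * spc - 1

# the same 1 MiB extent expressed with two different cluster sizes:
s, e = cluster_run_to_lba(first_cluster=256, n_clusters=256, cluster_size=4096)
print(f"4 KiB clusters : LBA {s}..{e}")
s, e = cluster_run_to_lba(first_cluster=16, n_clusters=16, cluster_size=65536)
print(f"64 KiB clusters: LBA {s}..{e}")

Both runs describe the identical sector range, so the drive does exactly the same work either way.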

#18
DiracDeBroglie

    Member

  • 104 posts
  • Joined 07-December 11
  • OS: Windows 7 x64

Files are usually only slightly fragmented on a disk, and to read a chunk of one, the UDMA host tells the disk "send me sectors number xxx to yyy". How the sectors are spread over the clusters makes zero difference there. Clusters only tell where a file shall begin (and sometimes where it is cut, if fragmented), and since mechanical disks have no meaningful alignment issues, clusters have no importance.


I've done the same test with a 2 TB external SATA III (6 Gbit/s) drive in a USB 3.0 enclosure, and the results are the same as with my internal 1 TB system HDD: the read/write performance of the 2 TB HDD (write = 105 MB/s; read = 145 MB/s) seems to be independent of the AUS under Win7.

So, from your explanation I infer that large chunks of data, linearly scooped up from the HDD (from sector X to sector Y), are being dumped into the RAM area that is set aside for DMA on the motherboard!? (Correct me if I'm wrong.) In that view it is understandable that HDD performance should be insensitive to the AUS of the filesystem on the HDD.

However, ... I nonetheless have a question about how the HDD performance test is implemented in software. I've been using ATTO Disk Benchmark (v2.47), and there are several options like *Direct I/O*, *I/O Comparison* and *Overlapped I/O*. The option *Direct I/O* was always checked during my tests.

According to the help in ATTO Disk Benchmark, *Direct I/O* means that no system buffering or caching is used during the HDD performance test. I assume that by buffering or caching ATTO means the RAM-DMA buffering on the motherboard (I cannot imagine they're talking about the HDD's cache); I tacitly assumed that ATTO Disk Benchmark tested the performance between the motherboard (RAM-DMA) and the HDD, meaning the performance over the SATA III link itself. I've done a performance test on my 2 TB external HDD (and earlier on my 1 TB system HDD too) with *Direct I/O* UNchecked, and the results were stunning: the graphical performance reading in ATTO Disk Benchmark went up to 1600 MB/s, almost 3 times the maximum SATA III bandwidth!!! Hence I think that by buffering or caching ATTO means the RAM-DMA on the motherboard. Consequently, in order to see any realistic performance output in ATTO, I think the option *Direct I/O* needs to be checked, thereby deactivating any RAM-DMA buffering or caching on the motherboard.
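An easy way to see that caching effect without ATTO is to time the same large read twice (a minimal sketch; the path is a made-up example, the file should be much bigger than the drive's 32 MB buffer, and the first pass is assumed not to be cached already):

import time

PATH = r"E:\somebigfile.bin"   # assumption: any multi-GB file on the drive under test

def read_rate(path, block=1 << 20):
    n = 0
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block)
            if not chunk:
                break
            n += len(chunk)
    return n / (time.perf_counter() - t0) / 1e6

print(f"cold read: {read_rate(PATH):7.1f} MB/s")   # limited by the platters
print(f"warm read: {read_rate(PATH):7.1f} MB/s")   # usually served from the system cache

The second figure can easily exceed the SATA limit because no SATA transfer happens at all; the data comes straight from RAM, which is the same thing the unchecked *Direct I/O* result suggests.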

As a result, I'm a bit confused here. If the read/write performance of the HDD is insensitive to the HDD's AUS because of the use of the RAM-DMA on the motherboard, then this argument seems to conflict with the assumption that the RAM-DMA is deactivated in ATTO Disk Benchmark by its *Direct I/O* option. I'm sure I got it wrong somehow somewhere, but where exactly did I make a mistake in my assumptions or my reasoning?

regards,
johan

#19
DiracDeBroglie

    Member

  • 104 posts
  • Joined 07-December 11
  • OS: Windows 7 x64

This would differ a bit on a CF card or an SSD, which have lines and pages of regular size, and there aligning the clusters on line or page boundaries makes a serious difference. Special cluster sizes, related to the line size (or, more difficult, to the page size), allow a repeatable alignment. Exception: the X25-E has a 10-way internal RAID-0, so it's insensitive to power-of-two alignment.


I'm not familiar with SSDs, but I'm very much interested in getting deeper into the workings and fine-tuning of SSDs, as I may purchase one in the near future. Hence the question: do you know of any documents, websites, links, references, or other reading that could give me deeper insight into how SSDs are designed and how they work? In particular, I need to acquire a better understanding of notions like line size, page size, partition alignment in SSDs, and cluster size (and its difference from the allocation unit size in Win7). I would like a deeper insight into SSDs and how they differ from HDDs, especially in the context of optimizing their performance.

thanks,
johan

Edited by DiracDeBroglie, 17 June 2012 - 07:41 AM.


#20
jaclaz

    The Finder

  • Developer
  • 14,852 posts
  • Joined 23-July 04
  • OS: none specified
Semi-random thought - as always :ph34r:
Which role (if any) would NCQ (Native Command Queuing) have in the benchmark(s)?

jaclaz

#21
DiracDeBroglie

    Member

  • 104 posts
  • Joined 07-December 11
  • OS: Windows 7 x64

Which role (if any) would NCQ (Native Command Queuing) have in the benchmark(s)?
jaclaz


In ATTO Disk Benchmark I tried out Queue Depth (QD) = 4 and QD = 10. On my recently purchased Win7 notebook there was no difference in the performance test between QD = 4 and QD = 10. However, on my 8-year-old WinXP notebook, the difference between QD = 4 and QD = 10 is clearly visible in the output graph. I am not sure what the exact relation is between QD in ATTO and NCQ. Furthermore, my HDDs have an NCQ depth of 32, while QD in ATTO only goes up to a maximum of 10. On my Win7 notebook, I also don't see the relevance of NCQ or QD if large chunks of data (from sector X to sector Y) from the HDD are being dumped into the RAM-DMA during the performance test with ATTO; there is no need for complicated seeking where the read/write heads have to wobble over the platters.

Well, the performance of the HDDs (internal and external) on my Win7 notebook is what it is; there is no way to go beyond their physical limits. To me, the most important thing is that I get the maximum out of the HDDs and also understand why and how particular hardware parameters relate to performance (optimization).

By the way, some time ago we had a discussion about misaligned partitions on Advanced Format HDDs (4K-sector drives), like my 2 TB external USB 3.0 drive. I now ran an ATTO Disk Benchmark test on a partition (on the 2 TB HDD) that was correctly aligned, having been created under Win7, and ran the ATTO test again after the partition was deleted and re-created under WinXP (so misaligned). The WinXP-created partition had a slightly lower read performance (1% less, maybe not even that) compared to the Win7-created partition. However, the write performance of the WinXP-created partition was something like 10% less than that of the Win7-created partition. I get the impression that partition misalignment may not be that much of a performance issue on Advanced Format drives. All performance tests (including for the WinXP-created partition) were done on the Win7 notebook.
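For what it's worth, the arithmetic behind "aligned" vs "misaligned" is easy to check (a sketch under the usual assumptions: XP starts the first partition at LBA 63, Win7 at LBA 2048, and the drive emulates 512-byte logical sectors on 4 KiB physical ones):

def check_alignment(start_lba):
    aligned_4k = start_lba % 8 == 0       # 8 x 512-byte sectors = one 4 KiB physical sector
    aligned_1m = start_lba % 2048 == 0    # 2048 sectors = 1 MiB
    return aligned_4k, aligned_1m

for lba in (63, 2048):
    a4, a1 = check_alignment(lba)
    print(f"start LBA {lba:>4}: 4K-aligned={a4}, 1MiB-aligned={a1}")

A misaligned start makes every 4 KiB filesystem block straddle two physical sectors; writes then cost a read-modify-write cycle while reads just fetch both sectors, which would be consistent with the write rate suffering far more than the read rate here.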

In the process of doing those tests, I ran into trouble with my 2 TB HDD, which is also my data backup drive. The first partition is a primary partition, followed by an extended partition containing 4 logical partitions; all partitions were created under Win7. Then, on my WinXP notebook, after I had deleted and re-created the first (primary) partition, the second, third and fourth logical partitions disappeared; the first logical partition, however, and the extended (shell) partition remained intact (I checked that with PTEdit32 and PartInNT). I tried to retrieve the data from the lost partitions using GParted, but after almost 10 hours of "retrieving", GParted gave up. So all my data on those 3 logical partitions is gone. Lesson to be learned: WinXP and Win7 are not quite compatible when it comes to partitioning (which I already knew), and that can have disastrous consequences (which I have now learned the hard way).

johan

#22
jaclaz

    The Finder

  • Developer
  • 14,852 posts
  • Joined 23-July 04
  • OS: none specified

In the process of doing those tests, I ran into trouble with my 2 TB HDD, which is also my data backup drive. The first partition is a primary partition, followed by an extended partition containing 4 logical partitions; all partitions were created under Win7. Then, on my WinXP notebook, after I had deleted and re-created the first (primary) partition, the second, third and fourth logical partitions disappeared; the first logical partition, however, and the extended (shell) partition remained intact (I checked that with PTEdit32 and PartInNT). I tried to retrieve the data from the lost partitions using GParted, but after almost 10 hours of "retrieving", GParted gave up. So all my data on those 3 logical partitions is gone. Lesson to be learned: WinXP and Win7 are not quite compatible when it comes to partitioning (which I already knew), and that can have disastrous consequences (which I have now learned the hard way).

Yep, that is seemingly "by design" (as the good MS guys would put it) :w00t: .
Sorry for your mishap, but you are not the first one:
http://reboot.pro/9897/
AND you had already been warned :ph34r: about the issue:
http://www.msfn.org/...post__p__984342
BTW the data is normally perfectly recoverable, that is, if you properly use a data recovery app, which GParted is not, AFAIK.

jaclaz

#23
DiracDeBroglie

    Member

  • 104 posts
  • Joined 07-December 11
  • OS: Windows 7 x64

BTW the data is normally perfectly recoverable, that is, if you properly use a data recovery app, which GParted is not, AFAIK.
jaclaz


Based on your experience, which recovery apps/software are worth having at hand when it comes to this sort of partition (table) damage?

johan

PS: In Win7 I deleted the very first (primary and WinXP-created) partition (on my 2 TB HDD), and then created a New Simple Volume in Disk Management in Win7. After checking the 2 TB HDD with GParted, I noticed there was a gap of UNallocated disk space of exactly 1 MB between the newly created primary partition and the next extended partition shell (which was never damaged). I could only see the 1 MB gap in GParted, not at all in Disk Management in Win7. So, in GParted I got rid of the 1 MB gap by expanding/extending the newly created primary partition.
But the very last partition (a primary one following the extended (shell) partition) was also followed by an UNallocated area of 2 MB!!?? I got rid of that too by extending the last primary partition in GParted. But here too, the 2 MB gap was not visible in the Win7 Disk Management.

Edited by DiracDeBroglie, 18 June 2012 - 10:47 AM.


#24
jaclaz

    The Finder

  • Developer
  • 14,852 posts
  • Joined 23-July 04
  • OS: none specified

Based on your experience, which recovery apps/software are worth having at hand when it comes to this sort of partition (table) damage?

It depends.
For a semi-automated attempt, TESTDISK.
For manual recovery I tend to use Tiny Hexer (with my small viewers/templates for it).
A very good app in my view is DMDE (though very powerful, it is more handy for filesystem recovery).
BUT read this:
http://www.msfn.org/...post__p__943645
(and links within it)


PS: In Win7 I deleted the very first (primary and WinXP-created) partition (on my 2 TB HDD), and then created a New Simple Volume in Disk Management in Win7. After checking the 2 TB HDD with GParted, I noticed there was a gap of UNallocated disk space of exactly 1 MB between the newly created primary partition and the next extended partition shell (which was never damaged). I could only see the 1 MB gap in GParted, not at all in Disk Management in Win7. So, in GParted I got rid of the 1 MB gap by expanding/extending the newly created primary partition.
But the very last partition (a primary one following the extended (shell) partition) was also followed by an UNallocated area of 2 MB!!?? I got rid of that too by extending the last primary partition in GParted. But here too, the 2 MB gap was not visible in the Win7 Disk Management.

Well, the general idea is to STOP fiddling with a disk as soon as you find an issue :ph34r:.

In Windows XP's "way of thinking", since everything is cylinder-boundary related, only steps of around 8 MB are "sensed" by the Disk Manager (1 x 255 x 63 = 16065 sectors x 512 = 8,225,280 bytes).
Cannot say about 7, but most probably it has similar issues, with the same "pre-sets" that affect diskpart and that can be overridden through the Registry:
http://www.911cd.net...showtopic=21186
http://www.911cd.net...pic=21186&st=18
http://support.micro...kb/931760/en-us

The 1 MB (which is not a "measure"; bytes are units, and megabytes or mebibytes or whatever can be displayed in several different ways by different utilities using different conventions) could be "normal", the 2 MB less so :unsure:

A typical Vista :ph34r: or 7 first partition is aligned to 1 MiB (2048 sectors), but there may be variations; it is possible that - for whatever reason - the gap you found of around 1 MB is actually smaller than 1 MB (and thus results in "a suffusion of yellow"), and that the other gap of around 2 MB is not a multiple of 1 MB and is thus ignored.
Cannot say.
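The granularity mismatch is easy to put in numbers (just the arithmetic above, nothing more):

SECTOR = 512
cylinder = 255 * 63 * SECTOR   # 16065 sectors = 8,225,280 bytes, the CHS "step"
mib = 2048 * SECTOR            # 1 MiB, the Vista/7 alignment unit

for gap in (1 * mib, 2 * mib):
    print(f"gap {gap:>9,} bytes = {gap / cylinder:.3f} cylinders")

Both gaps are well below one cylinder, which would explain why a CHS-minded view rounds them away while GParted, which counts plain sectors, shows them.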

Here is an example of a partition recovery (to give you an idea of the kind of approach):
http://www.msfn.org/...-after-bsy-fix/
and another one:
http://www.msfn.org/...ith-value-data/

jaclaz

#25
pointertovoid

    Advanced Member

  • Member
  • 472 posts
  • Joined 16-January 09

...from your explanation I infer that large chunks of data, linearly scooped up from the HDD (from sector X to sector Y), are being dumped into the RAM area that is set aside for DMA on the motherboard!?

Bonsoir - Goedenavond, Johan! Yes, this is the way I believe I understand it. Until I change my mind, of course.

...with *Direct I/O* UNchecked, and the results were stunning: the graphical performance reading in ATTO Disk Benchmark went up to 1600 MB/s, almost 3 times the maximum SATA III bandwidth!!! Hence I think that by buffering or caching ATTO means the RAM-DMA on the motherboard...

What ATTO could mean by "Direct I/O" is that it does NOT use Windows' system cache.

The system cache (since W95 at least) is a brutal feature: every read from or write to disk is kept in main RAM as long as no room is requested in this main memory for other purposes. If data has been read or written once, it stays available in RAM.

Win takes hundreds of megabytes for this purpose, with no absolute size limit. So the "cache" on a hard disk, which is much smaller than the main RAM, cannot serve as a cache with Win, since this OS will never re-ask for recent data. Silicon memory on a disk can only serve as a buffer - which is useful especially with NCQ.

This is very clear to observe with a diskette: reading or writing on it takes many seconds, but re-opening a file already written is immediate.

-----

Cluster size matters little, as a result of the UDMA command: it specifies a number of sectors to be accessed, independently of cluster size and position. A disk doesn't know what a cluster is. Only the OS knows it, to organize its MFT (or FAT or equivalent) and compute the corresponding sector address. The only physical effect on a disk is where files (or file chunks, if fragmented) begin; important on flash storage, less important in a RAID of mechanical disks, of zero importance on a single mechanical disk where tracks have varied, unpredictable, bizarre sizes within one disk.
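To put a number on "FAT size (important for performance)", here is the raw arithmetic for a hypothetical 1 TB FAT32 data volume (purely illustrative; real format tools cap the cluster count and would refuse some of these combinations):

VOLUME = 10**12                    # 1 TB, illustrative
for cluster in (4096, 32768, 65536):
    n_clusters = VOLUME // cluster
    fat_bytes = n_clusters * 4     # FAT32 keeps one 32-bit entry per cluster
    print(f"cluster {cluster:>6}: {n_clusters:>12,} clusters, FAT = {fat_bytes / 1e6:7.1f} MB")

Smaller clusters mean a much bigger table for the OS to walk and cache; that is where cluster size genuinely matters, while the platter-level transfer rate does not change.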

Edited by pointertovoid, 18 June 2012 - 04:20 PM.




