Seagate Barracuda 7200.11 Troubles


#1076
poolcarpet

  • Member
  • 108 posts
  • Joined 02-January 09
If it's true that it's a 1:65,536 chance at failure - I think I should go buy some lottery tickets or something. I'm bound to win SOMETHING if I'm that 'lucky'... For those with two or more 7200.11 bricks, I suggest you STOP reading now and go buy your tickets or enter some contests or whatever! The odds mentioned might just be true...

You estimate a 1:65,536 chance at failure. You honestly think there are only about 1,800 drives that will ever be affected by this issue? Seagate wouldn't even have an intern look at a problem that small, let alone stonewall for months, release firmware updates, and offer free data recovery. They'll end up spending a lot more than the $10,000 they made selling those drives.
Anyways.





#1077
dadou
  • Member
  • 3 posts
  • Joined 27-January 09

2. The majority of the people here, when contacting Seagate, were told that there were no problems and to send in the drive for RMA; for data recovery, please pay. It has since changed, as Seagate is now admitting a problem with the hard disk and has asked people to update the firmware and contact them for free recovery (incidentally, has anyone actually got their drives repaired for FREE??)


I did get my drive repaired for free by Seagate data recovery (it's confirmed on the bill), but I am still waiting to get it back (it should arrive within days).

#1078
jaclaz

    The Finder

  • Developer
  • 14,689 posts
  • Joined 23-July 04

Also, make up your mind. You say on your site, and I quote, "...ships 10,000,000 Barracuda disk drives a month..." - and yet you chide jaclaz for his numbers. Your 120,000,000 drives a year and his 111,111,111 drives a year are remarkably similar, don't you think? Actually, your number is higher. Huh. :blink:


Well, you cannot do 12*10,000,000, you have to take into account some holidays....;)

jaclaz

#1079
Oliver.HH
  • Member
  • 7 posts
  • Joined 28-January 09

Many of you are still making outrageous statements about the depth of the problem.

I'd say some of us made attempts at judging the affected drive population. They put all their numbers and assumptions on the table for further discussion.

Try getting the exact number of 7200.11 disks Seagate shipped

That's not a published number, right? Are you saying this simply to disparage any other attempt to estimate that number?

So think about how many QC test stations there are on the floor and consider that it is more likely that only one of them was configured to leave the trigger code on the disks.

Now you're making wild guesses without any factual basis. You don't know the percentage of test stations writing the "trigger code". That's a number Seagate didn't dare to publish so far and that might be for a reason.

A “lot” of people are having this issue? Millions? I don't see Dell, HP, Sun, IBM, EMC, and Apple making press releases about how Seagate burned them. You don't think Apple would drop Seagate in a heartbeat if they felt Seagate had a real-world, high-risk problem?

Pure speculation. You claim to be a technical expert, but you're making assumptions based on corporate psychology. In addition, you're ignoring two little facts: (1) OEMs may be legally responsible for damages incurred by their customers. (2) There are not so many disk drive manufacturers left that a large-scale buyer would light-heartedly agree to reduce their number even further.

The only way to explain the quiet from the PC vendors is that the risk is profoundly low.

That's the only way you can imagine. We might or might not agree. Anyway, it's probably just too early to tell.

So all of these other vendors HAD to have known about the problem from the beginning. It would not be unreasonable for them to also receive the complete lists of affected serial numbers (But I am not saying they were given the list as fact, it is my opinion that they were given lists of the affected drives that Seagate shipped them).

You still believe this even though it took Seagate several attempts to publish a working online serial number check?

Here is a nice little post that shows you that the 7200.11 disks you all have are "rated" for only 2400 hours use per year.

You are misstating the facts. Seagate simply states the usage pattern employed for its AFR and MTBF calculations (2,400 power-on hours, 10,000 start/stop cycles). That does not mean at all that desktop drives have a higher probability of failure when used 4,800 hours per year, or any other number for that matter. You cannot tell: Seagate didn't publish data for alternative usage patterns. So you're the one spreading FUD here.

BTW, in some respects server disk drives operating 24/7 can have a weaker design compared to desktop, let alone notebook, drives: they don't need to withstand a high number of start/stop cycles. So a higher price point doesn't necessarily mean a more robust design for every usage scenario.

#1080
poolcarpet

  • Member
  • 108 posts
  • Joined 02-January 09
The 7200.11 MTBF is 750,000 hours and the AFR is 0.34%. These numbers are supposedly based on 2,400 hours of use per year (that's about 6.5 hours every day over 1 year).

Even if they are calculated based on 2,400 hours per year or 8,760 hours per year, it doesn't mean anything to us.

Firstly, AFR and MTBF values are for HARDWARE failures, not firmware defects.

Secondly, assuming the initial 2,400 hours of usage per year:

750,000/2,400 = 312.5 years between failures
1/312.5*100 ≈ 0.32% (which roughly matches the 0.34% AFR quoted by Seagate).

This means that over one year, about 0.34% of all 7200.11 drives will fail.

Now assume we use the hard disks 24x7, or 8,760 hours per year:

750,000/8,760 ≈ 85.6 years between failures
1/85.6*100 ≈ 1.17%

That's an increase in the failure rate of more than 3 times. Assuming Seagate does ship 120,000,000 'cudas in a year, we're talking about:

408,000 disk failures in one year out of 120,000,000 disks - 0.34% AFR
about 1,400,000 disk failures in one year out of 120,000,000 disks - 1.17% AFR

While the absolute numbers look huge, they are still a small fraction of the 120,000,000 disks. This still does not explain how an AFR/MTBF calculation based on 2,400 hours is supposed to 'justify' the apparently 'higher rates' of failure.

Besides, as mentioned, MTBF and AFR are quoted for HARDWARE FAILURES, not firmware defects. If you have a firmware defect and the trigger conditions occur, the failure rate is effectively 100% - why do you think Seagate is releasing the SD1A firmware to fix this problem?
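For anyone who wants to play with these numbers, here's a rough Python sketch of the same back-of-the-envelope arithmetic. It uses the textbook approximation AFR = 1 - exp(-POH/MTBF), so it prints slightly lower figures than the rounded ones above (about 0.32% and 1.16%), and the 120,000,000 yearly shipment figure is just the speculative number being tossed around in this thread, not an official one.

import math

MTBF_HOURS = 750_000          # Seagate's quoted MTBF for the 7200.11
DRIVES_SHIPPED = 120_000_000  # speculative yearly volume discussed in this thread

def afr(poh_per_year: float, mtbf: float = MTBF_HOURS) -> float:
    """Annualized failure rate for a given number of power-on hours per year."""
    return 1 - math.exp(-poh_per_year / mtbf)

for poh in (2_400, 8_760):
    rate = afr(poh)
    print(f"{poh:>5} POH/year -> AFR ~ {rate:.2%}, "
          f"~{rate * DRIVES_SHIPPED:,.0f} failures out of {DRIVES_SHIPPED:,}")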




Now for something else ... all you 7200.11 owners, to help counteract the flames many of you sent me in private or online somewhere .. I am not some stealth PR firm hired by Seagate. Here is a nice little post that shows you that the 7200.11 disks you all have are "rated" for only 2400 hours of use per year. http://storagesecret...usage-annually/. These disks are desktop disks and were never designed for 24x7 use, or even a high duty cycle. Even though I have some myself, I use them as part of a RAID group, so when one of them dies I will just replace it. If any of you have non-RAIDed (RAID with parity, not RAID0) 7200.11s, then you need to make sure you back up often.



#1081
icefloe01

    Junior

  • Member
  • 61 posts
  • Joined 04-January 09

How do I know which problem I have? The only thing I know is that my HDD does not boot up. It worked up until this morning; after a reboot it got stuck at "Boot System Failure", and when I ran diagnostics, it did not find the "HDD".
Thanks
Tony


Does the hard drive spin up when you power on the system? If it does, yet the BIOS does not find it, and you do NOT hear any constant click-click-click noise (it would be quite noticeable), then you most likely have the "BSY" problem and would need to reset the drive through its serial port. That dlethe guy cannot make the ASSumption that you have a hardware failure, as he did not bother to ask you any specifics about what is happening in your situation.

#1082
aerostop
  • Member
  • 1 posts
  • Joined 30-January 09
I'm just going to quickly post this before I go to bed; sorry if I've put it in the wrong place.

After refusing to accept that my only option was to send my ST3500320AS (part number ending in 303) to Seagate for a month, I thought I'd give the UART connection a go, but I couldn't find a ready-made RS232-TTL level converter in my town. Not wanting to wait until next week for something from eBay, I decided to follow this simple example to build my own.

I'm happy to report that this example worked perfectly at 5V TTL voltage from the MAX232 chip, with the only difference from the schematics on that page being that I pulled power from a stripped USB lead which I plugged in separately.

The parts cost less than $20 AUD and were all available, in stock, from my local 'Jaycar' dealer. If you're not experienced with this sort of thing, just get a 'breadboard' like I did, a couple of lengths of different coloured wire, some pins and some heatshrink to ensure insulation, and you can't really go wrong. Just remember to check the polarity of your capacitors and connect them as indicated by the + on the diagram.

To test the device I connected the TTL Tx and Rx together, transmitted some random data, e.g. 'hello', and had it echoed back through the Rx. It is important to note that if power isn't connected to the device, it will echo back the same no matter what. So also test with the TTL Tx and Rx disconnected to make sure it isn't doing that, thus confirming that the chip is actually getting power.
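If you happen to have Python and the pyserial package handy, the same loopback check can be scripted. This is only a sketch of the manual test described above, not part of the Seagate fix itself; the port name and the 9600 baud rate are placeholders you'd adjust for your own setup.

import serial

PORT = "COM1"  # placeholder - use your actual serial port (e.g. /dev/ttyS0)

with serial.Serial(PORT, 9600, timeout=1) as ser:
    ser.write(b"hello")      # with TTL Tx and Rx jumpered together...
    echoed = ser.read(5)     # ...this should come straight back
    if echoed == b"hello":
        print("Echo received. Now repeat with Tx/Rx DISCONNECTED;")
        print("if you STILL get an echo, the converter chip is not powered.")
    else:
        print("No echo - check wiring, power and the port name.")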

I followed the instructions and pics here to remove the BSY state on my drive, but I also checked everything against the instructions on the LBA 0GB fix page. I did notice one discrepancy between the different sets of instructions: the first power cycle of the drive is said to be done before the 'G-list Erase' command in one set of instructions, but after it in the other. I power cycled the drive before AND after, while holding my breath, and it worked fine.

Excellent. Thanks Gradius2 and Fatlip + anyone else who had anything to do with this fix. It's saved me a month of down time.

If anyone would like any more info or help with this, I'll be glad to do what I can.


Aerostop

#1083
anonymous
  • Member
  • 4 posts
  • Joined 27-January 09
Google says "Santools Inc." is at:

3133 Freedom Ln
Plano, TX 75025

and has a picture of their "office." :-)

http://maps.google.c...&...=1&ct=image

(Click on "Street View" to see the lawn.)

#1084
WeatherPaparazzi

    Newbie

  • Member
  • 20 posts
  • Joined 20-January 09
OK, I have the drive hooked up, but in HyperTerminal all I get is gibberish. It shows it is communicating with the drive. I tried flipping the Tx and Rx connections and then all I got was an arrow, but I could not get anything by pressing Ctrl+Z.

The board was on the HDA and I got this; then I removed the board from the HDA and still got this. Power comes from 2 AA batteries and the power LED comes on on the RS232-TTL board.

Is there something wrong in HyperTerminal?

#1085
Gibby
  • Member
  • 8 posts
  • Joined 28-January 09
Sorry, guys... there's no way on earth Seagate made or shipped anywhere near 100M 7200.11's last year.

Here's the July - Sept '08 10Q (quarterly financial statement) Seagate filed in October.

On page 46:

Unit shipments for our products in the quarter ended October 3, 2008 were as follows:

Enterprise —5.2 million, flat from the immediately preceding quarter and up from 4.6 million units in the year-ago quarter.
Mobile —9.8 million, up from 6.9 million and 7.9 million units in the immediately preceding quarter and year-ago quarters, respectively.
Desktop —28.2 million, up from 25.4 million units in the immediately preceding quarter and down from 29.0 million units in the year-ago quarter.
Consumer —4.8 million, down from 5.7 million units in both the immediately preceding and year-ago quarters.


The Desktop channel includes all the OEM drives. Although some may be branded vanilla Seagate and Maxtor, they're what's already installed in or available at the OEMs - Dell, HP, etc. The drives reported in various forums across the internet are part of the 4.8M Consumer channel drives sold that quarter through non-OEM distributors and retailers. A drive from an online distributor like Newegg is still a Consumer drive, despite being advertised as 'OEM' (= just not retail packaged).

The Desktop channel (real) OEM drives are the Dell and HP variants that use different firmware. We're not hearing about massive problems at OEMs because there apparently IS NO problem. Different production line, different firmware, different testing. For example, the DE15 firmware Dell drives use is not subject to this problem (that we know of). Firmware updates to OEM 7200.11 drives only affect the hesitation/stuttering problem. And nobody is reporting problems here (that I have seen) with a Seagate-branded OEM drive shipped WITH a computer. Almost everyone here with problems bought the drive(s) separately through online distributors or regular retailers, i.e., Seagate's 'Consumer' channel.

So we're talking about roughly 5M consumer drives in the third calendar quarter of 2008, or 1.6M/month - only a portion of which actually were 1) Seagate (vs. Maxtor units) and 2) specifically 7200.11s (which didn't start shipping in volume until sometime in February). So we're looking at a potential population of MAYBE 10M affected 7200.11 drives for all of 2008, tops. The only way you can get to 100M+ drives is to miscount all the OEM Desktop-channel drives along with the Consumer-channel drives. Considering ALL the consumer-channel drives Seagate shipped in 2008, I would guess that the non-Chinese, non-CCxx firmware 7200.11s potentially affected are in the single-digit millions. And a substantial portion of THOSE have not even been sold or installed yet - they're still in the distribution channels.

#1086
Gibby
  • Member
  • 8 posts
  • Joined 28-January 09
On a different note, the 7200.11 MOOSE firmware seems to apply to the 188GB-platter drives, while the BRINKS firmware seems to go with the 250GB-platter version of the 7200.11s. Is this consistent with what people are seeing in terminal diagnostics? The information is available at the T> prompt by using the CTRL+L key combo.

Seagate's initial firmware 'fix' (later pulled) was only supposed to be for MOOSE drive firmware, although I think it would load on a BRINKS drive and brick it. The latest firmwares are for either the MOOSE or BRINKS drives, depending on what Seagate directs you to download based on S/N and P/N, and this can be seen in the firmware file name. I don't know of any app that will give you this information outside of the diagnostic TTY port command above.
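For those already wired up over the TTL adapter, a quick way to read this without HyperTerminal is sketched below. It assumes Python with pyserial and the 38400 8-N-1 settings this fix normally uses; CTRL+Z brings up the F3 T> prompt and CTRL+L (per the post above) dumps drive information including the firmware package name. Treat it as a rough sketch, not a polished tool.

import serial, time

PORT = "COM1"  # placeholder - use your actual serial port

with serial.Serial(PORT, 38400, bytesize=8, parity="N", stopbits=1, timeout=2) as ser:
    ser.write(b"\x1a")    # CTRL+Z -> diagnostic prompt (F3 T>)
    time.sleep(0.5)
    ser.write(b"\x0c")    # CTRL+L -> drive information dump
    time.sleep(0.5)
    print(ser.read(4096).decode("ascii", errors="replace"))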

#1087
anonymous
  • Member
  • 4 posts
  • Joined 27-January 09
Gibby said, "We're not hearing about massive problems at OEMs because there apparently IS NO problem. Different production line, different firmware, different testing. Fore example, the DE15 firmware Dell drives use is not subject to this problem (that we know of)."

Gibby, have you looked at the Dell web site? From:

http://en.community....spx?PageIndex=3

"On Wednesday, January 28, 2009, Dell has issued a firmware update DE13 for the ST3500620AS, stating:

Level of Importance: Urgent
Dell highly recommends applying this update as soon as possible. The update contains changes to improve the reliability and availability of your Dell system.

Fixes and Enhancements
The DE13 firmware corrects potential hang on power up. Drive will appear not accessible after hang."

#1088
poolcarpet

  • Member
  • 108 posts
  • Joined 02-January 09
I just fixed mine and I had no problems. Make sure your HyperTerminal setup is correct:

1. Make sure you are connecting to the correct COM port
2. Make sure the settings are correct - refer to the screenshot given here:

http://www.msfn.org/...p...st&p=828228

If it still doesn't work, then I guess you need to check your TTL board (there's a scripted sanity check sketched below).
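If HyperTerminal keeps showing gibberish even with the screenshot settings, the baud rate is the usual culprit. Here is a small, hedged pyserial sketch that tries a few rates and reports which one returns mostly printable ASCII after CTRL+Z; the port name is a placeholder, and 38400 8-N-1 is simply the setting this fix normally uses.

import serial, time

PORT = "COM1"  # placeholder - use your actual serial port

for baud in (38400, 19200, 9600):
    with serial.Serial(PORT, baud, bytesize=8, parity="N", stopbits=1, timeout=1) as ser:
        ser.reset_input_buffer()
        ser.write(b"\x1a")                      # CTRL+Z
        time.sleep(0.5)
        reply = ser.read(256)
        printable = sum(32 <= b < 127 for b in reply)
        sane = bool(reply) and printable / len(reply) > 0.8
        print(f"{baud:>6} baud: {'looks sane' if sane else 'gibberish or empty'} {reply[:20]!r}")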


OK, I have the drive hooked up, but in HyperTerminal all I get is gibberish. It shows it is communicating with the drive. I tried flipping the Tx and Rx connections and then all I got was an arrow, but I could not get anything by pressing Ctrl+Z.

The board was on the HDA and I got this; then I removed the board from the HDA and still got this. Power comes from 2 AA batteries and the power LED comes on on the RS232-TTL board.

Is there something wrong in HyperTerminal?



#1089
Gibby
  • Member
  • 8 posts
  • Joined 28-January 09
Didn't see that one - I was Googling for Dell, HP, Seagate, firmware, etc. and didn't get anything interesting back. I see the Dell forum thread you posted was actually started by Oliver.HH and he subsequently posted a reply with Dell's firmware fix. This is apparently just for the ST3500620AS. What's up with that, Oliver.HH? It almost looks like they had no idea about the problem until recently. Did Dell support have anything to say?

Looking at the HP site, I can only find a single unanswered post here regarding the problem.

On one hand, I can't believe this problem is 'as big' with OEM drives considering a single post on either forum about the problem. On the other hand, it wouldn't be hard to imagine Dell or HP treadmill support farms missing such a widespread problem - they just send out another drive under warranty whenever anyone calls about a bricked drive and the OEMs return the old one to Seagate for credit. With all the drives they have out there, you would expect them to have recognized the problem months ago and worked out the firmware issues with Seagate. Is it possible that they simply have not recognized a pattern yet? Or is it an extremely rare issue on OEM drives?

I guess I'm really confused, now.

#1090
poolcarpet

  • Member
  • 108 posts
  • Joined 02-January 09
You are all assuming that Dell, HP, and other large vendors are using this 7200.11 SD15 in their setups. Who knows if that is a fact? If they are not even using the 7200.11 SD15, then it kind of explains why there is no big news in that space, doesn't it?

Didn't see that one - I was Googling for Dell, HP, Seagate, firmware, etc. and didn't get anything interesting back. I see the Dell forum thread you posted was actually started by Oliver.HH and he subsequently posted a reply with Dell's firmware fix. This is apparently just for the ST3500620AS. What's up with that, Oliver.HH? It almost looks like they had no idea about the problem until recently. Did Dell support have anything to say?

Looking at the HP site, I can only find a single unanswered post here regarding the problem.

On one hand, I can't believe this problem is 'as big' with OEM drives considering a single post on either forum about the problem. On the other hand, it wouldn't be hard to imagine Dell or HP treadmill support farms missing such a widespread problem - they just send out another drive under warranty whenever anyone calls about a bricked drive and the OEMs return the old one to Seagate for credit. With all the drives they have out there, you would expect them to have recognized the problem months ago and worked out the firmware issues with Seagate. Is it possible that they simply have not recognized a pattern yet? Or is it an extremely rare issue on OEM drives?

I guess I'm really confused, now.



#1091
WeatherPaparazzi

    Newbie

  • Member
  • 20 posts
  • Joined 20-January 09

I just fixed mine and I had no problems. Make sure your HyperTerminal setup is correct:

1. Make sure you are connecting to the correct COM port
2. Make sure the settings are correct - refer to the screenshot given here:

http://www.msfn.org/...p...st&p=828228

If it still doesn't work, then I guess you need to check your TTL board.


The board is an RS232-TTL board. There is no cable hooked up, since I plugged it right into the RS232 port on the back of an older laptop. I tried both connections and it still just gave me gibberish on the screen.

Do I need a cable between the RS232 board and the computer? That is the only step I am missing, since I just plugged it directly into the computer.

The board I got is an MDRS3232m Ver 1.0.

The power LED lights up when I hook up the 3V to it with 2 AA batteries.

It's trying to communicate, but when I press Ctrl+Z it just gives me garbage on the screen.

The hard disk does spin up if I have power to it. It's just stuck in BSY.

Any ideas?

#1092
poolcarpet

  • Member
  • 108 posts
  • Joined 02-January 09
Perhaps you can post a photo of it and see if anyone else here is using the same board?

I just fixed mine and I had no problems. Make sure your HyperTerminal setup is correct:

1. Make sure you are connecting to the correct COM port
2. Make sure the settings are correct - refer to the screenshot given here:

http://www.msfn.org/...p...st&p=828228

If it still doesn't work, then I guess you need to check your TTL board.


The board is an RS232-TTL board. There is no cable hooked up, since I plugged it right into the RS232 port on the back of an older laptop. I tried both connections and it still just gave me gibberish on the screen.

Do I need a cable between the RS232 board and the computer? That is the only step I am missing, since I just plugged it directly into the computer.

The board I got is an MDRS3232m Ver 1.0.

The power LED lights up when I hook up the 3V to it with 2 AA batteries.

It's trying to communicate, but when I press Ctrl+Z it just gives me garbage on the screen.

The hard disk does spin up if I have power to it. It's just stuck in BSY.

Any ideas?



#1093
Gibby
  • Member
  • 8 posts
  • Joined 28-January 09

You are all assuming that Dell, HP, and other large vendors are using this 7200.11 SD15 in their setups. Who knows if that is a fact? If they are not even using the 7200.11 SD15, then it kind of explains why there is no big news in that space, doesn't it?


No - I was assuming that they were using their OEM version of the 7200.11 with DE12 or HP12 or whatever non-SDxx firmware, hence my comment that they don't appear to be affected by the bricking problem. It turns out that Dell IS having this issue with at least one Seagate model and THEIR Dell OEM firmware: DE12. The link shows Dell published a firmware update (DE13) two days ago for the Dell OEM ST3500620AS to prevent the bricking/disappearing-drive problem being discussed here.

A Dell or HP user is a lot less likely than people here to be Googling problems with Seagate drives and firmware. They're not likely to show up here reporting their bad drives; they're much more likely to just call Dell or HP support and get a replacement drive. And neither Dell, HP nor Seagate would be eager to publish ANY news about significant problems with their OEM drives. So I'm wondering if there IS a problem with OEM drives that the OEMs haven't recognized yet or are just trying to keep quiet about. The other obvious possibility is that there have been NO significant problems with OEM 7200.11 drives and their corresponding firmware at all - with the single Dell exception just posted.

Edited by Gibby, 30 January 2009 - 10:41 PM.


#1094
jaclaz

    The Finder

  • Developer
  • 14,689 posts
  • Joined 23-July 04

Considering ALL the consumer-channel drives Seagate shipped in 2008, I would guess that the Non-Chinese, non-CCxx firmware 7200.11s potentially affected are in the single-digit millions. And a substantial portion of THOSE have not even been sold or installed yet - they're still in the distribution channels.


Well, using the "safety factor", I already used a speculative "base" of 10,000,000.

As already posted, were they 5,000,000 instead:

In other words, the hypothesis is that throughout 2008 Seagate manufactured between 10,000,000 and 100,000,000 drives of the "family".

Then the lower number is taken and multiplied by the smallest possible incidence of "affected drives" per Seagate's statement, 0.002, i.e. the assumption that 1/500 of the drives in the family may be affected.

Since "some percentage" can mean ANY number <1, the resulting 20,000 can just as easily come from a smaller number of drives "in the family" multiplied by a higher percentage:
10,000,000*0.002=20,000 (1/500)
5,000,000*0.004=20,000 (1/250)
2,500,000*0.008=20,000 (1/125)
1,000,000*0.02=20,000 (1/50)
while still within the same definition of "some percentage", and at the lower end of it...
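Just to make the combinatorics explicit, here is a trivial Python sketch of the same table; nothing in it is new data, it only re-multiplies the speculative figures above.

for population, incidence in [(10_000_000, 0.002),
                              (5_000_000, 0.004),
                              (2_500_000, 0.008),
                              (1_000_000, 0.02)]:
    print(f"{population:>10,} drives x {incidence:.3f} (1/{round(1 / incidence)}) "
          f"= {population * incidence:,.0f} affected")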


And in any case, the "objective" of the speculation:

Without official data, and as clearly stated, the above numbers are just speculative and, while they might be inaccurate, the order of magnitude seems relevant enough to rule out that the 100÷150 reports here on MSFN represent a significant fraction (1/7 or 1/8) of all affected drives.

was NOT to determine exactly:
  • how many drives were produced
  • how many drives are affected and have developed the problem
only to check whether it should be a matter of a few hundred (a "few" or a "handful") as opposed to several thousand (a "lot" or "too many to count").

jaclaz

Edited by jaclaz, 31 January 2009 - 08:04 AM.


#1095
Oliver.HH
  • Member
  • 7 posts
  • Joined 28-January 09

Didn't see that one - I was Googling for Dell, HP, Seagate, firmware, etc. and didn't get anything interesting back. I see the Dell forum thread you posted was actually started by Oliver.HH and he subsequently posted a reply with Dell's firmware fix. This is apparently just for the ST3500620AS. What's up with that, Oliver.HH? It almost looks like they had no idea about the problem until recently. Did Dell support have anything to say?

I became aware of the problem through the media and found out that Seagate's support website offered no remedy for OEM drives, while the Dell website had no information about the problem at all. I contacted Dell and Seagate support to find out whether the DE12 firmware was affected. Seagate provided only an automated answer not related to my case, while Dell support was not aware of the issue at that time. Then Dell's fix quietly appeared on their website, while the German support staff still did not have any further information. However, I do have a support contact who tries to help as much as possible. He is now attempting to find out whether the published Dell fix is really the correct one (I'm not sure, as there might be a BRINKS/MOOSE confusion).

Looking at the HP site, I can only find a single unanswered post here regarding the problem.

On one hand, I can't believe this problem is 'as big' with OEM drives considering a single post on either forum about the problem. On the other hand, it wouldn't be hard to imagine Dell or HP treadmill support farms missing such a widespread problem - they just send out another drive under warranty whenever anyone calls about a bricked drive and the OEMs return the old one to Seagate for credit. With all the drives they have out there, you would expect them to have recognized the problem months ago and worked out the firmware issues with Seagate. Is it possible that they simply have not recognized a pattern yet? Or is it an extremely rare issue on OEM drives?

I guess what we're seeing here are huge delays in corporate information pipelines. My impression is that many OEM customers aren't even aware that they've got a Seagate drive in their PCs, let alone that there's a firmware bug out there. While drive failures happen, the reason may not be diagnosed correctly for some time so it may just be too early for these huge support organisations to discover an unusual pattern of failures.

Initially, I could not tell whether my drive was affected, so my first question was whether Dell's DE12 firmware was based on Seagate's SD15 (or another affected version). Dell support did not have that information. So I read the manufacturing date from my drive's label (September) and compared it to the manufacturing dates of drives which had already failed (from the fail/fine thread in this forum). My impression was that my firmware had a high probability of being derived from the buggy ones, and this turned out to be true. In contrast, I've read statements on the web where people just compare the firmware version DE12 to the versions confirmed by Seagate as buggy and then incorrectly deduce that their firmware is OK (try googling "7200.11 +DE12").

By the way, Dell has another fix on their site for ST3750630AS and ST31000340AS drives.

Edited by Oliver.HH, 31 January 2009 - 08:41 AM.


#1096
mikesw

    Advanced Member

  • Member
  • 365 posts
  • Joined 05-October 05
If you search for "seagate firmware" on Dell's site you get 2,291 hits. Of course, some of them may be problem discussions and some the actual firmware patches. Hmmm, at a click per day that's several years' worth of clicking through all these links!

http://search.dell.c...&...cat=sup&p=1

:rolleyes:

Edited by mikesw, 31 January 2009 - 11:11 AM.


#1097
anonymous
  • Member
  • 4 posts
  • Joined 27-January 09
Well, the first link may be the most interesting one currently: it advises customers to keep power cycles for Seagate 3.5" 7.2K Nearline SATA Barracuda ES.2 drives (running MA07) to a minimum until new firmware is available at Dell.com in early February (target date: 2/10/2009).

http://support.dell....o...lang=en&cs=

#1098
poolcarpet

  • Member
  • 108 posts
  • Joined 02-January 09
Has anyone fixed their hard disk and updated to the SD1A firmware? For those who have done so, can you check in Seagate SeaTools whether the long DST fails? Mine is failing the long DST test, but the disk is working fine. I managed to format it to 100% and do a surface scan in Windows XP as well, and it all looks OK. I'm just wondering if anyone else is failing the long DST too.

Thanks!

#1099
Gibby
  • Member
  • 8 posts
  • Joined 28-January 09

And in any case, the "objective" of the speculation:

Without official data, and as clearly stated, the above numbers are just speculative and, while they might be inaccurate, the order of magnitude seems relevant enough to rule out that the 100÷150 reports here on MSFN represent a significant fraction (1/7 or 1/8) of all affected drives.

was NOT to determine exactly:
  • how many drives were produced
  • how many drives are affected and have developed the problem
only to check whether it should be a matter of a few hundred (a "few" or a "handful") as opposed to several thousand (a "lot" or "too many to count").

jaclaz


But those last two questions ARE what I'm trying to figure out. I'm not arguing with the figures you have come up with - they seem perfectly reasonable to me.

I should probably state what I'm looking for like this: If I bought a 7200.11 recently or I plan to,

1) what are the chances that it is/will be one of the affected ones (per Seagate), and
2) If it is one of the affected ones, what are the chances that it will fail by locking on boot?

#2 is a no-brainer: if I had a known affected drive, I would just update the firmware immediately because there is a substantial enough (to me) chance of it locking up in the near future.

#1 is easy if you have the machine or drive right in front of you - just punch the number in on the Seagate site. You either have an affected one or you don't.

#1 becomes a little trickier if the machines are not local, you don't have network access to them, and you don't know the exact model, S/N or firmware version, but you know they're almost all aftermarket 7200.11s. This is a problem one of my coworkers 'inherited' from a different company. These machines will all be replaced in the next year or two, and they can probably get away with just replacing whichever ones go down first. But how many are likely to be in the affected group now? They may choose to take their chances if 1% are affected, but they would need to accelerate the replacement schedule if 30% of the machines have an affected drive and might lock up.
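For what it's worth, question #1 for a whole fleet can be roughed out with a couple of lines of Python: if a fraction p of the retail drives is affected, a fleet of n machines expects about n*p affected drives, and the chance that at least one machine is affected is 1-(1-p)^n. The fleet size and p values below are purely illustrative, taken from the range being debated in this thread, not measured figures.

n_machines = 50  # hypothetical fleet size

for p in (0.002, 0.01, 0.30):
    expected = n_machines * p
    at_least_one = 1 - (1 - p) ** n_machines
    print(f"p = {p:.1%}: expect ~{expected:.1f} affected drives, "
          f"P(at least one) = {at_least_one:.1%}")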

The population he's interested in is whatever the distributors/retailers, i.e., TigerDirect/Newegg/Amazon/Big Box stores (USA) have been selling the last year - that's where the drives supposedly came from. I think that also applies to a lot of the people posting here as well, except I have no idea what the big retailers are outside the US.

The only other information (aside from here) is the occasional customer review or posting in other forums where someone bought these in bulk from the same sources. There have been reports of a 30-40% failure rate, or at least that proportion being in the affected group. I would guess that's at the high end of the range, simply because unhappy customers are more likely to post reviews than satisfied customers. But is the 'real' figure 0.2%, as Seagate claims for the entire product family? It appears to be higher than that, but does that make it 0.5% or 10% or 35% for retail drives? I was just scrounging around for a reasonable number in the absence of anything useful from the vendors or Seagate.

The issue of if or how many OEM drives are affected has no real impact on me, personally. I'm just curious about that one since it came up.

My experience (quite meaningless) is six 7200.11s: two affected, one bricked. I unbricked it successfully with the instructions here, so thanks to Gradius, fatlip and the hddguru forum people for your help.

#1100
christophocles
  • Member
  • 1 posts
  • Joined 31-January 09

My experience (quite meaningless) is six 7200.11s: two affected, one bricked. I unbricked it successfully with the instructions here, so thanks to Gradius, fatlip and the hddguru forum people for your help.

I also have six 7200.11 drives and one went down a week ago to the BSY bug after about 5 months of use. I am waiting on a TTL adapter so that I can attempt to unbrick it, and the reports here seem very encouraging. I don't want to go through the hassle of RMAing the drive and waiting an eternity to get it back. I currently have all of the drives powered down until I can fix the affected one and make sure my RAID is still OK before I patch each individual drive's firmware.

It's really frustrating how your drive can be perfectly fine one day and then *zap* - the data is inaccessible. HDDs are not something I play around with; I purchased an expensive high-end PSU and RAID controller just to help prevent random failure/data loss from occurring. Seagate really dropped the ball badly here, and they darn well better have it fixed with the new firmware. I'm not totally convinced yet. Thank %deity% I went with RAID-6. Funny coincidence, though, that I lost a drive within a couple of days of first reading about the issue.

I also think it's a total crock that only 0.2% of all drives are affected. It's public relations damage control. If the root cause analysis on this forum is correct, then it's only a matter of time before EVERY drive prematurely fails. You are rolling the dice every single time you reboot, and I hit the mark rebooting only about once a week for five months. Most people power down every night. Seagate even recommends proactively patching the firmware, but NO ONE who isn't already "in the know" is going to even think about patching HDD firmware until after their drive is already borked. It seems crazy that they'd ask someone like my grandmother to patch drive firmware on a drive that already contains sensitive data that (probably) isn't backed up. This whole issue is a time bomb, and it's going to get much, much worse as regular non-technical folks start losing all of their family photos and documents and then start flipping out and suing Seagate. This "free data recovery" is really going to put the hurt on Seagate as well. Repairing and patching all of these drives and attempting to copy data from them is a laborious process, I'm sure. We are only at the onset of the crapstorm - I expect this to make mainstream headline news this year.



