Gibby

Member
  • Posts: 8
  • Joined
  • Last visited
  • Donations: 0.00 USD
  • Country: United States

About Gibby

Gibby's Achievements

Reputation: 0
  1. Some of these monitors had problems with bad capacitors on the power supply. This wouldn't apply if the monitor has an external brick for power; if the power cord runs directly into the back of the monitor, then the power supply circuit is inside the back of the monitor. It probably wouldn't help you at this point, but it would be interesting to know if your monitors had that particular problem. You can see whether the capacitors have this problem just by looking at the top of them. Some examples are shown here. The ones to look at inside the monitor will be the largest ones, plus anything physically close to the big aluminum heatsinks. Replacing bad caps isn't too difficult. I doubt you would want to break out a soldering iron to replace components as part of your job, but if you're going to scrap them out anyway... might make a nice monitor at home.

     I'm in the process of recapping a Samsung SyncMaster 204B. Those don't get the pink stripe; the monitor just takes longer and longer to get bright when it's powered on, and eventually it won't come up at all. Different monitors exhibit all kinds of unusual problems when the power supply is failing, so this is just a guess. I have not heard of the pink stripe necessarily indicating a bad internal power supply, but a lot of the large Dell monitors were manufactured with bad power caps and start failing with all kinds of interesting artifacts on the screen.
  2. We could probably figure out some of this information with the controller diagnostics port (the RS232 / terminal thing) with the output of the Ctrl+L command, e.g.:

        [UNKNOWN MODEL]
        TetonST 2.0 SATA Moose Gen 3.0 (RAP fmt 10) w/ sdff (RV)
        Product FamilyId: 27, MemberId: 03
        HDA SN: 9QK01LX2, RPM: 7206, Wedges: 108, Heads: 6, Lbas: 575466F0, PreampType: 47 A8
        PCBA SN: 0000C816C5P6, Controller: TETONST_2(6399)(3-0E-3-1), Channel: AGERE_COPPERHEAD_LITE, PowerAsic: MCKINLEY Rev 51, BufferBytes: 2000000

        [UNKNOWN MODEL]
        TetonST4 SATA Brinks Gen3.1 (RAP14)
        Product FamilyId: 2D, MemberId: 07
        HDA SN: 6SZ<censored>, RPM: 7203, Wedges: 108, Heads: 2, Lbas: 2542EAB0, PreampType: 59 21
        PCBA SN: <censored>, Controller: TETONST_4(63A0)(3-0E-4-0), Channel: AGERE_COPPERHEAD_LITE, PowerAsic: MCKINLEY DESKTOP LITE Rev 94, BufferBytes: 1000000

        7200.11 ST3320613AS SD22
        TetonST4 SATA Brinks Gen3.1 (RAP14)
        Product FamilyId: 2D, MemberId: 07
        HDA SN: 6SZ02ZXZ, RPM: 7203, Wedges: 108, Heads: 2, Lbas: 2542EAB0, PreampType: 73 01
        PCBA SN: 0000M847KPRX, Controller: TETONST_4(63A0)(3-0E-4-0), Channel: AGERE_COPPERHEAD_LITE, PowerAsic: MCKINLEY DESKTOP LITE Rev 94, BufferBytes: 1000000

        7200.11 ST3320613AS
        TetonST4 SATA Brinks Gen2 1-Disc (RAP14)
        Product FamilyId: 2D, MemberId: 03
        HDA SN: 9SZ081HN, RPM: 7203, Wedges: 108, Heads: 2, Lbas: 2542EAB0, PreampType: 73 21
        PCBA SN: 0000C832VTX8, Controller: TETONST_4(63A0)(3-0E-4-0), Channel: AGERE_COPPERHEAD_LITE, PowerAsic: MCKINLEY Rev 94, BufferBytes: 1000000

     We would be left guessing what this reveals about the drive internals, but we could probably figure out some kind of patterns with enough data. And if you want to see your posts deleted quickly on the Seagate forums, just try asking about ANY of the information above. They will not even acknowledge the diagnostics port exists, and they certainly don't want users poking around in the drive controller's ROM.
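     A minimal sketch (my own guess, not anything Seagate documents) of how that pattern-hunting could start, assuming each Ctrl+L banner is captured to plain text over the serial port. The field names come straight from the dumps above; the regex and the 'Header' convention are assumptions.

        import re

        # Parse a captured Ctrl+L banner into a dict so firmware family,
        # head count, LBA count, etc. can be tabulated across many drives.
        FIELD_RE = re.compile(r"([A-Za-z ]+):\s*([^,\n]+)")

        def parse_banner(block: str) -> dict:
            """Pull 'Key: value' pairs out of one Ctrl+L banner."""
            info = {}
            lines = block.strip().splitlines()
            if lines:
                info["Header"] = lines[0].strip()  # model / firmware line
            for key, value in FIELD_RE.findall(block):
                info[key.strip()] = value.strip()
            return info

        sample = """7200.11 ST3320613AS SD22
        TetonST4 SATA Brinks Gen3.1 (RAP14)
        Product FamilyId: 2D, MemberId: 07
        HDA SN: 6SZ02ZXZ, RPM: 7203, Wedges: 108, Heads: 2, Lbas: 2542EAB0"""

        banner = parse_banner(sample)
        print(banner["Heads"], banner["Lbas"])  # -> 2 2542EAB0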
  3. > ... was NOT to determine exactly:
     > - how many drives were produced
     > - how many drives are affected and have developed the problem
     > only to check if it should be a matter of a few hundreds (a "few" or a "handful") as opposed to several thousands (a "lot" or "too many to count").
     > jaclaz

     But those last two questions ARE what I'm trying to figure out. I'm not arguing with the figures you have come up with; they seem perfectly reasonable to me. I should probably state what I'm looking for like this: if I bought a 7200.11 recently, or plan to, 1) what are the chances that it is (or will be) one of the affected ones (per Seagate), and 2) if it is one of the affected ones, what are the chances that it will fail by locking on boot?

     #2 is a no-brainer: if I had a known affected drive, I would just update the firmware immediately, because there is a substantial enough (to me) chance of it locking up in the near future. #1 is easy if you have the machine or drive right in front of you: just punch the number in on the Seagate site. You either have an affected one or you don't. #1 becomes a little trickier if the machines are not local, you don't have network access to them, and you don't know the exact model, S/N or firmware version, but you know they're almost all aftermarket 7200.11s. This is a problem one of my coworkers 'inherited' from a different company. These machines will all be replaced in the next year or two, and they can probably get away with just replacing whichever ones go down first. But how many are likely to be in the affected group now? They may choose to take their chances if 1% are affected, but they would need to accelerate the replacement schedule if 30% of the machines have an affected drive that might lock up. (A rough expected count under either assumption is sketched below.)

     The population he's interested in is whatever the distributors/retailers, i.e., TigerDirect/Newegg/Amazon/big-box stores (USA), have been selling in the last year; that's where the drives supposedly came from. I think that also applies to a lot of the people posting here as well, except I have no idea who the big retailers are outside the US. The only other information (aside from here) is the occasional customer review or posting in other forums where someone bought these in bulk from the same sources. There have been reports of a 30 - 40% failure rate, or at least that share being in the affected group. I would guess that's at the high end of the range, simply because unhappy customers are more likely to post reviews than satisfied ones. But is the 'real' figure 0.2%, as Seagate claims for the entire product family? It appears to be higher than that, but does that make it 0.5% or 10% or 35% for retail drives? I was just scrounging around for a reasonable number in the absence of anything useful from the vendors or Seagate.

     The issue of whether or how many OEM drives are affected has no real impact on me personally; I'm just curious about that one since it came up. My experience (quite meaningless): six 7200.11s, two affected, one bricked. I unbricked it successfully with the instructions here, so thanks to Gradius, fatlip and the hddguru forum people for your help.
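     Just to put rough numbers on the 1% vs. 30% scenarios above, a back-of-the-envelope binomial sketch. The fleet size is a made-up placeholder; only the two rates come from the discussion.

        from math import comb

        def prob_at_least(k: int, n: int, p: float) -> float:
            """P(at least k of n drives are in the affected group)."""
            return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                       for i in range(k, n + 1))

        N = 100  # hypothetical fleet size, for illustration only
        for p in (0.01, 0.30):
            print(f"affected rate {p:.0%}: expect {N * p:.0f} drives, "
                  f"P(10 or more) = {prob_at_least(10, N, p):.3f}")

     With a 1% rate, ten or more affected drives in a 100-machine fleet is essentially impossible; with a 30% rate it is a near certainty, which is why the replacement schedule decision hinges entirely on that unknown rate.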
  4. No - I was assuming that they were using their OEM version of the 7200.11 with DE12 or HP12 or whatever non-SDxx firmware, hence my comment that they don't appear to be affected by the bricking problem. It turns out that Dell IS having this issue with at least one Seagate model and THEIR Dell OEM firmware: DE12. The link shows Dell published a firmware update (DE13) two days ago for the Dell OEM ST3500620AS to prevent the bricking/disappearing-drive problem being discussed here.

     A Dell or HP user is a lot less likely than people here to be Googling problems with Seagate drives and firmware. They're not likely to show up here reporting their bad drives; they're much more likely just to call Dell or HP support and get a replacement drive. And neither Dell, HP nor Seagate would be eager to publish ANY news about significant problems with their OEM drives. So I'm wondering if there IS a problem with OEM drives that the OEMs haven't recognized yet or are just trying to keep quiet about. The other obvious possibility is that there have been NO significant problems with OEM 7200.11 drives and the corresponding firmware at all - with the single Dell exception just posted.
  5. Didn't see that one - I was Googling for Dell, HP, Seagate, firmware, etc. and didn't get anything interesting back. I see the Dell forum thread you posted was actually started by Oliver.HH, and he subsequently posted a reply with Dell's firmware fix. This is apparently just for the ST3500620AS. What's up with that, Oliver.HH? It almost looks like they had no idea about the problem until recently. Did Dell support have anything to say?

     Looking at the HP site, I can only find a single unanswered post here regarding the problem. On one hand, I can't believe this problem is 'as big' with OEM drives, considering there's a single post on either forum about it. On the other hand, it wouldn't be hard to imagine the Dell or HP treadmill support farms missing such a widespread problem - they just send out another drive under warranty whenever anyone calls about a bricked drive, and the OEMs return the old one to Seagate for credit. With all the drives they have out there, you would expect them to have recognized the problem months ago and worked out the firmware issues with Seagate. Is it possible that they simply have not recognized a pattern yet? Or is it an extremely rare issue on OEM drives? I guess I'm really confused now.
  6. On a different note, the 7200.11 MOOSE firmware seems to apply to 188GB-platter drives, while BRINKS firmware seems to go with the 250GB-platter version of the 7200.11s. Is this consistent with what people are seeing in terminal diagnostics? The information is available at the T> prompt by using the Ctrl+L key combo. Seagate's initial firmware 'fix' (later pulled) was only supposed to be for MOOSE drive firmware, although I think it would load on a BRINKS drive and brick it. The latest firmwares are for either the MOOSE or the BRINKS drives, depending on what Seagate directs you to download based on S/N and P/N, and the family can be seen in the firmware file name. I don't know of any app that will give you this information outside of the diagnostic TTY port command above.
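     For what it's worth, the mapping could be scripted off the banner line - a trivial sketch, with the 188GB/250GB association being the guess above, not something confirmed by Seagate.

        # Infer platter generation from the family name in a Ctrl+L banner,
        # per the (unconfirmed) MOOSE=188GB / BRINKS=250GB association above.
        PLATTER_BY_FAMILY = {"moose": "188GB platters", "brinks": "250GB platters"}

        def platter_from_banner(banner_line: str) -> str:
            lowered = banner_line.lower()
            for family, platter in PLATTER_BY_FAMILY.items():
                if family in lowered:
                    return f"{family.upper()}: {platter}"
            return "unknown family"

        print(platter_from_banner("TetonST4 SATA Brinks Gen3.1 (RAP14)"))
        # -> BRINKS: 250GB platters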
  7. Sorry, guys... there's no way on earth Seagate made or shipped anywhere near 100M 7200.11's last year. Here's the July - Sept '08 10-Q (quarterly financial statement) Seagate filed in October. On page 46: the Desktop channel includes all the OEM drives. Although some may be branded vanilla Seagate and Maxtor, they're what's already installed in or available at the OEMs - Dell, HP, etc. The drives reported in various forums across the internet are part of the 4.8M Consumer-channel drives sold that quarter through non-OEM distributors and retailers. A drive from an online distributor like Newegg is still a Consumer drive, despite being advertised as 'OEM' (= just not retail-packaged). The Desktop-channel (real) OEM drives are the Dell and HP variants that use different firmware.

     We're not hearing about massive problems at OEMs because there apparently IS NO problem. Different production line, different firmware, different testing. For example, the DE15 firmware Dell drives use is not subject to this problem (that we know of). Firmware updates to OEM 7200.11 drives only affect the hesitation/stuttering problem. And nobody is reporting problems here (that I have seen) with a Seagate-branded OEM drive shipped WITH a computer. Almost everyone here with problems bought the drive(s) separately through online distributors or regular retailers, i.e., Seagate's 'Consumer' channel.

     So we're talking about 5M consumer drives in the third calendar quarter of 2008, or about 1.6M/month - only a portion of which were actually 1) Seagate (vs. Maxtor) units and 2) specifically 7200.11s (which didn't start shipping in volume until sometime in February). So we're looking at a potential population of MAYBE 10M affected 7200.11 drives for all of 2008, tops; the arithmetic is written out below. The only way you can get to 100M+ drives is to miscount all the OEM Desktop-channel drives along with the Consumer-channel drives. Considering ALL the consumer-channel drives Seagate shipped in 2008, I would guess that the non-Chinese, non-CCxx-firmware 7200.11s potentially affected are in the single-digit millions. And a substantial portion of THOSE have not even been sold or installed yet - they're still in the distribution channels.
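     Writing that arithmetic out: the quarterly figure is from the 10-Q as quoted above, while the Seagate-vs-Maxtor and 7200.11 share fractions are placeholders I picked just to show how a 'maybe 10M, tops' bound falls out.

        consumer_per_quarter = 4.8e6          # Consumer-channel drives, Jul-Sep '08 10-Q
        per_month = consumer_per_quarter / 3  # ~1.6M drives/month
        months_shipping = 11                  # 7200.11 volume shipments from ~Feb 2008
        seagate_share = 0.8                   # assumed split vs. Maxtor-branded units
        model_share = 0.7                     # assumed fraction that are 7200.11s

        upper_bound = per_month * months_shipping * seagate_share * model_share
        print(f"~{upper_bound / 1e6:.1f}M 7200.11s at most")  # -> ~9.9M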
  8. > True, but we don't have to know. The probability of a drive failing is the same as long as at least one event is logged per power cycle.

     No - the chance of a drive failing due to this condition is zero unless it is powered off.

     > All that matters is that the event counter changes at all from power-on to power-off. It does not matter whether it increases by 1, or by 50, or by any other value, as long as such values are equally probable.

     But the events are hardly equally probable. It's much more likely that you're going to get a very small number each power cycle; the chances of dozens or hundreds of entries each power cycle are almost non-existent unless your drive is hosed to begin with. And consider this: if the log incremented by EXACTLY one each power cycle (I don't know if that's even possible), what's the probability an (affected) drive will fail? It's 100%. It will fail with certainty, because it WILL land on the bad entry on the 320th power cycle. For a lot of home users, assuming one power cycle per day, it would take just under a year for this to happen. Just an example, of course.

     We have to consider that a lot of drives from the list can be seen failing after around 60 - 100 days. Would this be something roughly like 60 - 100 power cycles for those drives? So maybe for the first 'batch' of bad drives, you're seeing something like 3 - 5 log entries on average per power cycle. My point is that the probability of an affected drive failing may be as high as something like 3, 4 or 5:1. We have probably not seen the bulk of failures yet - it's too early! And the lower the average number of log entries per power cycle, the higher the probability eventually becomes of landing on the initial 320th entry and on each 256-entry wrap after that. It will take longer, i.e., more power cycles, but there's a better chance of hitting the bad entry each time the counter comes around. Even if the average number of entries is very low, like 0.5 per power cycle, there is an extremely high chance of the drive failing - eventually. It's just going to take around 640 power cycles, but you are unlikely to skip over ending exactly on entry 320 (or on one of the 256-entry multiples past it). (A quick simulation of this is sketched below.)

     Figuring out the probability of failure on any single power cycle isn't really useful. The question most 7200.11 owners have is: what are the chances my drive will fail AT ALL in the next year or two?
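     To put a number on 'eventually': a quick Monte Carlo sketch of the model described above. The failure rule (counter sitting exactly on entry 320, or 320 plus a multiple of 256, at power-off) follows the discussion; treating entries-per-cycle as Poisson-distributed is my own assumption.

        import math
        import random

        def poisson(mean: float) -> int:
            """Knuth's method; adequate for the small means used here."""
            L = math.exp(-mean)
            k, p = 0, 1.0
            while True:
                p *= random.random()
                if p <= L:
                    return k
                k += 1

        def prob_fail(mean_entries: float, cycles: int, trials: int = 5000) -> float:
            """Fraction of simulated drives whose log counter lands exactly
            on entry 320 (or 320 + k*256) at some power-off."""
            fails = 0
            for _ in range(trials):
                counter = 0
                for _ in range(cycles):
                    counter += poisson(mean_entries)  # entries this power cycle
                    if counter >= 320 and (counter - 320) % 256 == 0:
                        fails += 1  # bad value at power-off: bricked
                        break
            return fails / trials

        for mean in (0.5, 1, 3, 5):
            print(f"{mean} entries/cycle: P(brick within 700 cycles) ~ "
                  f"{prob_fail(mean, 700):.2f}")

     With a low average like 0.5 entries per cycle, the counter steps through almost every integer on its way up, so it can hardly miss entry 320 - which is exactly the point above about low entry rates raising the eventual odds even though failure takes more power cycles to arrive.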