Terastation Drive Replacement

February 26th, 2008

Back in 2005, I bought a Buffalo TeraStation 1TB NAS (Network Attached Storage). Basically it's a backup device: a mini-computer box with 4 hard drives, each 250GB, or 1TB total. Last week one of the drives died after a power outage (we get a lot of outages on San Jose Ave), and today I fixed it. However, it wasn't that easy; I spent over 3 hours on it when it should have taken less than one. So I'm just gonna note a few things so the next guy might have it easier. Now here's where I'm gonna geek out, so all non-geeks .. move along.

When I first set up the TeraStation, I configured it as RAID 5, which means I got a 750GB partition out of my 4 250GB drives. So if any one of the four drives dies, I don't lose my data. I just pop in a new drive and rebuild. Easy. This worked great for 3 years; I never had to replace a drive. Every now and then we'd lose power, and when I powered the TeraStation back on, it would take 1-2 days to check the disks before we were good to go.
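For anyone wondering where the 750GB figure comes from: RAID 5 spends one drive's worth of space on parity (spread across all members), so usable capacity is (n − 1) × drive size. A quick sketch in plain Python (the function name is mine, just for illustration):

```python
def raid5_usable_gb(num_drives, drive_size_gb):
    """RAID 5 spreads one drive's worth of parity across all
    members, so usable space is (n - 1) * drive size."""
    if num_drives < 3:
        raise ValueError("RAID 5 needs at least 3 drives")
    return (num_drives - 1) * drive_size_gb

# The TeraStation's four 250GB drives:
print(raid5_usable_gb(4, 250))  # -> 750
```

The same math explains the 3TB array mentioned in the comments below: four 1TB drives in RAID 5 give (4 − 1) × 1TB = 3TB usable.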

Last week the TeraStation would not recover – it would boot up for a minute, do a disk check, then turn off (power light off). However, each of the four drive status lights would stay red, with disk 3 blinking red. After reading the manual, I decided to replace disk 3. I had also read the wiki FAQ, so I knew I could replace it with any drive of the same size or bigger. I did this, spending almost 45 minutes opening the TeraStation up, swapping the drive, and putting it back together. Turned it on, and it stayed on. Yay!

So now all I had to do was connect to the web manager interface and "rebuild the RAID array". Once I logged in, it said "RAID array 1 error". I clicked it, got to array 1, and it listed disks 1, 2, 3, 4 .. but the checkbox for disk 3 was greyed out. I spent a while looking through the web pages and decided something might be wrong with my new disk. I turned it off, took the thing apart (only took 15 minutes this time), pulled out my new disk, and connected it to my PC in an external USB drive enclosure. It worked fine. Ugh. Was the TeraStation broken? I tried the old drive, the one the TeraStation thought was dead. It also seemed OK by PC standards. Ugh.

After going back and forth and trying different things, it turned out that the new drive had to have its jumper set to Cable Select to work. The three older Western Digital drives were not in Cable Select mode. Whatever.

Also, the LED lights on the front don't always do exactly what the manual or FAQ says. Specifically, I loved this blog post on replacing a TeraStation drive, but what he describes at the end is different from what I saw. Once I clicked "Restructure RAID Array", the lights all went nuts, and within a minute it went to a page that said "Restructuring has completed successfully", then "Checking RAID Array". At this point my 8 drive lights were blinking red and green: the 3 old drives had solid red status, the new one was not lit, and all 4 had blinking green activity. The power light was on, and the diag light was blinking green. On the web interface, I clicked on RAID Array 1 and it said it was "Rearing (x.x % Complete)". I waited a few minutes and refreshed the page .. percent complete was increasing. It's working !!! 4 hours later it finished – my setup is as good as it ever was.

  1. Chris
    April 19th, 2008 at 22:52 | #1

    I just lost one of my TeraStation drives, too. I also noticed a problem with the new drive’s checkbox being greyed out in the browser upon rebuild, but a refresh of the browser cleared up that problem (I didn’t have to redo the drive installation or anything…good ol’ F5). The whole process took less than an hour.

    My TeraStation is currently in the “Rearing 10.1% Complete” stage. I bet that is a typo; it’s probably supposed to say “Reading” instead of “Rearing.” Or perhaps it really means “Rebuilding.” Either way, it’s gonna take about 4 hours.

    By the way, the reason the other 3 drive LEDs were solid red was because they were at over 90% capacity. That occurs to warn you that you are nearly out of drive space. Scared the crap out of me initially, though, since I thought it meant ALL FOUR drives had gone down and/or my RAID went poof. I was relieved to see that it just meant I was running out of free space. So blinking red = bad drive, solid red = nearly full drive.

  2. Matt
    May 5th, 2009 at 11:55 | #2

    Thanks for the help but surprisingly I had no problems at all with my replacement.

    I tested everything before the rebuild – that saves time 😉

  3. Brian
    June 26th, 2009 at 14:32 | #3

    Hey Chad,

    Exactly the same thing happened to me today… Buffalo's tech support was very helpful, and we actually got the thing to boot up even though drive 3 was bad. I haven't been able to find an exact replacement drive, but have high hopes that a 320GB 7200RPM drive will work. 🙂 Thanks for your post… it's great to know about the Cable Select pin. 🙂


  4. Chris
    November 3rd, 2011 at 21:20 | #4

    I've got a TeraStation III in RAID 5 mode (6TB). Drive 4 went out. I followed the help provided in the web admin interface for replacing the drive, but always received a bad-drive message and a failure to resynchronize the drives. I spent a few hours troubleshooting, and had actually purchased two drives just in case I had a convenient secondary failure, since bad news always travels in pairs. Many reboots and attempts to rediscover and format the drive failed. As soon as I jumpered the new Samsung drive as Cable Select, it worked right away.

    Here are the jumper settings for the samsung drive: http://support-us.samsung.com/cyber/popup/iframe/pop_troubleshooting_fr.jsp?idx=43061&modelname=SV0411N&modelcode=&session_id=JvnzFYZns1KJyxp6hp3GBWw5LTjGTNWPQZybz5cmT1hv224VFGd2!-2136160717!1761676444!7501!-1!40774747!1761676348!7501!-1!1235414707202

    My jumper block only had four pins, which basically allow only Cable Select on the outermost pin pair.

    After inserting the new HD with the jumper set (outermost two pins), the screen on the TeraStation will display the message "Press FuncSW I31 New disk". You then press and hold the Function button for three seconds to automatically rebuild the array.

    Chris (a different Chris) said in 2008 that his took 4 hours.

    I hope that Buffalo Tech adds a comment to their help guide so others don’t have to waste their time.

    Best of luck,


  5. John
    February 23rd, 2012 at 09:06 | #5

    Came across this blog post while searching the Internet for an explanation of "rearing" a RAID5 array. As you probably guessed, I also have a Buffalo Technology TeraStation, configured as a RAID5 array, which has had a drive failure.

    I went through this business 18 months ago, when one of the 250 GB drives went belly-up. That time around, things went about as expected: no data were lost, but the array went into "degrade mode", where it had to reconstruct data on the fly from the parity info on the 3 good drives to make up for the missing data from the fourth (bad) drive.

    Back then, I decided it was a good time to upgrade anyway, since the 750 GB array was starting to fill up. So instead of buying one 250 GB drive to put it back as it was, I bought 4 Western Digital Caviar Black 1 TB drives, to make a 3 TB RAID5 array. That process also went about as expected. Rather than rebuilding the array and then resizing it, I just put in the 4 blank drives, created a new, empty array, and copied everything that had previously been backed up to the 750 GB array onto the new 3 TB array.

    Well, last week

  6. John
    February 23rd, 2012 at 09:26 | #6

    Whoops, I accidentally bombed out before finishing my story…

    Well, last week, the array failed again. Only this time it was not so clear that one drive had failed. I had no access to anything on the TeraStation box, not even to the network interface. I tried removing one drive at a time to see if that would make the error go away or change as the defective drive was removed, since I knew that the box would run in "degrade mode" with one drive missing. No good. And the error messages were talking about being unable to load the kernel, rather than about a disk drive being bad.

    To make a long story short, I ended up re-flashing the EPROM several times, and even the forced flash update routine didn't quite do the trick. In desperation, the tech folks suggested that I try one more forced flash with an option checked to "initialize the drive", which fortunately does not mean erase it. That finally got it to boot, and at last it showed me that it was in "degrade mode" with a single bad drive. So I ordered a new 1 TB drive under warranty from Western Digital.

    While waiting for it to arrive, I tried experimenting with the failed drive. Connecting it directly to my computer via eSATA (yeah, these are SATA drives, not PATA as in the prior messages), I was unable to remove all of the Linux partitions, re-partition it with one big Windows partition, format it, and run diagnostics on it, all of which claimed that the drive was working perfectly. Hmmm… really strange. So I figured that the "acid test" would be to put it back into the TeraStation and see what happened. Well, to my surprise, the TeraStation no longer complained of a defective drive, just of a bashed RAID5 array, and with a little bit of poking and prodding through the network interface, I convinced it to re-build the RAID5 array. It has been "rearing" the array for over a day, and is still not even 50% done. So, I am unsure if there is still a problem with the drive or not. Perhaps the rebuilding is taking so long because the array is now 3 TB instead of 750 GB.

    In any case, I plan to swap in the replacement drive I got from Western Digital and start over. Maybe the drive that went out is borderline and sometimes shows errors and sometimes doesn’t. But there’s no point in taking a chance of it blowing up again in a month.

    BTW, Western Digital apparently had no more WD1002FAEX refurbished drives to send me, so they sent me a WD2002FAEX drive. About the same, I guess, except for the 2 TB capacity instead of 1 TB. Oh well, once it goes into the RAID5 array, the extra 1 TB will disappear…

    Bottom line… I never heard of “rearing” a damaged RAID5 array, but I guess it is supposed to mean “rebuilding”. Or maybe, like “rearing a child”, “rearing a RAID5 array” means “bringing it up to full maturity” or some such thing.

    Ain’t technology great when it works?


  7. John
    February 23rd, 2012 at 09:31 | #7

    P.S., I guess my typing is not up to par this morning. The sentence starting:

    I was unable to remove all of the Linux partitions, …

    should have said:

    I was able to remove all of the Linux partitions, …

    Maybe another cup of coffee will help. 🙂
