
Thread: Hybrid Hard Drives

  1. #21
    Gold Member irneb's Avatar
    Join Date
    Apr 2007
    Location
    Jhb
    Posts
    625
    Thanks
    37
    Thanked 111 Times in 97 Posts
    Quote Originally Posted by IanF View Post
    I got Seagate desktop SSHD 1 TB drives.
    Great, personally I like Seagate ... always have. From my own experience they're the drives with the least trouble. Though as I've mentioned before, I've seen some tests which actually show the opposite: Seagates having the highest failure rate in data servers, then WD, with Toshiba the best of the lot.

    The Toshiba I do have a sample of, and it seems to corroborate those tests: I had a very old Iomega external 80GB drive (around 9 years old). The casing & USB circuitry demolished itself after falling off a cupboard, so I've stuck the 2.5" Toshiba disc into a SATA port - it still works perfectly. I've got another 250GB Toshiba (around 6 years old) which is the one I replaced recently with a new 3TB - it's now plugged into an Iomega iConnect together with the 80GB using some self-frankensteined SATA-to-USB converters, and both are still running fine. But even though I've had 100% reliability with these, I don't consider 2 discs statistically significant.

    Some WDs I've had previously have all failed, while of the 20 or so Seagates I've had since the 90's only 1 has ever failed on me (though that was around 2 years ago on a 2TB 3.5" Green Barracuda); the rest are still running, or in a box (the oldest I'm still using is a 5-year-old 500GB 2.5").

    Quote Originally Posted by IanF View Post
    Don't worry about the off topic file systems that is how we learn.
    Thanks! BTW, since you're using it, how easy is it to set up UnRaid using several different sized discs and later swapping in / adding new larger discs? With ZFS I had lots of learning to do - a vdev can't be "grown" after first creation (even if you add new larger discs it only uses the original size); the only way is to add new vdevs into the zpool. To me that sounded a bit convoluted.
    Gold is the money of kings; silver is the money of gentlemen; barter is the money of peasants; but debt is the money of slaves. - Norm Franz
    And central banks are the slave clearing houses

  2. #22
    Moderator IanF's Avatar
    Join Date
    Dec 2007
    Location
    Jhb
    Posts
    2,679
    Thanks
    197
    Thanked 529 Times in 405 Posts
    With UnRaid, as long as the parity drive is the same size as the largest drive, it's easy to put in a new hard drive which is bigger. I replaced 500 GB drives with 1 TB drives and after the rebuild there was more space. It just took a while to choose the right options to get the rebuild going.
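    Roughly speaking the sums work like this (just my own sketch of the sizing rule, not how unRAID actually does it internally; the drive sizes are placeholders):

        # A minimal sketch of the unRAID capacity rule: usable space is the sum of the
        # data drives, and the single parity drive must be at least as large as the
        # biggest data drive.
        def unraid_usable_tb(data_drives_tb, parity_tb):
            """Return usable capacity in TB, or complain if parity is too small."""
            if parity_tb < max(data_drives_tb):
                raise ValueError("parity drive must be >= the largest data drive")
            return sum(data_drives_tb)

        # Before the swap: three 500 GB data drives behind a 1 TB parity drive.
        print(unraid_usable_tb([0.5, 0.5, 0.5], parity_tb=1.0))   # 1.5 TB usable

        # After replacing one 500 GB drive with a 1 TB drive and letting the rebuild
        # finish, the extra space simply shows up - no change to parity needed.
        print(unraid_usable_tb([1.0, 0.5, 0.5], parity_tb=1.0))   # 2.0 TB usable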
    Don't know about ZFS though.
    Only stress when you can change the outcome!

  3. #23
    Gold Member irneb's Avatar
    Join Date
    Apr 2007
    Location
    Jhb
    Posts
    625
    Thanks
    37
    Thanked 111 Times in 97 Posts
    ZFS's ZIL (ZFS Intent Log) is usually a partition it creates on one of the drives in the vdev. You can set it to a different disc - some people put it on an SSD because it is written to for every single action, which makes things faster. Apparently its size is not fixed according to the drives inside the vdev - it's actually a dynamically expanding volume. Its size is governed by the RAM cache, the bandwidth on the network, the block size of the files (note ZFS has varying block sizes per file, not per disc as nearly all other FSs have), etc.
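    Just to get a rough feel for the numbers (the 5-second flush interval and the "two groups in flight" margin below are my own rule-of-thumb assumptions, not figures from the ZFS docs):

        # Back-of-the-envelope: the ZIL/SLOG only has to absorb the synchronous writes
        # that arrive between transaction-group flushes, so even a small SSD partition
        # goes a long way. Both default constants are assumptions, not official numbers.
        def slog_size_gb(ingest_mb_per_s, txg_interval_s=5, txgs_in_flight=2):
            """Rough upper bound on log usage (GB) for a given write ingest rate."""
            return ingest_mb_per_s * txg_interval_s * txgs_in_flight / 1024

        # A single gigabit link tops out around 120 MB/s:
        print(round(slog_size_gb(120), 2))   # ~1.17 GB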
    Quote Originally Posted by IanF View Post
    Don't know about ZFS though.
    No, it's got a bit of an issue, especially if you don't have lots of free SATA ports. See the answers on this exact question: http://superuser.com/questions/62224...nt-size-drives

    ZFS doesn't like having varying sized discs inside a single vdev, especially if you turn on mirroring, in which case the vdev's size is only the same as the smallest disc in the batch. With only striping + log it's all right, but once the vdev is created, adding another disc to it will not increase the total size. That is what the zpool is for: you add the new disc into a new vdev, then add that into the zpool.
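    To put numbers on it (my own illustration of the sizing rules, not actual ZFS code):

        # A minimal model: a mirror vdev only exposes the capacity of its smallest
        # member disc, and the pool's capacity is the sum of its vdevs - which is why
        # you grow a pool by adding vdevs, not by adding discs to an existing one.
        def mirror_vdev_tb(disc_sizes_tb):
            """Usable size of a mirror vdev: limited by the smallest member disc."""
            return min(disc_sizes_tb)

        def zpool_tb(vdev_sizes_tb):
            """Usable size of the pool: the sum of its vdevs."""
            return sum(vdev_sizes_tb)

        # Mirroring a 1 TB disc with a 3 TB disc wastes 2 TB:
        vdev1 = mirror_vdev_tb([1.0, 3.0])   # 1.0 TB
        # Growing the pool means adding a second vdev, e.g. a mirror of two 3 TB discs:
        vdev2 = mirror_vdev_tb([3.0, 3.0])   # 3.0 TB
        print(zpool_tb([vdev1, vdev2]))      # 4.0 TB usable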

    FreeNAS (i.e. the plug-n-play OS which uses ZFS) does the striping idea by default. Since ZFS's ZIL by default forms part of a partition inside each vdev, if you only have one disc inside the vdev, part of it is used for the ZIL. FreeNAS's default is to create a new vdev for each new HDD - you can change this if you wish (even through its web interface), but this way is the simplest to extend the raid's capacity (not the most robust though).

    From most of my research, it seems the "best practice" method is to keep similar sized discs in a vdev, then pool the different sized vdevs together. I.e. when getting new disc(s) you'd need to either add them to a vdev with similar sized discs, or create a new vdev and add that to the zpool as well.
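    On the command line that workflow amounts to something like the following (the pool name and device names are placeholders for whatever your system shows, so adjust before trying it, and it needs root):

        # A sketch of "matched discs per vdev, then pool the vdevs" with the stock
        # zpool commands, run from Python purely for illustration. Pool name (tank)
        # and device names (ada0..ada3) are placeholders.
        import subprocess

        def run(cmd):
            print("#", " ".join(cmd))
            subprocess.run(cmd, check=True)

        # Initial pool: one mirror vdev built from two equally sized discs.
        run(["zpool", "create", "tank", "mirror", "ada0", "ada1"])

        # Later expansion: two more matched (larger) discs added as a second mirror
        # vdev; the pool grows by that vdev's size and the old vdev is left untouched.
        run(["zpool", "add", "tank", "mirror", "ada2", "ada3"])

        run(["zpool", "status", "tank"])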
    Gold is the money of kings; silver is the money of gentlemen; barter is the money of peasants; but debt is the money of slaves. - Norm Franz
    And central banks are the slave clearing houses

  4. #24
    Moderator IanF's Avatar
    Join Date
    Dec 2007
    Location
    Jhb
    Posts
    2,679
    Thanks
    197
    Thanked 529 Times in 405 Posts
    Here is a screen shot of the server control panel.
    I can't see any options for ZFS.
    [Attachment: unraid.jpg - screenshot of the unRAID server control panel]
    I looked in the other tabs and there is nothing there; you probably have to open a terminal window.
    Only stress when you can change the outcome!

  5. #25
    Gold Member irneb's Avatar
    Join Date
    Apr 2007
    Location
    Jhb
    Posts
    625
    Thanks
    37
    Thanked 111 Times in 97 Posts
    Quote Originally Posted by IanF View Post
    I looked in the other tabs and there is nothing there, you probably have to open a terminal window.
    No, I don't think unRAID has ZFS built in. unRAID is based on Linux (not Solaris / BSD), and unfortunately there's a licence incompatibility between ZFS and Linux, so there's no native ZFS in the kernel. There's only ZFS through FUSE, or the separately installable ZFSonLinux project which tries to re-implement ZFS as a native file system.

    From unRAID's FAQs it seems it uses ReiserFS. It was the first journalling FS for Linux, when ext2 was still the norm. Then ext3 added journalling, but did so much slower than Reiser - it was more like an add-on to the FS. Now with ext4, they're very close performance-wise. It's not a bad FS at all, just a bit older than ZFS, though that means it's had more time to work out any bugs. ZFS, on the other hand, uses a technique which I think is the best idea in filesystems ever: copy-on-write - which places new data in a new empty space, then only when finished points the file handle to the new data and releases the old. Thus even with a power failure during a save, the worst that happens is you lose the new data (the old stuff is still intact); with overwriting FSs a power failure means the file WILL BE corrupt and you probably won't be able to recover any useful data.
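    The same trick is easy to demonstrate in userspace - a tiny sketch of the copy-on-write / atomic-replace idea (my own illustration, nothing to do with either FS's internals):

        # Write the new data somewhere fresh, and only once it is safely on disc swap
        # the "pointer" (here the filename) over to it. A crash mid-save leaves the
        # old file intact.
        import os, tempfile

        def cow_save(path, data):
            """Replace `path` with `data` without overwriting the old copy in place."""
            d = os.path.dirname(os.path.abspath(path))
            fd, tmp = tempfile.mkstemp(dir=d)          # new data goes to fresh space
            try:
                with os.fdopen(fd, "wb") as f:
                    f.write(data)
                    f.flush()
                    os.fsync(f.fileno())               # make sure it has hit the disc
                os.replace(tmp, path)                  # atomic switch to the new copy
            except BaseException:
                os.unlink(tmp)                         # failed save: old file untouched
                raise

        cow_save("config.txt", b"new settings\n")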

    You might want to look through this: http://en.wikipedia.org/wiki/Comparison_of_file_systems

    Note the ZFS I'm referring to is the one made in 2004 by Sun Microsystems for their Solaris Unix (not the zFS by IBM in 2001).
    Gold is the money of kings; silver is the money of gentlemen; barter is the money of peasants; but debt is the money of slaves. - Norm Franz
    And central banks are the slave clearing houses

  6. #26
    Moderator IanF's Avatar
    Join Date
    Dec 2007
    Location
    Jhb
    Posts
    2,679
    Thanks
    197
    Thanked 529 Times in 405 Posts
    I chose unRaid as it was made to be installed on a USB stick and was easier than the other systems I tried.
    Thanks for the research, looks like a good choice.
    Only stress when you can change the outcome!

  7. #27
    Moderator IanF's Avatar
    Join Date
    Dec 2007
    Location
    Jhb
    Posts
    2,679
    Thanks
    197
    Thanked 529 Times in 405 Posts
    I installed the first hard drive on Friday; it took about 90 minutes.
    I used Seagate DiscWizard to clone the drive and it worked well.
    The only problem was with my MIS system: I found out they use the drive serial number and a few other things to work out the user number, so the MIS wouldn't work until they updated the details on their registration module.
    I must find the time to change from this system.
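    Presumably it's doing something along these lines (just a guess at the idea, I have no idea what their actual scheme is; all the values below are made up):

        # Hypothetical illustration of a hardware-locked registration: derive a "user
        # number" from the drive serial plus other machine details, so cloning to a
        # new drive changes the number and the registration no longer matches.
        import hashlib

        def user_number(drive_serial, hostname, mac_address):
            blob = "|".join([drive_serial, hostname, mac_address]).encode()
            return int(hashlib.sha256(blob).hexdigest(), 16) % 10**8   # 8-digit code

        print(user_number("S2R5NX0H123456", "OFFICE-PC", "00:11:22:33:44:55"))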
    Only stress when you can change the outcome!


