My RAID0 setup with 2×3TB drives

edited May 3 in Linux mdadm

Hi Cyril,

BTW, a couple of weeks ago I set up RAID0 with my two 3TB drives (WD and Seagate) via Linux mdadm (software RAID). I configured it via Webmin for ease of use, although I could have done it via the CLI.

For now I went with RAID0, since I want maximum contiguous space, and I am currently using this as my last-resort NAS snapshot backup. I mount it read-only when I don't want to write to it; otherwise I mount it normally. But is there a reason you are using hardware RAID in your lab/home?
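For reference, here is a minimal CLI sketch of roughly what Webmin does under the hood. The device names (/dev/sdb, /dev/sdc) and the mount point are placeholders; check yours with lsblk before running anything like this:

```shell
# Create a two-disk RAID0 (striped) array from the two 3TB drives:
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# Put a filesystem on it and mount it:
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/nas-backup

# Remount read-only while only reading snapshots, then back to read-write:
sudo mount -o remount,ro /mnt/nas-backup
sudo mount -o remount,rw /mnt/nas-backup

# Record the array so it assembles automatically on boot:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```

Remember that RAID0 has no redundancy: losing either drive loses the whole array, which is why it only makes sense for a last-resort copy of data that lives elsewhere.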



  • Nice setup and GUI.
    If we consider CPU overhead, could we measure any meaningful differences between software, chipset, and hardware RAID solutions?
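    One rough way to compare them is to sample CPU usage while pushing sequential I/O at each array. This is only a sketch, not a rigorous benchmark; the mount point is a placeholder and direct I/O is used to bypass the page cache:

```shell
# Sample CPU usage once per second in the background:
mpstat 1 > cpu-during-write.log &

# Sequential write of 4GiB straight to the array under test:
dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=4096 oflag=direct

kill %1
# Repeat per RAID flavour and compare the %sys and %iowait columns.
```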
  • Cyril,

    I too initially thought about getting a dedicated hardware RAID PCIe card, but later discovered the real advantages of software RAID such as mdadm, ZFS pools, btrfs, etc.
    With hardware RAID, the controller probably works faster than a software one (no CPU overhead), but the problem is that if the RAID controller fails, it is difficult or nearly impossible to get your data back. There is an interesting video about this from Linus Tech Tips:

    After seeing that video I lost faith and panicked. With software RAID, for example mdadm or ZFS, there is some CPU overhead depending on the setup (or RAID level). I found these two interesting articles:

    Since it is software RAID, they do mention CPU overhead, latency issues, etc. But the main advantage of software RAID is that even if the installed OS (which holds the mdadm/ZFS config) fails, you can install a fresh OS and automatically recover your RAID volumes/pools. And depending on the RAID level, if any disk/partition in the array fails, you can replace it as well.
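    That recovery works because the RAID metadata lives on the disks themselves, not in the OS. A sketch of what the recovery looks like after a fresh install (the pool name "tank" is a placeholder):

```shell
# mdadm: scan all disks for md superblocks and reassemble the arrays:
sudo mdadm --assemble --scan

# ZFS: list pools that can be imported from attached disks, then import one:
sudo zpool import
sudo zpool import tank
```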

    BTW, I built a complex test ZFS pool (RAID10) with FreeNAS in a VirtualBox VM. I created several virtual disk images in VirtualBox and attached them to the SATA controller. The whole setup is an attempt to learn about the reliability of ZFS and the FreeNAS GUI, since I am new to both.

    I have run various tests so far: removing random disks, deleting one disk image, deleting two disk images, and growing disk images (from 50GB to 60GB), after which I could see the pool capacity slowly increase. After that I did a few performance tests and also set up link aggregation.

    It is never recommended to build a ZFS pool on VM disk images, since you don't get any SMART parameters: if something goes wrong, it goes wrong without warning. But for learning purposes it is a great way to explore software RAID capabilities. Here are my screenshots. BTW, ZFS uses RAM as its cache, so it needs a lot of memory, as you can see in one of the screenshots.
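    The FreeNAS GUI drives the same ZFS commands you could run by hand. A sketch of the equivalent CLI for a RAID10-style pool (striped mirrors); the pool name and /dev/vdX device paths are placeholders for the VirtualBox virtual disks:

```shell
# RAID10 in ZFS is a stripe across mirror vdevs:
sudo zpool create testpool mirror /dev/vdb /dev/vdc mirror /dev/vdd /dev/vde

# Replace a failed (or deleted) disk image with a fresh one:
sudo zpool replace testpool /dev/vdc /dev/vdf

# Let the pool grow once every disk in a vdev has been upgraded to a larger image:
sudo zpool set autoexpand=on testpool
```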

    cheers, Kiran

  • edited May 9
    BTW, here is a video I shot of the same FreeNAS VM setup, simulating scenarios like drive upgrades and failed drives
    over multiple iterations, and successfully rebuilding/resilvering the ZFS pool (RAID10).
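    For anyone following along at the CLI, the rebuild progress after a drive swap can be watched like this (pool name is a placeholder):

```shell
# Shows resilver progress, per-disk state, and any checksum errors:
sudo zpool status -v testpool

# Optional full integrity pass once the resilver has finished:
sudo zpool scrub testpool
```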
