Recently I bought myself some NAS storage - a Netgear RND4000 ReadyNAS NV+ v2. It's a 4-bay NAS enclosure with a nice web-based control panel for creating shares and managing the unit.
I got curious about what XRAID2 actually is, mostly so that when it inevitably goes wrong I'll have some idea how to fix it. Time to poke around the command line.
Under the hood it's using the Linux md device driver from kernel 2.6.31, which is where the "XRAID2" expansion capabilities come from. When the unit is in XRAID2 mode and you insert a new disk, the disk gets formatted with a GPT partition table containing three partitions:
root@nas:~# gdisk -l /dev/sda
GPT fdisk (gdisk) version 0.7.0
Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present
Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 5860533168 sectors, 2.7 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): C97568EC-FABC-45C1-B703-96B61603C693
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 5860533134
Partitions will be aligned on 64-sector boundaries
Total free space is 4092 sectors (2.0 MiB)
Number  Start (sector)    End (sector)  Size       Code  Name
   1              64         8388671    4.0 GiB    FD00
   2         8388672         9437247    512.0 MiB  FD00
   3         9437248      5860529072    2.7 TiB    FD00
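If you ever needed to prep a replacement disk by hand, something along these lines should reproduce that layout. This is only a sketch - /dev/sdX is a placeholder for the new disk (not one of the live members!), FD00 is gdisk's type code for a Linux RAID partition, and it assumes sgdisk is available on the unit or on another Linux box:

root@nas:~# sgdisk --zap-all /dev/sdX
root@nas:~# sgdisk --new=1:64:8388671 --typecode=1:FD00 /dev/sdX
root@nas:~# sgdisk --new=2:8388672:9437247 --typecode=2:FD00 /dev/sdX
root@nas:~# sgdisk --new=3:9437248:0 --typecode=3:FD00 /dev/sdX

An end sector of 0 tells sgdisk to use the largest available block, which is how the third partition can absorb whatever capacity the drive has to offer.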
The first partition joins the RAID 1 device /dev/md0, which holds the actual system install. As far as I can tell, it's always 4GB in size.
The second partition, always 512MB, joins /dev/md1, which is also RAID 1. This is the Linux swap partition.
The third partition takes up whatever space is left on the drive and joins /dev/md2. This is a RAID 5 device and is where all the NAS storage space lives. The md devices on my system:
root@nas:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid5 sdc3[2] sda3[0] sdb3[1]
      5851089408 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md1 : active raid1 sdc2[2] sda2[0] sdb2[1]
      524276 blocks super 1.2 [3/3] [UUU]

md0 : active raid1 sdc1[2] sda1[0] sdb1[1]
      4193268 blocks super 1.2 [3/3] [UUU]
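If you want more detail than /proc/mdstat offers, mdadm can interrogate both the arrays and the individual member partitions:

root@nas:~# mdadm --detail /dev/md2
root@nas:~# mdadm --examine /dev/sda3

--detail gives the array-level view (RAID level, chunk size, state of each member), while --examine dumps the md superblock - the "super 1.2" mentioned above - straight from a member partition.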
The device that actually gets mounted is /dev/c/c, carrying a Linux ext4 file system. That's because, on top of the md driver, /dev/md2 is itself a Linux LVM physical volume with a single logical volume on top of it, like so:
root@nas:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               c
  PV Size               5.45 TiB / not usable 30.75 MiB
  Allocatable           yes
  PE Size               64.00 MiB
  Total PE              89280
  Free PE               160
  Allocated PE          89120
  PV UUID               ww3P2w-Nj8Z-yemx-4gSQ-VhE4-C0Ri-pZW1hI
root@nas:~# lvdisplay
  --- Logical volume ---
  LV Name                /dev/c/c
  VG Name                c
  LV UUID                9kFN0b-bNgZ-RhWs-aiQQ-kQjJ-vKms-Bfsb1a
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                5.44 TiB
  Current LE             89120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
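For what it's worth, a stack like this could presumably be recreated from scratch with the stock LVM tools. A minimal sketch, assuming /dev/md2 already exists and using the same 64MiB extent size (the ReadyNAS firmware does all of this for you, of course):

root@nas:~# pvcreate /dev/md2
root@nas:~# vgcreate -s 64M c /dev/md2
root@nas:~# lvcreate -l 89120 -n c c

The 160 free extents in the pvdisplay output above suggest the firmware deliberately leaves a little headroom rather than allocating 100% of the volume group.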
It’s this logical volume that is formatted as ext4 and mounted at /c like so:
root@nas:~# mount
/dev/md0 on / type ext3 (rw,noatime)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw)
tmpfs on /ramfs type ramfs (rw)
tmpfs on /USB type tmpfs (rw,size=16k)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/c/c on /c type ext4 (rw,noatime,acl,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0)
nfsd on /proc/fs/nfsd type nfsd (rw)
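You can confirm the filesystem details for yourself with tune2fs; the -l flag dumps the ext4 superblock, including the feature flags and quota settings that match those mount options (output will vary between units, so I've omitted it here):

root@nas:~# tune2fs -l /dev/c/c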
This is good news as far as I'm concerned. It means that should the ReadyNAS hardware fail out of warranty for whatever reason, it's trivial to plug the disks into a generic Linux machine and reassemble the RAID arrays without losing any data.
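In that scenario the recovery should look roughly like this on any distro with mdadm and LVM2 installed - a sketch only, with the mount point purely illustrative:

mdadm --assemble --scan
vgchange -ay c
mount -o ro /dev/c/c /mnt

The first command assembles whatever md arrays it finds on the attached disks, the second activates the "c" volume group, and mounting read-only to begin with is a sensible precaution while you check the data is intact.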