Let’s create two image files to use as our fake disks again:
for FAKE_DISK in disk1.img disk2.img
do
dd if=/dev/zero of=`pwd`/$FAKE_DISK bs=1M count=100
done
Again, if you do an ls, you should now see two img files:
$ ls
disk1.img disk2.img
This time, we are going to create a mirrored vdev, also called RAID-1, in which a complete copy of all data is stored separately on each drive.
To create a mirrored pool, we run:
sudo zpool create test_pool_with_mirror mirror \
`pwd`/disk1.img \
`pwd`/disk2.img
Note the addition of the word mirror between the pool name and the disk names.
If we run zpool list, we should see the new pool:
$ zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
test_pool_with_mirror 80M 111K 79.9M - - 3% 0% 1.00x ONLINE -
But note that this time the size is only 80M, half what it was before. This makes sense, as we are storing two copies of everything (one on each disk), so we have half as much space.
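If you want to see how that capacity is laid out, zpool list also accepts a -v flag that breaks the figures down per vdev and per disk (the exact numbers on your system may differ slightly):
zpool list -v test_pool_with_mirror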
If we run zpool status test_pool_with_mirror, we should see that the disks have been put into a mirror vdev named mirror-0:
$ zpool status test_pool_with_mirror
pool: test_pool_with_mirror
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
test_pool_with_mirror ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
/home/user/test_zfs_healing/disk1.img ONLINE 0 0 0
/home/user/test_zfs_healing/disk2.img ONLINE 0 0 0
errors: No known data errors
We can see where our pool has been mounted:
$ zfs mount
test_pool_with_mirror /test_pool_with_mirror
First we’ll change the mountpoint to be owned by the current user:
sudo chown $USER /test_pool_with_mirror
Then let’s change into that mountpoint:
cd /test_pool_with_mirror
Again we will create a text file with some text in it:
echo "We are playing with ZFS. It is an impressive filesystem that can self-heal. Mirror, mirror, on the wall." > text.txt
We can show the text in the file with:
cat text.txt
$ cat text.txt
We are playing with ZFS. It is an impressive filesystem that can self-heal. Mirror, mirror, on the wall.
And we can look at the hash of the file with:
sha1sum text.txt
$ sha1sum text.txt
aad0d383cad5fc6146b717f2a9e6c465a8966a81 text.txt
As we learnt earlier, before we tamper with the disks we first need to export the pool:
cd $ZFS_TEST_DIR
sudo zpool export test_pool_with_mirror
And, again, if we run a zpool list, test_pool_with_mirror should no longer appear.
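To confirm, run:
zpool list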
First, make sure we are in the directory with the disk images:
cd $ZFS_TEST_DIR
Now again we are going to write zeros over a disk to simulate a disk failure or corruption:
dd if=/dev/zero of=$ZFS_TEST_DIR/disk1.img bs=1M count=100
We see something like the following output:
$ dd if=/dev/zero of=$ZFS_TEST_DIR/disk1.img bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.172324 s, 608 MB/s
Now we are going to re-import our pool:
sudo zpool import -d $ZFS_TEST_DIR/disk2.img
And we see something like the following output:
$ sudo zpool import -d $ZFS_TEST_DIR/disk2.img
pool: test_pool_with_mirror
id: 5340127000101774671
state: ONLINE
status: One or more devices contains corrupted data.
action: The pool can be imported using its name or numeric identifier.
see: http://zfsonlinux.org/msg/ZFS-8000-4J
config:
test_pool_with_mirror ONLINE
mirror-0 ONLINE
/home/user/test_zfs_healing/disk1.img UNAVAIL corrupted data
/home/user/test_zfs_healing/disk2.img ONLINE
As expected, disk1.img is showing as corrupted, since we wrote over it with zeros. But, in contrast to the pool with the striped vdev earlier, which failed to import and showed as FAULTED, this pool is showing ONLINE: disk2.img is ONLINE, and only the disk1.img that we overwrote is showing as UNAVAIL because of its corrupted data.
The output tells us that we can import the pool by using its name or ID, so let’s do that:
sudo zpool import test_pool_with_mirror -d $ZFS_TEST_DIR/disk2.img
We can check the pool status with:
zpool status test_pool_with_mirror
And the output should look something like:
$ zpool status test_pool_with_mirror
pool: test_pool_with_mirror
state: ONLINE
status: One or more devices could not be used because the label is missing or
invalid. Sufficient replicas exist for the pool to continue
functioning in a degraded state.
action: Replace the device using 'zpool replace'.
see: http://zfsonlinux.org/msg/ZFS-8000-4J
scan: none requested
config:
NAME STATE READ WRITE CKSUM
test_pool_with_mirror ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
4497234452516491230 UNAVAIL 0 0 0 was /home/user/test_zfs_healing/disk1.img
/home/user/test_zfs_healing/disk2.img ONLINE 0 0 0
errors: No known data errors
So the pool is online and working, albeit in a degraded state. We can look at the file we wrote earlier:
$ cat /test_pool_with_mirror/text.txt
We are playing with ZFS. It is an impressive filesystem that can self-heal. Mirror, mirror, on the wall.
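The file is intact even though one of the two disks was completely overwritten. As a quick check, we can hash it again; the checksum should match the one we noted earlier (aad0d383cad5fc6146b717f2a9e6c465a8966a81):
sha1sum /test_pool_with_mirror/text.txt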
The status is telling us that we are missing a device and the pool is degraded, so let’s fix that.
Let’s create a new “disk” in our working directory:
cd $ZFS_TEST_DIR
dd if=/dev/zero of=`pwd`/disk3.img bs=1M count=100
Then, let’s follow the instructions from the zpool status output and replace the disk:
sudo zpool replace test_pool_with_mirror $ZFS_TEST_DIR/disk1.img $ZFS_TEST_DIR/disk3.img
We can see how this disk replacement has affected things by checking zpool status test_pool_with_mirror:
$ zpool status test_pool_with_mirror
pool: test_pool_with_mirror
state: ONLINE
scan: resilvered 274K in 0 days 00:00:00 with 0 errors on Sat Nov 27 22:43:37 2021
config:
NAME STATE READ WRITE CKSUM
test_pool_with_mirror ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
/home/user/test_zfs_healing/disk3.img ONLINE 0 0 0
/home/user/test_zfs_healing/disk2.img ONLINE 0 0 0
errors: No known data errors
disk1.img has been replaced by disk3.img, and the scan line tells us that ZFS has “resilvered” the data from the remaining mirror disk (disk2.img) onto the new disk (disk3.img).
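If you would like ZFS to re-read every block on the new disk and verify it against the stored checksums, you can optionally run a scrub and then check the status again once it completes:
sudo zpool scrub test_pool_with_mirror
zpool status test_pool_with_mirror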
We can now remove the test pool:
sudo zpool destroy test_pool_with_mirror
and it should no longer show in a zpool list.
Then we can remove the fake “disks” we created:
cd $ZFS_TEST_DIR
rm disk1.img disk2.img disk3.img
cd ..
rmdir $ZFS_TEST_DIR