1. Configure ZFS to start at boot
Add the following line to /etc/rc.conf:
zfs_enable="YES"
This will cause the ZFS kernel module (and the opensolaris module) to be loaded at boot, so your ZFS pools will be mounted automatically.
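If you prefer not to edit the file by hand, the same setting can be applied with sysrc, and ZFS can be started immediately without rebooting (standard FreeBSD tools, shown here as an optional alternative):
sysrc zfs_enable="YES"   # append/update the line in /etc/rc.conf
service zfs start        # load the modules and mount pools right away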
2. Create GPT partitions of the same size on all drives
This is useful if we use raidz.
dd if=/dev/zero of=/dev/da0 bs=1k count=4
gpart create -s GPT da0
gpart show da0
gpart add -b 34 -s 3600M -t freebsd-zfs da0
gpart show da0
3600MB is the size we want for our test USB stick pool. Repeat the same steps for da1, da2 and da3 (see the sketch below).
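A minimal sh sketch that applies the same steps to all four sticks (assuming they are da0 through da3 and really are disposable test devices, since dd wipes them):
for d in da0 da1 da2 da3; do
  dd if=/dev/zero of=/dev/$d bs=1k count=4    # clear any old metadata
  gpart create -s GPT $d                      # create a new GPT partition table
  gpart add -b 34 -s 3600M -t freebsd-zfs $d  # identical freebsd-zfs partition
done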
3. Creating zpool
And now we create the raidz pool from the 4 USB sticks:
zpool create tank raidz da0p1 da1p1 da2p1 da3p1
Then we check the status of the pool:
zpool status -v
We will get:
zpool status -v

  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            da0p1   ONLINE       0     0     0
            da1p1   ONLINE       0     0     0
            da2p1   ONLINE       0     0     0
            da3p1   ONLINE       0     0     0

errors: No known data errors
To see the pool we use:
zpool list
We will get:
NAME   SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
tank    14G  138K  14.0G   0%  ONLINE  -
To remove/delete the pool we use:
zpool destroy tank
The first create-pool example builds a raidz pool. To create a mirrored pool with two drives in it we use:
zpool create tank mirror da0p1 da1p1
4. Doing backups with a mirrored pool
To back up a mirrored pool we can add another drive to the pool, wait for all three drives to sync, and then remove the third drive from the pool.
Let's say you've created your mirrored pool with the following command:
zpool create tank mirror ada1 ada2
Then to add a third drive to the mirrored pool we will run:
zpool attach tank ada2 ada3
Then we do a scrub on the pool:
zpool scrub tank
To see the resilvering process and status of the pool we will run:
zpool status -v
Then after the sync process is complete we split the third drive off into a new pool called tank2:
zpool split tank tank2
Then you can physically remove ada3 from your machine and move it to another location. Repeat this process from time to time to keep an almost up-to-date offsite backup.
If you want to access the data on tank2 you must import the tank2 pool (ada3), as shown below.
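The import itself is done with zpool import on the destination machine (standard zpool commands; without arguments it only lists pools available for import):
zpool import       # list pools that can be imported
zpool import tank2 # import the split-off backup pool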
5. Replacing a failed drive
If a drive fails we will see the pool as DEGRADED:
zpool status -v # on a degraded pool

  pool: tank
 state: DEGRADED
 scrub: none requested
config:

        NAME       STATE     READ WRITE CKSUM
        tank       DEGRADED     0     0     0
          raidz1   DEGRADED     0     0     0
            da1p1  ONLINE       0     0     0
            da2p1  REMOVED      0     0     0
            da3p1  ONLINE       0     0     0

errors: No known data errors
We add a new drive that will appear as da0 (and we create a partition on it using gpart; see section 2, Create GPT partitions), then we replace the old drive with the new one:
zpool replace tank da2p1 da0p1
Then we wait for the pool to rebuild the data onto the new drive, an operation called resilvering:
zpool status -v

  pool: tank
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h4m, 74.93% done, 0h1m to go
config:

        NAME           STATE     READ WRITE CKSUM
        tank           DEGRADED     0     0     0
          raidz1       DEGRADED     0     0     0
            da1p1      ONLINE       0     0     0
            replacing  DEGRADED     0     0     0
              da2p1    REMOVED      0     0     0
              da0p1    ONLINE       0     0     0  802M resilvered
            da3p1      ONLINE       0     0     0
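Once the resilver completes, the old device (da2p1) is detached from the replacing vdev automatically. If stale error counters remain in the output you can reset them with zpool clear (a standard zpool command, not part of the original walkthrough):
zpool clear tank # clear error counters on all devices in the pool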
6. Checking/repairing pool errors
zpool scrub tank
Some checking is done during the resilvering process, but a full data check is done via scrub.
7. Saving a snapshot of the file system
To make a snapshot of the pool run:
zfs snapshot -r tank/home/first@backup
To list all snapshots run:
zfs list -t snapshot
If we want to also display creation date we will run:
zfs list -t snapshot -o name,creation
If later we want to roll back to a snapshot version we will run:
zfs rollback tank/home/first@backup
If there are snapshots newer than first@backup, you will get a notice that rolling back would destroy them, and the rollback will not be performed. To perform the rollback anyway run:
zfs rollback -r tank/home/first@backup
Snapshots also can be renamed. In order to rename a snapshot run:
zfs rename tank/home/first@backup tank/home/today@backup
We can also send snapshots to another machine (see the incremental-send sketch at the end of this section).
To send our snapshot we run (10.0.0.10 is the IP of the destination machine):
zfs send tank/home/first@backup | ssh john@10.0.0.10 zfs recv -u tank/home/first
Note: If you get a permissions error when sending snapshots, the quickest way to solve it (though perhaps not the safest) is to first run the following command on the destination machine:
zfs allow everyone mount,create,receive tank/home/
If you want to overwrite a dataset on the destination run:
zfs send tank/home_first@backup | ssh john@10.0.0.10 zfs recv -Fdv tank
This assumes your ZFS pool on the destination is tank; after the command completes you will have a tank/home_first dataset on the destination.
Deleting snapshots. If you want to delete a snapshot run:
zfs destroy tank/home/first@backup
where: tank/home/first@backup is your snapshot name.
NOTE! Be careful not to destroy your pool. If one snapshot depends on another you may want to use the -R parameter, which will also delete the dependent snapshots.
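For repeated backups you do not have to resend the whole dataset every time. A minimal sketch of an incremental send, assuming first@backup already exists on the destination and first@backup2 is a hypothetical newer snapshot:
zfs snapshot -r tank/home/first@backup2
zfs send -i tank/home/first@backup tank/home/first@backup2 | ssh john@10.0.0.10 zfs recv tank/home/first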
8. Creating a file system in a storage pool
Let's say we have the tankfs storage pool. To create a file system we use:
zfs create tankfs/fs
or, to create a volume (zvol) which is 100GB in size:
zfs create -V 100G tankfs/volume2
Note that you will not be able to mount the volume unless you create a file system on it (see the sketch below).
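For example, to put UFS on the volume (a sketch; on FreeBSD zvols appear under /dev/zvol/<pool>/<name>, and /mnt/volume2 is just an example mountpoint):
newfs /dev/zvol/tankfs/volume2              # create a UFS file system on the zvol
mkdir /mnt/volume2
mount /dev/zvol/tankfs/volume2 /mnt/volume2 # mount it like any other disk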
To display all volumes created on a ZFS pool use:
zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
tankfs          2.77T   151G  46.5K  legacy
tankfs/fs       2.66T   151G  2.66T  legacy
tankfs/volume2   103G   254G    16K  -
We can delete/remove a volume with the destroy command too:
zfs destroy tankfs/volume2
9. Useful commands when testing ZFS on USB sticks
camcontrol rescan all # rescan all CAM buses for new devices
camcontrol devlist # list all attached devices
10. Manually mount a ZFS pool
To manually mount a ZFS pool we use:
zfs set mountpoint=legacy tank
mkdir /mnt/tank
mount -t zfs tank /mnt/tank
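To hand the mounting back to ZFS later, set the mountpoint property to a regular path again (standard zfs commands; /tank is just an example path):
zfs set mountpoint=/tank tank
zfs mount tank # mount it under ZFS control again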
11. Enabling ZFS Deduplication
To enable deduplication on ZFS, which stores identical blocks only once to save disk space, run:
zfs set dedup=on tank
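To see how much space deduplication actually saves, query the pool's dedupratio property (a standard zpool property):
zpool get dedupratio tank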
12. Configuring ZFS cache
If you do not have a lot of RAM available, you can speed up ZFS by adding a separate drive for cache (L2ARC). You can do that with the following command:
zpool add tank cache ad7
In this example /dev/ad7 will be used only for ZFS cache.
To remove the cache drive use:
zpool remove tank ad7
where of course ad7 is your cache device.
Note! Be careful when adding a cache drive: if you forget to specify the cache keyword you might add the device to the pool as a stripe (data) device.
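To confirm the drive really was added as cache (and to watch how it is used), per-device statistics can be shown (standard zpool iostat flag):
zpool iostat -v tank 5 # per-vdev I/O stats every 5 seconds; cache devices are listed separately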
13. ZFS Benchmark
A quick way to exercise the pool and watch throughput:
dd if=/dev/urandom of=/tank/test_file bs=1M count=1024 # write 1GB of random data to the pool
zpool iostat 5 # show pool I/O statistics every 5 seconds
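For a rough read figure, read the test file back (a sketch; use a freshly written file or export/import the pool first, otherwise you may be measuring the ARC cache rather than the disks):
dd if=/tank/test_file of=/dev/null bs=1M # sequential read of the 1GB test file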
14. ZFS Monitor
You can use the zfs-stats utility from ports to get ZFS status and statistics:
cd /usr/ports/sysutils/zfs-stats
make install clean; rehash
zfs-stats -a
15. Errors you might get
cannot replace da2p1 with da3p1: cannot replace a replacing device
You typically get this when a previous replace involving that device is still in progress (or stuck); check zpool status and detach the leftover device before retrying.
16. Enabling compression
zfs set compression=lzjb tank
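To check how effective compression is on already-written data, query the compressratio property (a standard zfs property):
zfs get compressratio tank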
17. Checking for errors
To check for errors use:
zpool scrub tank
To stop scrub process:
zpool scrub -s tank
18. Dealing with errors
If you get an error like:
# zpool status -v
  pool: tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
  scan: scrub repaired 12K in 7h51m with 1 errors on Fri Nov 18 03:30:30 2011
config:

        NAME      STATE     READ WRITE CKSUM
        tank      ONLINE       0     0     0
          ada1p1  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        <metadata>:<0x24>
You can find where the error is using:
find -x /mnt/tank -inum 24
assuming that your ZFS pool is mounted in /mnt/tank.
Notes
- to find the ZFS version on your FreeBSD system: zpool upgrade -v
(You will get something like this: This system is currently running ZFS pool version 15.)
- when you test ZFS with USB sticks and you test replacing a drive, do a complete dd (zero it out) on the old stick first; otherwise you might get an error that the device is already used in that pool.
- after you've upgraded a raidz pool with bigger drives, the new size will appear only after you have upgraded all drives and then exported and imported the pool.
- if you remove all or some drives and then want to destroy the pool, it is possible that the zpool destroy pool_name command will freeze the terminal and the pool will not be destroyed. So it is better to first destroy the pools and then remove the drives.
- if you want to use your zfs pool with Samba then run:
zfs set sharesmb=on tankfs/fs
- if your scrub performance is slow, try adding the following value to /boot/loader.conf (lowers the ATA queue size):
vfs.zfs.vdev.max_pending="4"
- to find all options for your pool run: zfs get all
- if you get the following error when you try to destroy a ZFS GELI pool (using the geli detach /dev/gpt/local0 command, for example, where local0 is our pool):
geli: Cannot destroy device (error=16).
that is because the pool is still in use. To fix this, first export the pool with the command:
zpool export tank
then issue the detach command and it will work.
If you've upgraded your drives to bigger ones and your pool size did not increase, run the following command:
zpool online -e tank ada1 ada2 ada3 ada5 # where tank is your pool and ada1 ada2 ada3 ada5 are the drives in the pool
To remove a drive from a mirrored pool:
zpool offline tank ada4
zpool detach tank ada4