As an example, let's test application data redundancy by using different scenarios. First, you create a ZFS mirrored pool that contains a single mirror. To reduce the chance of losing data, you then distribute the data over two mirrors. Finally, to address a policy change, you reconfigure the pool to keep three copies of the data, which requires a three-way mirror.
- Verify that the Solaris 11 server is up and running. If it is not running, start it.
- Log in to the Solaris 11 server as the root user.
- Execute the zpool list command to display the ZFS pools that are currently configured on the server.
root@solaris11:~# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
rpool 31.8G 9.90G 21.9G 31% 1.00x ONLINE -
Note: Currently, the only ZFS pool available is the root pool (rpool), which hosts the ZFS root file system.
- Use the zpool status command to determine the disks that are currently configured for the ZFS rpool.
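The command for this step looks like the following; the output will reflect your own disk configuration:
root@solaris11:~# zpool status rpool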
Note: This display shows that rpool is using the local disk c3t0d0, so leave this disk untouched when creating new pools.
- Execute the format command to identify any additional disks configured in the system.
root@solaris11:~# format
Searching for disks…done
AVAILABLE DISK SELECTIONS:
0. c3t0d0 <ATA-VBOX HARDDISK -1.0 cyl 4174 alt 2 hd 255 sec 63>
   /pci@0,0/pci8086,2829@d/disk@0,0
1. c3t2d0 <ATA-VBOX HARDDISK -1.0 cyl 1022 alt 2 hd 64 sec 32>
   /pci@0,0/pci8086,2829@d/disk@2,0
2. c3t3d0 <ATA-VBOX HARDDISK -1.0 cyl 1022 alt 2 hd 64 sec 32>
   /pci@0,0/pci8086,2829@d/disk@3,0
3. c3t4d0 <ATA-VBOX HARDDISK -1.0 cyl 1022 alt 2 hd 64 sec 32>
   /pci@0,0/pci8086,2829@d/disk@4,0
4. c3t5d0 <ATA-VBOX HARDDISK -1.0 cyl 1022 alt 2 hd 64 sec 32>
   /pci@0,0/pci8086,2829@d/disk@5,0
5. c3t6d0 <ATA-VBOX HARDDISK -1.0 cyl 1022 alt 2 hd 64 sec 32>
   /pci@0,0/pci8086,2829@d/disk@6,0
6. c3t7d0 <ATA-VBOX HARDDISK -1.0 cyl 1022 alt 2 hd 64 sec 32>
   /pci@0,0/pci8086,2829@d/disk@7,0
7. c3t8d0 <ATA-VBOX HARDDISK -1.0 cyl 1022 alt 2 hd 64 sec 32>
   /pci@0,0/pci8086,2829@d/disk@8,0
8. c3t9d0 <ATA-VBOX HARDDISK -1.0 cyl 1022 alt 2 hd 64 sec 32>
   /pci@0,0/pci8086,2829@d/disk@9,0
^C
Note: The display shows that disks c3t2d0 through c3t9d0 are available for use.
To exit the format command, press Ctrl + C or Ctrl + D.
- Create a mirrored ZFS pool named oraclecrm by using the disks c3t2d0 and c3t3d0. Show the results.
root@solaris11:~# zpool create oraclecrm mirror c3t2d0 c3t3d0
root@solaris11:~# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
oraclecrm 1008M 112K 1008M 0% 1.00x ONLINE -
rpool 31.8G 9.90G 21.9G 31% 1.00x ONLINE -
- Add another mirror to the oraclecrm pool by using disks c3t4d0 and c3t5d0.
root@solaris11:~# zpool add oraclecrm mirror c3t4d0 c3t5d0
root@solaris11:~# zpool status oraclecrm
pool: oraclecrm
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
oraclecrm ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c3t2d0 ONLINE 0 0 0
c3t3d0 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
c3t4d0 ONLINE 0 0 0
c3t5d0 ONLINE 0 0 0
errors: No known data errors
Note: Suppose your company is concerned about losing data because of disk corruption, and you are asked to spread the data over multiple disks to mitigate the risk of data loss. To satisfy this objective, you create another mirror by using two free disks. Now the data is distributed over the two mirrors and their respective disks, which means that roughly 50% of the data is stored in the first mirror and 50% in the second.
- Check the capacity of both mirrors by issuing the zpool iostat -v oraclecrm command.
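The command as it would be entered, assuming the pool created above:
root@solaris11:~# zpool iostat -v oraclecrm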
Note: Here you see the two mirrors listed with their details. Note that the total free space in the pool, about 1.97 GB, has been roughly equally divided between the two mirrors (1008 MB and 1.02 GB, respectively). The small amounts in the alloc column reflect ZFS metadata overhead.
- Determine the mount point of the top-level file system.
root@solaris11:~# zfs list oraclecrm
NAME USED AVAIL REFER MOUNTPOINT
oraclecrm 94K 1.94G 31K /oraclecrm
Note: The mount point of the pool, or the top-level file system of oraclecrm, is /oraclecrm. This is the root of the pool; that is, all file systems created in the pool will reside within this mount point.
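As an illustration (not part of the original exercise), a child file system with a hypothetical name such as data would automatically mount under the pool's mount point, with no /etc/vfstab entry required:
root@solaris11:~# zfs create oraclecrm/data
root@solaris11:~# zfs list -r oraclecrm
The new file system would appear mounted at /oraclecrm/data.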
- Create a 2-MB file by using the mkfile command. Check the file storage allocation for the mirrors by running the zpool iostat -v command.
root@solaris11:~# mkfile 2m /oraclecrm/crmindex
root@solaris11:~# zpool iostat -v oraclecrm
Note: Your display may show different numbers.
Your CRM analyst has indicated that a small file is needed for storing the index of the CRM application, so you create a 2-MB file called crmindex in the pool.
Note how this 2 MB of storage has been roughly divided between the two mirrors. This shows that all CRM data will be distributed across the two mirrors.
Hint: In some cases, it may help to wait for some time before issuing the zpool iostat command to allow ZFS to complete writing to the mirrors.
- Use the zfs list oraclecrm command to list the capacity summary for the oraclecrm pool.
root@solaris11:~# zfs list oraclecrm
NAME USED AVAIL REFER MOUNTPOINT
oraclecrm 2.09M 1.94G 2.03M /oraclecrm
Note: The space used at the top-level file system has increased; it reflects the 2 MB of storage consumed by the crmindex file.
- Use the zpool destroy oraclecrm command to delete the pool. Confirm the deletion by using the zpool list command.
root@solaris11:~# zpool destroy oraclecrm
root@solaris11:~# zpool list oraclecrm
cannot open 'oraclecrm': no such pool
Note: Based on a review by the CRM analyst, there was a change in direction: it was agreed that you keep three copies of the data rather than distribute it over two separate mirror sets. To address this objective, you remove the current data redundancy configuration by destroying the pool so that you can create the new one.
- Re-create the mirrored ZFS pool named oraclecrm by using the disks c3t2d0 and c3t3d0. Show the results.
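A sketch of the commands, reusing the same two disks as before:
root@solaris11:~# zpool create oraclecrm mirror c3t2d0 c3t3d0
root@solaris11:~# zpool list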
Note: The purpose of the reconfiguration is to create a three-way mirror while reusing the existing storage disks. This also gives you a cleaner setup, with a single mirror.
- Use the zpool attach command to add another disk to the mirror to make it a three-way mirror. Confirm this action by using the zpool status command.
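A sketch of the attach step; here c3t4d0 is assumed to be the free disk being added (any free disk from the format listing will do):
root@solaris11:~# zpool attach oraclecrm c3t3d0 c3t4d0
root@solaris11:~# zpool status oraclecrm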
Note: This new configuration meets the objective of maintaining redundancy by keeping three copies of the data on three individual disks. The application data can be created as shown earlier.
Notice that the attach command specifies an existing disk in the mirror followed by the free disk to be included in it. The result is displayed by the status command, which also shows the resilvering action. The purpose of resilvering is to replicate the existing data onto the newly added disk.
- Use the zpool add command to add a cache device to the pool. Confirm this action by using the zpool status command.
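A sketch of this step, assuming the free disk c3t5d0 is used as the cache device:
root@solaris11:~# zpool add oraclecrm cache c3t5d0
root@solaris11:~# zpool status oraclecrm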
Note: This added device serves as a read cache (L2ARC) for the pool, boosting I/O performance. Your business analyst had indicated that you might need to boost the I/O performance of the pool.
- Your business analyst has now indicated that you do not need to boost pool performance because of the low volume of data. Use the zpool remove command to delete the cache device. Confirm this action by using the zpool status command.
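Assuming c3t5d0 was the cache device added in the previous step:
root@solaris11:~# zpool remove oraclecrm c3t5d0
root@solaris11:~# zpool status oraclecrm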
Note: The cache device no longer appears in the display.
- Use the zpool destroy command to delete the pool. Use the zpool list command to confirm the deletion.
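The same commands used for the first deletion apply here:
root@solaris11:~# zpool destroy oraclecrm
root@solaris11:~# zpool list oraclecrm
As before, zpool list reports that the pool cannot be opened, confirming the deletion.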
Note: The purpose of destroying this pool is to conclude working with mirrors. In an upcoming post, you will see how to create a pool with no mirrors to simplify working with the ZFS backup and recovery functions.
If you find this post useful, please follow, like, and share. Thank you for visiting my blog!