Oracle Solaris: How to Migrate Data Between Storage LUNs in a ZPOOL


In this article I will show how to migrate data between storage LUNs within a single ZFS pool on Solaris 10.
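At a high level the whole migration is a ZFS mirroring exercise, repeated once per LUN. The outline below is only a sketch; <pool>, <old-lun> and <new-lun> are placeholders rather than real device names:

zpool attach -f <pool> <old-lun> <new-lun>   # mirror each old LUN onto its new LUN
zpool status <pool>                          # watch resilvering until it completes
zpool detach <pool> <old-lun>                # drop the old LUN, keeping the data on the new one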

 

Let's take an example – we have the following ZFS pool, grid:

root@solaris10 # zpool status grid
pool: grid
state: ONLINE
scrub: none requested
config:

NAME STATE READ WRITE CKSUM
grid ONLINE 0 0 0
c14t60060E800428E400000028E40000010Cd0 ONLINE 0 0 0
c14t60060E800428E400000028E400000111d0 ONLINE 0 0 0
c14t60060E800428E400000028E400000113d0 ONLINE 0 0 0
c14t60060E800428E400000028E400000115d0 ONLINE 0 0 0
c14t60060E800428E400000028E400000117d0 ONLINE 0 0 0
c14t60060E800428E400000028E40000053Ed0 ONLINE 0 0 0
c14t60060E800428E400000028E400000536d0 ONLINE 0 0 0
c14t60060E800428E400000028E400000539d0 ONLINE 0 0 0
c14t60060E800428E400000028E400000547d0 ONLINE 0 0 0
c14t60060E800428E400000028E400000548d0 ONLINE 0 0 0

errors: No known data errors
root@solaris10 #

Each disk in the ZFS pool has the same size:

 

root@solaris10 # prtvtoc /dev/rdsk/c14t60060E800428E400000028E40000010Cd0s2
* /dev/rdsk/c14t60060E800428E400000028E40000010Cd0s2 partition map
*
* Dimensions:
* 512 bytes/sector
* 122177280 sectors
* 122177213 accessible sectors
*
* Flags:
* 1: unmountable
* 10: read-only
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 4 00 34 122160829 122160862
8 11 00 122160863 16384 122177246
root@solaris10 #
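Rather than running prtvtoc by hand for every LUN, a small loop can print the accessible sector count of each pool member. This is only an illustrative sketch; it assumes the device names can be scraped from the zpool status output as shown:

for d in `zpool status grid | awk '/c14t/ {print $1}'`
do
    # print "<device> <accessible sectors>" for each pool member
    printf "%s " "$d"
    prtvtoc /dev/rdsk/${d}s2 | awk '/accessible sectors/ {print $2}'
done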

Output from format:

root@solaris10 # format c14t60060E800428E400000028E40000010Cd0
selecting c14t60060E800428E400000028E40000010Cd0
[disk formatted]
/dev/dsk/c14t60060E800428E400000028E40000010Cd0s0 is part of active ZFS pool grid. Please see zpool(1M).


FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> p


PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> p
Current partition table (original):
Total disk sectors available: 122160862 + 16384 (reserved sectors)

Part Tag Flag First Sector Size Last Sector
0 usr wm 34 58.25GB 122160862
1 unassigned wm 0 0 0
2 unassigned wm 0 0 0
3 unassigned wm 0 0 0
4 unassigned wm 0 0 0
5 unassigned wm 0 0 0
6 unassigned wm 0 0 0
8 reserved wm 122160863 8.00MB 122177246

partition> ^D
root@solaris10 #

 

We have allocated 10 more LUNs from another storage array through the FC SAN switch (a short sketch for making them visible to the OS follows the list):

 

c14t60060E800545AA00000045AA00001134d0

c14t60060E800545AA00000045AA0000114Fd0

c14t60060E800545AA00000045AA0000122Bd0

c14t60060E800545AA00000045AA00001244d0

c14t60060E800545AA00000045AA00001245d0

c14t60060E800545AA00000045AA0000130Dd0

c14t60060E800545AA00000045AA0000134Cd0

c14t60060E800545AA00000045AA0000140Fd0

c14t60060E800545AA00000045AA0000147Ed0

c14t60060E800545AA00000045AA00001529d0
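Before the new LUNs can be attached, Solaris has to see them. A minimal, hypothetical discovery check might look like this (controller numbers and the grep pattern depend on your environment; here the pattern is simply the new array's WWN prefix as it appears in the device names above):

cfgadm -al                            # verify the FC attachment points show the new LUNs
devfsadm                              # create /dev/dsk and /dev/rdsk entries for them
echo | format | grep 60060E800545AA   # list disks and filter for the new array's LUNs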

 

All 10 new LUNs have the same size and sector count as the LUNs currently in the ZFS pool grid.

In the next steps I will show how to integrate the new LUNs from the new storage into the existing zpool without impacting the applications running on the production system:

 

Scheme: zpool attach -f

 

 
root@solaris10 # zpool attach -f grid c14t60060E800428E400000028E40000010Cd0 c14t60060E800545AA00000045AA00001134d0
We can now check how it looks:
 
root@solaris10 # zpool status grid
pool: grid
state: ONLINE
scrub: none requested
config:

NAME STATE READ WRITE CKSUM
grid ONLINE 0 0 0
mirror ONLINE 0 0 0
  c14t60060E800428E400000028E40000010Cd0 ONLINE 0 0 0
  c14t60060E800545AA00000045AA00001134d0 ONLINE 0 0 0 29K resilvered
c14t60060E800428E400000028E400000111d0 ONLINE 0 0 0
c14t60060E800428E400000028E400000113d0 ONLINE 0 0 0
c14t60060E800428E400000028E400000115d0 ONLINE 0 0 0
c14t60060E800428E400000028E400000117d0 ONLINE 0 0 0
c14t60060E800428E400000028E40000053Ed0 ONLINE 0 0 0
c14t60060E800428E400000028E400000536d0 ONLINE 0 0 0
c14t60060E800428E400000028E400000539d0 ONLINE 0 0 0
c14t60060E800428E400000028E400000547d0 ONLINE 0 0 0
c14t60060E800428E400000028E400000548d0 ONLINE 0 0 0

errors: No known data errors
root@solaris10 #

Now we attach the remaining nine LUNs from the new storage to the corresponding old LUNs in zpool grid (a scripted alternative is sketched after the list):

 
root@solaris10 # zpool attach -f grid c14t60060E800428E400000028E400000111d0 c14t60060E800545AA00000045AA0000114Fd0

root@solaris10 # zpool attach -f grid c14t60060E800428E400000028E400000113d0 c14t60060E800545AA00000045AA0000122Bd0

root@solaris10 # zpool attach -f grid c14t60060E800428E400000028E400000115d0 c14t60060E800545AA00000045AA00001244d0

root@solaris10 # zpool attach -f grid c14t60060E800428E400000028E400000117d0 c14t60060E800545AA00000045AA00001245d0

root@solaris10 # zpool attach -f grid c14t60060E800428E400000028E40000053Ed0 c14t60060E800545AA00000045AA0000130Dd0

root@solaris10 # zpool attach -f grid c14t60060E800428E400000028E400000536d0 c14t60060E800545AA00000045AA0000134Cd0

root@solaris10 # zpool attach -f grid c14t60060E800428E400000028E400000539d0 c14t60060E800545AA00000045AA0000140Fd0

root@solaris10 # zpool attach -f grid c14t60060E800428E400000028E400000547d0 c14t60060E800545AA00000045AA0000147Ed0

root@solaris10 # zpool attach -f grid c14t60060E800428E400000028E400000548d0 c14t60060E800545AA00000045AA00001529d0
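The same set of attaches could also be driven from a simple old-to-new mapping file. This is only an illustrative sketch; lun-map.txt is a hypothetical file containing one "old_lun new_lun" pair per line:

# lun-map.txt (hypothetical) holds one "old_lun new_lun" pair per line
while read old new
do
    zpool attach -f grid "$old" "$new"
done < lun-map.txt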
Then we check the status of zpool grid:
root@solaris10 # zpool status grid
pool: grid
state: ONLINE
scrub: none requested
config:

NAME STATE READ WRITE CKSUM
grid ONLINE 0 0 0
mirror ONLINE 0 0 0
  c14t60060E800428E400000028E40000010Cd0 ONLINE 0 0 0
  c14t60060E800545AA00000045AA00001134d0 ONLINE 0 0 0
mirror ONLINE 0 0 0
  c14t60060E800428E400000028E400000111d0 ONLINE 0 0 0
  c14t60060E800545AA00000045AA0000114Fd0 ONLINE 0 0 0
mirror ONLINE 0 0 0
  c14t60060E800428E400000028E400000113d0 ONLINE 0 0 0
  c14t60060E800545AA00000045AA0000122Bd0 ONLINE 0 0 0
mirror ONLINE 0 0 0
  c14t60060E800428E400000028E400000115d0 ONLINE 0 0 0
  c14t60060E800545AA00000045AA00001244d0 ONLINE 0 0 0
mirror ONLINE 0 0 0
  c14t60060E800428E400000028E400000117d0 ONLINE 0 0 0
  c14t60060E800545AA00000045AA00001245d0 ONLINE 0 0 0
mirror ONLINE 0 0 0
  c14t60060E800428E400000028E40000053Ed0 ONLINE 0 0 0
  c14t60060E800545AA00000045AA0000130Dd0 ONLINE 0 0 0
mirror ONLINE 0 0 0
  c14t60060E800428E400000028E400000536d0 ONLINE 0 0 0
  c14t60060E800545AA00000045AA0000134Cd0 ONLINE 0 0 0
mirror ONLINE 0 0 0
  c14t60060E800428E400000028E400000539d0 ONLINE 0 0 0
  c14t60060E800545AA00000045AA0000140Fd0 ONLINE 0 0 0
mirror ONLINE 0 0 0
  c14t60060E800428E400000028E400000547d0 ONLINE 0 0 0
  c14t60060E800545AA00000045AA0000147Ed0 ONLINE 0 0 0
mirror ONLINE 0 0 0
  c14t60060E800428E400000028E400000548d0 ONLINE 0 0 0
  c14t60060E800545AA00000045AA00001529d0 ONLINE 0 0 0 34.5K resilvered

errors: No known data errors

root@solaris10 #
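Before detaching any of the old LUNs, make sure every mirror has finished resilvering; detaching too early would discard the only complete copy of the data. A simple polling sketch, assuming the Solaris 10 status wording "resilver in progress":

# poll until no resilver is running on pool grid
while zpool status grid | grep "resilver in progress" > /dev/null
do
    sleep 60
done
zpool status grid | grep "scrub:"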

In the next step we detach the FC LUNs from the old storage:

 

root@solaris10 # zpool detach grid c14t60060E800428E400000028E40000010Cd0

root@solaris10 # zpool detach grid c14t60060E800428E400000028E400000111d0

root@solaris10 # zpool detach grid c14t60060E800428E400000028E400000113d0

root@solaris10 # zpool detach grid c14t60060E800428E400000028E400000115d0

root@solaris10 # zpool detach grid c14t60060E800428E400000028E400000117d0

root@solaris10 # zpool detach grid c14t60060E800428E400000028E40000053Ed0

root@solaris10 # zpool detach grid c14t60060E800428E400000028E400000536d0

root@solaris10 # zpool detach grid c14t60060E800428E400000028E400000539d0

root@solaris10 # zpool detach grid c14t60060E800428E400000028E400000547d0

root@solaris10 # zpool detach grid c14t60060E800428E400000028E400000548d0

 

If we now check the zpool grid status:

 

root@solaris10 # zpool status grid
pool: grid
state: ONLINE
scrub: none requested
config:

NAME STATE READ WRITE CKSUM
grid ONLINE 0 0 0
c14t60060E800545AA00000045AA00001134d0 ONLINE 0 0 0
c14t60060E800545AA00000045AA0000114Fd0 ONLINE 0 0 0
c14t60060E800545AA00000045AA0000122Bd0 ONLINE 0 0 0
c14t60060E800545AA00000045AA00001244d0 ONLINE 0 0 0
c14t60060E800545AA00000045AA00001245d0 ONLINE 0 0 0
c14t60060E800545AA00000045AA0000130Dd0 ONLINE 0 0 0
c14t60060E800545AA00000045AA0000134Cd0 ONLINE 0 0 0
c14t60060E800545AA00000045AA0000140Fd0 ONLINE 0 0 0
c14t60060E800545AA00000045AA0000147Ed0 ONLINE 0 0 0
c14t60060E800545AA00000045AA00001529d0 ONLINE 0 0 0

errors: No known data errors

root@solaris10 #
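Once the old LUNs are detached they can be unmapped on the old array and unzoned on the SAN switch. Afterwards, a minimal, hypothetical host-side cleanup might be:

cfgadm -al       # confirm the old LUNs are no longer configured
devfsadm -Cv     # remove dangling /dev/dsk and /dev/rdsk links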

All the data in the ZFS file systems of zpool grid is still accessible without any issue – no impact to the production system and no downtime!
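As a final sanity check, one can confirm the pool capacity and that the file systems are still there; for example (output will of course differ per system):

zpool list grid     # overall pool size, usage and health at a glance
zfs list -r grid    # every ZFS file system in the pool with its usage and mount point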

 

Eldar Aydayev ©

UNIX Systems Professional Engineer

Aydayev’s Investment Business Group

23rd Ave, Noriega St. 12A

San Francisco, CA 94116

E-mail: eldar@aydayev.com

URL: http://eldar.aydayev.com

LinkedIn: http://www.linkedin.com/in/eldar

Phone: +1 (650) 2062624