ZFS: a handy Remote Snapshot feature
Source: http://esxvm.pixnet.net/blog/post/23339134

Host requirements: two machines running OpenSolaris



STO01:192.168.10.1

STO02:192.168.10.2

Edit /etc/hosts on both hosts:

192.168.10.1 sto01

192.168.10.2 sto02

I finally got this feature working today.

Setup steps

1. Generate the SSH keys

On sto01:

ssh-keygen -t rsa   (press Enter twice to accept the defaults)

mv /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys

scp /root/.ssh/authorized_keys sto02:/root/.ssh

On sto02, generate a key in the same way:

ssh-keygen -t rsa

cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys

scp /root/.ssh/authorized_keys sto01:/root/.ssh
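At this point password-less login should work in both directions; a quick check before continuing (hostnames as configured above):

ssh sto02 hostname   (run on sto01; should print sto02 without asking for a password)
ssh sto01 hostname   (run on sto02; should print sto01 without asking for a password)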



2. Designate sto02 as the backup host

Create a ZFS file system for the backups:

zfs create zfspool/backup



3. On sto01

3-1 Create the snapshot

zfs snapshot -r zfspool/nfs@first

3-2 Send the data to sto02

zfs send zfspool/nfs@first | ssh sto02 zfs recv zfspool/backup@backup
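After this initial full send, later syncs only need to transfer the changes. A minimal sketch, not part of the original write-up, assuming a later recursive snapshot named zfspool/nfs@second; -F on the receiving side rolls the target back to its most recent snapshot before applying the increment:

zfs snapshot -r zfspool/nfs@second
zfs send -i zfspool/nfs@first zfspool/nfs@second | ssh sto02 zfs recv -F zfspool/backup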


Backups from ZFS snapshots 

Source: http://breden.org.uk/2008/05/12/home-fileserver-backups-from-zfs-snapshots/

Backups are critical to keeping your data protected, so let’s discover how to use ZFS snapshots to perform full and incremental backups.

In the last article on ZFS snapshots, we saw how to create snapshots of a file system. Now we will use those snapshots to create a full backup and subsequent incremental backups.

Performing backups

Obviously we only created a small number of files in the previous ZFS snapshots article, but we can still demonstrate the concept of using snapshots to perform full and incremental backups.

We’ll write our backups to a backup target file system called ‘tank/testback’.

This backup target could exist within the same pool, as in our simple example, but it would more likely live in another pool, either on the same physical machine or on any addressable host, reached over iSCSI or ssh.
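For the remote case the same send/receive pipeline simply runs over ssh. A minimal sketch, assuming a reachable host named 'backuphost' holding a pool called 'backuppool' (both names are illustrative only):

# zfs send tank/test@1 | ssh backuphost zfs receive backuppool/testback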

Full backup

Now let's do a full initial backup from the 'tank/test@1' snapshot:

# zfs send tank/test@1 | zfs receive tank/testback

Let’s take a look at the file systems to see what’s happened:

# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
tank               766G   598G  28.0K  /tank
tank/test         94.6K   598G  26.6K  /tank/test
tank/test@1       23.3K      -  26.6K  -
tank/test@2       21.3K      -  26.0K  -
tank/test@3       21.3K      -  26.0K  -
tank/test@4           0      -  26.6K  -
tank/testback     25.3K   598G  25.3K  /tank/testback
tank/testback@1       0      -  25.3K  -


Well, the command not only created the file system 'tank/testback' to contain the files from the backup, it also created a snapshot called 'tank/testback@1'. The snapshot is there so that you can recover the state of the backup at any point in time.

As we send more incremental backups, new snapshots will be created, enabling you to restore a file system from any snapshot. This is really powerful!

Let's take a look at the files in the full backup; it should contain the original files referenced by our initial snapshot 'tank/test@1'.

# ls -l /tank/testback
total 4
-rw-r--r-- 1 root root 15 May 12 14:50 a
-rw-r--r-- 1 root root 15 May 12 14:50 b

# cat /tank/testback/a /tank/testback/b
hello world: a
hello world: b

As we expected. Good.

Incremental backups

Now let's do an incremental backup, which will transmit only the differences between snapshots 'tank/test@1' and 'tank/test@2':

# zfs send -i tank/test@1 tank/test@2 | zfs receive tank/testback
cannot receive incremental stream: destination tank/testback has been modified since most recent snapshot

Oh dear! For some reason, doing the 'ls' of the directory when we inspected the backup contents has actually modified the file system.

The most likely cause is that reading the mounted backup file system updated its access times (atime), which counts as a modification since the most recent snapshot; this problem has been mentioned elsewhere as well.

It appears that the solution is to set the backup target file system to be read only, like this:

# zfs set readonly=on tank/testback
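A further option, not used in this article, would be to disable access-time updates on the backup target so that merely reading it no longer modifies it; this is only a suggestion based on the likely cause mentioned above:

# zfs set atime=off tank/testback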

Another possibility is to use the '-F' switch with the 'zfs receive' command. I don't know which is the recommended solution, but I will use the switch for now, since I don't want to make the file system read-only while we still have several incremental backups to perform:

# zfs send -i tank/test@1 tank/test@2 | zfs receive -F tank/testback

Let's take a look at the files in the backup now; it should match the state captured in snapshot 'tank/test@2', i.e. contain just file 'b':

# ls -l /tank/testback
total 2
-rw-r--r-- 1 root root 15 May 12 14:50 b
# cat /tank/testback/b
hello world: b

Good, as expected.

Now let’s send all the remaining incremental backups:

# zfs send -i tank/test@2 tank/test@3 | zfs receive -F tank/testback
# zfs send -i tank/test@3 tank/test@4 | zfs receive -F tank/testback
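Sending each increment by hand quickly gets tedious, so in practice this step would usually be scripted. A minimal sketch, not from the original article, assuming the snapshots are numbered sequentially as in this example:

#!/bin/sh
# Replay every increment tank/test@N -> tank/test@N+1 into tank/testback.
prev=1
for cur in 2 3 4; do
    zfs send -i tank/test@$prev tank/test@$cur | zfs receive -F tank/testback
    prev=$cur
done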

# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
tank               766G   598G  29.3K  /tank
tank/test         94.6K   598G  26.6K  /tank/test
tank/test@1       23.3K      -  26.6K  -
tank/test@2       21.3K      -  26.0K  -
tank/test@3       21.3K      -  26.0K  -
tank/test@4           0      -  26.6K  -
tank/testback     93.2K   598G  26.6K  /tank/testback
tank/testback@1   22.0K      -  25.3K  -
tank/testback@2   21.3K      -  26.0K  -
tank/testback@3   21.3K      -  26.0K  -
tank/testback@4       0      -  26.6K  -

Here is the final state of the backup target file system after sending all the incremental backups.

As we would expect, it matches the source file system contents:

# cat /tank/testback/b /tank/testback/c
hello world: b
modified
hello world: c


Restore a backup

Now let’s restore all of our four backup target snapshots into four separate file systems, so we can demonstrate how to recover any or all of the data that we snapshotted and backed up:

# zfs send tank/testback@1 | zfs recv tank/fs1
# zfs send tank/testback@2 | zfs recv tank/fs2
# zfs send tank/testback@3 | zfs recv tank/fs3
# zfs send tank/testback@4 | zfs recv tank/fs4

Let’s look at the file systems:

# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
tank               766G   598G  33.3K  /tank
tank/fs1          25.3K   598G  25.3K  /tank/fs1
tank/fs1@1            0      -  25.3K  -
tank/fs2          26.0K   598G  26.0K  /tank/fs2
tank/fs2@2            0      -  26.0K  -
tank/fs3          26.0K   598G  26.0K  /tank/fs3
tank/fs3@3            0      -  26.0K  -
tank/fs4          26.6K   598G  26.6K  /tank/fs4
tank/fs4@4            0      -  26.6K  -
tank/test         94.6K   598G  26.6K  /tank/test
tank/test@1       23.3K      -  26.6K  -
tank/test@2       21.3K      -  26.0K  -
tank/test@3       21.3K      -  26.0K  -
tank/test@4           0      -  26.6K  -
tank/testback     93.2K   598G  26.6K  /tank/testback
tank/testback@1   22.0K      -  25.3K  -
tank/testback@2   21.3K      -  26.0K  -
tank/testback@3   21.3K      -  26.0K  -
tank/testback@4       0      -  26.6K  -

Let's check 'tank/fs1'; it should match the state of the original file system when the 'tank/test@1' snapshot was taken:

# ls -l /tank/fs1
total 4
-rw-r--r-- 1 root root 15 May 12 14:50 a
-rw-r--r-- 1 root root 15 May 12 14:50 b
# cat /tank/fs1/a /tank/fs1/b
hello world: a
hello world: b

Perfect. Now let's check 'tank/fs2'; it should match the state of the original file system when the 'tank/test@2' snapshot was taken:

# ls -l /tank/fs2
total 2
-rw-r--r-- 1 root root 15 May 12 14:50 b
# cat /tank/fs2/b
hello world: b

Perfect. Now let's check 'tank/fs3'; it should match the state of the original file system when the 'tank/test@3' snapshot was taken:

# ls -l /tank/fs3
total 2
-rw-r--r-- 1 root root 24 May 12 17:35 b
# cat /tank/fs3/b
hello world: b
modified

Perfect. Now let's check 'tank/fs4'; it should match the state of the original file system when the 'tank/test@4' snapshot was taken:

# ls -l /tank/fs4
total 4
-rw-r--r-- 1 root root 24 May 12 17:35 b
-rw-r--r-- 1 root root 15 May 12 18:58 c
# cat /tank/fs4/b /tank/fs4/c
hello world: b
modified
hello world: c

Great!

Conclusion
Hopefully, you’ve now seen the power of snapshots. In future posts, I will show what else can be done with snapshots.

For more ZFS Home Fileserver articles see here: A Home Fileserver using ZFS. Alternatively, see related articles in the following categories: ZFS, Storage, Fileservers.


ZFS volume usage examples
ZFS Volumes
The following example creates a 5 GB ZFS volume at mypool/vol:
zfs create -V 5GB mypool/vol


Using a ZFS Volume for Swap
The following example adds the 5 GB volume as a swap device and then lists the swap configuration:
swap -a /dev/zvol/dsk/mypool/vol
swap -l

swapfile                  dev    swaplo  blocks    free
/dev/dsk/c1t0d0s1         30,1        8  4209016   4209016
/dev/zvol/dsk/mypool/vol  181,1       8  10485752  10485752
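If the volume is no longer needed for swap, it can be removed from the swap configuration again (for example before destroying the volume):

swap -d /dev/zvol/dsk/mypool/vol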


Using a ZFS Volume as a Solaris iSCSI Target
# zfs create -V 2g mypool/volumes/v2
# zfs set shareiscsi=on mypool/volumes/v2
# iscsitadm list target

Target: mypool/volumes/v2
    iSCSI Name: iqn.1986-03.com.sun:02:984fe301-c412-ccc1-cc80-cf9a72aa062a
    Connections: 0
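To stop exporting the volume as an iSCSI target, simply turn the property off again:

# zfs set shareiscsi=off mypool/volumes/v2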


ZFS commands
ZFS Command Examples

Create a ZFS storage pool

# zpool create mpool mirror c1t0d0 c2t0d0

Add capacity to a ZFS storage pool

# zpool add mpool mirror c5t0d0 c6t0d0

Add hot spares to a ZFS storage pool

# zpool add mpool spare c6t0d0 c7t0d0

Replace a device in a storage pool

# zpool replace mpool c6t0d0 [c7t0d0]

Display storage pool capacity

# zpool list

Display storage pool status

# zpool status

Scrub a pool

Checks that all devices in a pool are healthy. On a RAID-Z pool the verification takes some time, and it can take considerably longer under heavy I/O, after a device has had problems, or after a device was deliberately removed and re-attached. After running this command, check the progress with zpool status -v mpool.

# zpool scrub mpool
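To watch the scrub's progress, as mentioned above:

# zpool status -v mpool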

Remove a pool

# zpool destroy mpool

Create a ZFS file system

# zfs create mpool/local

Set the mount point of a ZFS file system

# zfs set mountpoint=/usr/local mpool/local

Create a child ZFS file system

# zfs create mpool/devel/data

Remove a file system

# zfs destroy mpool/devel

Take a snapshot of a file system

# zfs snapshot mpool/devel/data@today
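Existing snapshots can be listed afterwards by restricting the list command to the snapshot type:

# zfs list -t snapshot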

Roll back to a file system snapshot

# zfs rollback -r mpool/devel/data@today

Create a writable clone from a snapshot

# zfs clone mpool/devel/data@today mpool/clones/devdata

Remove a snapshot

# zfs destroy mpool/devel/data@today

Enable compression on a file system

# zfs set compression=on mpool/clones/devdata

Disable compression on a file system (revert the property to its inherited value)

# zfs inherit compression mpool/clones/devdata

Set a quota on a file system

# zfs set quota=60G mpool/devel/data
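To confirm the setting, the property can be read back with zfs get, for example:

# zfs get quota mpool/devel/data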

Set a reservation on a new file system

# zfs create -o reserv=20G mpool/devel/admin

Share a file system over NFS

# zfs set sharenfs=on mpool/devel/data

Create a ZFS volume

# zfs create -V 2GB mpool/vol

Remove a ZFS volume

# zfs destroy mpool/vol


ERP: Principles and Applications

The Master Production Schedule (MPS) is the plan that specifies how many units of each specific end product will be produced in each specific time period. An end product here means a finished good that the enterprise completes and ships, identified down to its exact variety and model. The time periods are usually weeks, though in some cases they may be days, ten-day periods, or months.

The MPS is normally built from customer contracts (orders) and market forecasts. It turns the product families of the business plan or production outline into concrete items, making it the main input for exploding the material requirements plan and bridging the transition from aggregate planning to detailed planning. The MPS ultimately becomes the target the production department executes against and the basis for assessing the plant's service level. In an MRP system, the master production schedule is the driving set of planning data, reflecting what the enterprise intends to produce, when, and in what quantity. It must take into account customer orders and forecasts, open orders, available material, existing capacity, management policies and objectives, and so on.
More...

