9-6-2 man zfs ( ubuntu 1404 )
=============================
zfs(8) - System Administration Commands

**NAME**

zfs - configures ZFS file systems

**SYNOPSIS**

    zfs [-?]
    zfs create [-p] [-o property=value] ... filesystem
    zfs create [-ps] [-b blocksize] [-o property=value] ... -V size volume
    zfs destroy [-fnpRrv] filesystem|volume
    zfs destroy [-dnpRrv] filesystem|volume@snap[%snap][,...]
    zfs destroy filesystem|volume#bookmark
    zfs snapshot | snap [-r] [-o property=value] ... filesystem@snapname|volume@snapname ...
    zfs rollback [-rRf] snapshot
    zfs clone [-p] [-o property=value] ... snapshot filesystem|volume
    zfs promote clone-filesystem
    zfs rename [-f] filesystem|volume|snapshot filesystem|volume|snapshot
    zfs rename [-fp] filesystem|volume filesystem|volume
    zfs rename -r snapshot snapshot
    zfs list [-r|-d depth] [-Hp] [-o property[,property]...] [-t type[,type]..] [-s property] ... [-S property] ... [filesystem|volume|snapshot] ...
    zfs set property=value filesystem|volume|snapshot ...
    zfs get [-r|-d depth] [-Hp] [-o field[,...]] [-t type[,...]] [-s source[,...]] "all" | property[,...] filesystem|volume|snapshot ...
    zfs inherit [-rS] property filesystem|volume|snapshot ...
    zfs upgrade [-v]
    zfs upgrade [-r] [-V version] -a | filesystem
    zfs userspace [-Hinp] [-o field[,...]] [-s field] ... [-S field] ... [-t type[,...]] filesystem|snapshot
    zfs groupspace [-Hinp] [-o field[,...]] [-s field] ... [-S field] ... [-t type[,...]] filesystem|snapshot
    zfs mount
    zfs mount [-vO] [-o options] -a | filesystem
    zfs unmount | umount [-f] -a | filesystem|mountpoint
    zfs share -a | filesystem
    zfs unshare -a filesystem|mountpoint
    zfs bookmark snapshot bookmark
    zfs send [-DnPpRveL] [-[iI] snapshot] snapshot
    zfs send [-eL] [-i snapshot|bookmark] filesystem|volume|snapshot
    zfs receive | recv [-vnFus] filesystem|volume|snapshot
    zfs receive | recv [-vnFus] [-d|-e] filesystem
    zfs allow filesystem|volume
    zfs allow [-ldug] "everyone"|user|group[,...] perm|@setname[,...] filesystem|volume
    zfs allow [-ld] -e perm|@setname[,...] filesystem|volume
    zfs allow -c perm|@setname[,...] filesystem|volume
    zfs allow -s @setname perm|@setname[,...] filesystem|volume
    zfs unallow [-rldug] "everyone"|user|group[,...] [perm|@setname[,...]] filesystem|volume
    zfs unallow [-rld] -e [perm|@setname[,...]] filesystem|volume
    zfs unallow [-r] -c [perm|@setname[...]] filesystem|volume
    zfs unallow [-r] -s @setname [perm|@setname[,...]] filesystem|volume
    zfs hold [-r] tag snapshot...
    zfs holds [-r] snapshot...
    zfs release [-r] tag snapshot...
    zfs diff [-FHt] snapshot snapshot|filesystem

**DESCRIPTION**

The zfs command configures ZFS datasets within a ZFS storage pool, as described in zpool(8). A dataset is identified by a unique path within the ZFS namespace. For example: pool/{filesystem,volume,snapshot}, where the maximum length of a dataset name is MAXNAMELEN (256 bytes). A dataset can be one of the following:

**file system**
A ZFS dataset of type filesystem can be mounted within the standard system namespace and behaves like other file systems. While ZFS file systems are designed to be POSIX compliant, known issues exist that prevent compliance in some cases. Applications that depend on standards conformance might fail due to nonstandard behavior when checking file system free space.

**volume**
A logical volume exported as a raw or block device. This type of dataset should only be used under special circumstances. File systems are typically used in most environments.

**snapshot**
A read-only version of a file system or volume at a given point in time. It is specified as filesystem@name or volume@name.

**bookmark**
Much like a snapshot, but without the hold on on-disk data. It can be used as the source of a send (but not for a receive). It is specified as filesystem#name or volume#name.

**ZFS File System Hierarchy**

A ZFS storage pool is a logical collection of devices that provide space for datasets. A storage pool is also the root of the ZFS file system hierarchy. The root of the pool can be accessed as a file system, such as mounting and unmounting, taking snapshots, and setting properties. The physical storage characteristics, however, are managed by the zpool(8) command. See zpool(8) for more information on creating and administering pools.

**Snapshots**

A snapshot is a read-only copy of a file system or volume. Snapshots can be created extremely quickly, and initially consume no additional space within the pool. As data within the active dataset changes, the snapshot consumes more data than would otherwise be shared with the active dataset.

Snapshots can have arbitrary names. Snapshots of volumes can be cloned or rolled back. Visibility is determined by the snapdev property of the parent volume.

File system snapshots can be accessed under the .zfs/snapshot directory in the root of the file system. Snapshots are automatically mounted on demand and may be unmounted at regular intervals. The visibility of the .zfs directory can be controlled by the snapdir property.

**Bookmarks**

A bookmark is like a snapshot, a read-only copy of a file system or volume. Bookmarks can be created extremely quickly, compared to snapshots, and they consume no additional space within the pool. Bookmarks can also have arbitrary names, much like snapshots.

Unlike snapshots, bookmarks can not be accessed through the filesystem in any way. From a storage standpoint a bookmark just provides a way to reference when a snapshot was created as a distinct object. Bookmarks are initially tied to a snapshot, not the filesystem/volume, and they will survive if the snapshot itself is destroyed. Since they are very light weight there's little incentive to destroy them.

**Clones**

A clone is a writable volume or file system whose initial contents are the same as another dataset. As with snapshots, creating a clone is nearly instantaneous, and initially consumes no additional space. Clones can only be created from a snapshot. When a snapshot is cloned, it creates an implicit dependency between the parent and child. Even though the clone is created somewhere else in the dataset hierarchy, the original snapshot cannot be destroyed as long as a clone exists. The origin property exposes this dependency, and the destroy command lists any such dependencies, if they exist.

The clone parent-child dependency relationship can be reversed by using the promote subcommand. This causes the "origin" file system to become a clone of the specified file system, which makes it possible to destroy the file system that the clone was created from.
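The snapshot and clone mechanics above map onto a handful of zfs subcommands. A minimal sketch, assuming a hypothetical dataset rpool/data (the names are placeholders, not from this handbook):

```
# Take a point-in-time, read-only snapshot of the dataset
zfs snapshot rpool/data@before-upgrade

# Browse the snapshots of that dataset
zfs list -t snapshot -r rpool/data

# Create a writable clone of the snapshot; the snapshot cannot be
# destroyed while the clone exists (see the origin property)
zfs clone rpool/data@before-upgrade rpool/data-test

# Reverse the parent/child dependency so the original can later be destroyed
zfs promote rpool/data-test

# Or discard all changes made after the snapshot
zfs rollback rpool/data@before-upgrade
```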
**Mount Points**

Creating a ZFS file system is a simple operation, so the number of file systems per system is likely to be numerous. To cope with this, ZFS automatically manages mounting and unmounting file systems without the need to edit the /etc/fstab file. All automatically managed file systems are mounted by ZFS at boot time.

By default, file systems are mounted under /path, where path is the name of the file system in the ZFS namespace. Directories are created and destroyed as needed.

A file system can also have a mount point set in the mountpoint property. This directory is created as needed, and ZFS automatically mounts the file system when the zfs mount -a command is invoked (without editing /etc/fstab). The mountpoint property can be inherited, so if pool/home has a mount point of /export/stuff, then pool/home/user automatically inherits a mount point of /export/stuff/user. A file system mountpoint property of none prevents the file system from being mounted.

If needed, ZFS file systems can also be managed with traditional tools (mount, umount, /etc/fstab). If a file system's mount point is set to legacy, ZFS makes no attempt to manage the file system, and the administrator is responsible for mounting and unmounting the file system.

**Deduplication**

Deduplication is the process for removing redundant data at the block level, reducing the total amount of data stored. If a file system has the dedup property enabled, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared among files.

WARNING: DO NOT ENABLE DEDUPLICATION UNLESS YOU NEED IT AND KNOW EXACTLY WHAT YOU ARE DOING!

Deduplicating data is a very resource-intensive operation. It is generally recommended that you have at least 1.25 GB of RAM per 1 TB of storage when you enable deduplication, but calculating the exact requirements is a somewhat complicated affair. Please see the Oracle Dedup Guide for more information.

Enabling deduplication on an improperly-designed system will result in extreme performance issues (extremely slow filesystem and snapshot deletions, etc.) and can potentially lead to data loss (i.e. an unimportable pool due to memory exhaustion) if your system is not built for this purpose. Deduplication affects the processing power (CPU), disks (and the controller) as well as primary (real) memory. Before creating a pool with deduplication enabled, ensure that you have planned your hardware requirements appropriately and implemented appropriate recovery practices, such as regular backups.

Unless necessary, deduplication should NOT be enabled on a system. Instead, consider using compression=lz4, as a less resource-intensive alternative.
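As the warning above suggests, lz4 compression is the usual substitute for deduplication on ordinary hardware. A minimal sketch, again assuming a hypothetical dataset rpool/data:

```
# Enable lightweight lz4 compression (only affects newly written data)
zfs set compression=lz4 rpool/data

# Verify the setting and see the ratio actually achieved
zfs get compression,compressratio rpool/data
```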
**Native Properties**

Properties are divided into two types, native properties and user-defined (or "user") properties. Native properties either export internal statistics or control ZFS behavior. In addition, native properties are either editable or read-only. User properties have no effect on ZFS behavior, but you can use them to annotate datasets in a way that is meaningful in your environment. For more information about user properties, see the "User Properties" section, below.

Every dataset has a set of properties that export statistics about the dataset as well as control various behaviors. Properties are inherited from the parent unless overridden by the child. Some properties apply only to certain types of datasets (file systems, volumes, or snapshots).

The values of numeric properties can be specified using human-readable suffixes (for example, k, KB, M, Gb, and so forth, up to Z for zettabyte). The following are all valid (and equal) specifications: 1536M, 1.5g, 1.50GB. The values of non-numeric properties are case sensitive and must be lowercase, except for mountpoint, sharenfs, and sharesmb.

The following native properties consist of read-only statistics about the dataset. These properties can be neither set, nor inherited. Native properties apply to all dataset types unless otherwise noted.

`available`
The amount of space available to the dataset and all its children, assuming that there is no other activity in the pool. Because space is shared within a pool, availability can be limited by any number of factors, including physical pool size, quotas, reservations, or other datasets within the pool. This property can also be referred to by its shortened column name, avail.

`compressratio`
For non-snapshots, the compression ratio achieved for the used space of this dataset, expressed as a multiplier. The used property includes descendant datasets, and, for clones, does not include the space shared with the origin snapshot. For snapshots, the compressratio is the same as the refcompressratio property. Compression can be turned on by running: zfs set compression=on dataset. The default value is off.

`creation`
The time this dataset was created.

`clones`
For snapshots, this property is a comma-separated list of filesystems or volumes which are clones of this snapshot. The clones' origin property is this snapshot. If the clones property is not empty, then this snapshot can not be destroyed (even with the -r or -f options).

`defer_destroy`
This property is on if the snapshot has been marked for deferred destruction by using the zfs destroy -d command. Otherwise, the property is off.

`filesystem_count`
The total number of filesystems and volumes that exist under this location in the dataset tree. This value is only available when a filesystem_limit has been set somewhere in the tree under which the dataset resides.

`logicalreferenced`
The amount of space that is "logically" accessible by this dataset. See the referenced property. The logical space ignores the effect of the compression and copies properties, giving a quantity closer to the amount of data that applications see. However, it does include space consumed by metadata. This property can also be referred to by its shortened column name, lrefer.

`logicalused`
The amount of space that is "logically" consumed by this dataset and all its descendents. See the used property. The logical space ignores the effect of the compression and copies properties, giving a quantity closer to the amount of data that applications see. However, it does include space consumed by metadata. This property can also be referred to by its shortened column name, lused.

`mounted`
For file systems, indicates whether the file system is currently mounted. This property can be either yes or no.

`origin`
For cloned file systems or volumes, the snapshot from which the clone was created. See also the clones property.

`referenced`
The amount of data that is accessible by this dataset, which may or may not be shared with other datasets in the pool. When a snapshot or clone is created, it initially references the same amount of space as the file system or snapshot it was created from, since its contents are identical. This property can also be referred to by its shortened column name, refer.

`refcompressratio`
The compression ratio achieved for the referenced space of this dataset, expressed as a multiplier. See also the compressratio property.

`snapshot_count`
The total number of snapshots that exist under this location in the dataset tree. This value is only available when a snapshot_limit has been set somewhere in the tree under which the dataset resides.

`type`
The type of dataset: filesystem, volume, or snapshot.

`used`
The amount of space consumed by this dataset and all its descendents. This is the value that is checked against this dataset's quota and reservation. The space used does not include this dataset's reservation, but does take into account the reservations of any descendent datasets. The amount of space that a dataset consumes from its parent, as well as the amount of space that is freed if this dataset is recursively destroyed, is the greater of its space used and its reservation.

When snapshots (see the "Snapshots" section) are created, their space is initially shared between the snapshot and the file system, and possibly with previous snapshots. As the file system changes, space that was previously shared becomes unique to the snapshot, and counted in the snapshot's space used. Additionally, deleting snapshots can increase the amount of space unique to (and used by) other snapshots.

The amount of space used, available, or referenced does not take into account pending changes. Pending changes are generally accounted for within a few seconds. Committing a change to a disk using fsync(2) or O_SYNC does not necessarily guarantee that the space usage information is updated immediately.

`usedby*`
The usedby* properties decompose the used properties into the various reasons that space is used. Specifically, used = usedbychildren + usedbydataset + usedbyrefreservation + usedbysnapshots. These properties are only available for datasets created on zpool "version 13" pools.

`usedbychildren`
The amount of space used by children of this dataset, which would be freed if all the dataset's children were destroyed.

`usedbydataset`
The amount of space used by this dataset itself, which would be freed if the dataset were destroyed (after first removing any refreservation and destroying any necessary snapshots or descendents).

`usedbyrefreservation`
The amount of space used by a refreservation set on this dataset, which would be freed if the refreservation was removed.

`usedbysnapshots`
The amount of space consumed by snapshots of this dataset. In particular, it is the amount of space that would be freed if all of this dataset's snapshots were destroyed. Note that this is not simply the sum of the snapshots' used properties because space can be shared by multiple snapshots.

`userused@user`
The amount of space consumed by the specified user in this dataset. Space is charged to the owner of each file, as displayed by ls -l. The amount of space charged is displayed by du and ls -s. See the zfs userspace subcommand for more information.

Unprivileged users can access only their own space usage. The root user, or a user who has been granted the userused privilege with zfs allow, can access everyone's usage.

The userused@... properties are not displayed by zfs get all. The user's name must be appended after the @ symbol, using one of the following forms:

- POSIX name (for example, joe)
- POSIX numeric ID (for example, 789)
- SID name (for example, joe.smith@mydomain)
- SID numeric ID (for example, S-1-123-456-789)

`userrefs`
This property is set to the number of user holds on this snapshot. User holds are set by using the zfs hold command.

`groupused@group`
The amount of space consumed by the specified group in this dataset. Space is charged to the group of each file, as displayed by ls -l. See the userused@user property for more information.

Unprivileged users can only access their own groups' space usage. The root user, or a user who has been granted the groupused privilege with zfs allow, can access all groups' usage.

`volblocksize=blocksize`
For volumes, specifies the block size of the volume. The blocksize cannot be changed once the volume has been written, so it should be set at volume creation time. The default blocksize for volumes is 8 Kbytes. Any power of 2 from 512 bytes to 128 Kbytes is valid. This property can also be referred to by its shortened column name, volblock.

`written`
The amount of referenced space written to this dataset since the previous snapshot.

`written@snapshot`
The amount of referenced space written to this dataset since the specified snapshot. This is the space that is referenced by this dataset but was not referenced by the specified snapshot.

The snapshot may be specified as a short snapshot name (just the part after the @), in which case it will be interpreted as a snapshot in the same filesystem as this dataset. The snapshot may also be specified as a full snapshot name (filesystem@snapshot), which for clones may be a snapshot in the origin's filesystem (or the origin of the origin's filesystem, etc).
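These read-only statistics are the figures normally inspected with zfs list and zfs get. A short sketch, assuming a hypothetical pool rpool with a dataset rpool/data:

```
# Space accounting at a glance (avail and refer are the shortened column names)
zfs list -o name,used,avail,refer,compressratio,mountpoint -r rpool

# Break the used figure down into its usedby* components
zfs get used,usedbydataset,usedbysnapshots,usedbychildren,usedbyrefreservation rpool/data

# Per-user space accounting on a file system
zfs userspace rpool/data
```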
The following native properties can be used to change the behavior of a ZFS dataset.

`aclinherit=discard | noallow | restricted | passthrough | passthrough-x`
Controls how ACL entries are inherited when files and directories are created. A file system with an aclinherit property of discard does not inherit any ACL entries. A file system with an aclinherit property value of noallow only inherits inheritable ACL entries that specify "deny" permissions. The property value restricted (the default) removes the write_acl and write_owner permissions when the ACL entry is inherited. A file system with an aclinherit property value of passthrough inherits all inheritable ACL entries without any modifications made to the ACL entries when they are inherited. A file system with an aclinherit property value of passthrough-x has the same meaning as passthrough, except that the owner@, group@, and everyone@ ACEs inherit the execute permission only if the file creation mode also requests the execute bit.

When the property value is set to passthrough, files are created with a mode determined by the inheritable ACEs. If no inheritable ACEs exist that affect the mode, then the mode is set in accordance to the requested mode from the application.

The aclinherit property does not apply to Posix ACLs.

`acltype=noacl | posixacl`
Controls whether ACLs are enabled and if so what type of ACL to use. When a file system has the acltype property set to noacl (the default) then ACLs are disabled. Setting the acltype property to posixacl indicates Posix ACLs should be used. Posix ACLs are specific to Linux and are not functional on other platforms. Posix ACLs are stored as an xattr and therefore will not overwrite any existing ZFS/NFSv4 ACLs which may be set. Currently only posixacls are supported on Linux.

To obtain the best performance when setting posixacl, users are strongly encouraged to set the xattr=sa property. This will result in the Posix ACL being stored more efficiently on disk. But as a consequence of this, all new xattrs will only be accessible from ZFS implementations which support the xattr=sa property. See the xattr property for more details.

`atime=on | off`
Controls whether the access time for files is updated when they are read. Turning this property off avoids producing write traffic when reading files and can result in significant performance gains, though it might confuse mailers and other similar utilities. The default value is on. See also relatime below.

`canmount=on | off | noauto`
If this property is set to off, the file system cannot be mounted, and is ignored by zfs mount -a. Setting this property to off is similar to setting the mountpoint property to none, except that the dataset still has a normal mountpoint property, which can be inherited. Setting this property to off allows datasets to be used solely as a mechanism to inherit properties. One example of setting canmount=off is to have two datasets with the same mountpoint, so that the children of both datasets appear in the same directory, but might have different inherited characteristics.

When the noauto option is set, a dataset can only be mounted and unmounted explicitly. The dataset is not mounted automatically when the dataset is created or imported, nor is it mounted by the zfs mount -a command or unmounted by the zfs unmount -a command.

This property is not inherited.

`checksum=on | off | fletcher2 | fletcher4 | sha256`
Controls the checksum used to verify data integrity. The default value is on, which automatically selects an appropriate algorithm (currently, fletcher4, but this may change in future releases). The value off disables integrity checking on user data. Disabling checksums is NOT a recommended practice.

Changing this property affects only newly-written data.

`compression=on | off | lzjb | lz4 | gzip | gzip-N | zle`
Controls the compression algorithm used for this dataset.

Setting compression to on indicates that the current default compression algorithm should be used. The default balances compression and decompression speed, with compression ratio, and is expected to work well on a wide variety of workloads. Unlike all other settings for this property, on does not select a fixed compression type. As new compression algorithms are added to ZFS and enabled on a pool, the default compression algorithm may change. The current default compression algorithm is either lzjb or, if the `lz4_compress` feature is enabled, lz4.

The lzjb compression algorithm is optimized for performance while providing decent data compression.

The lz4 compression algorithm is a high-performance replacement for the lzjb algorithm. It features significantly faster compression and decompression, as well as a moderately higher compression ratio than lzjb, but can only be used on pools with the `lz4_compress` feature set to enabled. See zpool-features(5) for details on ZFS feature flags and the `lz4_compress` feature.

The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)).

The zle compression algorithm compresses runs of zeros.

This property can also be referred to by its shortened column name, compress. Changing this property affects only newly-written data.

`copies=1 | 2 | 3`
Controls the number of copies of data stored for this dataset. These copies are in addition to any redundancy provided by the pool, for example, mirroring or RAID-Z. The copies are stored on different disks, if possible. The space used by multiple copies is charged to the associated file and dataset, changing the used property and counting against quotas and reservations.

Changing this property only affects newly-written data. Therefore, set this property at file system creation time by using the -o copies=N option.

`dedup=on | off | verify | sha256[,verify]`
Controls whether deduplication is in effect for a dataset. The default value is off. The default checksum used for deduplication is sha256 (subject to change). When dedup is enabled, the dedup checksum algorithm overrides the checksum property. Setting the value to verify is equivalent to specifying sha256,verify.

If the property is set to verify, then, whenever two blocks have the same signature, ZFS will do a byte-for-byte comparison with the existing block to ensure that the contents are identical.

Unless necessary, deduplication should NOT be enabled on a system. See Deduplication above.

`devices=on | off`
Controls whether device nodes can be opened on this file system. The default value is on.

`exec=on | off`
Controls whether processes can be executed from within this file system. The default value is on.

`mlslabel=label | none`
The mlslabel property is a sensitivity label that determines if a dataset can be mounted in a zone on a system with Trusted Extensions enabled. If the labeled dataset matches the labeled zone, the dataset can be mounted and accessed from the labeled zone.

When the mlslabel property is not set, the default value is none. Setting the mlslabel property to none is equivalent to removing the property.

The mlslabel property can be modified only when Trusted Extensions is enabled and only with appropriate privilege. Rights to modify it cannot be delegated. When changing a label to a higher label or setting the initial dataset label, the {PRIV_FILE_UPGRADE_SL} privilege is required. When changing a label to a lower label or the default (none), the {PRIV_FILE_DOWNGRADE_SL} privilege is required. Changing the dataset to labels other than the default can be done only when the dataset is not mounted. When a dataset with the default label is mounted into a labeled-zone, the mount operation automatically sets the mlslabel property to the label of that zone.

When Trusted Extensions is not enabled, only datasets with the default label (none) can be mounted. Zones are a Solaris feature and are not relevant on Linux.

`filesystem_limit=count | none`
Limits the number of filesystems and volumes that can exist under this point in the dataset tree. The limit is not enforced if the user is allowed to change the limit. Setting a filesystem_limit on a descendent of a filesystem that already has a filesystem_limit does not override the ancestor's filesystem_limit, but rather imposes an additional limit. This feature must be enabled to be used (see zpool-features(5)).

`mountpoint=path | none | legacy`
Controls the mount point used for this file system. See the "Mount Points" section for more information on how this property is used.

When the mountpoint property is changed for a file system, the file system and any children that inherit the mount point are unmounted. If the new value is legacy, then they remain unmounted. Otherwise, they are automatically remounted in the new location if the property was previously legacy or none, or if they were mounted before the property was changed. In addition, any shared file systems are unshared and shared in the new location.
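Two of the tunables above come up constantly in practice: atime, and the acltype/xattr pairing recommended in the acltype description. A minimal sketch, assuming a hypothetical dataset rpool/ct:

```
# Stop updating access times on reads (saves write traffic)
zfs set atime=off rpool/ct

# Use Posix ACLs and store them (and other xattrs) efficiently, as advised above
zfs set acltype=posixacl rpool/ct
zfs set xattr=sa rpool/ct

# Only mount this dataset explicitly, not via 'zfs mount -a'
zfs set canmount=noauto rpool/ct
```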
`nbmand=on | off`
Controls whether the file system should be mounted with nbmand (Non Blocking mandatory locks). This is used for CIFS clients. Changes to this property only take effect when the file system is umounted and remounted. See mount(8) for more information on nbmand mounts.

`primarycache=all | none | metadata`
Controls what is cached in the primary cache (ARC). If this property is set to all, then both user data and metadata is cached. If this property is set to none, then neither user data nor metadata is cached. If this property is set to metadata, then only metadata is cached. The default value is all.

`quota=size | none`
Limits the amount of space a dataset and its descendents can consume. This property enforces a hard limit on the amount of space used. This includes all space consumed by descendents, including file systems and snapshots. Setting a quota on a descendent of a dataset that already has a quota does not override the ancestor's quota, but rather imposes an additional limit.

Quotas cannot be set on volumes, as the volsize property acts as an implicit quota.

`snapshot_limit=count | none`
Limits the number of snapshots that can be created on a dataset and its descendents. Setting a snapshot_limit on a descendent of a dataset that already has a snapshot_limit does not override the ancestor's snapshot_limit, but rather imposes an additional limit. The limit is not enforced if the user is allowed to change the limit. For example, this means that recursive snapshots taken from the global zone are counted against each delegated dataset within a zone. This feature must be enabled to be used (see zpool-features(5)).

`userquota@user=size | none`
Limits the amount of space consumed by the specified user. Similar to the refquota property, the userquota space calculation does not include space that is used by descendent datasets, such as snapshots and clones. User space consumption is identified by the userused@user property.

Enforcement of user quotas may be delayed by several seconds. This delay means that a user might exceed their quota before the system notices that they are over quota and begins to refuse additional writes with the EDQUOT error message. See the zfs userspace subcommand for more information.

Unprivileged users can only access their own groups' space usage. The root user, or a user who has been granted the userquota privilege with zfs allow, can get and set everyone's quota.

This property is not available on volumes, on file systems before version 4, or on pools before version 15. The userquota@... properties are not displayed by zfs get all. The user's name must be appended after the @ symbol, using one of the following forms:

- POSIX name (for example, joe)
- POSIX numeric ID (for example, 789)
- SID name (for example, joe.smith@mydomain)
- SID numeric ID (for example, S-1-123-456-789)

`groupquota@group=size | none`
Limits the amount of space consumed by the specified group. Group space consumption is identified by the groupused@group property.

Unprivileged users can access only their own groups' space usage. The root user, or a user who has been granted the groupquota privilege with zfs allow, can get and set all groups' quotas.

`readonly=on | off`
Controls whether this dataset can be modified. The default value is off.

This property can also be referred to by its shortened column name, rdonly.

`recordsize=size`
Specifies a suggested block size for files in the file system. This property is designed solely for use with database workloads that access files in fixed-size records. ZFS automatically tunes block sizes according to internal algorithms optimized for typical access patterns. For databases that create very large files but access them in small random chunks, these algorithms may be suboptimal. Specifying a recordsize greater than or equal to the record size of the database can result in significant performance gains. Use of this property for general purpose file systems is strongly discouraged, and may adversely affect performance.

The size specified must be a power of two greater than or equal to 512 and less than or equal to 128 Kbytes. Changing the file system's recordsize affects only files created afterward; existing files are unaffected.

This property can also be referred to by its shortened column name, recsize.

`redundant_metadata=all | most`
Controls what types of metadata are stored redundantly. ZFS stores an extra copy of metadata, so that if a single block is corrupted, the amount of user data lost is limited. This extra copy is in addition to any redundancy provided at the pool level (e.g. by mirroring or RAID-Z), and is in addition to an extra copy specified by the copies property (up to a total of 3 copies). For example, if the pool is mirrored, copies=2, and redundant_metadata=most, then ZFS stores 6 copies of most metadata, and 4 copies of data and some metadata.

When set to all, ZFS stores an extra copy of all metadata. If a single on-disk block is corrupt, at worst a single block of user data (which is recordsize bytes long) can be lost.

When set to most, ZFS stores an extra copy of most types of metadata. This can improve performance of random writes, because less metadata must be written. In practice, at worst about 100 blocks (of recordsize bytes each) of user data can be lost if a single on-disk block is corrupt. The exact behavior of which metadata blocks are stored redundantly may change in future releases.

The default value is all.

`refquota=size | none`
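Quotas and recordsize are the properties from this last group that most often need manual tuning. A short sketch with hypothetical dataset names and sizes:

```
# Hard-limit a dataset and everything beneath it to 50 GB
zfs set quota=50G rpool/home

# Limit one user's consumption inside that file system
zfs set userquota@joe=10G rpool/home

# Match a database's record size (must be a power of two, 512 bytes to 128 KB)
zfs set recordsize=16K rpool/db

# Confirm the settings
zfs get recordsize,quota rpool/db rpool/home
```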