「Proxmox VE 4.x 中文初階學習手冊」 Table of Contents
9-6-1 man zfs ( Proxmox VE )
1. Guided tour of Proxmox VE
1-1 Preface --- Can't afford enterprise products? Fortunately, community resources support enterprise-grade technology
1-2 Suggested reading and references
1-3 Comparison of virtualization products
1-3-1 Proxmox VE vs. VMWare
1-3-2 Charts
1-4 New releases
1-4-1 New features in version 4.2
1-5 Preparing to switch (before migrating from VirtualBox, VMWare, or ESXi)
2. Installing Proxmox VE
2-1 Starting the Proxmox VE installation
2-1-1 Recommendations for the proxmox physical host
2-2 Installing Proxmox VE on a physical machine
2-2-1 Downloading the proxmox 5.0 .iso file
2-2-2 Starting the Proxmox VE installation
2-2-2-1 BIOS settings
2-2-2-2 Installation method 1: booting from CD/DVD
2-2-2-3 Installation method 2: installing from a USB flash drive
2-2-2-4 Installation method 3: installing the system directly onto a USB flash drive and booting from it
2-2-2-5 License agreement
2-2-2-6 Installer prompts - disk formatting
2-2-2-7 Time zone and keyboard settings
2-2-2-8 Administrator password and email settings
2-2-2-9 Network settings
2-2-2-10 Copying system and program files
2-2-2-11 First boot after installation
2-2-3 Management
2-2-3-1 Text-mode management - 1. local login
2-2-3-2 Text-mode management - 2. remote login
2-2-3-3 Logging in via the web console - Firefox
2-2-3-4 Logging in via the web console - Chrome
2-2-4 First package update (debian update)
2-2-4-1 Without an enterprise support subscription
2-2-4-2 Proxmox enterprise support
2-2-4-3 Updating package features (Proxmox update)
2-2-4-4 Installing other packages
2-2-4-5 Upgrading from 4.x to 5.x
2-2-4-6 Upgrading from 5.x to 6.x
2-2-5 Proxmox VE security
2-2-5-1 proxmox ve user permission management
2-2-5-2
2-2-5-3 Root password security
2-2-5-4 Creating a single account to manage a single VM (for webgui)
2-2-5-5 Creating a user and group with full administrative rights
2-3 Reference: disk layout planning for installation
2-3-1 Choosing ZFS during installation
2-3-2 Disk partition planning (system-default auto-allocation) - demonstrated in a VM install
2-3-3 Disk partition planning (system-default auto-allocation) - demonstrated on a physical machine
2-3-4 Disk partition planning before installing Proxmox (manual allocation)
2-3-5 Removing the built-in LVM thin
2-3-6 Reverting from lvm to ext4 after version 4.2 (official site)
2-4 Installing Proxmox VE on Debian 8
2-4-1 English
2-4-2 Chinese (google translation)
3. Creating virtual machines
3-0-0-1 Understanding VM creation
3-0-1 LXC (Container) memory usage
3-0-1 KVM - text mode - memory usage
3-0-1 Differences in memory usage between LXC (Container) and KVM
3-0-1 KVM - graphical interface - memory usage
3-0-1 Virtual disk interfaces (cache settings when using .vmdk image files)
3-1 Booting a VM
3-1 VM remote desktop
3-1 Creating a VM
3-1 Installing an operating system in a VM
3-1 Installing KVM VMs
3-2 Installing LXC Container VMs
3-2-1 Introduction and creating an LXC VM
3-2-1-1 Downloading template systems
3-2-1-2 Creating an LXC VM (Linux Container)
3-2-1-3 Updating an LXC VM
3-2-1-4 The actual size of an LXC VM's OS
3-2-1-5 Chinese input/display and command completion over ssh in an LXC VM
3-2-2 Installing SFS3
3-2-2-1 Restricting the ssh connection range
3-2-2-2 The /bin/firewall firewall script
3-2-2-3 Installing LAMP
3-2-2-4 Configuring apache2
3-2-2-5 Restricting the SFS3 web access range
3-2-2-6 Migrating the sfs3 application
3-2-2-7 Migrating the mysql database
3-2-2-8 Configuring openid for the county education network
3-2-2-9 Changing the IP settings / LXC VM
3-2-2-10 Changing the IP settings / kvm VM
3-2-2-11 Settings on the DNS server
3-2-2-12 cron and other scheduled backup jobs
3-2-2-13 Adjusting the time zone
3-2-3 LXC container mount methods (LXC Mounts)
3-2-3-1 LXC container Bind Mounts
3-2-3-2 The Mount Point GUI in version 4.2
3-2-3-3 The Mount Point GUI in version 4.4
3-2-3-4 Mounting cifs (samba) with autofs / mounting on the physical host
3-2-3-5 NFS & Automount
3-2-3-6 Workaround for MountPoint migration
3-2-4 VM tuning
3-2-4-1 Slimming down a VM
3-3 Physical-to-virtual migration in practice (not a whole-machine conversion)
3-3-1 The sfs3 student records system
3-3-1-1 Backing up the host's hardware settings
3-3-1-2 Backing up the original host's data
3-3-1-3 Backing up the sfs3 physical host's config files, web pages, and database scripts
3-3-1-4 Preparing a template LXC
3-3-1-5 Moving the sfs3 host's config files, web pages, and database to Proxmox VE
3-3-1-6 Extracting the files inside LXC VM 212
3-3-1-7 Restoring the sfs3 web pages
3-3-1-8 Restoring the apache2 virtual host settings
3-3-1-9 Editing the sfs3 config file
3-3-1-10 Restoring the mysql database
3-3-1-11 Changing the mysql root password
3-3-1-12 Restoring hosts.allow, hosts.deny, crontab, ACFSsfsBK.txt
3-3-1-13 Changing hostname and hosts
3-4 Deleting virtual machines
3-4-1 Storage location: local
3-4-2 Storage location: local-lvm
3-4-3 ZFS pool
4. Migrating from VirtualBox and VMWare to proxmox
4-0-1 Building a template VM
4-1 Converting vdi disk images to proxmox's qcow2 format
4-1 Virtual disk format conversion
4-1 How to convert VirtualBox vdi to KVM qcow2
4-1 The problem of Proxmox VE failing to open vmdk-format VMs
4-2 Moving files converted from VirtualBox virtual disks
4-2-1 Proxmox VE config file and VM storage locations
4-2-1-1 Where proxmox VE config files are kept
4-2-1-2 Where Proxmox VE stores VMs
4-2-2 Creating a VM template
4-2-2-1 Starting to build the VM template
4-2-2-2 The newly created template VM
4-2-2-3 Adding a SATA virtual disk
4-2-2-4 Renaming the virtual disk file
4-2-2-5 Starting the VM
4-2-2-6 Shutting down the VM
4-2-2-7 Moving esxi VMs to ProxmoxVE
4-3 Converting VMWare to Proxmox VE
4-3-1 Proxmox notes: vsphere migration
4-3-2 Article: migrating from Esxi to Proxmox - file format conversion, iscsi and nfs hookup tutorial, adding VMs, and related notes
4-3-3 KVM's vmdk-to-qcow2 method
4-3-4 Converting OVA files to qcow2
4-4 Converting physical machines to VMs
4-4-1 References
4-4-1-1 Physical-to-virtual conversion
4-4-1-2 virt-p2v
4-4-1-3 KVM P2V can also be done following redhat's approach
5. Storage containers
5-1 Directory - installing a second disk in the proxmox host and mounting it as a Directory
5-2 Directory
5-3 NFS
5-4 ZFS
6. Live Migration
6-1 Backup and migration; original article: http://viewcamerafan.blogspot.tw/2011/11/proxmox-ve.html
7. VM migration, failover, backup, rebuild, and restore
7-1 Manual cloning of KVM (qemu)
7-1-1 Steps on version 3.2
7-1-1-1 VM backup or migration - manual method
7-1-1-2 Editing the config file and renaming files
7-1-1-3 Direct manual config copying between two proxmox hosts
7-1-1-4 Copying a VM back directly from another proxmox host with scp
7-1-1-5 Backing up a VM to another host manually with script.sh
7-1-1-6 Using script.sh to also start/stop VMs on the remote host
7-1-1-7 Substituting variables into the script
7-1-2 Steps on Proxmox VE 3.3
7-1-2-1 Whole-VM cloning
7-1-2-2 Manually cloning a VM
7-1-3 Steps on Proxmox VE 4.0 b1
7-2 Manual cloning of LXC (Container)
7-2-1 Cloning LXC (Container) in practice - copying the image
7-2-2 Cloning LXC (Container) in practice - copying and editing the config file
7-3 Regular Proxmox VE backup/restore
7-3-1 Backup - backing up VMs
7-3-1-1 Backup 1. LXC
7-3-1-2 Backup 2. KVM
7-3-1-3 Adjusting io priority during vzdump backups
7-3-1-4 The vzdump VM backup command
7-3-2 Differential VM backups (not recommended; use zfs differential backups instead)
7-3-2-1 Daily differential backups on Proxmox
7-3-2-2 Differential backups - unofficial - not recommended
7-3-2-3 Differential backups (hands-on) - unofficial - not recommended
7-3-3 Restore - restoring VMs from backup files, LXC and KVM
7-4 Cloning VMs via Backup
7-4-1 Cloning an LXC VM
7-4-2 Cloning a KVM VM
7-5 Backing up / cloning VMs directly between hosts with ZFS send / receive
7-5-1 ZFS send / receive - full copy of a zfs dataset
7-5-2 ZFS send / receive - incremental copy of a zfs dataset
7-5-2-1 Hands-on
7-5-2-2 Hands-on 2
7-5-2-3 1-to-N snapshot differential send mode
7-5-3-4 Hands-on 2 - script with crontab for automatic syncing
8. Datacenter - cluster management - managing all hosts
8-1-1 Joining the remaining hosts to the Cluster
8-1-2 Cluster issues related to LXC VMs
8-1-3 Leaving the Cluster (deleting one node)
8-1-4 Removing a host from the cluster (Remove a cluster node)
8-1-5 Re-adding a previously removed node
8-1-6 del node
8-1-7 Shrinking a cluster from three nodes to two
8-2 Migrating virtual machines
8-3 Migrating virtual machines, with caveats
8-4 How to modify a PVE 4.2 Cluster
9. ZFS
9-0-1 ZFS filesystem basics
9-0-2 Installing ZFS support
9-0-3 Memory tuning: zfs arc
9-0-4 Virtual disk types inside VMs when using zfs under nested virtualization
9-0-5 Specifying the ZFS mount directory
9-0-6 zfs maintenance commands
9-1 Installing the OpenAttic package on PVE 4.4 / simple ZFS management
9-1-1 Openattic 3.0 is out; it manages more storage systems and formats, with a cleaner and more complete interface
9-2 Formatting disks as ZFS and creating ZFS datasets
9-2-1 Choosing a single disk or building an array
9-2-1-1 Basic ZFS creation commands
9-2-1-2 Single disk ---> ZFS
9-2-1-3 Two disks ---> ZFS Raid0
9-2-1-4 Two or four disks ---> ZFS Raid1 (Mirror) or Raid10 (Mirror + Mirror)
9-2-1-5 Two disks ---> ZFS RAID Z-1
9-2-2 Note: zfs array conversion - adding a second disk to a single-disk zfs raid 0 install to form a zfs mirror (raid 1)
9-2-3 Creating ZFS datasets
9-2-3-1 Creating and deleting ZFS datasets
9-2-3-2 Moving, renaming, and deleting ZFS datasets
9-2-3-3 Creating and destroying volumes - creating a fat filesystem with snapshot support
9-2-3-4 Comparing snapshots
9-2-3-5 Adding and removing devices
9-2-3-6 Replacing a device in service
9-2-3-7 Replacing a device in service
9-2-3-8 Scrubbing the storage pool
9-3 Snapshots (a file time machine)
9-3-1 Creating a snapshot
9-3-2 Deleting a snapshot
9-3-3 Rolling back to any past restore point
9-3-4 Snapshot cron - scheduled snapshot script
9-3-4-1 Snapshotting VMs on a schedule with crontab
9-3-4-2 Main snapshot script controlling how many snapshots to keep per interval
9-3-4-3 Snapshot deletion script
9-3-5 Snapshot backup / using replication
9-3-5-1
9-3-6 Applying zfs send/receive in practice
9-4 Steps for booting from a USB flash drive
9-4-1 Moving the USB drive's read/write areas onto a hard disk
9-5 Replacing disks
9-5-1 Hands-on 1: replacing a failed disk in a zfs Raid1 array
9-6 man zfs
9-6-1 man zfs ( Proxmox VE )
9-6-2 man zfs ( ubuntu 1404 )
9-7 Test reports
9-7-1 ZFS Raidz Performance, Capacity and Integrity
9-7-2 swap on zfs
9-7-3 zfs benchmarking tools
9-7-4 New zfs features in 2018
9-8 Other tricks
9-8-1 When qcow2 or vmdk images are stored as files on ZFS, remember to set the disk cache to write back
10. 10
10-1 routeros 24hr edition cannot detect the disk and will not install
10-2 Expanding virtual disk capacity
11. Manual upgrade
11-1 Downloading the new iso file
11-2 Installing the disk for the new version
11-3 Backing up the original boot disk's settings
11-4 Attaching the new disk
11-5 Writing the config files to the new disk
12. Proxmox VE applications
12-1 Uploading backups from a KVM VM to google drive
12-1-1 Installing packages
12-1-1-1 Installing packages in the KVM VM
12-1-1-2 Installing packages on the physical host
12-1-2 Hands-on
12-1-2-1 The NFS share on the physical host
12-1-2-2 Settings in the VM
12-1-2-3 Performing the upload
12-1-3 Applications
12-2 Installing OpenMediaVault in LXC with an attached HW block device
12-2-1 Installing OpenMediaVault in LXC with an attached HW block device
12-2-2 Installing OMV3 inside LXC
12-2-3 Installing OpenMediaVault 4 in a Proxmox VE 5.1 LXC
12-2-4 Building your own NAS with ZFS and Proxmox VE
13. Troubleshooting
13-1 Proxmox 3.1 guests lock up on shutdown or reboot
13-2 Error closing file /var/tmp/pve-reserved-ports.tmp.43281 failed - No space left on device (500)
13-3 A book's right-hand table of contents fails to display
13-4 LXC fails to start (Cluster-related)
13-4-1 LXC fails to start after a reboot
13-5 A serious BUG in PVE 4.2
13-6 pve4.4 zfs on root does not support UEFI boot mode
13-7 Installing the Qemu Agent - reducing KVM memory usage (windows)
13-8 After reinstalling the host, everything the VMs stored in the zfs dataset is gone
14. Concepts
14-1 VM security
15. Other techniques
15-1 Host hardware virtualization support
15-1-1 Installing Proxmox VE inside Proxmox VE (Proxmox VE Nested Virtualization)
15-2 Modding appliances
15-2-1 Tutorial: converting a NAS to Proxmox VE 4.1
15-3 PVE GPU passthrough
15-4 Attaching hardware
15-4-1 Proxmox Physical disk to kvm (letting a KVM VM use a physical disk directly)
15-5 How To Create A NAS Using ZFS and Proxmox (with pictures)
15-6 Network speed
15-6-1 Enabling the BBR congestion-control algorithm on Linux
15-7 Templates
15-7-1 Creating CTs from custom LXC templates in PVE
15-8 pve optimization
16. Foreign-language material
16-1 FB proxmox
16-1-1 pve 4.4 ZFS
16-2 Installing Proxmox VE on Debian 8
17. Reference articles
17-1 Manually installing the java jdk
17-2 proxmox commands
17-3 Common proxmox commands
17-4 Adding an NFS storage server in Proxmox VE
17-5 Proxmox - USB pass-through
17-6 Remote command execution and multi-host management (Push.sh)
17-7 Logging in to remote machines over ssh without a password
17-8 File backup with rsync
17-9 Backing up via rsync
17-10 Passwordless ssh login
17-11 Passwordless ssh login & reference material
17-12 proxmox 3.4 cannot install nfs-kernel-server
17-13 Manual upgrade method
17-14 Installing NFS on Ubuntu 12.04 LTS and ubuntu14.10
17-15 pve on i386 machines
17-16 Proxmox VE's shortcomings
17-17 Proxmox Virtual Environment notes
17-18 KVM to LXC and LXC to KVM
17-19 Proxmox VE USB Physical Port Mapping
17-20 Proxmox VE Physical disk to kvm
17-21 Why ceph wants seven hosts: the main considerations
17-22 zfs basics and administration techniques
17-23 How RAID-1 arrays read data
17-24 How to mount Glusterfs volumes inside LXC/LXD (Linux containers)
17-25 Changing the Proxmox VE host's hostname
17-26 PVE's built-in firewall
17-27 Uncategorized commands
17-46 Proxmox VE can be combined with FreeNAS to use ZFS over iSCSI, getting Block Level performance together with WebUI management
18. Additions / change log
19. Links to friendly sites
Proxmox VE 4.x 中文初階學習手冊
===============================

zfs(8)                 System Administration Commands                 zfs(8)

NAME
    zfs - configures ZFS file systems

SYNOPSIS
    zfs [-?]
    zfs create [-p] [-o property=value] ... filesystem
    zfs create [-ps] [-b blocksize] [-o property=value] ... -V size volume
    zfs destroy [-rRf] filesystem|volume
    zfs destroy [-rRd] snapshot
    zfs snapshot [-r] [-o property=value]... filesystem@snapname|volume@snapname
    zfs rollback [-rRf] snapshot
    zfs clone [-p] [-o property=value] ... snapshot filesystem|volume
    zfs promote clone-filesystem
    zfs rename filesystem|volume|snapshot filesystem|volume|snapshot
    zfs rename [-p] filesystem|volume filesystem|volume
    zfs rename -r snapshot snapshot
    zfs list [-r|-d depth] [-H] [-o property[,...]] [-t type[,...]] [-s property] ... [-S property] ... [filesystem|volume|snapshot] ...
    zfs set property=value filesystem|volume|snapshot ...
    zfs get [-r|-d depth] [-Hp] [-o all | field[,...]] [-s source[,...]] all | property[,...] filesystem|volume|snapshot ...
    zfs inherit [-rS] property filesystem|volume|snapshot ...
    zfs upgrade [-v]
    zfs upgrade [-r] [-V version] -a | filesystem
    zfs userspace [-niHp] [-o field[,...]] [-sS field] ... [-t type[,...]] filesystem|snapshot
    zfs groupspace [-niHp] [-o field[,...]] [-sS field] ...
        [-t type[,...]] filesystem|snapshot
    zfs mount
    zfs mount [-vO] [-o options] -a | filesystem
    zfs unmount [-f] -a | filesystem|mountpoint
    zfs share -a | filesystem
    zfs unshare -a filesystem|mountpoint
    zfs send [-DvRp] [-[iI] snapshot] snapshot
    zfs receive [-vnFu] filesystem|volume|snapshot
    zfs receive [-vnFu] [-d | -e] filesystem
    zfs allow filesystem|volume
    zfs allow [-ldug] "everyone"|user|group[,...] perm|@setname[,...] filesystem|volume
    zfs allow [-ld] -e perm|@setname[,...] filesystem|volume
    zfs allow -c perm|@setname[,...] filesystem|volume
    zfs allow -s @setname perm|@setname[,...] filesystem|volume
    zfs unallow [-rldug] "everyone"|user|group[,...] [perm|@setname[,...]] filesystem|volume
    zfs unallow [-rld] -e [perm|@setname[,...]] filesystem|volume
    zfs unallow [-r] -c [perm|@setname[...]] filesystem|volume
    zfs unallow [-r] -s @setname [perm|@setname[,...]] filesystem|volume
    zfs hold [-r] tag snapshot...
    zfs holds [-r] snapshot...
    zfs release [-r] tag snapshot...

DESCRIPTION
    The zfs command configures ZFS datasets within a ZFS storage pool, as described in zpool(1M). A dataset is identified by a unique path within the ZFS namespace. For example:

        pool/{filesystem,volume,snapshot}

    where the maximum length of a dataset name is MAXNAMELEN (256 bytes).

    A dataset can be one of the following:

    file system
        A ZFS dataset of type filesystem can be mounted within the standard system namespace and behaves like other file systems. While ZFS file systems are designed to be POSIX compliant, known issues exist that prevent compliance in some cases. Applications that depend on standards conformance might fail due to nonstandard behavior when checking file system free space.

    volume
        A logical volume exported as a raw or block device. This type of dataset should only be used under special circumstances. File systems are typically used in most environments.
    snapshot
        A read-only version of a file system or volume at a given point in time. It is specified as filesystem@name or volume@name.

ZFS File System Hierarchy
    A ZFS storage pool is a logical collection of devices that provide space for datasets. A storage pool is also the root of the ZFS file system hierarchy. The root of the pool can be accessed as a file system, such as mounting and unmounting, taking snapshots, and setting properties. The physical storage characteristics, however, are managed by the zpool(1M) command. See zpool(1M) for more information on creating and administering pools.

Snapshots
    A snapshot is a read-only copy of a file system or volume. Snapshots can be created extremely quickly, and initially consume no additional space within the pool. As data within the active dataset changes, the snapshot consumes more data than would otherwise be shared with the active dataset.

    Snapshots can have arbitrary names. Snapshots of volumes can be cloned or rolled back, but cannot be accessed independently.

    File system snapshots can be accessed under the .zfs/snapshot directory in the root of the file system. Snapshots are automatically mounted on demand and may be unmounted at regular intervals. The visibility of the .zfs directory can be controlled by the snapdir property.

Clones
    A clone is a writable volume or file system whose initial contents are the same as another dataset. As with snapshots, creating a clone is nearly instantaneous, and initially consumes no additional space. Clones can only be created from a snapshot. When a snapshot is cloned, it creates an implicit dependency between the parent and child. Even though the clone is created somewhere else in the dataset hierarchy, the original snapshot cannot be destroyed as long as a clone exists. The origin property exposes this dependency, and the destroy command lists any such dependencies, if they exist.
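The dataset life cycle described above (create, snapshot, clone) can be sketched with a few commands. This is only an illustration: the pool name `tank` and the dataset names are made up, and the commands need root privileges and an existing pool, so they are not runnable as-is:

```shell
# Create a file system with compression enabled
zfs create -o compression=on tank/data

# Create a 10 GB sparse volume (appears as a block device)
zfs create -s -V 10G tank/vm-101-disk-1

# Take a read-only snapshot of the file system
zfs snapshot tank/data@before-upgrade

# Browse the snapshot via the hidden .zfs directory
ls /tank/data/.zfs/snapshot/before-upgrade

# Create a writable clone from the snapshot; the clone's
# origin property records the dependency described above
zfs clone tank/data@before-upgrade tank/data-test
zfs get origin tank/data-test
```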
    The clone parent-child dependency relationship can be reversed by using the promote subcommand. This causes the "origin" file system to become a clone of the specified file system, which makes it possible to destroy the file system that the clone was created from.

Mount Points
    Creating a ZFS file system is a simple operation, so the number of file systems per system is likely to be numerous. To cope with this, ZFS automatically manages mounting and unmounting file systems without the need to edit the /etc/vfstab file. All automatically managed file systems are mounted by ZFS at boot time.

    By default, file systems are mounted under /path, where path is the name of the file system in the ZFS namespace. Directories are created and destroyed as needed.

    A file system can also have a mount point set in the mountpoint property. This directory is created as needed, and ZFS automatically mounts the file system when the zfs mount -a command is invoked (without editing /etc/vfstab). The mountpoint property can be inherited, so if pool/home has a mount point of /export/stuff, then pool/home/user automatically inherits a mount point of /export/stuff/user. A file system mountpoint property of none prevents the file system from being mounted.

    If needed, ZFS file systems can also be managed with traditional tools (mount, umount, /etc/vfstab). If a file system's mount point is set to legacy, ZFS makes no attempt to manage the file system, and the administrator is responsible for mounting and unmounting the file system.

Zones
    A ZFS file system can be added to a non-global zone by using the zonecfg add fs subcommand. A ZFS file system that is added to a non-global zone must have its mountpoint property set to legacy.

    The physical properties of an added file system are controlled by the global administrator. However, the zone administrator can create, modify, or destroy files within the added file system, depending on how the file system is mounted.
    A dataset can also be delegated to a non-global zone by using the zonecfg add dataset subcommand. You cannot delegate a dataset to one zone and the children of the same dataset to another zone. The zone administrator can change properties of the dataset or any of its children. However, the quota property is controlled by the global administrator.

    A ZFS volume can be added as a device to a non-global zone by using the zonecfg add device subcommand. However, its physical properties can be modified only by the global administrator.

    For more information about zonecfg syntax, see zonecfg(1M).

    After a dataset is delegated to a non-global zone, the zoned property is automatically set. A zoned file system cannot be mounted in the global zone, since the zone administrator might have to set the mount point to an unacceptable value.

    The global administrator can forcibly clear the zoned property, though this should be done with extreme care. The global administrator should verify that all the mount points are acceptable before clearing the property.

Deduplication
    Deduplication is the process for removing redundant data at the block-level, reducing the total amount of data stored. If a file system has the dedup property enabled, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared among files.

Native Properties
    Properties are divided into two types, native properties and user-defined (or "user") properties. Native properties either export internal statistics or control ZFS behavior. In addition, native properties are either editable or read-only. User properties have no effect on ZFS behavior, but you can use them to annotate datasets in a way that is meaningful in your environment. For more information about user properties, see the "User Properties" section, below.

    Every dataset has a set of properties that export statistics about the dataset as well as control various behaviors.
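As a small illustration of the user properties mentioned above (annotations with no effect on ZFS behavior), a sketch with a hypothetical pool `tank` and a made-up property name; user property names must contain a colon:

```shell
# Attach an arbitrary annotation to a dataset
# (the "com.example:backup" name is invented for this example)
zfs set com.example:backup=weekly tank/data

# Read the annotation back
zfs get com.example:backup tank/data
```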
    Properties are inherited from the parent unless overridden by the child. Some properties apply only to certain types of datasets (file systems, volumes, or snapshots).

    The values of numeric properties can be specified using human-readable suffixes (for example, k, KB, M, Gb, and so forth, up to Z for zettabyte). The following are all valid (and equal) specifications: 1536M, 1.5g, 1.50GB.

    The values of non-numeric properties are case sensitive and must be lowercase, except for mountpoint, sharenfs, and sharesmb.

    The following native properties consist of read-only statistics about the dataset. These properties can be neither set, nor inherited. Native properties apply to all dataset types unless otherwise noted.

    available
        The amount of space available to the dataset and all its children, assuming that there is no other activity in the pool. Because space is shared within a pool, availability can be limited by any number of factors, including physical pool size, quotas, reservations, or other datasets within the pool.

        This property can also be referred to by its shortened column name, avail.

    compressratio
        The compression ratio achieved for this dataset, expressed as a multiplier. Compression can be turned on by running: zfs set compression=on dataset. The default value is off.

    creation
        The time this dataset was created.

    defer_destroy
        This property is on if the snapshot has been marked for deferred destroy by using the zfs destroy -d command. Otherwise, the property is off.

    mounted
        For file systems, indicates whether the file system is currently mounted. This property can be either yes or no.

    origin
        For cloned file systems or volumes, the snapshot from which the clone was created. The origin cannot be destroyed (even with the -r or -f options) so long as a clone exists.

    referenced
        The amount of data that is accessible by this dataset, which may or may not be shared with other datasets in the pool.
        When a snapshot or clone is created, it initially references the same amount of space as the file system or snapshot it was created from, since its contents are identical.

        This property can also be referred to by its shortened column name, refer.

    type
        The type of dataset: filesystem, volume, or snapshot.

    used
        The amount of space consumed by this dataset and all its descendents. This is the value that is checked against this dataset's quota and reservation. The space used does not include this dataset's reservation, but does take into account the reservations of any descendent datasets. The amount of space that a dataset consumes from its parent, as well as the amount of space that is freed if this dataset is recursively destroyed, is the greater of its space used and its reservation.

        When snapshots (see the "Snapshots" section) are created, their space is initially shared between the snapshot and the file system, and possibly with previous snapshots. As the file system changes, space that was previously shared becomes unique to the snapshot, and counted in the snapshot's space used. Additionally, deleting snapshots can increase the amount of space unique to (and used by) other snapshots.

        The amount of space used, available, or referenced does not take into account pending changes. Pending changes are generally accounted for within a few seconds. Committing a change to a disk using fsync(3c) or O_SYNC does not necessarily guarantee that the space usage information is updated immediately.

    usedby*
        The usedby* properties decompose the used properties into the various reasons that space is used. Specifically, used = usedbychildren + usedbydataset + usedbyrefreservation + usedbysnapshots. These properties are only available for datasets created on zpool "version 13" pools.

    usedbychildren
        The amount of space used by children of this dataset, which would be freed if all the dataset's children were destroyed.
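The read-only space statistics above are usually inspected with zfs list and zfs get. A sketch, again assuming a hypothetical pool `tank` and an existing ZFS installation (not runnable without them):

```shell
# List datasets with the most common space columns
zfs list -o name,used,avail,refer,mountpoint

# Show the usedby* breakdown for one dataset; the four values
# sum to the dataset's used property
zfs get usedbychildren,usedbydataset,usedbyrefreservation,usedbysnapshots tank/data
```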
    usedbydataset
        The amount of space used by this dataset itself, which would be freed if the dataset were destroyed (after first removing any refreservation and destroying any necessary snapshots or descendents).

    usedbyrefreservation
        The amount of space used by a refreservation set on this dataset, which would be freed if the refreservation was removed.

    usedbysnapshots
        The amount of space consumed by snapshots of this dataset. In particular, it is the amount of space that would be freed if all of this dataset's snapshots were destroyed. Note that this is not simply the sum of the snapshots' used properties because space can be shared by multiple snapshots.

    userused@user
        The amount of space consumed by the specified user in this dataset. Space is charged to the owner of each file, as displayed by ls -l. The amount of space charged is displayed by du and ls -s. See the zfs userspace subcommand for more information.

        Unprivileged users can access only their own space usage. The root user, or a user who has been granted the userused privilege with zfs allow, can access everyone's usage.

        The userused@... properties are not displayed by zfs get all. The user's name must be appended after the @ symbol, using one of the following forms:

        o POSIX name (for example, joe)
        o POSIX numeric ID (for example, 789)
        o SID name (for example, joe.smith@mydomain)
        o SID numeric ID (for example, S-1-123-456-789)

    userrefs
        This property is set to the number of user holds on this snapshot. User holds are set by using the zfs hold command.

    groupused@group
        The amount of space consumed by the specified group in this dataset. Space is charged to the group of each file, as displayed by ls -l. See the userused@user property for more information.

        Unprivileged users can only access their own groups' space usage. The root user, or a user who has been granted the groupused privilege with zfs allow, can access all groups' usage.
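The human-readable size suffixes described earlier (1536M, 1.5g, 1.50GB all naming the same quantity) use powers of 1024. This can be checked without a ZFS pool using GNU numfmt, which accepts the same IEC-style suffixes:

```shell
# Both spellings denote the same byte count: 1536 * 1024^2
numfmt --from=iec 1536M   # -> 1610612736
numfmt --from=iec 1.5G    # -> 1610612736
```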
    volblocksize=blocksize
        For volumes, specifies the block size of the volume. The blocksize cannot be changed once the volume has been written, so it should be set at volume creation time. The default blocksize for volumes is 8 Kbytes. Any power of 2 from 512 bytes to 128 Kbytes is valid.

        This property can also be referred to by its shortened column name, volblock.

    The following native properties can be used to change the behavior of a ZFS dataset.

    aclinherit=discard | noallow | restricted | passthrough | passthrough-x
        Controls how ACL entries are inherited when files and directories are created. A file system with an aclinherit property of discard does not inherit any ACL entries. A file system with an aclinherit property value of noallow only inherits inheritable ACL entries that specify "deny" permissions. The property value restricted (the default) removes the write_acl and write_owner permissions when the ACL entry is inherited. A file system with an aclinherit property value of passthrough inherits all inheritable ACL entries without any modifications made to the ACL entries when they are inherited. A file system with an aclinherit property value of passthrough-x has the same meaning as passthrough, except that the owner@, group@, and everyone@ ACEs inherit the execute permission only if the file creation mode also requests the execute bit.

        When the property value is set to passthrough, files are created with a mode determined by the inheritable ACEs. If no inheritable ACEs exist that affect the mode, then the mode is set in accordance to the requested mode from the application.

    aclmode=discard | groupmask | passthrough
        Controls how an ACL is modified during chmod(2). A file system with an aclmode property of discard deletes all ACL entries that do not represent the mode of the file. An aclmode property of groupmask (the default) reduces user or group permissions.
        The permissions are reduced, such that they are no greater than the group permission bits, unless it is a user entry that has the same UID as the owner of the file or directory. In this case, the ACL permissions are reduced so that they are no greater than the owner permission bits. A file system with an aclmode property of passthrough indicates that no changes are made to the ACL other than generating the necessary ACL entries to represent the new mode of the file or directory.

    atime=on | off
        Controls whether the access time for files is updated when they are read. Turning this property off avoids producing write traffic when reading files and can result in significant performance gains, though it might confuse mailers and other similar utilities. The default value is on.

    canmount=on | off | noauto
        If this property is set to off, the file system cannot be mounted, and is ignored by zfs mount -a. Setting this property to off is similar to setting the mountpoint property to none, except that the dataset still has a normal mountpoint property, which can be inherited. Setting this property to off allows datasets to be used solely as a mechanism to inherit properties. One example of setting canmount=off is to have two datasets with the same mountpoint, so that the children of both datasets appear in the same directory, but might have different inherited characteristics.

        When the noauto option is set, a dataset can only be mounted and unmounted explicitly. The dataset is not mounted automatically when the dataset is created or imported, nor is it mounted by the zfs mount -a command or unmounted by the zfs unmount -a command.

        This property is not inherited.

    checksum=on | off | fletcher2 | fletcher4 | sha256
        Controls the checksum used to verify data integrity. The default value is on, which automatically selects an appropriate algorithm (currently, fletcher4, but this may change in future releases). The value off disables integrity checking on user data.
        Disabling checksums is NOT a recommended practice.

        Changing this property affects only newly-written data.

    compression=on | off | lzjb | gzip | gzip-N | zle
        Controls the compression algorithm used for this dataset. The lzjb compression algorithm is optimized for performance while providing decent data compression. Setting compression to on uses the lzjb compression algorithm. The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N, where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)).

        This property can also be referred to by its shortened column name, compress. Changing this property affects only newly-written data.

    copies=1 | 2 | 3
        Controls the number of copies of data stored for this dataset. These copies are in addition to any redundancy provided by the pool, for example, mirroring or RAID-Z. The copies are stored on different disks, if possible. The space used by multiple copies is charged to the associated file and dataset, changing the used property and counting against quotas and reservations.

        Changing this property only affects newly-written data. Therefore, set this property at file system creation time by using the -o copies=N option.

    dedup=on | off | verify | sha256[,verify]
        Controls whether deduplication is in effect for a dataset. The default value is off. The default checksum used for deduplication is sha256 (subject to change). When dedup is enabled, the dedup checksum algorithm overrides the checksum property. Setting the value to verify is equivalent to specifying sha256,verify.

        If the property is set to verify, then, whenever two blocks have the same signature, ZFS will do a byte-for-byte comparison with the existing block to ensure that the contents are identical.

    devices=on | off
        Controls whether device nodes can be opened on this file system.
        The default value is on.

    exec=on | off
        Controls whether processes can be executed from within this file system. The default value is on.

    mlslabel=label | none
        The mlslabel property is a sensitivity label that determines if a dataset can be mounted in a zone on a system with Trusted Extensions enabled. If the labeled dataset matches the labeled zone, the dataset can be mounted and accessed from the labeled zone.

        When the mlslabel property is not set, the default value is none. Setting the mlslabel property to none is equivalent to removing the property.

        The mlslabel property can be modified only when Trusted Extensions is enabled and only with appropriate privilege. Rights to modify it cannot be delegated. When changing a label to a higher label or setting the initial dataset label, the {PRIV_FILE_UPGRADE_SL} privilege is required. When changing a label to a lower label or the default (none), the {PRIV_FILE_DOWNGRADE_SL} privilege is required. Changing the dataset to labels other than the default can be done only when the dataset is not mounted. When a dataset with the default label is mounted into a labeled-zone, the mount operation automatically sets the mlslabel property to the label of that zone.

        When Trusted Extensions is not enabled, only datasets with the default label (none) can be mounted.

    mountpoint=path | none | legacy
        Controls the mount point used for this file system. See the "Mount Points" section for more information on how this property is used.

        When the mountpoint property is changed for a file system, the file system and any children that inherit the mount point are unmounted. If the new value is legacy, then they remain unmounted. Otherwise, they are automatically remounted in the new location if the property was previously legacy or none, or if they were mounted before the property was changed. In addition, any shared file systems are unshared and shared in the new location.
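The editable properties covered so far are all set the same way, with zfs set (or -o at creation time). A sketch against a hypothetical pool `tank`, requiring root and an existing pool:

```shell
# Enable compression and check the ratio achieved afterwards
zfs set compression=on tank/data
zfs get compressratio tank/data

# Skip access-time updates to avoid write traffic on reads
zfs set atime=off tank/data

# Store two copies of every block; best set at creation time
zfs create -o copies=2 tank/important

# Move a file system and its children to a new mount point
zfs set mountpoint=/export/stuff tank/home
```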
    nbmand=on | off
        Controls whether the file system should be mounted with nbmand (Non Blocking mandatory locks). This is used for CIFS clients. Changes to this property only take effect when the file system is unmounted and remounted. See mount(1M) for more information on nbmand mounts.

    primarycache=all | none | metadata
        Controls what is cached in the primary cache (ARC). If this property is set to all, then both user data and metadata is cached. If this property is set to none, then neither user data nor metadata is cached. If this property is set to metadata, then only metadata is cached. The default value is all.

    quota=size | none
        Limits the amount of space a dataset and its descendents can consume. This property enforces a hard limit on the amount of space used. This includes all space consumed by descendents, including file systems and snapshots. Setting a quota on a descendent of a dataset that already has a quota does not override the ancestor's quota, but rather imposes an additional limit.

        Quotas cannot be set on volumes, as the volsize property acts as an implicit quota.

    userquota@user=size | none
        Limits the amount of space consumed by the specified user. Similar to the refquota property, the userquota space calculation does not include space that is used by descendent datasets, such as snapshots and clones. User space consumption is identified by the userspace@user property.

        Enforcement of user quotas may be delayed by several seconds. This delay means that a user might exceed her quota before the system notices that she is over quota. The system would then begin to refuse additional writes with the EDQUOT error message. See the zfs userspace subcommand for more information.

        Unprivileged users can only access their own groups' space usage. The root user, or a user who has been granted the userquota privilege with zfs allow, can get and set everyone's quota.
This property is not available on volumes, on file systems before version 4, or on pools before version 15. The userquota@... properties are not displayed by zfs get all. The user's name must be appended after the @ symbol, using one of the following forms:

o POSIX name (for example, joe)
o POSIX numeric ID (for example, 789)
o SID name (for example, joe.smith@mydomain)
o SID numeric ID (for example, S-1-123-456-789)

groupquota@group=size | none
Limits the amount of space consumed by the specified group. Group space consumption is identified by the groupused@group property. Unprivileged users can access only their own groups' space usage. The root user, or a user who has been granted the groupquota privilege with zfs allow, can get and set all groups' quotas.

readonly=on | off
Controls whether this dataset can be modified. The default value is off. This property can also be referred to by its shortened column name, rdonly.

recordsize=size
Specifies a suggested block size for files in the file system. This property is designed solely for use with database workloads that access files in fixed-size records. ZFS automatically tunes block sizes according to internal algorithms optimized for typical access patterns. For databases that create very large files but access them in small random chunks, these algorithms may be suboptimal. Specifying a recordsize greater than or equal to the record size of the database can result in significant performance gains. Use of this property for general purpose file systems is strongly discouraged, and may adversely affect performance. The size specified must be a power of two greater than or equal to 512 and less than or equal to 128 Kbytes. Changing the file system's recordsize affects only files created afterward; existing files are unaffected. This property can also be referred to by its shortened column name, recsize.

refquota=size | none
Limits the amount of space a dataset can consume.
This property enforces a hard limit on the amount of space used. This hard limit does not include space used by descendents, including file systems and snapshots.

refreservation=size | none
The minimum amount of space guaranteed to a dataset, not including its descendents. When the amount of space used is below this value, the dataset is treated as if it were taking up the amount of space specified by refreservation. The refreservation reservation is accounted for in the parent datasets' space used, and counts against the parent datasets' quotas and reservations. If refreservation is set, a snapshot is only allowed if there is enough free pool space outside of this reservation to accommodate the current number of "referenced" bytes in the dataset. This property can also be referred to by its shortened column name, refreserv.

reservation=size | none
The minimum amount of space guaranteed to a dataset and its descendents. When the amount of space used is below this value, the dataset is treated as if it were taking up the amount of space specified by its reservation. Reservations are accounted for in the parent datasets' space used, and count against the parent datasets' quotas and reservations. This property can also be referred to by its shortened column name, reserv.

secondarycache=all | none | metadata
Controls what is cached in the secondary cache (L2ARC). If this property is set to all, then both user data and metadata is cached. If this property is set to none, then neither user data nor metadata is cached. If this property is set to metadata, then only metadata is cached. The default value is all.

setuid=on | off
Controls whether the set-UID bit is respected for the file system. The default value is on.

shareiscsi=on | off
Like the sharenfs property, shareiscsi indicates whether a ZFS volume is exported as an iSCSI target. The acceptable values for this property are on, off, and type=disk. The default value is off.
In the future, other target types might be supported. For example, tape. You might want to set shareiscsi=on for a file system so that all ZFS volumes within the file system are shared by default. However, setting this property on a file system has no direct effect.

sharesmb=on | off | opts
Controls whether the file system is shared by using the Solaris CIFS service, and what options are to be used. A file system with the sharesmb property set to off is managed through traditional tools such as sharemgr(1M). Otherwise, the file system is automatically shared and unshared with the zfs share and zfs unshare commands. If the property is set to on, the sharemgr(1M) command is invoked with no options. Otherwise, the sharemgr(1M) command is invoked with options equivalent to the contents of this property.

Because SMB shares require a resource name, a unique resource name is constructed from the dataset name. The constructed name is a copy of the dataset name except that the characters in the dataset name which would be illegal in the resource name are replaced with underscore (\_) characters. A pseudo property "name" is also supported that allows you to replace the dataset name with a specified name. The specified name is then used to replace the prefix dataset in the case of inheritance. For example, if the dataset data/home/john is set to name=john, then data/home/john has a resource name of john. If a child dataset data/home/john/backups exists, it has a resource name of john\_backups.

When SMB shares are created, the SMB share name appears as an entry in the .zfs/shares directory. You can use the ls or chmod command to display the share-level ACLs on the entries in this directory.

When the sharesmb property is changed for a dataset, the dataset and any children inheriting the property are re-shared with the new options, only if the property was previously set to off, or if they were shared before the property was changed.
If the new property is set to off, the file systems are unshared.

sharenfs=on | off | opts
Controls whether the file system is shared via NFS, and what options are used. A file system with a sharenfs property of off is managed through traditional tools such as share(1M), unshare(1M), and dfstab(4). Otherwise, the file system is automatically shared and unshared with the zfs share and zfs unshare commands. If the property is set to on, the share(1M) command is invoked with no options. Otherwise, the share(1M) command is invoked with options equivalent to the contents of this property. When the sharenfs property is changed for a dataset, the dataset and any children inheriting the property are re-shared with the new options, only if the property was previously off, or if they were shared before the property was changed. If the new property is off, the file systems are unshared.

logbias=latency | throughput
Provides a hint to ZFS about handling of synchronous requests in this dataset. If logbias is set to latency (the default), ZFS uses the pool's log devices (if configured) to handle the requests at low latency. If logbias is set to throughput, ZFS does not use the configured pool log devices. Instead, ZFS optimizes synchronous operations for global pool throughput and efficient use of resources.

snapdir=hidden | visible
Controls whether the .zfs directory is hidden or visible in the root of the file system as discussed in the "Snapshots" section. The default value is hidden.

version=1 | 2 | current
The on-disk version of this file system, which is independent of the pool version. This property can only be set to later supported versions. See the zfs upgrade command.

volsize=size
For volumes, specifies the logical size of the volume. By default, creating a volume establishes a reservation of equal size. For storage pools with a version number of 9 or higher, a refreservation is set instead.
Any changes to volsize are reflected in an equivalent change to the reservation (or refreservation). The volsize can only be set to a multiple of volblocksize, and cannot be zero.

The reservation is kept equal to the volume's logical size to prevent unexpected behavior for consumers. Without the reservation, the volume could run out of space, resulting in undefined behavior or data corruption, depending on how the volume is used. These effects can also occur when the volume size is changed while it is in use (particularly when shrinking the size). Extreme care should be used when adjusting the volume size.

Though not recommended, a "sparse volume" (also known as "thin provisioning") can be created by specifying the -s option to the zfs create -V command, or by changing the reservation after the volume has been created. A "sparse volume" is a volume where the reservation is less than the volume size. Consequently, writes to a sparse volume can fail with ENOSPC when the pool is low on space. For a sparse volume, changes to volsize are not reflected in the reservation.

vscan=on | off
Controls whether regular files should be scanned for viruses when a file is opened and closed. In addition to enabling this property, the virus scan service must also be enabled for virus scanning to occur. The default value is off.

xattr=on | off
Controls whether extended attributes are enabled for this file system. The default value is on.

zoned=on | off
Controls whether the dataset is managed from a non-global zone. See the "Zones" section for more information. The default value is off.

The following three properties cannot be changed after the file system is created, and therefore, should be set when the file system is created. If the properties are not set with the zfs create or zpool create commands, these properties are inherited from the parent dataset.
If the parent dataset lacks these properties due to having been created prior to these features being supported, the new file system will have the default values for these properties.

casesensitivity=sensitive | insensitive | mixed
Indicates whether the file name matching algorithm used by the file system should be case-sensitive, case-insensitive, or allow a combination of both styles of matching. The default value for the casesensitivity property is sensitive. Traditionally, UNIX and POSIX file systems have case-sensitive file names. The mixed value for the casesensitivity property indicates that the file system can support requests for both case-sensitive and case-insensitive matching behavior. Currently, case-insensitive matching behavior on a file system that supports mixed behavior is limited to the Solaris CIFS server product. For more information about the mixed value behavior, see the Solaris ZFS Administration Guide.

normalization=none | formC | formD | formKC | formKD
Indicates whether the file system should perform a unicode normalization of file names whenever two file names are compared, and which normalization algorithm should be used. File names are always stored unmodified; names are normalized as part of any comparison process. If this property is set to a legal value other than none, and the utf8only property was left unspecified, the utf8only property is automatically set to on. The default value of the normalization property is none. This property cannot be changed after the file system is created.

utf8only=on | off
Indicates whether the file system should reject file names that include characters that are not present in the UTF-8 character code set. If this property is explicitly set to off, the normalization property must either not be explicitly set or be set to none. The default value for the utf8only property is off. This property cannot be changed after the file system is created.
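The utf8only check above can be sketched as a byte-level validation: with utf8only=on, a file name whose bytes are not valid UTF-8 is rejected at create time. A minimal model (the function name is hypothetical; ZFS performs this check in the kernel):

```python
# With utf8only=on, reject any file name whose raw bytes do not decode
# as valid UTF-8; with utf8only=off, any byte sequence is accepted.

def name_allowed(raw: bytes, utf8only: bool) -> bool:
    if not utf8only:
        return True
    try:
        raw.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

print(name_allowed("report.txt".encode(), True))  # True
print(name_allowed(b"caf\xe9.txt", True))         # False: 0xE9 is a bare Latin-1 byte
print(name_allowed(b"caf\xe9.txt", False))        # True: utf8only=off accepts it
```

This is also why normalization requires UTF-8 names: Unicode normalization forms are only defined for valid Unicode text, so setting normalization to a value other than none forces utf8only on.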
The casesensitivity, normalization, and utf8only properties are also new permissions that can be assigned to non-privileged users by using the ZFS delegated administration feature.

Temporary Mount Point Properties

When a file system is mounted, either through mount(1M) for legacy mounts or the zfs mount command for normal file systems, its mount options are set according to its properties. The correlation between properties and mount options is as follows:

PROPERTY    MOUNT OPTION
devices     devices/nodevices
exec        exec/noexec
readonly    ro/rw
setuid      setuid/nosetuid
xattr       xattr/noxattr

In addition, these options can be set on a per-mount basis using the -o option, without affecting the property that is stored on disk. The values specified on the command line override the values stored in the dataset. The nosuid option is an alias for nodevices,nosetuid. These properties are reported as "temporary" by the zfs get command. If the properties are changed while the dataset is mounted, the new setting overrides any temporary settings.

User Properties

In addition to the standard native properties, ZFS supports arbitrary user properties. User properties have no effect on ZFS behavior, but applications or administrators can use them to annotate datasets (file systems, volumes, and snapshots).

User property names must contain a colon (:) character to distinguish them from native properties. They may contain lowercase letters, numbers, and the following punctuation characters: colon (:), dash (-), period (.), and underscore (\_). The expected convention is that the property name is divided into two portions such as module:property, but this namespace is not enforced by ZFS. User property names can be at most 256 characters, and cannot begin with a dash (-).
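The user-property naming rules above (a required colon, a restricted character set, a 256-character limit, no leading dash) can be captured in a short validator. This is a sketch of the documented rules, not code from ZFS itself:

```python
import re

# A user property name must contain ':', use only lowercase letters, digits,
# ':', '-', '.', '_', be at most 256 characters, and not begin with a dash.

def valid_user_property(name: str) -> bool:
    return (
        len(name) <= 256
        and ":" in name
        and not name.startswith("-")
        and re.fullmatch(r"[a-z0-9:._-]+", name) is not None
    )

assert valid_user_property("com.example:backup-policy")  # reversed-DNS module form
assert not valid_user_property("backup-policy")          # no colon: looks native
assert not valid_user_property("-bad:name")              # cannot begin with a dash
assert not valid_user_property("Com.Example:Policy")     # uppercase not allowed
```

A name such as `com.example:backup-policy` follows the module:property convention, with a reversed DNS domain as the module portion (the `com.example` domain here is a placeholder).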
When making programmatic use of user properties, it is strongly suggested to use a reversed DNS domain name for the module component of property names to reduce the chance that two independently developed packages use the same property name for different purposes. Property names beginning with com.sun. are reserved for use by Sun Microsystems.

The values of user properties are arbitrary strings, are always inherited, and are never validated. All of the commands that operate on properties (zfs list, zfs get, zfs set, and so forth) can be used to manipulate both native properties and user properties. Use the zfs inherit command to clear a user property.
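The property-to-mount-option correlation tabulated under "Temporary Mount Point Properties" above can be sketched as a simple lookup: `on` selects the first option form, `off` the second (readonly maps to ro/rw). This is a simplified model of the documented table; the real translation happens inside the mount machinery:

```python
# Map each temporary-mount-related property value to its mount option,
# following the PROPERTY / MOUNT OPTION table from the man page.

OPTION_PAIRS = {
    "devices":  ("devices", "nodevices"),
    "exec":     ("exec", "noexec"),
    "readonly": ("ro", "rw"),
    "setuid":   ("setuid", "nosetuid"),
    "xattr":    ("xattr", "noxattr"),
}

def mount_options(props: dict[str, str]) -> list[str]:
    opts = []
    for prop, (on_opt, off_opt) in OPTION_PAIRS.items():
        value = props.get(prop)
        if value == "on":
            opts.append(on_opt)
        elif value == "off":
            opts.append(off_opt)
    return opts

print(mount_options({"readonly": "on", "exec": "off", "setuid": "off"}))
# ['noexec', 'ro', 'nosetuid']
```

Passing the equivalent options per mount with `-o` (for example `-o ro,noexec`) overrides the stored property values for that mount only, which is why zfs get reports them as "temporary".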