zfsforlinux
⑴ How to install and use ZFS on CentOS 7
Installing ZFS
To install ZFS on CentOS, we first need to install the EPEL repository support package, and then install the required ZFS packages from the ZFS repository.
yum localinstall --nogpgcheck http://epel.mirror.net.in/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el7.noarch.rpm
Now install the kernel development and zfs packages. The kernel-devel package is needed so that ZFS can build its modules and insert them into the kernel.
yum install kernel-devel zfs
Verify that the zfs module has been inserted into the kernel with the lsmod command; if it has not, insert it manually with the modprobe command.
[root@li1467-130 ~]# lsmod |grep zfs
[root@li1467-130 ~]# modprobe zfs
[root@li1467-130 ~]# lsmod |grep zfs
zfs 2790271 0
zunicode 331170 1 zfs
zavl 15236 1 zfs
zcommon 55411 1 zfs
znvpair 89086 2 zfs,zcommon
spl 92029 3 zfs,zcommon,znvpair
Let's check whether we can use the zfs command:
[root@li1467-130 ~]# zfs list
no datasets available
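To actually start using ZFS after installation, a pool and a dataset can be created. This is only a minimal hedged sketch: the disk /dev/sdb and the pool name tank are placeholders, and the commands require root and a spare, empty disk.

```shell
# Create a pool named "tank" on a spare disk (placeholder device name).
zpool create tank /dev/sdb
# Create a dataset inside the pool; it is mounted automatically.
zfs create tank/data
# Confirm the new dataset is listed.
zfs list
```

After this, `zfs list` should show the tank pool and tank/data dataset instead of "no datasets available".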
⑵ How to mount a zfs partition on Linux
At present, converting ext4 to zfs is not supported; the only option is to reformat. The latest 16.04 LTS supports zfs natively, but it does not appear to be enabled by default. If you have never converted a file system on Linux before and are asking how to convert from ext4 to zfs: there is no way other than a full reformat.
⑶ My Linux system died and I reinstalled it. How do I correctly import my zfs pool without damaging the data in it?
Reinstalling the operating system does not by itself destroy a ZFS pool that lives on other disks. After installing the zfs packages on the new system, you can import the existing pool; the data in it remains intact.
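For reference, a pool created by a previous installation can normally be re-imported once the zfs packages are installed on the new system. This is a hedged sketch; the pool name tank is a placeholder, and the commands require root and working zfs modules.

```shell
# Scan attached disks for importable pools and show what was found.
zpool import
# Import the pool by name; -f is only needed if the pool was not
# cleanly exported by the old system.
zpool import -f tank
# Check that the datasets and their data are visible.
zfs list
```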
⑷ How do I install ghost for linux?
An iso file needs to be mounted first, and then read.
For example, if your iso file is in the /home/kyle directory and you want to mount it under /mnt,
use the command:
mount /home/kyle/g4l-v0.32.iso /mnt -o loop
Then enter the /mnt path:
cd /mnt
There should be contents inside (files and folders).
If there is an Install.sh shell script, run the install script; if it is source code, install it the same way as any other source code,
by running:
./configure
make
make install
That's basically it.
⑸ Which Linux systems currently support the zfs file system?
The file systems we commonly use on Linux are mainly ext3, ext2 and reiserfs. Linux currently supports almost all Unix-like file systems; besides ext3, reiserfs and ext2, which we choose between when installing the Linux operating system, it also supports Apple macOS's HFS, as well as the file systems of other Unix operating systems, such as XFS, JFS...
⑹ What does "for Linux" mean?
Take QQ for Linux, for example:
it is the version of QQ used under the Linux operating system.
Because the Linux operating system is different from the Windows operating system we normally use, the QQ we normally use cannot run under Linux.
Many programs named in the form "software name for operating system name" are developed specifically for that operating system. For an introduction to Linux, see 《Linux就該這么學》.
⑺ Is zfs on linux stable yet?
The following will guide you through installing the native ZFS file system on Ubuntu/Linux. Test environment: Linux 2.6.35-24-generic #42-Ubuntu SMP x86_64 GNU/Linux, Ubuntu 10.10; it also applies to Ubuntu 10.04. Make sure the following packages are installed: build-essential gawk zlib1g-dev uuid-dev. If they are not ins...
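Assuming a Debian/Ubuntu system, the prerequisite packages listed above can be installed in one step:

```shell
# Install the build prerequisites for compiling native ZFS on Ubuntu.
sudo apt-get install build-essential gawk zlib1g-dev uuid-dev
```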
⑻ proxmox ve -- ZFS on Linux 2019-08-30
ZFS is a combination of a file system and a logical volume manager designed by Sun Microsystems. Starting with Proxmox VE 3.4, the native Linux kernel port of the ZFS file system was introduced as an optional file system, and as an additional choice for the root file system. There is no need to compile ZFS modules manually - all packages are included.
By using ZFS, it is possible to achieve maximum enterprise features with low-budget hardware, and high-performance systems can be achieved by using an SSD cache or SSDs only. ZFS can replace costly hardware RAID cards with moderate CPU and memory load, combined with easy management.
General ZFS advantages
ZFS depends heavily on memory, so you need at least 8GB to start. In practice, use as much memory as your hardware budget allows. To prevent data corruption, we recommend the use of high-quality ECC RAM.
If you use a dedicated cache and/or log disk, you should use an enterprise-class SSD (e.g. Intel SSD DC S3700 Series). This can increase the overall performance significantly.
If you are experimenting with an installation of Proxmox VE inside a VM (nested virtualization), don't use <tt>virtio</tt> for the disks of that VM, since they are not supported by ZFS. Use IDE or SCSI instead (this also works with the <tt>virtio</tt> SCSI controller type).
When you install using the Proxmox VE installer, you can choose ZFS for the root file system. You need to select the RAID type at installation time:
| RAID0 | Also called "striping". The capacity of such a volume is the sum of the capacities of all disks. But RAID0 does not add any redundancy, so the failure of a single drive makes the volume unusable. |
| RAID1 | Also called "mirroring". Data is written identically to all disks. This mode requires at least 2 disks with the same size. The resulting capacity is that of a single disk. |
| RAID10 | A combination of RAID0 and RAID1. Requires at least 4 disks. |
| RAIDZ-1 | A variation on RAID-5, single parity. Requires at least 3 disks. |
| RAIDZ-2 | A variation on RAID-5, double parity. Requires at least 4 disks. |
| RAIDZ-3 | A variation on RAID-5, triple parity. Requires at least 5 disks. |
The installer automatically partitions the disks, creates a ZFS pool called <tt>rpool</tt>, and installs the root file system on the ZFS subvolume <tt>rpool/ROOT/pve-1</tt>.
Another subvolume called <tt>rpool/data</tt> is created to store VM images. In order to use that with the Proxmox VE tools, the installer creates the following configuration entry in <tt>/etc/pve/storage.cfg</tt>:
<pre><tt>zfspool: local-zfs
pool rpool/data
sparse
content images,rootdir</tt></pre>
After installation, you can view your ZFS pool status using the <tt>zpool</tt> command:
<pre><tt># zpool status
pool: rpool
state: ONLINE
scan: none requested
config:
errors: No known data errors</tt></pre>
The <tt>zfs</tt> command is used to configure and manage your ZFS file systems. The following command lists all file systems after installation:
<pre><tt># zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
rpool            4.94G  7.68T    96K  /rpool
rpool/ROOT        702M  7.68T    96K  /rpool/ROOT
rpool/ROOT/pve-1  702M  7.68T   702M  /
rpool/data         96K  7.68T    96K  /rpool/data
rpool/swap       4.25G  7.69T    64K  -</tt></pre>
Depending on whether the system is booted in EFI or legacy BIOS mode, the Proxmox VE installer sets up either <tt>grub</tt> or <tt>systemd-boot</tt> as the main bootloader. See the chapter on Proxmox VE host bootloaders for details.
This section gives you some usage examples for common tasks. ZFS itself is really powerful and provides many options. The main commands to manage ZFS are <tt>zfs</tt> and <tt>zpool</tt>. Both commands come with great manual pages, which can be read with:
<pre><tt># man zpool
# man zfs</tt></pre>
To create a new pool, at least one disk is needed. The <tt>ashift</tt> value should correspond to a sector size (2 to the power of <tt>ashift</tt>) equal to or larger than that of the underlying disk.
<pre><tt>zpool create -f -o ashift=12 <pool> <device></tt></pre>
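As a quick sanity check on the relation between <tt>ashift</tt> and sector size (2^ashift bytes), the common value <tt>ashift=12</tt> corresponds to 4096-byte sectors:

```shell
# 2^ashift gives the sector size in bytes; ashift=12 -> 4096.
echo $((2 ** 12))
# One way to query a disk's actual sector sizes (placeholder device):
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sda
```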
To activate compression:
<pre><tt>zfs set compression=lz4 <pool></tt></pre>
Create a new pool with RAID-0 (minimum 1 disk):
<pre><tt>zpool create -f -o ashift=12 <pool> <device1> <device2></tt></pre>
Create a new pool with RAID-1 (minimum 2 disks):
<pre><tt>zpool create -f -o ashift=12 <pool> mirror <device1> <device2></tt></pre>
Create a new pool with RAID-10 (minimum 4 disks):
<pre><tt>zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4></tt></pre>
Create a new pool with RAIDZ-1 (minimum 3 disks):
<pre><tt>zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3></tt></pre>
Create a new pool with RAIDZ-2 (minimum 4 disks):
<pre><tt>zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4></tt></pre>
It is possible to use a dedicated cache drive partition to increase performance (use an SSD).
As <tt><device></tt>, it is possible to use more devices, as shown in "Create a new pool with RAID*".
<pre><tt>zpool create -f -o ashift=12 <pool> <device> cache <cache_device></tt></pre>
It is possible to use a dedicated drive partition as a log device to increase performance (use an SSD).
As <tt><device></tt>, it is possible to use more devices, as shown in "Create a new pool with RAID*".
<pre><tt>zpool create -f -o ashift=12 <pool> <device> log <log_device></tt></pre>
If you have a pool without a cache and log device, first partition the SSD into two partitions with <tt>parted</tt> or <tt>gdisk</tt>.
| Always use GPT partition tables. |
The maximum size of a log device should be about half the size of physical memory, so this is usually quite small. The rest of the SSD can be used as cache.
<pre><tt>zpool add -f <pool> log <device-part1> cache <device-part2></tt></pre>
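The "about half the size of physical memory" rule of thumb for the log-device size can be computed from <tt>/proc/meminfo</tt>; a small sketch (the printed value depends on the machine):

```shell
# Read total memory in kB and compute half of it, in whole GiB.
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
echo "suggested log size: $((mem_kb / 2 / 1024 / 1024)) GiB"
```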
Changing a failed device
<pre><tt>zpool replace -f <pool> <old device> <new device></tt></pre>
Changing a failed bootable device when using systemd-boot
<pre><tt>sgdisk <healthy bootable device> -R <new device>
sgdisk -G <new device>
zpool replace -f <pool> <old zfs partition> <new zfs partition>
pve-efiboot-tool format <new disk's ESP>
pve-efiboot-tool init <new disk's ESP></tt></pre>
| <tt>ESP</tt> stands for EFI System Partition, which is set up as partition #2 on bootable disks set up by the Proxmox VE installer since version 5.4. For details, see Setting up a new partition for use as synced ESP. |
ZFS comes with an event daemon, which monitors events generated by the ZFS kernel module. The daemon can also send emails on ZFS events like pool errors. Newer ZFS packages ship the daemon in a separate package, and you can install it using <tt>apt-get</tt>:
<pre><tt># apt-get install zfs-zed</tt></pre>
To activate the daemon it is necessary to edit <tt>/etc/zfs/zed.d/zed.rc</tt> with your favourite editor, and uncomment the <tt>ZED_EMAIL_ADDR</tt> setting:
<pre><tt>ZED_EMAIL_ADDR="root"</tt></pre>
Please note Proxmox VE forwards mails to <tt>root</tt> to the email address configured for the root user.
| The only setting that is required is <tt>ZED_EMAIL_ADDR</tt>. All other settings are optional. |
It is good to use at most 50 percent (which is the default) of the system memory for the ZFS ARC, to prevent performance degradation of the host. Use your preferred editor to change the configuration in <tt>/etc/modprobe.d/zfs.conf</tt> and insert:
<pre><tt>options zfs zfs_arc_max=8589934592</tt></pre>
This example setting limits the usage to 8GB.
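The value is given in bytes: for a limit of N GiB it is N × 1024³, so 8 GiB comes out to 8589934592, which can be verified in the shell:

```shell
# zfs_arc_max is specified in bytes: 8 GiB = 8 * 1024^3 bytes.
echo $((8 * 1024 * 1024 * 1024))
# prints 8589934592
```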
| If your root file system is ZFS, you must update your initramfs every time this value changes: |
<pre><tt>update-initramfs -u</tt></pre>
Swap space created on a zvol may cause problems, such as blocking the server or generating a high IO load, often seen when starting a backup to an external storage.
We strongly recommend using enough memory, so that you normally do not run into low-memory situations. Should you need or want to add swap, it is preferred to create a partition on a physical disk and use it as a swap device. You can leave some space free for this purpose in the advanced options of the installer. Additionally, you can lower the "swappiness" value. A good value for servers is 10:
<pre><tt>sysctl -w vm.swappiness=10</tt></pre>
To make the swappiness persistent, open <tt>/etc/sysctl.conf</tt> with an editor of your choice and add the following line:
<pre><tt>vm.swappiness = 10</tt></pre>
<caption class="title">Table 1. Linux kernel <tt>swappiness</tt> parameter values</caption>
| <tt>vm.swappiness = 0</tt> | The kernel will swap only to avoid an out-of-memory condition. |
| <tt>vm.swappiness = 1</tt> | Minimum amount of swapping without disabling it entirely. |
| <tt>vm.swappiness = 10</tt> | This value is sometimes recommended to improve performance when sufficient memory exists in a system. |
| <tt>vm.swappiness = 60</tt> | The default value. |
| <tt>vm.swappiness = 100</tt> | The kernel will swap aggressively. |
ZFS on Linux version 0.8.0 introduced support for native encryption of datasets. After an upgrade from previous ZFS on Linux versions, the encryption feature can be enabled per pool:
<pre><tt># zpool get feature@encryption tank
NAME  PROPERTY            VALUE            SOURCE
tank  feature@encryption  disabled         local

# zpool set feature@encryption=enabled tank

# zpool get feature@encryption tank
NAME  PROPERTY            VALUE            SOURCE
tank  feature@encryption  enabled          local</tt></pre>
| There is currently no support for booting from pools with encrypted datasets using Grub, and only limited support for automatically unlocking encrypted datasets on boot. Older versions of ZFS without encryption support will not be able to decrypt stored data. |
| It is recommended to either unlock storage datasets manually after booting, or to write a custom unit to pass the key material needed for unlocking on boot to <tt>zfs load-key</tt>. |
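Such a custom unit might look roughly like the following. This is only a sketch under assumptions: the unit file name is hypothetical, the ordering against the stock ZFS units reflects a common pattern rather than an official Proxmox VE recommendation, and interactive passphrase entry on the console is assumed.

```ini
# /etc/systemd/system/zfs-load-key.service (hypothetical name)
[Unit]
Description=Load keys for encrypted ZFS datasets
DefaultDependencies=no
After=zfs-import.target
Before=zfs-mount.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Load all keys; prompts on the console for passphrase-based keys.
ExecStart=/sbin/zfs load-key -a
StandardInput=tty-force

[Install]
WantedBy=zfs-mount.service
```

Enable it with `systemctl enable zfs-load-key.service`, and test a full reboot before relying on it for production datasets.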
| Establish and test a backup procedure before enabling encryption of production data. If the associated key material/passphrase/keyfile has been lost, accessing the encrypted data is no longer possible. |
Encryption needs to be set up when creating datasets/zvols, and is inherited by child datasets by default. For example, to create an encrypted dataset <tt>tank/encrypted_data</tt> and configure it as storage in Proxmox VE, run the following commands:
<pre><tt># zfs create -o encryption=on -o keyformat=passphrase tank/encrypted_data
Enter passphrase:
Re-enter passphrase:</tt></pre>
All guest volumes/disks created on this storage will be encrypted with the shared key material of the parent dataset.
To actually use the storage, the associated key material needs to be loaded with <tt>zfs load-key</tt>:
<pre><tt># zfs load-key tank/encrypted_data
Enter passphrase for 'tank/encrypted_data':</tt></pre>
It is also possible to use a (random) keyfile instead of prompting for a passphrase by setting the <tt>keylocation</tt> and <tt>keyformat</tt> properties, either at creation time or with <tt>zfs change-key</tt> on existing datasets:
<pre><tt># dd if=/dev/urandom of=/path/to/keyfile bs=32 count=1
# zfs change-key -o keyformat=raw -o keylocation=file:///path/to/keyfile tank/encrypted_data</tt></pre>
| When using a keyfile, special care needs to be taken to secure the keyfile against unauthorized access or accidental loss. Without the keyfile, it is not possible to access the plaintext data! |
A guest volume created underneath an encrypted dataset will have its <tt>encryptionroot</tt> property set accordingly. The key material only needs to be loaded once per encryptionroot to be available to all encrypted datasets underneath it.
See the <tt>encryptionroot</tt>, <tt>encryption</tt>, <tt>keylocation</tt>, <tt>keyformat</tt> and <tt>keystatus</tt> properties, the <tt>zfs load-key</tt>, <tt>zfs unload-key</tt> and <tt>zfs change-key</tt> commands and the <tt>Encryption</tt> section from <tt>man zfs</tt> for more details and advanced usage.
⑼ What software can be installed on Linux distributions?
1. Common office software: OpenOffice (a free and open-source office suite); RedOffice (derived from OpenOffice, a domestic Chinese product much like the Kylin system); LibreOffice (bundled by default with many desktop distributions such as Ubuntu, suitable for general document reading and processing); Google Docs (very good for online collaboration, handles basic document processing and presentations, and has big advantages over local office software); WPS (a Chinese-made office suite); VNote (note-taking software).
2. Input methods: Sogou Input Method for Linux.
3. Web browsing and communication: the Chrome browser, the Firefox browser, WeChat for Linux, QQ for Linux.
4. Development tools: the Atom editor, the VSCode editor, the Sublime Text editor, Eclipse, the VirtualBox virtual machine.
5. Media and entertainment: NetEase Cloud Music, Counter-Strike: Global Offensive (CS:GO), Dota 2.