
zfsforlinux

发布时间: 2023-05-14 22:32:50

⑴ How to install and use ZFS on CentOS 7

Installing ZFS

To install ZFS on CentOS, we first need to install the EPEL repository for the supporting packages, and then install the required ZFS packages from the ZFS repository.
yum localinstall --nogpgcheck http://epel.mirror.net.in/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el7.noarch.rpm

Now install the kernel-devel and zfs packages. The kernel-devel package is needed so that ZFS can build its modules and insert them into the kernel.
yum install kernel-devel zfs

Verify that the zfs module is inserted into the kernel using the lsmod command; if it is not, insert it manually with the modprobe command.
[root@li1467-130 ~]# lsmod |grep zfs
[root@li1467-130 ~]# modprobe zfs
[root@li1467-130 ~]# lsmod |grep zfs
zfs 2790271 0
zunicode 331170 1 zfs
zavl 15236 1 zfs
zcommon 55411 1 zfs
znvpair 89086 2 zfs,zcommon
spl 92029 3 zfs,zcommon,znvpair

Let's check whether we can use the zfs commands:
[root@li1467-130 ~]# zfs list
no datasets available
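To exercise the freshly installed commands without spare disks, a file-backed throwaway pool works; this is a sketch, where the file path and the pool name testpool are arbitrary choices, and the commands need root:

```shell
# Create a 100 MB sparse file to serve as a vdev (for testing only, never production)
truncate -s 100M /tmp/zfs-test.img
# Build a pool on the file, confirm a dataset now exists, then clean up
zpool create testpool /tmp/zfs-test.img
zfs list
zpool destroy testpool
rm /tmp/zfs-test.img
```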

⑵ How to mount a ZFS partition under Linux

Currently, in-place conversion from ext4 to ZFS is not supported; the only option is to reformat. The latest Ubuntu 16.04 LTS supports ZFS natively, though it does not seem to be enabled by default. So for the question "how do I convert from ext4 to ZFS": there is no way other than a full reformat.

⑶ My Linux system died and I reinstalled it. How do I import my ZFS pool correctly without damaging the data inside?

Reinstalling the system does not by itself destroy a ZFS pool stored on other disks. As long as the pool's disks were not overwritten by the new installation, install ZFS on the fresh system and re-import the pool with zpool import; the data remains intact.
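A sketch of the usual recovery steps, assuming the pool was named tank (the name is an assumption; running zpool import with no arguments shows the real one):

```shell
# Scan attached disks for importable pools and list what is found
zpool import
# Import the pool by name; -f forces the import if it was not cleanly exported
zpool import -f tank
# Verify the pool health and its datasets
zpool status tank
zfs list -r tank
```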

⑷ How do I install ghost for linux

The iso file needs to be mounted first, and then read.
For example, if your iso file is in the /home/kyle directory and you want to mount it under the /mnt directory,
use the command:
mount /home/kyle/g4l-v0.32.iso /mnt -o loop
Then enter the /mnt path:
cd /mnt
There should be content inside (files and directories).
If there is an Install.sh shell script, run the install script; if it is source code, install it the way you would any other source package:
./configure
make

make install
That should basically do it.

⑸ Which Linux systems currently support the ZFS file system

The file systems we commonly use in Linux are mainly ext3, ext2 and reiserfs. Linux currently supports almost all Unix-like file systems: besides ext3, reiserfs and ext2, which we choose among when installing a Linux operating system, it also supports Apple macOS's HFS, as well as file systems from other Unix operating systems, such as XFS, JFS...

⑹ What does "for Linux" mean

Take QQ for Linux as an example:
it is the version of QQ used under the Linux operating system.
Because the Linux operating system is different from the Windows operating system we usually use, the QQ we normally run cannot be used under Linux.
Software named in the pattern "software name for OS name" is built specifically for that operating system.

⑺ Is ZFS on Linux stable yet

The following is a guide to installing the native ZFS file system on Ubuntu/Linux. Test environment: Linux 2.6.35-24-generic #42-Ubuntu SMP x86_64 GNU/Linux, Ubuntu 10.10; it also applies to Ubuntu 10.04. Make sure the following packages are installed: build-essential gawk zlib1g-dev uuid-dev. If they are not installed...

⑻ proxmox ve -- ZFS on Linux 2019-08-30

ZFS is a combined file system and logical volume manager designed by Sun Microsystems. Starting with Proxmox VE 3.4, the native Linux kernel port of the ZFS file system is introduced as an optional file system, and also as an additional choice for the root file system. There is no need to compile ZFS modules manually; all packages are included.

By using ZFS, it is possible to achieve maximum enterprise features on a low hardware budget, and high-performance systems can be built by leveraging SSD caching or by using SSDs exclusively. ZFS can replace expensive hardware RAID cards with moderate CPU and memory load combined with easy management.

General ZFS advantages

ZFS depends heavily on memory, so you need at least 8GB to start. In practice, use as much high-end hardware as you can get. To prevent data corruption, we recommend the use of high-quality ECC RAM.

If you use a dedicated cache and/or log disk, you should use an enterprise-class SSD (e.g. Intel SSD DC S3700 Series). This can significantly increase overall performance.

If you are experimenting with an installation of Proxmox VE inside a VM (nested virtualization), don’t use <tt>virtio</tt> for the disks of that VM, since they are not supported by ZFS. Use IDE or SCSI instead (this also works with the <tt>virtio</tt> SCSI controller type).

When you install using the Proxmox VE installer, you can choose ZFS for the root file system. You need to select the RAID type at installation time:

| RAID0 | Also called “striping”. The capacity of such a volume is the sum of the capacities of all disks. But RAID0 does not add any redundancy, so the failure of a single drive makes the volume unusable. |
| RAID1 | Also called “mirroring”. Data is written identically to all disks. This mode requires at least 2 disks with the same size. The resulting capacity is that of a single disk. |
| RAID10 | A combination of RAID0 and RAID1. Requires at least 4 disks. |
| RAIDZ-1 | A variation on RAID-5, single parity. Requires at least 3 disks. |
| RAIDZ-2 | A variation on RAID-5, double parity. Requires at least 4 disks. |
| RAIDZ-3 | A variation on RAID-5, triple parity. Requires at least 5 disks. |

The installer automatically partitions the disks, creates a ZFS pool called <tt>rpool</tt>, and installs the root file system on the ZFS subvolume <tt>rpool/ROOT/pve-1</tt>.

Another subvolume called <tt>rpool/data</tt> is created to store VM images. In order to use that with the Proxmox VE tools, the installer creates the following configuration entry in <tt>/etc/pve/storage.cfg</tt>:

<pre><tt>zfspool: local-zfs
pool rpool/data
sparse
content images,rootdir</tt></pre>

After installation, you can view your ZFS pool status using the <tt>zpool</tt> command:

<pre><tt># zpool status
pool: rpool
state: ONLINE
scan: none requested
config:

errors: No known data errors</tt></pre>

The <tt>zfs</tt> command is used to configure and manage your ZFS file systems. The following command lists all file systems after installation:

<pre><tt># zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 4.94G 7.68T 96K /rpool
rpool/ROOT 702M 7.68T 96K /rpool/ROOT
rpool/ROOT/pve-1 702M 7.68T 702M /
rpool/data 96K 7.68T 96K /rpool/data
rpool/swap 4.25G 7.69T 64K -</tt></pre>

Depending on whether the system is booted in EFI or legacy BIOS mode, the Proxmox VE installer sets up either <tt>grub</tt> or <tt>systemd-boot</tt> as the main bootloader. See the chapter on Proxmox VE host bootloaders for details.

This section gives you some usage examples for common tasks. ZFS itself is really powerful and provides many options. The main commands to manage ZFS are <tt>zfs</tt> and <tt>zpool</tt>. Both commands come with great manual pages, which can be read with:

<pre><tt># man zpool
# man zfs</tt></pre>

To create a new pool, at least one disk is needed. The <tt>ashift</tt> property should match the sector size of the underlying disk: 2 to the power of <tt>ashift</tt> bytes, equal to or larger than the disk's physical sector size.

<pre><tt>zpool create -f -o ashift=12 <pool> <device></tt></pre>
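As a quick check of the exponent: <tt>ashift=12</tt> means 2^12-byte sectors, the common 4K physical sector size, while <tt>ashift=9</tt> would mean legacy 512-byte sectors:

```shell
# ashift is a power-of-two exponent: sector size = 2^ashift bytes
echo $((2 ** 12))   # 4096, matching 4K-sector disks
echo $((2 ** 9))    # 512, matching legacy 512-byte-sector disks
```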

To activate compression

<pre><tt>zfs set compression=lz4 <pool></tt></pre>

Create a new pool with RAID-0 (minimum 1 disk)

<pre><tt>zpool create -f -o ashift=12 <pool> <device1> <device2></tt></pre>

Create a new pool with RAID-1 (minimum 2 disks)

<pre><tt>zpool create -f -o ashift=12 <pool> mirror <device1> <device2></tt></pre>

Create a new pool with RAID-10 (minimum 4 disks)

<pre><tt>zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4></tt></pre>

Create a new pool with RAIDZ-1 (minimum 3 disks)

<pre><tt>zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3></tt></pre>

Create a new pool with RAIDZ-2 (minimum 4 disks)

<pre><tt>zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4></tt></pre>

Create a new pool with cache (L2ARC): it is possible to use a dedicated cache drive partition to increase performance (use an SSD).

As <tt><device></tt> it is possible to use more devices, as shown in "Create a new pool with RAID*".

<pre><tt>zpool create -f -o ashift=12 <pool> <device> cache <cache_device></tt></pre>

Create a new pool with log (ZIL): it is possible to use a dedicated log drive partition to increase performance (use an SSD).

As <tt><device></tt> it is possible to use more devices, as shown in "Create a new pool with RAID*".

<pre><tt>zpool create -f -o ashift=12 <pool> <device> log <log_device></tt></pre>

Add cache and log to an existing pool: if you have a pool without cache and log, first partition the SSD into two partitions with <tt>parted</tt> or <tt>gdisk</tt>.

| Always use GPT partition tables. |

The maximum size of a log device should be about half the size of physical memory, so this is usually quite small. The rest of the SSD can be used as cache.

<pre><tt>zpool add -f <pool> log <device-part1> cache <device-part2></tt></pre>

Changing a failed device

<pre><tt>zpool replace -f <pool> <old device> <new device></tt></pre>

Changing a failed bootable device when using systemd-boot

<pre><tt>sgdisk <healthy bootable device> -R <new device>
sgdisk -G <new device>
zpool replace -f <pool> <old zfs partition> <new zfs partition>
pve-efiboot-tool format <new disk's ESP>
pve-efiboot-tool init <new disk's ESP></tt></pre>

| <tt>ESP</tt> stands for EFI System Partition, which is set up as partition #2 on bootable disks set up by the Proxmox VE installer since version 5.4. For details, see Setting up a new partition for use as synced ESP. |

ZFS comes with an event daemon, which monitors events generated by the ZFS kernel module. The daemon can also send emails on ZFS events like pool errors. Newer ZFS packages ship the daemon in a separate package, and you can install it using <tt>apt-get</tt>:

<pre><tt># apt-get install zfs-zed</tt></pre>

To activate the daemon it is necessary to edit <tt>/etc/zfs/zed.d/zed.rc</tt> with your favourite editor, and uncomment the <tt>ZED_EMAIL_ADDR</tt> setting:

<pre><tt>ZED_EMAIL_ADDR="root"</tt></pre>

Please note Proxmox VE forwards mails to <tt>root</tt> to the email address configured for the root user.

| The only setting that is required is <tt>ZED_EMAIL_ADDR</tt>. All other settings are optional. |

It is good to use at most 50 percent (which is the default) of the system memory for ZFS ARC to prevent performance shortage of the host. Use your preferred editor to change the configuration in <tt>/etc/modprobe.d/zfs.conf</tt> and insert:

<pre><tt>options zfs zfs_arc_max=8589934592</tt></pre>

This example setting limits the usage to 8GB.
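The value is given in bytes, so the 8GB limit works out as 8 × 1024³:

```shell
# zfs_arc_max takes a byte count; compute 8 GiB
echo $((8 * 1024 * 1024 * 1024))   # 8589934592
```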

If your root file system is ZFS you must update your initramfs every time this value changes:

<pre><tt>update-initramfs -u</tt></pre>

Swap space created on a zvol may cause problems, such as blocking the server or generating a high IO load, often seen when starting a backup to an external storage.

We strongly recommend using enough memory, so that you normally do not run into low-memory situations. Should you need or want to add swap, it is preferred to create a partition on a physical disk and use it as a swap device. You can leave some space free for this purpose in the advanced options of the installer. Additionally, you can lower the “swappiness” value. A good value for servers is 10:

<pre><tt>sysctl -w vm.swappiness=10</tt></pre>

To make the swappiness persistent, open <tt>/etc/sysctl.conf</tt> with an editor of your choice and add the following line:

<pre><tt>vm.swappiness = 10</tt></pre>

<caption class="title">Table 1. Linux kernel <tt>swappiness</tt> parameter values</caption>

| <tt>vm.swappiness = 0</tt> | The kernel will swap only to avoid an out of memory condition. |
| <tt>vm.swappiness = 1</tt> | Minimum amount of swapping without disabling it entirely. |
| <tt>vm.swappiness = 10</tt> | This value is sometimes recommended to improve performance when sufficient memory exists in a system. |
| <tt>vm.swappiness = 60</tt> | The default value. |
| <tt>vm.swappiness = 100</tt> | The kernel will swap aggressively. |

ZFS on Linux version 0.8.0 introduced support for native encryption of datasets. After an upgrade from previous ZFS on Linux versions, the encryption feature can be enabled per pool:

<pre><tt># zpool get feature@encryption tank
NAME PROPERTY VALUE SOURCE
tank feature@encryption disabled local

# zpool set feature@encryption=enabled tank

# zpool get feature@encryption tank
NAME PROPERTY VALUE SOURCE
tank feature@encryption enabled local</tt></pre>

| There is currently no support for booting from pools with encrypted datasets using Grub, and only limited support for automatically unlocking encrypted datasets on boot. Older versions of ZFS without encryption support will not be able to decrypt stored data. |

| It is recommended to either unlock storage datasets manually after booting, or to write a custom unit to pass the key material needed for unlocking on boot to <tt>zfs load-key</tt>. |

| Establish and test a backup procedure before enabling encryption of production data. If the associated key material/passphrase/keyfile has been lost, accessing the encrypted data is no longer possible. |

Encryption needs to be set up when creating datasets/zvols, and by default is inherited by child datasets. For example, to create an encrypted dataset <tt>tank/encrypted_data</tt> and configure it as storage in Proxmox VE, run the following commands:

<pre><tt># zfs create -o encryption=on -o keyformat=passphrase tank/encrypted_data
Enter passphrase:
Re-enter passphrase:</tt></pre>

All guest volumes/disks created on this storage will be encrypted with the shared key material of the parent dataset.

To actually use the storage, the associated key material needs to be loaded with <tt>zfs load-key</tt>:

<pre><tt># zfs load-key tank/encrypted_data
Enter passphrase for 'tank/encrypted_data':</tt></pre>

It is also possible to use a (random) keyfile instead of prompting for a passphrase by setting the <tt>keylocation</tt> and <tt>keyformat</tt> properties, either at creation time or with <tt>zfs change-key</tt> on existing datasets:

<pre><tt># dd if=/dev/urandom of=/path/to/keyfile bs=32 count=1
# zfs change-key -o keyformat=raw -o keylocation=file:///path/to/keyfile tank/encrypted_data</tt></pre>

| When using a keyfile, special care needs to be taken to secure the keyfile against unauthorized access or accidental loss. Without the keyfile, it is not possible to access the plaintext data! |

A guest volume created underneath an encrypted dataset will have its <tt>encryptionroot</tt> property set accordingly. The key material only needs to be loaded once per encryptionroot to be available to all encrypted datasets underneath it.

See the <tt>encryptionroot</tt>, <tt>encryption</tt>, <tt>keylocation</tt>, <tt>keyformat</tt> and <tt>keystatus</tt> properties, the <tt>zfs load-key</tt>, <tt>zfs unload-key</tt> and <tt>zfs change-key</tt> commands and the <tt>Encryption</tt> section from <tt>man zfs</tt> for more details and advanced usage.
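As a short sketch of the inspection commands named above, assuming the example dataset <tt>tank/encrypted_data</tt> from earlier exists:

```shell
# Show whether the key for the dataset is currently loaded (available/unavailable)
zfs get keystatus tank/encrypted_data
# Unload the key so the data is inaccessible until the next zfs load-key
zfs unload-key tank/encrypted_data
```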

⑼ What software can be installed from the software markets of the various Linux versions

1. Common office software: OpenOffice, a free and open-source office suite; RedOffice, derived from OpenOffice, similar to the domestic Kylin system; LibreOffice, bundled by default in many desktop distributions such as Ubuntu, usable for general document reading and processing; Google Docs, very good for online collaborative work and capable of basic document handling and presentations, with big advantages over local office software; WPS, a domestic office suite; and the VNote note-taking software.
2. Input methods: Sogou Input Method for Linux.
3. Web and communication: Chrome browser, Firefox browser, WeChat for Linux, QQ for Linux.
4. Development tools: Atom editor, VSCode editor, Sublime Text editor, Eclipse, VirtualBox virtual machine.
5. Audio-visual entertainment: NetEase Cloud Music, Counter-Strike: Global Offensive (CS:GO), Dota 2.
