
Installing Multipath on Linux

Published: 2022-12-09 03:55:27

Part 1: How to Use Linux's Built-in DM Multipath

1. What multipathing is
Multipathing, as the name suggests, means there is more than one path to choose from. In a SAN or IP SAN environment, Fibre Channel switches sit between the hosts and the storage, so a host reaches the storage over several links; relying on a single path is neither fast enough nor safe nor stable. Multipathing exists to make host-to-disk I/O as fast and efficient as possible. Its main functions are:
- failover and failback when a path fails
- load balancing of I/O traffic across paths
- virtualization of the paths into a single logical disk
Multipathing used to be the storage vendors' problem to solve; later it was split out and sold as a separate product.
The architecture is basically: storage array, multipath software, Fibre Channel switch, host, host operating system.

2. multipath on Linux
1) Is it installed by default?


[root@web2 multipath]# rpm -qa|grep device
device-mapper-1.02.39-1.el5
device-mapper-1.02.39-1.el5
device-mapper-multipath-0.4.7-34.el5
device-mapper-event-1.02.39-1.el5
[root@web2 multipath]#

2) Install the packages


rpm -ivh device-mapper-1.02.39-1.el5.rpm           # install the device-mapper package
rpm -ivh device-mapper-multipath-0.4.7-34.el5.rpm  # install the multipath package

Also enable the service at boot:
chkconfig --level 2345 multipathd on  # start multipathd automatically at boot
lsmod | grep dm_multipath             # verify the kernel module is loaded

3) Configure /etc/multipath.conf


# Blacklist the default local devices.
blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
}
devices {
    device {
        vendor "HP"
        path_grouping_policy multibus
        features "1 queue_if_no_path"
        path_checker readsector0
        failback immediate
    }
}

A complete configuration looks like this:

blacklist {
devnode "^sda"
}

defaults {
user_friendly_names no
}

multipaths {
multipath {
wwid
alias iscsi-dm0
path_grouping_policy multibus
path_checker tur
path_selector "round-robin 0"
}
multipath {
wwid
alias iscsi-dm1
path_grouping_policy multibus
path_checker tur
path_selector "round-robin 0"
}
multipath {
wwid
alias iscsi-dm2
path_grouping_policy multibus
path_checker tur
path_selector "round-robin 0"
}
multipath {
wwid
alias iscsi-dm3
path_grouping_policy multibus
path_checker tur
path_selector "round-robin 0"
}
}

devices {
device {
vendor "iSCSI-Enterprise"
proct "Virtual disk"
path_grouping_policy multibus
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
path_checker readsector0
path_selector "round-robin 0"
}
}
4) Commands


[root@web2 ~]# multipath -h
multipath-tools v0.4.7 (03/12, 2006)
Usage: multipath [-v level] [-d] [-h|-l|-ll|-f|-F|-r]
[-p failover|multibus|group_by_serial|group_by_prio]
[device]

-v level verbosity level
0 no output
1 print created devmap names only
2 default verbosity
3 print debug information
-h print this usage text
-b file bindings file location
-d dry run, do not create or update devmaps
-l show multipath topology (sysfs and DM info)
-ll show multipath topology (maximum info)
-f flush a multipath device map
-F flush all multipath device maps
-r force devmap reload
-p policy force all maps to specified policy :
failover 1 path per priority group
multibus all paths in 1 priority group
group_by_serial 1 priority group per serial
group_by_prio 1 priority group per priority lvl
group_by_node_name 1 priority group per target node

device limit scope to the device's multipath
(udev-style $DEVNAME reference, eg /dev/sdb
or major:minor or a device map name)
[root@web2 ~]#

5) Starting and stopping the service


# /etc/init.d/multipathd start   # start the multipathd service
service multipathd start
service multipathd restart
service multipathd stop

6) How to obtain the WWID


Method 1: read the bindings file:
[root@vxfs01 ~]# cat /var/lib/multipath/bindings
# Multipath bindings, Version : 1.0
# NOTE: this file is automatically maintained by the multipath program.
# You should not need to edit this file in normal circumstances.
#
# Format:
# alias wwid
#
mpath0
mpath1
mpath2
mpath3
mpath4

Method 2: run multipath -v3:
[root@vxfs01 ~]# multipath -v3 |grep 3600
sdb: uid = (callout)
sdc: uid = (callout)
sdd: uid = (callout)
sde: uid = (callout)
1:0:0:0 sdb 8:16 0 [undef][ready] DGC,RAI
1:0:1:0 sdc 8:32 1 [undef][ready] DGC,RAI
2:0:0:0 sdd 8:48 1 [undef][ready] DGC,RAI
2:0:1:0 sde 8:64 0 [undef][ready] DGC,RAI
Found matching wwid [] in bindings file.
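The bindings file is a simple two-column format: alias, then WWID. Its parsing can be sketched with a few lines of shell (the sample WWIDs below are made up, not from the listing above):

```shell
# Extract "alias=wwid" pairs from bindings-style content,
# skipping comment lines. Sample WWIDs are hypothetical.
sample='# Multipath bindings, Version : 1.0
# Format: alias wwid
mpath0 360060e80056f110000006f1100000000
mpath1 360060e80056f110000006f1100000001'

echo "$sample" | awk '!/^#/ && NF==2 { print $1 "=" $2 }'
```

On a real system the same awk expression can be pointed at /var/lib/multipath/bindings.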

More detailed write-ups:
http://zhumeng8337797.blog.163.com/blog/static/1007689142013416111534352/
http://blog.csdn.net/wuweilong/article/details/14184097
Official Red Hat documentation:
http://www.prudentwoo.com/wp-content/uploads/downloads/2013/11/Red_Hat_Enterprise_Linux-5-DM_Multipath-en-US.pdf
http://www.prudentwoo.com/wp-content/uploads/downloads/2013/11/Red_Hat_Enterprise_Linux-5-DM_Multipath-zh-CN.pdf
http://www.prudentwoo.com/wp-content/uploads/downloads/2013/11/Red_Hat_Enterprise_Linux-6-DM_Multipath-en-US.pdf
http://www.prudentwoo.com/wp-content/uploads/downloads/2013/11/Red_Hat_Enterprise_Linux-6-DM_Multipath-zh-CN.pdf

Part 2: How to Configure Multipath on a Linux System

Are you attaching SAN storage? You need to install multipath; its configuration file is /etc/multipath.conf.

Part 3: With Multipath Software Installed, Does fdisk -l Still Show the Individual Paths?

fdisk -l shows logical devices. For example, if a LUN on the array is mapped to the host over two paths, fdisk -l will show two logical disks, but they are in fact the same physical device (from the host's point of view, the LUN is the physical device). multipath -ll shows the two actual paths beneath that one physical device.
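The grouping can be sketched in shell (device names and WWIDs below are made up): each input line is one path device as fdisk would see it, and multipath collapses the lines that share a WWID into one map:

```shell
# 4 sd devices, 2 WWIDs: two paths to each of two LUNs.
# All names here are hypothetical.
paths='sdb wwid-lun0
sdc wwid-lun0
sdd wwid-lun1
sde wwid-lun1'

# fdisk -l lists one disk per line above; multipath groups by WWID:
echo "$paths" | awk '{ devs[$2] = devs[$2] " " $1 }
                     END { for (w in devs) print w ":" devs[w] }' | sort
```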

Part 4: Practical Notes on Configuring Multipath

When you configure storage you will inevitably run into multipathing. Vendors such as EMC (PowerPath) and Veritas (VxDMP) each ship their own multipath software, whose role is clear: I/O load balancing plus failover and failback. On Linux, device-mapper-multipath is a free, generic multipath manager, and its configuration is simple, driven mainly by /etc/multipath.conf. This article links to the "HPE 3PAR Red Hat Enterprise Linux and Oracle Linux Implementation Guide" and walks through my own multipath configuration; I hope it is a useful reference.

2017-05-04 - first draft

Read the original post - https://wsgzao.github.io/post/multipath/

Further reading

multipath - https://access.redhat.com/labsinfo/multipathhelper
[Original] Aggregating multiple paths with multipath on Red Flag Linux - http://www.linuxfly.org/post/513/


An ordinary PC has one disk attached to one bus, a one-to-one relationship. In a Fibre Channel SAN, hosts and storage are connected through FC switches, which creates a many-to-many relationship: a host can reach a given storage device, and its I/O can travel, over several different paths.

Since each host can reach its storage over several paths, questions follow. If all paths are used at once, how is the I/O traffic distributed? What happens when one of the paths fails? And from the operating system's point of view each path looks like a separate physical disk, even though they all lead to the same physical disk, which confuses users. Multipath software exists to solve exactly these problems; working together with the storage device, it provides the functions listed earlier: failover and failback, I/O load balancing, and disk virtualization.

Because multipath software has to cooperate with the storage, each vendor ships its own versions for different operating systems, and software and hardware are not always sold together: using the vendor's multipath software may require buying a separate license. EMC's multipath software for Linux, for example, is licensed separately. EMC offers PowerPath, HDS offers HDLM, and Veritas offers VxDMP. The multipath package that ships free with the OS, on the other hand, is generic: it supports devices from most storage vendors, and even arrays from lesser-known vendors can usually be made to work well with small changes to the configuration file.

The most important point is still to follow the advice of the vendor's engineers and choose the multipath software that fits your actual workload and storage strategy.

Part 5: What Is Multipath Storage on Linux About?

Viewing HDS multipath storage on Linux
On Red Hat, identify the storage space to be partitioned. In this example the space is multipath storage presented to the servers from an HDS AMS2000: sddlmad is the device to be partitioned on ycdb1, and sddlmah the one on ycdb2. Details follow.
Check the environment:
# rpm -qa|grep device-mapper
device-mapper-event-1.02.32-1.el5
device-mapper-multipath-0.4.7-30.el5
device-mapper-1.02.32-1.el5
# rpm -qa|grep lvm2
lvm2-2.02.46-8.el5
Check the space:
#fdisk -l
Disk /dev/sddlmad: 184.2 GB, 184236900352 bytes
255 heads, 63 sectors/track, 22398 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sddlmah: 184.2 GB, 184236900352 bytes
255 heads, 63 sectors/track, 22398 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Check the storage:
#cd /opt/DynamicLinkManager/bin/
#./dlnkmgr view -lu
Product : AMS
SerialNumber : 83041424
LUs : 8
iLU HDevName Device PathID Status
0000 sddlmaa /dev/sdb 000000 Online
/dev/sdj 000008 Online
/dev/sdr 000016 Online
/dev/sdz 000017 Online

0001 sddlmab /dev/sdc 000001 Online
/dev/sdk 000009 Online
/dev/sds 000018 Online
/dev/sdaa 000019 Online
0002 sddlmac /dev/sdd 000002 Online
/dev/sdl 000010 Online
/dev/sdt 000020 Online
/dev/sdab 000021 Online
0003 sddlmad /dev/sde 000003 Online
/dev/sdm 000011 Online
/dev/sdu 000022 Online
/dev/sdac 000023 Online
0004 sddlmae /dev/sdf 000004 Online
/dev/sdn 000012 Online
/dev/sdv 000024 Online
/dev/sdad 000025 Online
0005 sddlmaf /dev/sdg 000005 Online
/dev/sdo 000013 Online
/dev/sdw 000026 Online
/dev/sdae 000027 Online
0006 sddlmag /dev/sdh 000006 Online
/dev/sdp 000014 Online
/dev/sdx 000028 Online
/dev/sdaf 000029 Online
0007 sddlmah /dev/sdi 000007 Online
/dev/sdq 000015 Online
/dev/sdy 000030 Online
/dev/sdag 000031 Online
##############################################################
4. Editing lvm.conf
For LVM to work correctly here, its device filter must be changed so that it accepts the multipath devices and rejects the raw sd paths:
#cd /etc/lvm
#vi lvm.conf
# By default we accept every block device
# filter = [ "a/.*/" ]
filter = [ "a|sddlm[a-p][a-p]|", "r|/dev/sd|" ]
For example:

[root@bsrunbak etc]# ls -l lvm*

[root@bsrunbak etc]# cd lvm
[root@bsrunbak lvm]# ls
archive backup cache lvm.conf
[root@bsrunbak lvm]# more lvm.conf

[root@bsrunbak lvm]# pvs

Last login: Fri Jul 10 11:17:21 2015 from 172.17.99.198
[root@bsrunserver1 ~]#
[root@bsrunserver1 ~]#
[root@bsrunserver1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda4 30G 8.8G 20G 32% /
tmpfs 95G 606M 94G 1% /dev/shm
/dev/sda2 194M 33M 151M 18% /boot
/dev/sda1 200M 260K 200M 1% /boot/efi
/dev/mapper/datavg-oraclelv
50G 31G 17G 65% /oracle
172.16.110.25:/Tbackup
690G 553G 102G 85% /Tbackup
/dev/mapper/tmpvg-oradatalv
345G 254G 74G 78% /oradata
/dev/mapper/datavg-lvodc
5.0G 665M 4.1G 14% /odc
[root@bsrunserver1 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda5 datavg lvm2 a-- 208.06g 153.06g
/dev/sddlmba tmpvg lvm2 a-- 200.00g 49.99g
/dev/sddlmbb tmpvg lvm2 a-- 200.00g 0
[root@bsrunserver1 ~]# cd /etc/lvm
[root@bsrunserver1 lvm]# more lvm.conf
# Don't have more than one filter line active at once: only one gets
# used.

# Run vgscan after you change this parameter to ensure that
# the cache file gets regenerated (see below).
# If it doesn't do what you expect, check the output of 'vgscan -vvvv'.

# By default we accept every block device:
# filter = [ "a/.*/" ]

# Exclude the cdrom drive
# filter = [ "r|/dev/cdrom|" ]

# When testing I like to work with just loopback devices:
# filter = [ "a/loop/", "r/.*/" ]

# Or maybe all loops and ide drives except hdc:
# filter =[ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]

# Use anchors if you want to be really specific
# filter = [ "a|^/dev/hda8$|", "r/.*/" ]
filter = [ "a|/dev/sddlm.*|", "a|^/dev/sda5$|", "r|.*|" ]

[root@bsrunserver1 lvm]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda4 30963708 9178396 20212448 32% /
tmpfs 99105596 620228 98485368 1% /dev/shm
/dev/sda2 198337 33546 154551 18% /boot
/dev/sda1 204580 260 204320 1% /boot/efi
/dev/mapper/datavg-oraclelv
51606140 31486984 17497716 65% /oracle
172.16.110.25:/Tbackup
722486368 579049760 106736448 85% /Tbackup
/dev/mapper/tmpvg-oradatalv
361243236 266027580 76865576 78% /oradata
/dev/mapper/datavg-lvodc
5160576 680684 4217748 14% /odc
[root@bsrunserver1 lvm]#
You have new mail in /var/spool/mail/root
[root@bsrunserver1 lvm]#
[root@bsrunserver1 lvm]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda5 datavg lvm2 a-- 208.06g 153.06g
/dev/sddlmba tmpvg lvm2 a-- 200.00g 49.99g
/dev/sddlmbb tmpvg lvm2 a-- 200.00g 0
[root@bsrunserver1 lvm]#
Go to the HDLM tools directory:
[root@bsrunbak lvm]# cd /opt/D*/bin
or
[root@bsrunbak bin]# pwd
/opt/DynamicLinkManager/bin
Display the HDS storage LUs:
[root@bsrunbak lvm]# ./dlnkmgr view -lu

Part 6: How to Configure Multipath Without ASMLib When Installing RAC on Linux

FYI

/dev/mapper/mpathXX


If you use a multipath solution, you can bind device names directly with multipath; neither ASMLib nor udev is needed.

请直接参考 文档:Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0, 11.2.0) on RHEL5/OL5 [ID 605828.1]

[root@vrh1 ~]# for i in `cat /proc/partitions | awk '{print $4}' | grep sd | grep [a-z]$`; do echo "### $i: `scsi_id -g -u -s /block/$i`"; done
### sda: SATA_VBOX_HARDDISK_VB83d4445f-b8790695_
### sdb: SATA_VBOX_HARDDISK_VB0db2f233-269850e0_
### sdc: SATA_VBOX_HARDDISK_VBa56f2571-0dd27b33_
### sdd: SATA_VBOX_HARDDISK_VBf6b74ff7-871d1de8_
### sde: SATA_VBOX_HARDDISK_VB5a531910-25f4eb9a_
### sdf: SATA_VBOX_HARDDISK_VB4915e6e3-737b312e_
### sdg: SATA_VBOX_HARDDISK_VB512c8f75-37f4a0e9_
### sdh: SATA_VBOX_HARDDISK_VBc0115ef6-a48bc15d_
### sdi: SATA_VBOX_HARDDISK_VB3a556907-2b72391d_
### sdj: SATA_VBOX_HARDDISK_VB7ec8476c-08641bd4_
### sdk: SATA_VBOX_HARDDISK_VB743e1567-d0009678_


[root@vrh1 ~]# grep -v ^# /etc/multipath.conf
defaults {
    user_friendly_names yes
}
defaults {
    udev_dir /dev
    polling_interval 10
    selector "round-robin 0"
    path_grouping_policy failover
    getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
    prio_callout /bin/true
    path_checker readsector0
    rr_min_io 100
    rr_weight priorities
    failback immediate
    #no_path_retry fail
    user_friendly_names yes
}
devnode_blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
    devnode "^cciss!c[0-9]d[0-9]*"
}
multipaths {
    multipath {
        wwid SATA_VBOX_HARDDISK_VB0db2f233-269850e0_
        alias voting1
        path_grouping_policy failover
    }
    multipath {
        wwid SATA_VBOX_HARDDISK_VBa56f2571-0dd27b33_
        alias voting2
        path_grouping_policy failover
    }
    multipath {
        wwid SATA_VBOX_HARDDISK_VBf6b74ff7-871d1de8_
        alias voting3
        path_grouping_policy failover
    }
    multipath {
        wwid SATA_VBOX_HARDDISK_VB5a531910-25f4eb9a_
        alias ocr1
        path_grouping_policy failover
    }
    multipath {
        wwid SATA_VBOX_HARDDISK_VB4915e6e3-737b312e_
        alias ocr2
        path_grouping_policy failover
    }
    multipath {
        wwid SATA_VBOX_HARDDISK_VB512c8f75-37f4a0e9_
        alias ocr3
        path_grouping_policy failover
    }
}
[root@vrh1 ~]# multipath
[root@vrh1 ~]# multipath -ll
mpath2 (SATA_VBOX_HARDDISK_VB3a556907-2b72391d_) dm-9 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 8:0:0:0 sdi 8:128 active ready running
mpath1 (SATA_VBOX_HARDDISK_VBc0115ef6-a48bc15d_) dm-8 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 7:0:0:0 sdh 8:112 active ready running
ocr3 (SATA_VBOX_HARDDISK_VB512c8f75-37f4a0e9_) dm-7 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 6:0:0:0 sdg 8:96 active ready running
ocr2 (SATA_VBOX_HARDDISK_VB4915e6e3-737b312e_) dm-6 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 5:0:0:0 sdf 8:80 active ready running
ocr1 (SATA_VBOX_HARDDISK_VB5a531910-25f4eb9a_) dm-5 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 4:0:0:0 sde 8:64 active ready running
voting3 (SATA_VBOX_HARDDISK_VBf6b74ff7-871d1de8_) dm-4 ATA,VBOX HARDDISK
size=40G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 3:0:0:0 sdd 8:48 active ready running
voting2 (SATA_VBOX_HARDDISK_VBa56f2571-0dd27b33_) dm-3 ATA,VBOX HARDDISK
size=40G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 2:0:0:0 sdc 8:32 active ready running
voting1 (SATA_VBOX_HARDDISK_VB0db2f233-269850e0_) dm-2 ATA,VBOX HARDDISK
size=40G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 1:0:0:0 sdb 8:16 active ready running
mpath4 (SATA_VBOX_HARDDISK_VB743e1567-d0009678_) dm-11 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 10:0:0:0 sdk 8:160 active ready running
mpath3 (SATA_VBOX_HARDDISK_VB7ec8476c-08641bd4_) dm-10 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 9:0:0:0 sdj 8:144 active ready running
[root@vrh1 ~]# dmsetup ls | sort
mpath1  (253, 8)
mpath2  (253, 9)
mpath3  (253, 10)
mpath4  (253, 11)
ocr1    (253, 5)
ocr2    (253, 6)
ocr3    (253, 7)
VolGroup00-LogVol00 (253, 0)
VolGroup00-LogVol01 (253, 1)
voting1 (253, 2)
voting2 (253, 3)
voting3 (253, 4)
[root@vrh1 ~]# ls -l /dev/mapper/*
crw------- 1 root root 10, 62 Oct 17 09:58 /dev/mapper/control
brw-rw---- 1 root disk 253, 8 Oct 19 00:11 /dev/mapper/mpath1
brw-rw---- 1 root disk 253, 9 Oct 19 00:11 /dev/mapper/mpath2
brw-rw---- 1 root disk 253, 10 Oct 19 00:11 /dev/mapper/mpath3
brw-rw---- 1 root disk 253, 11 Oct 19 00:11 /dev/mapper/mpath4
brw-rw---- 1 root disk 253, 5 Oct 19 00:11 /dev/mapper/ocr1
brw-rw---- 1 root disk 253, 6 Oct 19 00:11 /dev/mapper/ocr2
brw-rw---- 1 root disk 253, 7 Oct 19 00:11 /dev/mapper/ocr3
brw-rw---- 1 root disk 253, 0 Oct 17 09:58 /dev/mapper/VolGroup00-LogVol00
brw-rw---- 1 root disk 253, 1 Oct 17 09:58 /dev/mapper/VolGroup00-LogVol01
brw-rw---- 1 root disk 253, 2 Oct 19 00:11 /dev/mapper/voting1
brw-rw---- 1 root disk 253, 3 Oct 19 00:11 /dev/mapper/voting2
brw-rw---- 1 root disk 253, 4 Oct 19 00:11 /dev/mapper/voting3
[root@vrh1 ~]# ls -l /dev/dm*
brw-rw---- 1 root root 253, 0 Oct 17 09:58 /dev/dm-0
brw-rw---- 1 root root 253, 1 Oct 17 09:58 /dev/dm-1
brw-rw---- 1 root root 253, 10 Oct 19 00:11 /dev/dm-10
brw-rw---- 1 root root 253, 11 Oct 19 00:11 /dev/dm-11
brw-rw---- 1 root root 253, 2 Oct 19 00:11 /dev/dm-2
brw-rw---- 1 root root 253, 3 Oct 19 00:11 /dev/dm-3
brw-rw---- 1 root root 253, 4 Oct 19 00:11 /dev/dm-4
brw-rw---- 1 root root 253, 5 Oct 19 00:11 /dev/dm-5
brw-rw---- 1 root root 253, 6 Oct 19 00:11 /dev/dm-6
brw-rw---- 1 root root 253, 7 Oct 19 00:11 /dev/dm-7
brw-rw---- 1 root root 253, 8 Oct 19 00:11 /dev/dm-8
brw-rw---- 1 root root 253, 9 Oct 19 00:11 /dev/dm-9
[root@vrh1 ~]# ls -l /dev/disk/by-id/
total 0
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB0db2f233-269850e0 -> ../../asm-diskb
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB3a556907-2b72391d -> ../../asm-diski
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB4915e6e3-737b312e -> ../../asm-diskf
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB512c8f75-37f4a0e9 -> ../../asm-diskg
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB5a531910-25f4eb9a -> ../../asm-diske
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB743e1567-d0009678 -> ../../asm-diskk
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB7ec8476c-08641bd4 -> ../../asm-diskj
lrwxrwxrwx 1 root root 9 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB83d4445f-b8790695 -> ../../sda
lrwxrwxrwx 1 root root 10 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB83d4445f-b8790695-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB83d4445f-b8790695-part2 -> ../../sda2
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VBa56f2571-0dd27b33 -> ../../asm-diskc
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VBc0115ef6-a48bc15d -> ../../asm-diskh
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VBf6b74ff7-871d1de8 -> ../../asm-diskd
2. Re: When ASM disks use the link-aggregated device names, I/O performance is only 1/6 of the non-aggregated devices!
LiuMaclean (刘相兵), Jul 21, 2013 11:09 AM (in response to 13628)
step1:
[oracle@vrh8 mapper]$ cat /etc/multipath.conf
multipaths {
    multipath {
        wwid SATA_VBOX_HARDDISK_VBf6b74ff7-871d1de8_
        alias asm-disk1
        mode 660
        uid 501
        gid 503
    }
    multipath {
        wwid SATA_VBOX_HARDDISK_VB0db2f233-269850e0_
        alias asm-disk2
        mode 660
        uid 501
        gid 503
    }
    multipath {
        wwid SATA_VBOX_HARDDISK_VBa56f2571-0dd27b33_
        alias asm-disk3
        mode 660
        uid 501
        gid 503
    }
}


step2: step3:

[oracle@vrh8 mapper]$ ls -l /dev/mapper/asm-disk*
brw-rw---- 1 grid asmadmin 253, 4 Jul 21 07:02 /dev/mapper/asm-disk1
brw-rw---- 1 grid asmadmin 253, 2 Jul 21 07:02 /dev/mapper/asm-disk2
brw-rw---- 1 grid asmadmin 253, 3 Jul 21 07:02 /dev/mapper/asm-disk3

Part 7: With Multipath, the Underlying Storage Has Been Extended; How Do You Grow the Filesystem?

When a Linux server reaches shared storage through multipath and a filesystem runs out of space, there are several ways to grow it:

1. Keep the existing PV: grow the original LUN on the array, then grow the LV and the filesystem.

2. Create a new PV and add it to the VG, then grow the LV and the filesystem.

The steps below cover scenario 1 (although personally I would recommend approach 2, creating a new PV):
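As a quick sketch of the recommended approach 2, the command sequence would look roughly like this. This is a dry run that only prints the commands; the names mpathb, datavg and datalv are placeholders, and resize2fs assumes an ext3/ext4 filesystem:

```shell
# Dry-run: print the approach-2 (add a new PV) command sequence.
# All device/VG/LV names are hypothetical.
grow_with_new_pv() {
  local newdev=$1 vg=$2 lv=$3
  echo "pvcreate /dev/mapper/$newdev"        # initialize the new LUN as a PV
  echo "vgextend $vg /dev/mapper/$newdev"    # add it to the volume group
  echo "lvextend -l +100%FREE /dev/$vg/$lv"  # grow the LV into the new space
  echo "resize2fs /dev/$vg/$lv"              # grow the filesystem (ext3/ext4)
}

grow_with_new_pv mpathb datavg datalv
```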

Environment

If you have this specific scenario, you can use the following steps:

Note: if these LVs are part of a clustered VG, steps 1 and 2 need to be performed on all nodes.

1) Update block devices

Note: this step needs to be run against every sd device mapping to that LUN; when using multipath there will be more than one. Use multipath -ll to see the paths behind each aggregated device.

2) Update multipath device

Example:

3) Resize the physical volume, which will also resize the volume group

4) Resize your logical volume (the below command takes all available space in the vg)

5) Resize your filesystem

6) Verify vg, lv and filesystem extension has worked appropriately

Simulate growing testlv by extending the LUN on the storage side.

Check the multipath state on the client.

Rescan the storage on the client.

Resize the aggregated multipath device.

Resize the PV.

Resize the LV.

Resize the filesystem.
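The steps above can be sketched as a dry run that only prints the commands to be executed. The names mpatha, datavg, testlv and the sdX path devices are placeholders, and resize2fs assumes ext3/ext4:

```shell
# Dry-run: print the scenario-1 (grow the existing LUN) command sequence.
# All device/VG/LV names are hypothetical.
grow_existing_lun() {
  local mpath=$1 vg=$2 lv=$3; shift 3
  for sd in "$@"; do                          # step 1: rescan each sd path
    echo "echo 1 > /sys/block/$sd/device/rescan"
  done
  echo "multipathd -k\"resize map $mpath\""   # step 2: resize the dm map
  echo "pvresize /dev/mapper/$mpath"          # step 3: grow the PV (and VG)
  echo "lvextend -l +100%FREE /dev/$vg/$lv"   # step 4: grow the LV
  echo "resize2fs /dev/$vg/$lv"               # step 5: grow the filesystem
  echo "df -h; pvs; lvs"                      # step 6: verify the result
}

grow_existing_lun mpatha datavg testlv sdb sdc
```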

Part 8: Mounting LVM Disks From an Existing Multipath Setup on a Newly Installed Linux Server

First check the current disk layout with fdisk -l.

Try to mount /dev/xvdb on the /data directory:

mkdir /data

mount /dev/xvdb /data


If you get the error:

mount: you must specify the filesystem type

format the device:

mkfs.ext4 /dev/xvdb


Note: first run df -T -h to check the filesystem types of the devices that are already mounted:

Filesystem Type Size Used Avail Use% Mounted on

/dev/mapper/VolGroup-lv_root

ext4 16G 795M 14G 6% /

tmpfs tmpfs 5.8G 0 5.8G 0% /dev/shm

/dev/xvda1 ext4 485M 32M 429M 7% /boot

If the other disks are ext3, use mkfs.ext3 /dev/xvdb; if they are ext4, use mkfs.ext4 /dev/xvdb. Then try mounting the device again:

mount /dev/xvdb /data

Note: this mount is temporary; the mount will be lost after a reboot. To make it persistent, add an entry to /etc/fstab:

/dev/xvdb /data ext4 defaults 1 2


fstab holds important per-partition information; each line is one record, and each record has six fields. Taking /dev/hda7 / ext2 defaults 1 1 as an example:

1. The first field is the device to mount: a device name or volume label, e.g. /dev/hda7, /dev/sda10 or LABEL=/ (the source device).

2. The second field is the mount point, e.g. / or /mnt/D/ (where the device will be attached; this is the mount point you were prompted for at install time).

3. The third field is the filesystem type, e.g. ext, ext2, ext3, msdos, iso9660, nfs, swap or vfat; see /proc/filesystems (the format of the source device).

4. The fourth field holds the mount options, e.g. ro (read-only) or defaults (which implies rw, suid, dev, exec, auto, nouser and async); see man mount. For a device that is already mounted, e.g. /dev/sda2 above, you can change the options without unmounting it: mount /mnt/D/ -o remount,ro (changing defaults to ro; remount has no effect on devices that are not mounted). For safety you can also specify options such as noexec (no executables; never set this on the root partition, or the system, including the mount command itself, becomes unusable and must be reinstalled), nodev (no device files), nosuid and nosgid (no suid/sgid bits), and nouser (ordinary users may not mount).

5. The fifth field is the dump flag, indicating whether the filesystem should be backed up when the system runs dump: 0 = do not back up, 1 = back up (the default is 0; the root partition is usually 1).

6. The sixth field sets the fsck order at boot: 0 = no check; the root filesystem must be 1, and all other partitions that should be checked must be 2 (the default is 0).
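The six-field split can be checked mechanically in shell (the fstab line here is a made-up example):

```shell
# Split one /etc/fstab record into its six named fields.
# The example record is hypothetical.
parse_fstab() {
  read -r device mountpoint fstype options dump passno <<EOF
$1
EOF
  echo "device=$device mountpoint=$mountpoint fstype=$fstype options=$options dump=$dump passno=$passno"
}

parse_fstab "/dev/xvdb /data ext4 defaults 1 2"
```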


Part 10: How to Rename Multipath mpath Devices on Linux

A simple multipath configuration on Linux
1. Enabling multipath:
(1) Start the multipathd service:
# service multipathd start
or
# /etc/init.d/multipathd start
(2) Edit the multipath configuration file /etc/multipath.conf:
a. By default every device is blacklisted, so even with the multipathd service running and the kernel module loaded, multipath will not aggregate any paths. Find the following three lines and comment them out (prefix them with #):
#devnode_blacklist {
#    devnode "*"
#}
b. By default, when multipath creates a dm device it also creates a symlink under /dev/mapper/ named after the disk's WWID, pointing at the dm device. To get mpathN device names instead, enable the user_friendly_names option by uncommenting these three lines (remove the leading #):
defaults {
    user_friendly_names yes
}
(3) Restart multipathd (restart the service after every change to multipath.conf).
(4) Scan the disks:
# multipath -v2
After running this command, the link-aggregated dm devices appear in the system, and the corresponding device nodes are created under /dev/mapper/ and /dev/mpath/.
View the multipath topology:
# multipath -ll
Another important file is /var/lib/multipath/bindings, which records the mapping between disk aliases and WWIDs; a typical entry is:
mpath0

(5) One thing to watch out for: multipath also creates dm devices for local disks, so the local disks must be added to the blacklist in multipath.conf. The configuration can follow this example:
devnode_blacklist {
    wwid
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
}
As the example shows, a local disk can be added to the blacklist either by its WWID or by its device name.
2. Fixing the names of multipath devices:
Bind each WWID one-to-one to an alias to give the multipath devices stable names. The aliased devices are created under /dev/mapper/; use the devices in that directory directly.
(1) /var/lib/multipath/bindings lists the WWIDs of all disks. Once you have decided on an alias for each disk, add the corresponding bindings to the multipaths section of /etc/multipath.conf. For example, to name one disk etl01 and another etl02 (the WWID values are omitted here), the configuration looks like this:
multipaths {
    multipath {
        wwid
        alias etl01
    }
    multipath {
        wwid
        alias etl02
    }
}
(2) After configuring, restart the multipathd service and flush the existing multipath records with:
# multipath -F
Then rescan the devices with multipath -v2; device files matching the aliases will be created under /dev/mapper/:
# ls /dev/mapper/
control etl01 etl02
(3) If several servers have identical storage paths and you want the same disk to get the same device name on each server, configure the alias bindings on one server and copy the contents of the multipaths { } section to the others; the devices under /dev/mapper/ will then be consistent across all servers.
