
Installing multipath on Linux

Published: 2022-12-09 03:55:27

『壹』 How to use the DM multipath that ships with Linux

I. What multipathing means
Multipathing, as the name implies, means having more than one path to choose from. In a SAN or IP-SAN environment a fibre-channel switch sits between the hosts and the storage; this raises the speed and efficiency of the exchange between them, and a single path is clearly not enough, nor is it safe or stable. Multipathing is about moving data between host and disk as fast and as efficiently as possible. It mainly provides:
failover and recovery
I/O load balancing
disk virtualization
Multipathing used to be a problem the storage vendors solved themselves; lately it has been split out and sold separately.
The architecture is basically: storage, multipath software, fibre-channel switch, host, host operating system.

II. multipath on Linux
1. Is it installed out of the box?

[root@web2 multipath]# rpm -qa|grep device
device-mapper-1.02.39-1.el5
device-mapper-1.02.39-1.el5
device-mapper-multipath-0.4.7-34.el5
device-mapper-event-1.02.39-1.el5
[root@web2 multipath]#

2. Installation

rpm -ivh device-mapper-1.02.39-1.el5.rpm          # install the device-mapper package
rpm -ivh device-mapper-multipath-0.4.7-34.el5.rpm # install the multipath package

# Also enable it at boot:
chkconfig --level 2345 multipathd on   # start multipathd automatically at boot
lsmod | grep dm_multipath              # check that the installation is working
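After installing, it can help to confirm that both the kernel module and the boot-time service are actually in place. The sketch below is a minimal check; it assumes a RHEL5-era system with chkconfig and degrades gracefully where those tools are absent.

```shell
#!/bin/sh
# Minimal post-install check for device-mapper-multipath (RHEL5-era tooling).
# Reports whether the dm_multipath kernel module is loaded and whether the
# multipathd service is registered with chkconfig.
mp_module_loaded() {
    if lsmod 2>/dev/null | grep -q '^dm_multipath'; then
        echo yes
    else
        echo no
    fi
}
echo "dm_multipath loaded: $(mp_module_loaded)"
# chkconfig is specific to SysV-init distributions; skip quietly elsewhere.
chkconfig --list multipathd 2>/dev/null || echo "chkconfig/multipathd not available on this host"
```

If the module is not loaded, `modprobe dm-multipath` (as root) loads it; starting multipathd normally does this automatically.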

3. Configuration

# Blacklist the default devices.
blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
}
devices {
    device {
        vendor "HP"
        path_grouping_policy multibus
        features "1 queue_if_no_path"
        path_checker readsector0
        failback immediate
    }
}

The complete configuration is as follows:

blacklist {
devnode "^sda"
}

defaults {
user_friendly_names no
}

multipaths {
multipath {
wwid
alias iscsi-dm0
path_grouping_policy multibus
path_checker tur
path_selector "round-robin 0"
}
multipath {
wwid
alias iscsi-dm1
path_grouping_policy multibus
path_checker tur
path_selector "round-robin 0"
}
multipath {
wwid
alias iscsi-dm2
path_grouping_policy multibus
path_checker tur
path_selector "round-robin 0"
}
multipath {
wwid
alias iscsi-dm3
path_grouping_policy multibus
path_checker tur
path_selector "round-robin 0"
}
}

devices {
device {
vendor "iSCSI-Enterprise"
product "Virtual disk"
path_grouping_policy multibus
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
path_checker readsector0
path_selector "round-robin 0"
}
}
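The `wwid` fields in the multipaths section above are left blank; each one should hold the WWID of a LUN. A minimal sketch for looking a WWID up, matching the `getuid_callout` in the configuration above (the RHEL5-era `scsi_id -s` syntax; `sdb` is a placeholder device name):

```shell
#!/bin/sh
# Look up the WWID of one SCSI disk so it can be pasted into the blank
# "wwid" fields above. DEV is a placeholder; adjust to your system.
DEV=sdb
get_wwid() {
    # RHEL5 syntax, as used by getuid_callout in multipath.conf
    out=$(/sbin/scsi_id -g -u -s "/block/$1" 2>/dev/null)
    # Fall back to a marker when scsi_id is absent or the device is missing
    [ -n "$out" ] || out="unavailable"
    echo "$out"
}
echo "$DEV: $(get_wwid "$DEV")"
```

On RHEL6 and later the syntax changed to `/lib/udev/scsi_id -g -u /dev/sdb`.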
4. Commands

[root@web2 ~]# multipath -h
multipath-tools v0.4.7 (03/12, 2006)
Usage: multipath [-v level] [-d] [-h|-l|-ll|-f|-F|-r]
[-p failover|multibus|group_by_serial|group_by_prio]
[device]

-v level verbosity level
0 no output
1 print created devmap names only
2 default verbosity
3 print debug information
-h print this usage text
-b file bindings file location
-d dry run, do not create or update devmaps
-l show multipath topology (sysfs and DM info)
-ll show multipath topology (maximum info)
-f flush a multipath device map
-F flush all multipath device maps
-r force devmap reload
-p policy force all maps to specified policy :
failover 1 path per priority group
multibus all paths in 1 priority group
group_by_serial 1 priority group per serial
group_by_prio 1 priority group per priority lvl
group_by_node_name 1 priority group per target node

device limit scope to the device's multipath
(udev-style $DEVNAME reference, eg /dev/sdb
or major:minor or a device map name)
[root@web2 ~]#

5. Starting and stopping

# /etc/init.d/multipathd start    # start the multipathd service
service multipathd start
service multipathd restart
service multipathd stop

6. How to obtain the WWID

Method 1:
[root@vxfs01 ~]# cat /var/lib/multipath/bindings
# Multipath bindings, Version : 1.0
# NOTE: this file is automatically maintained by the multipath program.
# You should not need to edit this file in normal circumstances.
#
# Format:
# alias wwid
#
mpath0
mpath1
mpath2
mpath3
mpath4

Method 2:
[root@vxfs01 ~]# multipath -v3 |grep 3600
sdb: uid = (callout)
sdc: uid = (callout)
sdd: uid = (callout)
sde: uid = (callout)
1:0:0:0 sdb 8:16 0 [undef][ready] DGC,RAI
1:0:1:0 sdc 8:32 1 [undef][ready] DGC,RAI
2:0:0:0 sdd 8:48 1 [undef][ready] DGC,RAI
2:0:1:0 sde 8:64 0 [undef][ready] DGC,RAI
Found matching wwid [] in bindings file.

More detailed write-ups:
http://zhumeng8337797.blog.163.com/blog/static/1007689142013416111534352/
http://blog.csdn.net/wuweilong/article/details/14184097
Official RHEL documentation:
http://www.prudentwoo.com/wp-content/uploads/downloads/2013/11/Red_Hat_Enterprise_Linux-5-DM_Multipath-en-US.pdf
http://www.prudentwoo.com/wp-content/uploads/downloads/2013/11/Red_Hat_Enterprise_Linux-5-DM_Multipath-zh-CN.pdf
http://www.prudentwoo.com/wp-content/uploads/downloads/2013/11/Red_Hat_Enterprise_Linux-6-DM_Multipath-en-US.pdf
http://www.prudentwoo.com/wp-content/uploads/downloads/2013/11/Red_Hat_Enterprise_Linux-6-DM_Multipath-zh-CN.pdf

『貳』 How do you configure multipathing on a Linux system

Are you attaching external storage? You need to install multipath; its configuration file is /etc/multipath.conf.

『叄』 With multipath software installed on Linux, does fdisk -l still show the individual paths?

What fdisk -l shows are logical devices. For example, if a LUN on the storage is mapped to the host over two paths, fdisk -l on the host will show two logical disks, which are in fact the same physical device (from the host's point of view, the LUN is the physical device). multipath -ll, by contrast, shows the actual paths underneath one physical device.
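A quick way to confirm that two sd nodes are really the same LUN is to compare their WWIDs: paths to the same LUN report the same ID. A sketch, assuming the RHEL5-era `scsi_id` syntax and placeholder device names sdb/sdc:

```shell
#!/bin/sh
# Print the WWID of each candidate path; identical WWIDs mean the two
# sd nodes are the same LUN seen over different paths. sdb/sdc are placeholders.
path_wwid() {
    out=$(/sbin/scsi_id -g -u -s "/block/$1" 2>/dev/null)
    [ -n "$out" ] || out="unavailable"
    echo "$out"
}
for DEV in sdb sdc; do
    echo "$DEV: $(path_wwid "$DEV")"
done
```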

『肆』 Practical notes on configuring Multipath

When configuring storage you are bound to run into multipathing. Vendors ship their own multipath software, e.g. EMC PowerPath and Veritas VxDMP, and its role is clear: I/O load balancing and failover/recovery. In the Linux world, device-mapper-multipath is a free, general-purpose multipath manager, configured simply by editing /etc/multipath.conf. This article points to the "HPE 3PAR Red Hat Enterprise Linux and Oracle Linux Implementation Guide" and shares my own experience configuring multipath; I hope it is a useful reference.

2017-05-04 - first draft

Read the original - https://wsgzao.github.io/post/multipath/

Further reading

multipath - https://access.redhat.com/labsinfo/multipathhelper
[Original] Aggregating multiple paths with multipath on Red Flag Linux - http://www.linuxfly.org/post/513/


An ordinary PC has one disk attached to one bus, a one-to-one relationship. In a SAN built on fibre channel, hosts and storage are connected through FC switches, which creates a many-to-many relationship: a host can reach its storage over multiple paths, and I/O between host and storage may travel over any of them.

Since each host can reach its storage over several paths, questions arise: if the paths are used simultaneously, how is I/O traffic distributed? What happens when one path fails? And from the operating system's point of view each path appears to be a real physical disk, although in fact they are just different routes to the same physical disk, which confuses users. Multipath software arose to solve exactly these problems; cooperating with the storage device, its main functions are failover/recovery and I/O load balancing.

Because multipath software must cooperate with the storage, vendors ship different versions for different operating systems, and some vendors sell the software separately from the hardware, so a license may have to be purchased; EMC's multipath software for Linux, for example, is licensed separately. EMC offers PowerPath, HDS offers HDLM, and Veritas offers VxDMP. The multipath package bundled with the OS, on the other hand, is free and fairly generic: it supports devices from most storage vendors, and even gear from lesser-known vendors usually works well after small tweaks to the configuration file.

The most important point, still, is to follow the advice of the vendor's own engineers and pick multipath software appropriate to the actual workload and storage strategy.

『伍』 What is multipath storage on Linux about?

Viewing HDS storage multipath on Linux
Determine the storage space to be carved up under Red Hat. In this example the space is multipath storage presented to the servers from an HDS AMS2000: sddlmad is the space to be allocated on ycdb1, and sddlmah the space on ycdb2. Details follow.
Check the environment:
# rpm -qa|grep device-mapper
device-mapper-event-1.02.32-1.el5
device-mapper-multipath-0.4.7-30.el5
device-mapper-1.02.32-1.el5
# rpm -qa|grep lvm2
lvm2-2.02.46-8.el5
Check the space:
#fdisk -l
Disk /dev/sddlmad: 184.2 GB, 184236900352 bytes 255 heads, 63 sectors/track, 22398 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sddlmah: 184.2 GB, 184236900352 bytes

255 heads, 63 sectors/track, 22398 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes
Check the storage:
#cd /opt/DynamicLinkManager/bin/
#./dlnkmgr view -lu
Product : AMS
SerialNumber : 83041424 LUs : 8
iLU HDevName Device PathID Status
0000 sddlmaa /dev/sdb 000000 Online
/dev/sdj 000008 Online
/dev/sdr 000016 Online
/dev/sdz 000017 Online

0001 sddlmab /dev/sdc 000001 Online
/dev/sdk 000009 Online
/dev/sds 000018 Online
/dev/sdaa 000019 Online
0002 sddlmac /dev/sdd 000002 Online
/dev/sdl 000010 Online
/dev/sdt 000020 Online
/dev/sdab 000021 Online
0003 sddlmad /dev/sde 000003 Online
/dev/sdm 000011 Online
/dev/sdu 000022 Online
/dev/sdac 000023 Online
0004 sddlmae /dev/sdf 000004 Online
/dev/sdn 000012 Online
/dev/sdv 000024 Online
/dev/sdad 000025 Online
0005 sddlmaf /dev/sdg 000005 Online
/dev/sdo 000013 Online
/dev/sdw 000026 Online
/dev/sdae 000027 Online
0006 sddlmag /dev/sdh 000006 Online
/dev/sdp 000014 Online
/dev/sdx 000028 Online
/dev/sdaf 000029 Online
0007 sddlmah /dev/sdi 000007 Online
/dev/sdq 000015 Online
/dev/sdy 000030 Online
/dev/sdag 000031 Online
##############################################################
4. Modifying lvm.conf
For LVM to work correctly, its filter must be adjusted:
#cd /etc/lvm
#vi lvm.conf
# By default we accept every block device
# filter = [ "a/.*/" ]
filter = [ "a|sddlm[a-p][a-p]|", "r|/dev/sd|" ]
Example:

[root@bsrunbak etc]# ls -l lvm*

[root@bsrunbak etc]# cd lvm
[root@bsrunbak lvm]# ls
archive backup cache lvm.conf
[root@bsrunbak lvm]# more lvm.conf

[root@bsrunbak lvm]# pvs

Last login: Fri Jul 10 11:17:21 2015 from 172.17.99.198
[root@bsrunserver1 ~]#
[root@bsrunserver1 ~]#
[root@bsrunserver1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda4 30G 8.8G 20G 32% /
tmpfs 95G 606M 94G 1% /dev/shm
/dev/sda2 194M 33M 151M 18% /boot
/dev/sda1 200M 260K 200M 1% /boot/efi
/dev/mapper/datavg-oraclelv
50G 31G 17G 65% /oracle
172.16.110.25:/Tbackup
690G 553G 102G 85% /Tbackup
/dev/mapper/tmpvg-oradatalv
345G 254G 74G 78% /oradata
/dev/mapper/datavg-lvodc
5.0G 665M 4.1G 14% /odc
[root@bsrunserver1 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda5 datavg lvm2 a-- 208.06g 153.06g
/dev/sddlmba tmpvg lvm2 a-- 200.00g 49.99g
/dev/sddlmbb tmpvg lvm2 a-- 200.00g 0
[root@bsrunserver1 ~]# cd /etc/lvm
[root@bsrunserver1 lvm]# more lvm.conf
# Don't have more than one filter line active at once: only one gets
used.

# Run vgscan after you change this parameter to ensure that
# the cache file gets regenerated (see below).
# If it doesn't do what you expect, check the output of 'vgscan -vvvv'.

# By default we accept every block device:
# filter = [ "a/.*/" ]

# Exclude the cdrom drive
# filter = [ "r|/dev/cdrom|" ]

# When testing I like to work with just loopback devices:
# filter = [ "a/loop/", "r/.*/" ]

# Or maybe all loops and ide drives except hdc:
# filter =[ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]

# Use anchors if you want to be really specific
# filter = [ "a|^/dev/hda8$|", "r/.*/" ]
filter = [ "a|/dev/sddlm.*|", "a|^/dev/sda5$|", "r|.*|" ]

[root@bsrunserver1 lvm]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda4 30963708 9178396 20212448 32% /
tmpfs 99105596 620228 98485368 1% /dev/shm
/dev/sda2 198337 33546 154551 18% /boot
/dev/sda1 204580 260 204320 1% /boot/efi
/dev/mapper/datavg-oraclelv
51606140 31486984 17497716 65% /oracle
172.16.110.25:/Tbackup
722486368 579049760 106736448 85% /Tbackup
/dev/mapper/tmpvg-oradatalv
361243236 266027580 76865576 78% /oradata
/dev/mapper/datavg-lvodc
5160576 680684 4217748 14% /odc
[root@bsrunserver1 lvm]#
You have new mail in /var/spool/mail/root
[root@bsrunserver1 lvm]#
[root@bsrunserver1 lvm]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda5 datavg lvm2 a-- 208.06g 153.06g
/dev/sddlmba tmpvg lvm2 a-- 200.00g 49.99g
/dev/sddlmbb tmpvg lvm2 a-- 200.00g 0
[root@bsrunserver1 lvm]#
Go to the tool's directory:
[root@bsrunbak lvm]# cd /opt/D*/bin
or
[root@bsrunbak bin]# pwd
/opt/DynamicLinkManager/bin
Display the HDS storage LUs:
[root@bsrunbak lvm]# ./dlnkmgr view -lu

『陸』 How to configure multipath without asmlib when installing RAC on Linux

FYI

/dev/mapper/mpathXX


If a multipath solution is in use, you can bind device names directly with multipath; neither asmlib nor UDEV is needed.

Refer directly to the document: Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0, 11.2.0) on RHEL5/OL5 [ID 605828.1]

[root@vrh1 ~]# for i in `cat /proc/partitions | awk '{print $4}' | grep sd | grep [a-z]$`; do echo "### $i: `scsi_id -g -u -s /block/$i`"; done
### sda: SATA_VBOX_HARDDISK_VB83d4445f-b8790695_
### sdb: SATA_VBOX_HARDDISK_VB0db2f233-269850e0_
### sdc: SATA_VBOX_HARDDISK_VBa56f2571-0dd27b33_
### sdd: SATA_VBOX_HARDDISK_VBf6b74ff7-871d1de8_
### sde: SATA_VBOX_HARDDISK_VB5a531910-25f4eb9a_
### sdf: SATA_VBOX_HARDDISK_VB4915e6e3-737b312e_
### sdg: SATA_VBOX_HARDDISK_VB512c8f75-37f4a0e9_
### sdh: SATA_VBOX_HARDDISK_VBc0115ef6-a48bc15d_
### sdi: SATA_VBOX_HARDDISK_VB3a556907-2b72391d_
### sdj: SATA_VBOX_HARDDISK_VB7ec8476c-08641bd4_
### sdk: SATA_VBOX_HARDDISK_VB743e1567-d0009678_


[root@vrh1 ~]# grep -v ^# /etc/multipath.conf
defaults {
    user_friendly_names yes
}
defaults {
    udev_dir                /dev
    polling_interval        10
    selector                "round-robin 0"
    path_grouping_policy    failover
    getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
    prio_callout            /bin/true
    path_checker            readsector0
    rr_min_io               100
    rr_weight               priorities
    failback                immediate
    #no_path_retry          fail
    user_friendly_names     yes
}
devnode_blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
    devnode "^cciss!c[0-9]d[0-9]*"
}
multipaths {
    multipath {
        wwid  SATA_VBOX_HARDDISK_VB0db2f233-269850e0_
        alias voting1
        path_grouping_policy failover
    }
    multipath {
        wwid  SATA_VBOX_HARDDISK_VBa56f2571-0dd27b33_
        alias voting2
        path_grouping_policy failover
    }
    multipath {
        wwid  SATA_VBOX_HARDDISK_VBf6b74ff7-871d1de8_
        alias voting3
        path_grouping_policy failover
    }
    multipath {
        wwid  SATA_VBOX_HARDDISK_VB5a531910-25f4eb9a_
        alias ocr1
        path_grouping_policy failover
    }
    multipath {
        wwid  SATA_VBOX_HARDDISK_VB4915e6e3-737b312e_
        alias ocr2
        path_grouping_policy failover
    }
    multipath {
        wwid  SATA_VBOX_HARDDISK_VB512c8f75-37f4a0e9_
        alias ocr3
        path_grouping_policy failover
    }
}
[root@vrh1 ~]# multipath
[root@vrh1 ~]# multipath -ll
mpath2 (SATA_VBOX_HARDDISK_VB3a556907-2b72391d_) dm-9 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 8:0:0:0 sdi 8:128 active ready running
mpath1 (SATA_VBOX_HARDDISK_VBc0115ef6-a48bc15d_) dm-8 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 7:0:0:0 sdh 8:112 active ready running
ocr3 (SATA_VBOX_HARDDISK_VB512c8f75-37f4a0e9_) dm-7 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 6:0:0:0 sdg 8:96 active ready running
ocr2 (SATA_VBOX_HARDDISK_VB4915e6e3-737b312e_) dm-6 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 5:0:0:0 sdf 8:80 active ready running
ocr1 (SATA_VBOX_HARDDISK_VB5a531910-25f4eb9a_) dm-5 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 4:0:0:0 sde 8:64 active ready running
voting3 (SATA_VBOX_HARDDISK_VBf6b74ff7-871d1de8_) dm-4 ATA,VBOX HARDDISK
size=40G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 3:0:0:0 sdd 8:48 active ready running
voting2 (SATA_VBOX_HARDDISK_VBa56f2571-0dd27b33_) dm-3 ATA,VBOX HARDDISK
size=40G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 2:0:0:0 sdc 8:32 active ready running
voting1 (SATA_VBOX_HARDDISK_VB0db2f233-269850e0_) dm-2 ATA,VBOX HARDDISK
size=40G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 1:0:0:0 sdb 8:16 active ready running
mpath4 (SATA_VBOX_HARDDISK_VB743e1567-d0009678_) dm-11 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 10:0:0:0 sdk 8:160 active ready running
mpath3 (SATA_VBOX_HARDDISK_VB7ec8476c-08641bd4_) dm-10 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 9:0:0:0 sdj 8:144 active ready running
[root@vrh1 ~]# dmsetup ls | sort
mpath1 (253, 8)
mpath2 (253, 9)
mpath3 (253, 10)
mpath4 (253, 11)
ocr1 (253, 5)
ocr2 (253, 6)
ocr3 (253, 7)
VolGroup00-LogVol00 (253, 0)
VolGroup00-LogVol01 (253, 1)
voting1 (253, 2)
voting2 (253, 3)
voting3 (253, 4)
[root@vrh1 ~]# ls -l /dev/mapper/*
crw------- 1 root root 10, 62 Oct 17 09:58 /dev/mapper/control
brw-rw---- 1 root disk 253,  8 Oct 19 00:11 /dev/mapper/mpath1
brw-rw---- 1 root disk 253,  9 Oct 19 00:11 /dev/mapper/mpath2
brw-rw---- 1 root disk 253, 10 Oct 19 00:11 /dev/mapper/mpath3
brw-rw---- 1 root disk 253, 11 Oct 19 00:11 /dev/mapper/mpath4
brw-rw---- 1 root disk 253,  5 Oct 19 00:11 /dev/mapper/ocr1
brw-rw---- 1 root disk 253,  6 Oct 19 00:11 /dev/mapper/ocr2
brw-rw---- 1 root disk 253,  7 Oct 19 00:11 /dev/mapper/ocr3
brw-rw---- 1 root disk 253,  0 Oct 17 09:58 /dev/mapper/VolGroup00-LogVol00
brw-rw---- 1 root disk 253,  1 Oct 17 09:58 /dev/mapper/VolGroup00-LogVol01
brw-rw---- 1 root disk 253,  2 Oct 19 00:11 /dev/mapper/voting1
brw-rw---- 1 root disk 253,  3 Oct 19 00:11 /dev/mapper/voting2
brw-rw---- 1 root disk 253,  4 Oct 19 00:11 /dev/mapper/voting3
[root@vrh1 ~]# ls -l /dev/dm*
brw-rw---- 1 root root 253,  0 Oct 17 09:58 /dev/dm-0
brw-rw---- 1 root root 253,  1 Oct 17 09:58 /dev/dm-1
brw-rw---- 1 root root 253, 10 Oct 19 00:11 /dev/dm-10
brw-rw---- 1 root root 253, 11 Oct 19 00:11 /dev/dm-11
brw-rw---- 1 root root 253,  2 Oct 19 00:11 /dev/dm-2
brw-rw---- 1 root root 253,  3 Oct 19 00:11 /dev/dm-3
brw-rw---- 1 root root 253,  4 Oct 19 00:11 /dev/dm-4
brw-rw---- 1 root root 253,  5 Oct 19 00:11 /dev/dm-5
brw-rw---- 1 root root 253,  6 Oct 19 00:11 /dev/dm-6
brw-rw---- 1 root root 253,  7 Oct 19 00:11 /dev/dm-7
brw-rw---- 1 root root 253,  8 Oct 19 00:11 /dev/dm-8
brw-rw---- 1 root root 253,  9 Oct 19 00:11 /dev/dm-9
[root@vrh1 ~]# ls -l /dev/disk/by-id/
total 0
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB0db2f233-269850e0 -> ../../asm-diskb
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB3a556907-2b72391d -> ../../asm-diski
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB4915e6e3-737b312e -> ../../asm-diskf
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB512c8f75-37f4a0e9 -> ../../asm-diskg
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB5a531910-25f4eb9a -> ../../asm-diske
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB743e1567-d0009678 -> ../../asm-diskk
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB7ec8476c-08641bd4 -> ../../asm-diskj
lrwxrwxrwx 1 root root  9 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB83d4445f-b8790695 -> ../../sda
lrwxrwxrwx 1 root root 10 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB83d4445f-b8790695-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB83d4445f-b8790695-part2 -> ../../sda2
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VBa56f2571-0dd27b33 -> ../../asm-diskc
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VBc0115ef6-a48bc15d -> ../../asm-diskh
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VBf6b74ff7-871d1de8 -> ../../asm-diskd
2. Re: ASM disks using link-aggregated device names get only 1/6 the IO performance of non-aggregated devices!
Reply by LiuMaclean (劉相兵), Expert, Jul 21, 2013 11:09 AM (in response to 13628):
step 1:

[oracle@vrh8 mapper]$ cat /etc/multipath.conf
multipaths {
    multipath {
        wwid  SATA_VBOX_HARDDISK_VBf6b74ff7-871d1de8_
        alias asm-disk1
        mode  660
        uid   501
        gid   503
    }
    multipath {
        wwid  SATA_VBOX_HARDDISK_VB0db2f233-269850e0_
        alias asm-disk2
        mode  660
        uid   501
        gid   503
    }
    multipath {
        wwid  SATA_VBOX_HARDDISK_VBa56f2571-0dd27b33_
        alias asm-disk3
        mode  660
        uid   501
        gid   503
    }
}

step 2:

step 3:

[oracle@vrh8 mapper]$ ls -l /dev/mapper/asm-disk*
brw-rw---- 1 grid asmadmin 253, 4 Jul 21 07:02 /dev/mapper/asm-disk1
brw-rw---- 1 grid asmadmin 253, 2 Jul 21 07:02 /dev/mapper/asm-disk2
brw-rw---- 1 grid asmadmin 253, 3 Jul 21 07:02 /dev/mapper/asm-disk3

『柒』 multipath: the storage under a Linux system has been expanded; how do you grow the filesystem?

A Linux server reaches shared storage through multipath. When the filesystem runs out of space there are several ways to extend it:

1. Keep the existing PV: grow the LUN on the array, then grow the LV and the filesystem

2. Add a new PV to the VG, then grow the LV and the filesystem

The steps below cover scenario 1 (though personally I would recommend approach 2, adding a new PV):

Environment

If you have this specific scenario, you can use the following steps:

Note: if these lv's are part of a clustered vg, steps 1 and 2 need to be performed on all nodes.

1) Update block devices

Note: This step needs to be run against any sd devices mapping to that lun. When using multipath, there will be more than one. Use multipath -ll to see which paths belong to each aggregated volume.

2) Update multipath device

Example:

3) Resize the physical volume, which will also resize the volume group

4) Resize your logical volume (the below command takes all available space in the vg)

5) Resize your filesystem

6) Verify vg, lv and filesystem extension has worked appropriately

Simulate the storage-side expansion (testlv grows)

Check the client's multipath status

Rescan the storage on the client

Update the aggregated (multipath) device

Update the PV size

Update the LV size

Update the filesystem size
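The steps above can be sketched as one command sequence. Everything here is illustrative: the map name mpath0, volume group testvg, logical volume testlv and path device sdb are placeholders, and every command needs root plus real multipath hardware, so each is wrapped in a guard that merely reports when it cannot run.

```shell
#!/bin/sh
# Illustrative online-resize sequence for scenario 1 (LUN grown on the array).
# MAP/VG/LV and sdb are placeholders; all commands are guarded so the sketch
# degrades to a dry run on a machine without the hardware.
MAP=mpath0; VG=testvg; LV=testlv
run() {
    echo "+ $*"
    "$@" 2>/dev/null || echo "  (skipped: not runnable on this host)"
}
# 1) rescan every sd path that belongs to the map (repeat for each path)
run sh -c 'echo 1 > /sys/block/sdb/device/rescan'
# 2) tell multipathd that the map has grown
run multipathd -k"resize map $MAP"
# 3) grow the PV (the VG grows with it)
run pvresize "/dev/mapper/$MAP"
# 4) grow the LV into all free space in the VG
run lvextend -l +100%FREE "/dev/$VG/$LV"
# 5) grow the filesystem online (ext3/ext4)
run resize2fs "/dev/$VG/$LV"
# 6) verify
run sh -c 'vgs; lvs; df -h'
```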

『捌』 Mounting an LVM disk from an existing multipath setup on a newly installed Linux server

First run fdisk -l to see the current disks and mounts.

Try mounting /dev/xvdb at /data:

mkdir /data

mount /dev/xvdb /data


If you get the error:

mount: you must specify the filesystem type

format the device:

mkfs.ext4 /dev/xvdb


Note: first run df -T -h to check the filesystem type of each currently mounted device

Filesystem Type Size Used Avail Use% Mounted on

/dev/mapper/VolGroup-lv_root

ext4 16G 795M 14G 6% /

tmpfs tmpfs 5.8G 0 5.8G 0% /dev/shm

/dev/xvda1 ext4 485M 32M 429M 7% /boot

If the other disks are ext3, use mkfs.ext3 /dev/xvdb;

if they are ext4, use mkfs.ext4 /dev/xvdb. Then try mounting the device again:

mount /dev/xvdb /data

Note: such a mount is temporary and the mount is lost after a reboot. To make it permanent, add an entry to /etc/fstab:

/dev/xvdb /data ext4 defaults 1 2


fstab holds key information about each partition; every line is one record and splits into six fields. Take `/dev/hda7 / ext2 defaults 1 1` as an example:

  1. The first field is the device to mount: a device node or volume label, e.g. /dev/hda7 above (or /dev/sda10, or LABEL=/). [the source device]

  2. The second field is the mount point, e.g. /home or, as above, / (the mount point chosen at install time, or e.g. /mnt/D/). [where it will be mounted]

  3. The third field is the filesystem type: ext, ext2, msdos, iso9660, nfs, swap, etc., e.g. ext2 above (see /proc/filesystems). [the source device's filesystem format]

  4. The fourth field is the mount options, such as ro (read-only) or, as above, defaults (which implies rw, suid, dev, exec, auto, nouser and async); see man mount. For a device that is already mounted, e.g. /dev/sda2, the mount options can be changed without unmounting it (remount has no effect on an unmounted device): #mount /mnt/D/ -o remount,ro (changes defaults to ro). For safety, further options can be specified: noexec (no executables may run; never mount the root partition noexec, or the system, including the mount command itself, becomes unusable and you will have to reinstall), nodev (no device files), nosuid/nosgid (no suid/sgid bits), nouser (ordinary users may not mount).

  5. The fifth field is the dump flag: whether the filesystem should be backed up when the system is dumped; the default is 0 (0 = do not back up, 1 = back up; the root partition is usually backed up).

  6. The sixth field sets the fsck order at boot: the root filesystem must be 1; the rest can be set as needed, and the default is 0 (0 = no check; root must be 1; other partitions can only be 2).
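Putting the six fields together, the entry added above for /dev/xvdb reads, field by field:

```
# <device>   <mount point> <type> <options>  <dump> <fsck order>
/dev/xvdb    /data         ext4   defaults   1      2
```

dump=1 marks the filesystem for backup by dump; fsck order 2 means it is checked after the root filesystem at boot.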

『玖』 Linux multipath configuration

If a multipath solution is in use, you can bind device names directly with multipath; neither asmlib nor UDEV is needed. Refer directly to the document: Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0, 11.2.0) on RHEL5/OL5 [ID 605828.1]. The commands and multipath.conf example are the same as those shown under 『陸』 above.

『拾』 How do you rename a multipath mpath device on Linux

A simple multipath configuration on Linux
1. Enabling multipath:
(1) Start the multipathd service:
# service multipathd start
or
# /etc/init.d/multipathd start
(2) Edit the configuration file /etc/multipath.conf:
a. By default every device is blacklisted, so even with the multipathd service running and the kernel module loaded, multipath will not aggregate any paths. Find the following three lines and comment them out (prefix each with #):
#devnode_blacklist {
#    devnode "*"
#}
b. By default, after multipath creates a dm device it also creates a symlink under /dev/mapper/ named after the disk's WWID. To get mpathN device names instead, enable the user_friendly_names option by uncommenting these three lines:
defaults {
    user_friendly_names yes
}
(3) Restart the multipathd service (restart it after every change to multipath.conf).
(4) Scan the disks:
# multipath -v2
After this command, the aggregated dm devices appear in the system, together with the corresponding entries under /dev/mapper/ and /dev/mpath/.
Check the multipath topology:
# multipath -ll
Another important file is /var/lib/multipath/bindings, which maps each disk alias to its WWID. A typical entry:
mpath0

(5) Beware that multipath also creates dm devices for local disks, so add local disks to the blacklist in multipath.conf, for example:
devnode_blacklist {
    wwid

    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
}
As shown, a local disk can be blacklisted either by WWID or by device name.
2. Pinning multipath device names:
Fix the device names by mapping each WWID one-to-one to an alias; the aliased devices are created under /dev/mapper/ and should be used from that directory directly.
(1) /var/lib/multipath/bindings gives the WWID of every disk. Once an alias is chosen for each disk, add the bindings to the multipaths section of /etc/multipath.conf. For example, to name one disk (by its WWID) etl01 and another etl02:
multipaths {
    multipath {
        wwid

        alias etl01
    }
    multipath {
        wwid

        alias etl02
    }
}
(2) After configuring, restart the multipathd service and clear the existing multipath maps:
# multipath -F
Then rescan with multipath -v2; device files matching the aliases are created under /dev/mapper/:
# ls /dev/mapper/
control etl01 etl02
(3) If several servers have identical storage paths and you want each disk to carry the same name on every server, configure the alias bindings on one server and copy the contents of the multipaths { } section to the others; the devices under /dev/mapper/ will then be consistent across all servers.
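Since every alias stanza has the same shape, generating the multipaths section from an "alias wwid" list (the format of /var/lib/multipath/bindings) is easy to script. A small sketch; the WWIDs below are made-up sample values:

```shell
#!/bin/sh
# Emit multipath{} alias stanzas from "alias wwid" pairs, the format used
# in /var/lib/multipath/bindings. The WWIDs here are fabricated samples.
stanza() {
    # $1 = alias, $2 = wwid
    printf 'multipath {\n    wwid  %s\n    alias %s\n}\n' "$2" "$1"
}
printf '%s\n' \
    'etl01 3600508b4000156d700012000000b0000' \
    'etl02 3600508b4000156d700012000000c0000' |
while read alias wwid; do
    stanza "$alias" "$wwid"
done
```

Wrap the output in `multipaths { ... }` and paste it into /etc/multipath.conf, then run `multipath -F` and `multipath -v2` as described above.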
