
Linux multipath configuration

Published: 2023-04-03 21:35:54

1. How do I change the name of a Linux multipath (mpath) device?

Basic configuration of Multipath on Linux
1. Enable Multipath:
(1) Start the multipathd service:
# service multipathd start
or
# /etc/init.d/multipathd start
(2) Edit the multipath configuration file /etc/multipath.conf:
a. By default all devices are on multipath's blacklist, so even with the multipathd service started and the kernel modules loaded, multipath will not aggregate any paths. Find the following three lines and comment them out (add a # at the start of each line):
#devnode_blacklist {
# devnode "*"
#}
b. By default, after multipath creates a dm device, it also creates under /dev/mapper/ a symbolic link, named after the disk's WWID, pointing to that dm device. To get mpath-style names instead, enable the user_friendly_names option by uncommenting the following three lines (remove the leading #):
defaults {
user_friendly_names yes
}
(3) Restart the multipathd service (the service should be restarted after every change to multipath.conf).
(4) Scan the disks:
# multipath -v2
After the command above, the aggregated dm devices appear in the system, and the corresponding device nodes are created under /dev/mapper/ and /dev/mpath/.
View the multipath topology:
# multipath -ll
Another important file is /var/lib/multipath/bindings, which records the mapping between disk aliases and WWIDs. A typical entry is:
mpath0
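For illustration only, a bindings entry pairs an alias with the disk's WWID on a single line; the WWID below is a made-up placeholder, not a value from this article:
# cat /var/lib/multipath/bindings
# Format: alias wwid
mpath0 3600508b4000156d700012000000b0000 #hypothetical WWID, for illustration only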

(5) A point to note: multipath also creates dm devices for local disks, so local disks need to be added to the blacklist in multipath.conf. The configuration can follow this example:
devnode_blacklist {
wwid <wwid-of-local-disk>
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z]"
}
As this example shows, local disks can be added to the blacklist either by WWID or by device name.
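To find a local disk's WWID for the blacklist, it can be queried with scsi_id (a sketch using the RHEL 5 era syntax that appears elsewhere in this article; sda is assumed to be the local disk):
# /sbin/scsi_id -g -u -s /block/sda #prints the WWID to put on the blacklist's wwid line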
2. Pinning down multipath device names:
Multipath device names are fixed by binding each WWID to an alias one-to-one. The devices corresponding to these aliases are created under /dev/mapper/, and they should be used directly from that directory.
(1) The WWIDs of all disks can be read from /var/lib/multipath/bindings. After deciding each disk's alias, add the corresponding entries to the multipaths section of /etc/multipath.conf. For example, to name the disk with one WWID etl01 and the disk with another WWID etl02, the configuration is as follows (the WWID values go on the wwid lines):
multipaths {
multipath {
wwid <wwid-1>
alias etl01
}
multipath {
wwid <wwid-2>
alias etl02
}
}
(2) After the configuration is complete, restart the multipathd service and clear the existing multipath records with:
# multipath -F
Then rescan the devices with multipath -v2; device files matching the aliases are created under /dev/mapper/:
# ls /dev/mapper/
control etl01 etl02
(3) If several servers have exactly the same storage paths and the same disk should get the same device name on every server, configure the alias bindings on one server and copy the configuration inside the multipaths { } section to the other servers; the devices under /dev/mapper/ will then be identical on all of them.
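A minimal sketch of replicating the bindings to a second server (the hostname node2 is an assumption for illustration):
# scp /etc/multipath.conf node2:/etc/multipath.conf #copy the multipaths{} alias bindings
# ssh node2 'service multipathd restart; multipath -F; multipath -v2'
# ssh node2 'ls /dev/mapper/' #should show the same etl01/etl02 aliases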

2. Installing multipath with yum

I. What is multipathing?
An ordinary PC has one disk attached to one bus, a one-to-one relationship. In a SAN built on Fibre Channel, or an IP SAN built on iSCSI, the host and the storage are connected through FC switches or through multiple NICs and IP addresses, which creates a many-to-many relationship: the host can reach the storage over several paths, and I/O can travel over any of them. If several paths are used at once, how should the I/O traffic be distributed? If one path fails, what happens? And from the operating system's point of view each path looks like a separate physical disk, even though they are merely different routes to the same physical disk, which confuses users. Multipathing software was created to solve exactly these problems.
Working together with the storage device, multipathing software mainly provides:
1. Failover and failback
2. I/O load balancing
3. Disk virtualization
Because multipathing software has to cooperate with the storage, vendors ship different versions for different operating systems, and some vendors sell the software separately from the hardware, so using it may require buying a license from the vendor; EMC's multipathing software for Linux, for example, is licensed separately. Fortunately, the 2.6 kernels shipped by Red Hat and SUSE include a free multipathing package. It is fairly generic and supports devices from most storage vendors, and even gear from lesser-known vendors usually runs well after minor changes to the configuration file.
II. Multipath on Linux requires the following packages.
In CentOS 5, multipath is already installed even with a minimal system install; check whether it is installed as follows:

1. device-mapper-multipath: i.e. multipath-tools. Provides the multipathd and multipath tools and configuration files such as multipath.conf. These tools create and configure multipath devices through device mapper's ioctl interface (they call device-mapper's user-space library; the multipath devices created appear under /dev/mapper).
2. device-mapper: consists of two main parts, a kernel part and a user-space part. The kernel part is made up of the device mapper core (dm.ko) and a number of target drivers (such as dm-multipath.ko). The core performs the device mapping, while each target handles the I/O coming down from the mapped device according to the mapping and its own characteristics. The kernel part also exposes an interface through which user space can talk to the kernel via ioctl to direct the driver's behavior, for example how to create a mapped device and what attributes it has. The user-space part of Linux device mapper is mainly the device-mapper package, which contains the dmsetup tool and libraries that help create and configure mapped devices. These libraries abstract and wrap the ioctl interface so that mapped devices can be created and configured conveniently; the multipath-tools programs call these libraries.
3. dm-multipath.ko and dm.ko: dm.ko is the device mapper driver and the foundation of multipath; dm-multipath is simply one of dm's target drivers.
4. scsi_id: included in the udev package. It can be configured in multipath.conf to obtain a SCSI device's identifier; identical identifiers mean that several paths lead to the same device, which is the key to implementing multipath. scsi_id queries a SCSI device's identifier by sending an EVPD page 0x80 or page 0x83 INQUIRY command through the sg driver. Some devices do not support the EVPD INQUIRY command, so they cannot be used to build multipath devices; however, scsi_id can be wrapped to fabricate an identifier for such devices and print it to standard output. When multipath creates a multipath device it invokes scsi_id and reads the SCSI id from its standard output. A wrapper must return exit status 0, because multipath checks that value to decide whether the SCSI id was obtained successfully.
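As a quick illustration of that mechanism (RHEL 5 era scsi_id syntax; sdb and sdc are assumed example paths), two sd devices that return the same identifier are two paths to the same LUN:
# /sbin/scsi_id -g -u -s /block/sdb
# /sbin/scsi_id -g -u -s /block/sdc #identical output means sdb and sdc reach the same LUN
# echo $? #0 means the id was retrieved successfully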
III. Basic multipath configuration on CentOS 5:
1. Install and load the multipath packages:
# yum -y install device-mapper device-mapper-multipath
# chkconfig --level 2345 multipathd on #start multipathd at boot
# lsmod | grep dm_multipath #check that the installation is OK

If the module did not load successfully, initialize DM with the following commands, or reboot the system.
---Use the following commands to initialize and start DM for the first time:
# modprobe dm-multipath
# modprobe dm-round-robin
# service multipathd start
# multipath -v2
2. Configure multipath:
Multipath's configuration file is /etc/multipath.conf; edit it with:
# vi /etc/multipath.conf
For multipath to work properly, the following configuration is enough (for more detailed options, see the rest of this article):
blacklist {
devnode "^sda"
}
defaults {
user_friendly_names yes
path_grouping_policy multibus
failback immediate
no_path_retry fail
}

3. Basic multipath operation commands
# /etc/init.d/multipathd start #start the multipathd service
# multipath -F #flush the existing multipath maps
# multipath -v2 #rescan the paths and rebuild the maps
# multipath -ll #show the multipath topology

If the configuration is correct, devices such as mpath0 and mpath1 appear under /dev/mapper/.

The fdisk -l command shows the disks created by the multipathing software, such as /dev/dm-0 through /dev/dm-3.

4. Basic operations on multipath disks
To operate on the disks created by the multipathing software, work directly on the disks under /dev/mapper/.
Before partitioning a multipath disk it is a good idea to run pvcreate first:
# pvcreate /dev/mapper/mpath0
# fdisk /dev/mapper/mpath0

When saving a partition table written with fdisk on a multipath disk, an error is reported; it can be ignored.
After fdisk creates the partitions, the partition nodes do not appear under /dev/ immediately. Restart the IP-SAN or FC-SAN driver; if the IP SAN is connected with iscsi-initiator, restarting the iSCSI service makes the new partitions visible:
# service iscsi restart
# ls -l /dev/mapper/

The mpath0p1 and mpath1p1 entries in the listing are the partitions we created on the multipath disks.
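If restarting the SAN driver is undesirable, kpartx from the multipath tool set can map the new partitions directly (an alternative I am adding here, not part of the original article):
# kpartx -a /dev/mapper/mpath0 #creates the mpath0p1, mpath0p2, ... partition mappings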
# mkfs.ext3 /dev/mapper/mpath0p1 #format the mpath0p1 partition as ext3
# mount /dev/mapper/mpath0p1 /ipsan/ #mount the mpath0p1 partition
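To make the mount persistent across reboots, a sketch of an /etc/fstab entry (the /ipsan mount point follows the example above; the options are typical for iSCSI-backed devices, not from the article):
/dev/mapper/mpath0p1 /ipsan ext3 defaults,_netdev 0 0
The _netdev option delays mounting until the network, and therefore the iSCSI session, is up.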

IV. Advanced multipath configuration
Everything above used multipath's default settings, for example the names of the mapped devices and the load-balancing method. Can multipath be configured the way we define instead? Of course it can.
1. Configuring the multipath.conf file
The next task is to edit the /etc/multipath.conf configuration file.
multipath.conf consists mainly of three parts: blacklist, multipaths, and devices.
blacklist configuration:
blacklist {
devnode "^sda"
}
multipaths section configuration:
multipaths {
multipath {
wwid **************** #this value can be seen with multipath -v3
alias iscsi-dm0 #alias of the mapped device; any name will do
path_grouping_policy multibus #path grouping policy
path_checker tur #method used to determine the path state
path_selector "round-robin 0" #method for choosing the path used for the next I/O operation
}
}
devices section configuration:
devices {
device {
vendor "iSCSI-Enterprise" #vendor name
product "Virtual disk" #product model
path_grouping_policy multibus #default path grouping policy
getuid_callout "/sbin/scsi_id -g -u -s /block/%n" #default program used to obtain the unique device identifier
prio_callout "/sbin/acs_prio_alua %d" #default program used to obtain the priority value
path_checker readsector0 #method used to determine the path state
path_selector "round-robin 0" #method for choosing the path used for the next I/O operation
failback immediate #failback mode
no_path_retry queue #number of times the system retries a failed path before disabling queueing
rr_min_io 100 #number of I/O requests sent to a path before switching to the next one in the current path group
}
}
The following is a complete configuration file:
blacklist {
devnode "^sda"
}
defaults {
user_friendly_names no
}
multipaths {
multipath {
wwid <wwid-1>
alias iscsi-dm0
path_grouping_policy multibus
path_checker tur
path_selector "round-robin 0"
}
multipath {
wwid <wwid-2>
alias iscsi-dm1
path_grouping_policy multibus
path_checker tur
path_selector "round-robin 0"
}
multipath {
wwid <wwid-3>
alias iscsi-dm2
path_grouping_policy multibus
path_checker tur
path_selector "round-robin 0"
}
multipath {
wwid <wwid-4>
alias iscsi-dm3
path_grouping_policy multibus
path_checker tur
path_selector "round-robin 0"
}
}
devices {
device {
vendor "iSCSI-Enterprise"
product "Virtual disk"
path_grouping_policy multibus
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
path_checker readsector0
path_selector "round-robin 0"
}
}
How to obtain the WWIDs:
(1) By default, the name of each multipath device is set from the entries in /var/lib/multipath/bindings; aliases bound to WWIDs in /etc/multipath.conf override those entries.

(2) Search the output of the # multipath -v3 command.
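For example (a sketch; -v3 output is verbose and its exact wording varies by version, so the grep pattern is an assumption):
# multipath -v3 | grep sdb #look for the line that reports sdb's uid/wwid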

2. Load-balancing test
Use dd to write to the device while watching the I/O state with iostat; the commands are:
# dd if=/dev/zero of=/dev/mapper/iscsi-dm1p1
Open another terminal and watch the I/O with:
# iostat 10 10

The output shows that writes to /dev/mapper/iscsi-dm1p1 actually go through all currently active paths contained in /dev/dm-1, i.e. the two paths /dev/sde1 and /dev/sdh1, to reach the underlying LUN.
3. Path-failover test
First, we unplug one network cable from the server. Within 10 seconds, MPIO successfully switches from the "failed" path /dev/sde1 to the other path /dev/sdh1.

3. Multipath aggregation on RHEL 5.9: how do I know the configuration and path aggregation succeeded? (The storage is an HP EVA4400+.)

It should be configured successfully; the earlier answer was quite professional.
Normally, after configuring multipath on Linux, fdisk -l still shows the duplicate disks: you should see many /dev/sd* devices plus the newly created /dev/dm-* devices (corresponding one-to-one with mpath*). This differs from Windows, where, as I recall, the duplicate disks disappear once multipathing is configured.
If the array presents 7 LUNs, there should be 7 dm-* devices; 8 is puzzling.
Also note: use /dev/mapper/mpath* (the multipath virtual devices) for partitioning and other operations. The /dev/dm-* devices are for the software's internal use; do not use them.
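A quick way to sanity-check the count (a sketch; it assumes user_friendly_names, so the maps are named mpathN):
# multipath -ll | grep -c '^mpath' #number of aggregated maps; should equal the number of LUNs
# multipath -ll #every map should show all of its paths as [active][ready]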

4. How to configure multipathing on a Linux system

Linux multipathing means there is more than one physical path between the host and its storage: besides a single host-to-disk connection, the host reaches the same storage over additional links, giving a one-to-many relationship between the host and its paths. Through multipathing, these paths are virtualized into a single disk.

1. Install the multipath packages:
device-mapper-1.02.67-2.el5
device-mapper-event-1.02.67-2.el5
device-mapper-multipath-0.4.7-48.el5
[root@RKDB01 Server]# rpm -ivh device-mapper-1.02.67-2.el5.x86_64.rpm
warning: device-mapper-1.02.67-2.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 37017186
Preparing... ########################################### [100%]
package device-mapper-1.02.67-2.el5.x86_64 is already installed
[root@RKDB01 Server]# rpm -ivh device-mapper-event-1.02.67-2.el5.x86_64.rpm
warning: device-mapper-event-1.02.67-2.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 37017186
Preparing... ########################################### [100%]
package device-mapper-event-1.02.67-2.el5.x86_64 is already installed
[root@RKDB01 Server]# rpm -ivh device-mapper-multipath-0.4.7-48.el5.x86_64.rpm
warning: device-mapper-multipath-0.4.7-48.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 37017186
Preparing... ########################################### [100%]
package device-mapper-multipath-0.4.7-48.el5.x86_64 is already installed
2. Set the service to start at boot, and check that the packages are working:
chkconfig --level 345 multipathd on
lsmod |grep dm_multipath
[root@RKDB01 Server]# chkconfig --level 345 multipathd on
[root@RKDB01 Server]# lsmod |grep dm_multipath
dm_multipath 58969 0
scsi_dh 42561 1 dm_multipath
dm_mod 102417 4 dm_mirror,dm_multipath,dm_raid45,dm_log
[root@RKDB01 Server]#
3. Configure multipathd so that it works properly: edit /etc/multipath.conf and enable the following content:
defaults {
udev_dir /dev
polling_interval 10
selector "round-robin 0"
path_grouping_policy multibus
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
prio_callout none
path_checker readsector0
rr_min_io 100
max_fds 8192
rr_weight priorities
failback immediate
no_path_retry fail
user_friendly_names yes
}
blacklist {
wwid 26353900f02796769
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z]"
}
4. And comment out the following content:
#blacklist {
# devnode "*"
#}
#defaults {
# user_friendly_names yes
#}
5. After that, run the following commands to discover the multipath devices:
[root@RKDB01 Server]# modprobe dm-multipath
[root@RKDB01 Server]# multipath -F
[root@RKDB01 Server]# modprobe dm-round-robin
[root@RKDB01 Server]# service multipathd restart
Stopping multipathd daemon: [ OK ]
Starting multipathd daemon: [ OK ]
[root@RKDB01 Server]# multipath -v2
[root@RKDB01 Server]# multipath -v2
[root@RKDB01 Server]# multipath -ll
mpath1 () dm-0 TOYOU,NetStor_iSUM510
[size=3.3T][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=2][enabled]
\_ 1:0:0:0 sdb 8:16 [failed][ready]
\_ 1:0:1:0 sdc 8:32 [failed][ready]
[root@RKDB01 Server]#
6. After rebooting the server, the multipath information can be seen:
[root@RKDB01 ~]# ll /dev/mapper/
total 0
crw------- 1 root root 10, 60 11-05 22:35 control
brw-rw---- 1 root disk 253, 0 11-05 22:35 mpath1
brw-rw---- 1 root disk 253, 1 11-05 22:35 mpath2
[root@RKDB01 ~]# multipath -ll
mpath2 () dm-1 TOYOU,NetStor_iSUM510
[size=3.2T][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:0:1 sdc 8:32 [active][ready]
\_ 1:0:1:1 sde 8:64 [active][ready]
mpath1 () dm-0 TOYOU,NetStor_iSUM510
[size=20G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:0:0 sdb 8:16 [active][ready]
\_ 1:0:1:0 sdd 8:48 [active][ready]
7. fdisk shows that two new disks, dm-0 and dm-1, have been created; they are exactly the multipath devices built from sdc/sde and sdb/sdd above:
[root@RKDB01 ~]# fdisk -l
Disk /dev/sda: 299.4 GB, 299439751168 bytes
255 heads, 63 sectors/track, 36404 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 38 305203+ 83 Linux
/dev/sda2 39 13092 104856255 83 Linux
/dev/sda3 13093 19619 52428127+ 83 Linux
/dev/sda4 19620 36404 134825512+ 5 Extended
/dev/sda5 19620 26146 52428096 83 Linux
/dev/sda6 26147 28757 20972826 83 Linux
/dev/sda7 28758 30324 12586896 82 Linux swap / Solaris
/dev/sda8 30325 36404 48837568+ 83 Linux
Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 3568.4 GB, 3568429957120 bytes
255 heads, 63 sectors/track, 433836 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sde: 3568.4 GB, 3568429957120 bytes
255 heads, 63 sectors/track, 433836 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sde doesn't contain a valid partition table
Disk /dev/dm-0: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/dm-1: 3568.4 GB, 3568429957120 bytes
255 heads, 63 sectors/track, 433836 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/dm-1 doesn't contain a valid partition table
Disk /dev/sdf: 4009 MB, 4009754624 bytes
255 heads, 63 sectors/track, 487 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdf4 * 1 488 3915744+ b W95 FAT32
Partition 4 has different physical/logical endings:
phys=(486, 254, 63) logical=(487, 125, 22)
[root@RKDB01 ~]#
8. The multipath mapping information can also be seen in the /dev/mapper directory:
[root@RKDB01 ~]# ll /dev/mapper/
total 0
crw------- 1 root root 10, 60 11-06 00:49 control
brw-rw---- 1 root disk 253, 2 11-06 00:49 data-data001
brw-rw---- 1 root disk 253, 0 11-06 00:49 mpath1
brw-rw---- 1 root disk 253, 1 11-06 00:49 mpath2

5. How to configure multipathing on a Linux system

Are you attaching external storage? You need to install multipath; its configuration file is /etc/multipath.conf.
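A minimal quick-start sketch for RHEL/CentOS 5, using the same package and service names as elsewhere in this article:
# yum -y install device-mapper-multipath
# chkconfig --level 345 multipathd on
# vi /etc/multipath.conf #blacklist local disks, set user_friendly_names yes
# service multipathd start
# multipath -v2; multipath -ll #the storage LUNs should appear as mpathN maps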

6. multipath: the underlying storage LUN was extended; how do I grow the filesystem?

When a Linux server reaches shared storage through multipath and a filesystem runs out of space, there are several ways to extend it:

1. Keep the same PV: grow the original LUN on the storage array, grow the LV, then grow the filesystem.

2. Add a new PV to the VG, grow the LV, then grow the filesystem.

The text below covers scenario 1 (though personally I recommend approach 2, creating a new PV); a sketch of the typical commands for steps 1-6 follows the list below:

Environment

If you have this specific scenario, you can use the following steps:

Note: if these LVs are part of a clustered VG, steps 1 and 2 need to be performed on all nodes.

1) Update block devices

Note: this step needs to be run against every sd device that maps to the LUN; with multipath there will be more than one. Use the multipath -ll command to see the paths behind each aggregated volume.

2) Update multipath device

Example:

3) Resize the physical volume, which will also resize the volume group

4) Resize your logical volume (the below command takes all available space in the vg)

5) Resize your filesystem

6) Verify vg, lv and filesystem extension has worked appropriately
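The original post's command listings did not survive; the following is a hedged reconstruction of typical commands for steps 1-6 (the path devices sdb/sdc, map name mpath0, and VG/LV names vg01/testlv are assumptions for illustration; on XFS, xfs_growfs replaces resize2fs):
# echo 1 > /sys/block/sdb/device/rescan #step 1: rescan every sd path of the LUN
# echo 1 > /sys/block/sdc/device/rescan
# multipathd -k"resize map mpath0" #step 2: update the multipath map size
# pvresize /dev/mapper/mpath0 #step 3: grow the PV, and with it the VG
# lvextend -l +100%FREE /dev/vg01/testlv #step 4: grow the LV with all free space
# resize2fs /dev/vg01/testlv #step 5: grow the filesystem
# vgs; lvs; df -h #step 6: verify the VG, LV and filesystem sizes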

Simulate growing testlv by extending the LUN on the storage side

Check the multipath state on the client

Rescan the storage on the client

Update the aggregated (multipath) device

Update the PV size

Update the LV size

Update the filesystem size

7. Linux multipath configuration

If a multipath solution is in use, device names can be bound directly with multipath; neither asmlib nor UDEV is needed. Refer directly to the document: Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0, 11.2.0) on RHEL5/OL5 [ID 605828.1]

[root@vrh1 ~]# for i in `cat /proc/partitions | awk '{print $4}' | grep sd | grep [a-z]$`; do echo "### $i: `scsi_id -g -u -s /block/$i`"; done
### sda: SATA_VBOX_HARDDISK_VB83d4445f-b8790695_
### sdb: SATA_VBOX_HARDDISK_VB0db2f233-269850e0_
### sdc: SATA_VBOX_HARDDISK_VBa56f2571-0dd27b33_
### sdd: SATA_VBOX_HARDDISK_VBf6b74ff7-871d1de8_
### sde: SATA_VBOX_HARDDISK_VB5a531910-25f4eb9a_
### sdf: SATA_VBOX_HARDDISK_VB4915e6e3-737b312e_
### sdg: SATA_VBOX_HARDDISK_VB512c8f75-37f4a0e9_
### sdh: SATA_VBOX_HARDDISK_VBc0115ef6-a48bc15d_
### sdi: SATA_VBOX_HARDDISK_VB3a556907-2b72391d_
### sdj: SATA_VBOX_HARDDISK_VB7ec8476c-08641bd4_
### sdk: SATA_VBOX_HARDDISK_VB743e1567-d0009678_

[root@vrh1 ~]# grep -v ^# /etc/multipath.conf
defaults {
user_friendly_names yes
}
defaults {
udev_dir /dev
polling_interval 10
selector "round-robin 0"
path_grouping_policy failover
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
prio_callout /bin/true
path_checker readsector0
rr_min_io 100
rr_weight priorities
failback immediate
#no_path_retry fail
user_friendly_names yes
}
devnode_blacklist {
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z]"
devnode "^cciss!c[0-9]d[0-9]*"
}
multipaths {
multipath {
wwid SATA_VBOX_HARDDISK_VB0db2f233-269850e0_
alias voting1
path_grouping_policy failover
}
multipath {
wwid SATA_VBOX_HARDDISK_VBa56f2571-0dd27b33_
alias voting2
path_grouping_policy failover
}
multipath {
wwid SATA_VBOX_HARDDISK_VBf6b74ff7-871d1de8_
alias voting3
path_grouping_policy failover
}
multipath {
wwid SATA_VBOX_HARDDISK_VB5a531910-25f4eb9a_
alias ocr1
path_grouping_policy failover
}
multipath {
wwid SATA_VBOX_HARDDISK_VB4915e6e3-737b312e_
alias ocr2
path_grouping_policy failover
}
multipath {
wwid SATA_VBOX_HARDDISK_VB512c8f75-37f4a0e9_
alias ocr3
path_grouping_policy failover
}
}

[root@vrh1 ~]# multipath
[root@vrh1 ~]# multipath -ll
mpath2 (SATA_VBOX_HARDDISK_VB3a556907-2b72391d_) dm-9 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 8:0:0:0 sdi 8:128 active ready running
mpath1 (SATA_VBOX_HARDDISK_VBc0115ef6-a48bc15d_) dm-8 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 7:0:0:0 sdh 8:112 active ready running
ocr3 (SATA_VBOX_HARDDISK_VB512c8f75-37f4a0e9_) dm-7 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 6:0:0:0 sdg 8:96 active ready running
ocr2 (SATA_VBOX_HARDDISK_VB4915e6e3-737b312e_) dm-6 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 5:0:0:0 sdf 8:80 active ready running
ocr1 (SATA_VBOX_HARDDISK_VB5a531910-25f4eb9a_) dm-5 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 4:0:0:0 sde 8:64 active ready running
voting3 (SATA_VBOX_HARDDISK_VBf6b74ff7-871d1de8_) dm-4 ATA,VBOX HARDDISK
size=40G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 3:0:0:0 sdd 8:48 active ready running
voting2 (SATA_VBOX_HARDDISK_VBa56f2571-0dd27b33_) dm-3 ATA,VBOX HARDDISK
size=40G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 2:0:0:0 sdc 8:32 active ready running
voting1 (SATA_VBOX_HARDDISK_VB0db2f233-269850e0_) dm-2 ATA,VBOX HARDDISK
size=40G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 1:0:0:0 sdb 8:16 active ready running
mpath4 (SATA_VBOX_HARDDISK_VB743e1567-d0009678_) dm-11 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 10:0:0:0 sdk 8:160 active ready running
mpath3 (SATA_VBOX_HARDDISK_VB7ec8476c-08641bd4_) dm-10 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 9:0:0:0 sdj 8:144 active ready running

[root@vrh1 ~]# dmsetup ls | sort
mpath1 (253, 8)
mpath2 (253, 9)
mpath3 (253, 10)
mpath4 (253, 11)
ocr1 (253, 5)
ocr2 (253, 6)
ocr3 (253, 7)
VolGroup00-LogVol00 (253, 0)
VolGroup00-LogVol01 (253, 1)
voting1 (253, 2)
voting2 (253, 3)
voting3 (253, 4)
[root@vrh1 ~]# ls -l /dev/mapper/*
crw------- 1 root root 10, 62 Oct 17 09:58 /dev/mapper/control
brw-rw---- 1 root disk 253, 8 Oct 19 00:11 /dev/mapper/mpath1
brw-rw---- 1 root disk 253, 9 Oct 19 00:11 /dev/mapper/mpath2
brw-rw---- 1 root disk 253, 10 Oct 19 00:11 /dev/mapper/mpath3
brw-rw---- 1 root disk 253, 11 Oct 19 00:11 /dev/mapper/mpath4
brw-rw---- 1 root disk 253, 5 Oct 19 00:11 /dev/mapper/ocr1
brw-rw---- 1 root disk 253, 6 Oct 19 00:11 /dev/mapper/ocr2
brw-rw---- 1 root disk 253, 7 Oct 19 00:11 /dev/mapper/ocr3
brw-rw---- 1 root disk 253, 0 Oct 17 09:58 /dev/mapper/VolGroup00-LogVol00
brw-rw---- 1 root disk 253, 1 Oct 17 09:58 /dev/mapper/VolGroup00-LogVol01
brw-rw---- 1 root disk 253, 2 Oct 19 00:11 /dev/mapper/voting1
brw-rw---- 1 root disk 253, 3 Oct 19 00:11 /dev/mapper/voting2
brw-rw---- 1 root disk 253, 4 Oct 19 00:11 /dev/mapper/voting3
[root@vrh1 ~]# ls -l /dev/dm*
brw-rw---- 1 root root 253, 0 Oct 17 09:58 /dev/dm-0
brw-rw---- 1 root root 253, 1 Oct 17 09:58 /dev/dm-1
brw-rw---- 1 root root 253, 10 Oct 19 00:11 /dev/dm-10
brw-rw---- 1 root root 253, 11 Oct 19 00:11 /dev/dm-11
brw-rw---- 1 root root 253, 2 Oct 19 00:11 /dev/dm-2
brw-rw---- 1 root root 253, 3 Oct 19 00:11 /dev/dm-3
brw-rw---- 1 root root 253, 4 Oct 19 00:11 /dev/dm-4
brw-rw---- 1 root root 253, 5 Oct 19 00:11 /dev/dm-5
brw-rw---- 1 root root 253, 6 Oct 19 00:11 /dev/dm-6
brw-rw---- 1 root root 253, 7 Oct 19 00:11 /dev/dm-7
brw-rw---- 1 root root 253, 8 Oct 19 00:11 /dev/dm-8
brw-rw---- 1 root root 253, 9 Oct 19 00:11 /dev/dm-9
[root@vrh1 ~]# ls -l /dev/disk/by-id/
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB0db2f233-269850e0 -> ../../asm-diskb
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB3a556907-2b72391d -> ../../asm-diski
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB4915e6e3-737b312e -> ../../asm-diskf
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB512c8f75-37f4a0e9 -> ../../asm-diskg
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB5a531910-25f4eb9a -> ../../asm-diske
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB743e1567-d0009678 -> ../../asm-diskk
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB7ec8476c-08641bd4 -> ../../asm-diskj
lrwxrwxrwx 1 root root 9 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB83d4445f-b8790695 -> ../../sda
lrwxrwxrwx 1 root root 10 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB83d4445f-b8790695-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB83d4445f-b8790695-part2 -> ../../sda2
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VBa56f2571-0dd27b33 -> ../../asm-diskc
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VBc0115ef6-a48bc15d -> ../../asm-diskh
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VBf6b74ff7-871d1de8 -> ../../asm-diskd

Re: ASM disks using multipath (aggregated) device names get only 1/6 the I/O performance of the non-aggregated devices!
LiuMaclean(劉相兵), Jul 21, 2013 11:09 AM (in response to 13628):
step 1:
[oracle@vrh8 mapper]$ cat /etc/multipath.conf
multipaths {
multipath {
wwid SATA_VBOX_HARDDISK_VBf6b74ff7-871d1de8_
alias asm-disk1
mode 660
uid 501
gid 503
}
multipath {
wwid SATA_VBOX_HARDDISK_VB0db2f233-269850e0_
alias asm-disk2
mode 660
uid 501
gid 503
}
multipath {
wwid SATA_VBOX_HARDDISK_VBa56f2571-0dd27b33_
alias asm-disk3
mode 660
uid 501
gid 503
}
}
step 2:
reboot or service multipathd restart
step 3:
[oracle@vrh8 mapper]$ ls -l /dev/mapper/asm-disk*
brw-rw---- 1 grid asmadmin 253, 4 Jul 21 07:02 /dev/mapper/asm-disk1
brw-rw---- 1 grid asmadmin 253, 2 Jul 21 07:02 /dev/mapper/asm-disk2
brw-rw---- 1 grid asmadmin 253, 3 Jul 21 07:02 /dev/mapper/asm-disk3

8. When is multipathing needed on Linux?

1. Install the multipath packages:
device-mapper-1.02.67-2.el5
device-mapper-event-1.02.67-2.el5
device-mapper-multipath-0.4.7-48.el5

[root@RKDB01 Server]# rpm -ivh device-mapper-1.02.67-2.el5.x86_64.rpm
warning: device-mapper-1.02.67-2.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 37017186
Preparing... ########################################### [100%]
package device-mapper-1.02.67-2.el5.x86_64 is already installed
[root@RKDB01 Server]# rpm -ivh device-mapper-event-1.02.67-2.el5.x86_64.rpm
warning: device-mapper-event-1.02.67-2.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 37017186
Preparing... ########################################### [100%]
package device-mapper-event-1.02.67-2.el5.x86_64 is already installed
[root@RKDB01 Server]# rpm -ivh device-mapper-multipath-0.4.7-48.el5.x86_64.rpm
warning: device-mapper-multipath-0.4.7-48.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 37017186
Preparing... ########################################### [100%]
package device-mapper-multipath-0.4.7-48.el5.x86_64 is already installed

2. Set the service to start at boot, and check that the packages are working:

chkconfig --level 345 multipathd on
lsmod |grep dm_multipath

[root@RKDB01 Server]# chkconfig --level 345 multipathd on
[root@RKDB01 Server]# lsmod |grep dm_multipath
dm_multipath 58969 0
scsi_dh 42561 1 dm_multipath
dm_mod 102417 4 dm_mirror,dm_multipath,dm_raid45,dm_log
[root@RKDB01 Server]#

3. Configure multipathd so that it works properly: edit /etc/multipath.conf and enable the following content:

defaults {
udev_dir /dev
polling_interval 10
selector "round-robin 0"
path_grouping_policy multibus
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
prio_callout none
path_checker readsector0
rr_min_io 100
max_fds 8192
rr_weight priorities
failback immediate
no_path_retry fail
user_friendly_names yes
}
blacklist {
wwid 26353900f02796769
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z]"
}

4. And comment out the following content:

#blacklist {
# devnode "*"
#}
#defaults {
# user_friendly_names yes
#}

5. After that, run the following commands to discover the multipath devices:

[root@RKDB01 Server]# modprobe dm-multipath
[root@RKDB01 Server]# multipath -F
[root@RKDB01 Server]# modprobe dm-round-robin
[root@RKDB01 Server]# service multipathd restart
Stopping multipathd daemon: [ OK ]
Starting multipathd daemon: [ OK ]
[root@RKDB01 Server]# multipath -v2
[root@RKDB01 Server]# multipath -v2
[root@RKDB01 Server]# multipath -ll
mpath1 () dm-0 TOYOU,NetStor_iSUM510
[size=3.3T][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=2][enabled]
\_ 1:0:0:0 sdb 8:16 [failed][ready]
\_ 1:0:1:0 sdc 8:32 [failed][ready]
[root@RKDB01 Server]#

6. After rebooting the server, we can see the multipath information:

[root@RKDB01 ~]# ll /dev/mapper/
total 0
crw------- 1 root root 10, 60 11-05 22:35 control
brw-rw---- 1 root disk 253, 0 11-05 22:35 mpath1
brw-rw---- 1 root disk 253, 1 11-05 22:35 mpath2
[root@RKDB01 ~]# multipath -ll
mpath2 () dm-1 TOYOU,NetStor_iSUM510
[size=3.2T][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:0:1 sdc 8:32 [active][ready]
\_ 1:0:1:1 sde 8:64 [active][ready]
mpath1 () dm-0 TOYOU,NetStor_iSUM510
[size=20G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:0:0 sdb 8:16 [active][ready]
\_ 1:0:1:0 sdd 8:48 [active][ready]

7. fdisk shows that two new disks, dm-0 and dm-1, have been created; they are exactly the multipath devices built from sdc/sde and sdb/sdd above:

[root@RKDB01 ~]# fdisk -l
Disk /dev/sda: 299.4 GB, 299439751168 bytes
255 heads, 63 sectors/track, 36404 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 38 305203+ 83 Linux
/dev/sda2 39 13092 104856255 83 Linux
/dev/sda3 13093 19619 52428127+ 83 Linux
/dev/sda4 19620 36404 134825512+ 5 Extended
/dev/sda5 19620 26146 52428096 83 Linux
/dev/sda6 26147 28757 20972826 83 Linux
/dev/sda7 28758 30324 12586896 82 Linux swap / Solaris
/dev/sda8 30325 36404 48837568+ 83 Linux
Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 3568.4 GB, 3568429957120 bytes
255 heads, 63 sectors/track, 433836 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sde: 3568.4 GB, 3568429957120 bytes
255 heads, 63 sectors/track, 433836 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sde doesn't contain a valid partition table
Disk /dev/dm-0: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/dm-1: 3568.4 GB, 3568429957120 bytes
255 heads, 63 sectors/track, 433836 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/dm-1 doesn't contain a valid partition table
Disk /dev/sdf: 4009 MB, 4009754624 bytes
255 heads, 63 sectors/track, 487 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdf4 * 1 488 3915744+ b W95 FAT32
Partition 4 has different physical/logical endings:
phys=(486, 254, 63) logical=(487, 125, 22)
[root@RKDB01 ~]#

8. We can also see the multipath mapping information in the /dev/mapper directory:

[root@RKDB01 ~]# ll /dev/mapper/
total 0
crw------- 1 root root 10, 60 11-06 00:49 control
brw-rw---- 1 root disk 253, 2 11-06 00:49 data-data001
brw-rw---- 1 root disk 253, 0 11-06 00:49 mpath1
brw-rw---- 1 root disk 253, 1 11-06 00:49 mpath2

9. How do I configure multipath without asmlib when installing RAC on Linux?

If a multipath solution is in use, device names can be bound directly with multipath; there is no need for asmlib or UDEV.

Please refer directly to the document: Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0, 11.2.0) on RHEL5/OL5 [ID 605828.1]

[root@vrh1 ~]# for i in `cat /proc/partitions | awk '{print $4}' |grep sd | grep [a-z]$`; do echo "### $i: `scsi_id -g -u -s /block/$i`"; done
### sda: SATA_VBOX_HARDDISK_VB83d4445f-b8790695_
### sdb: SATA_VBOX_HARDDISK_VB0db2f233-269850e0_
### sdc: SATA_VBOX_HARDDISK_VBa56f2571-0dd27b33_
### sdd: SATA_VBOX_HARDDISK_VBf6b74ff7-871d1de8_
### sde: SATA_VBOX_HARDDISK_VB5a531910-25f4eb9a_
### sdf: SATA_VBOX_HARDDISK_VB4915e6e3-737b312e_
### sdg: SATA_VBOX_HARDDISK_VB512c8f75-37f4a0e9_
### sdh: SATA_VBOX_HARDDISK_VBc0115ef6-a48bc15d_
### sdi: SATA_VBOX_HARDDISK_VB3a556907-2b72391d_
### sdj: SATA_VBOX_HARDDISK_VB7ec8476c-08641bd4_
### sdk: SATA_VBOX_HARDDISK_VB743e1567-d0009678_

[root@vrh1 ~]# grep -v ^# /etc/multipath.conf
defaults {
user_friendly_names yes
}
defaults {
udev_dir /dev
polling_interval 10
selector "round-robin 0"
path_grouping_policy failover
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
prio_callout /bin/true
path_checker readsector0
rr_min_io 100
rr_weight priorities
failback immediate
#no_path_retry fail
user_friendly_names yes
}
devnode_blacklist {
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z]"
devnode "^cciss!c[0-9]d[0-9]*"
}
multipaths {
multipath {
wwid SATA_VBOX_HARDDISK_VB0db2f233-269850e0_
alias voting1
path_grouping_policy failover
}
multipath {
wwid SATA_VBOX_HARDDISK_VBa56f2571-0dd27b33_
alias voting2
path_grouping_policy failover
}
multipath {
wwid SATA_VBOX_HARDDISK_VBf6b74ff7-871d1de8_
alias voting3
path_grouping_policy failover
}
multipath {
wwid SATA_VBOX_HARDDISK_VB5a531910-25f4eb9a_
alias ocr1
path_grouping_policy failover
}
multipath {
wwid SATA_VBOX_HARDDISK_VB4915e6e3-737b312e_
alias ocr2
path_grouping_policy failover
}
multipath {
wwid SATA_VBOX_HARDDISK_VB512c8f75-37f4a0e9_
alias ocr3
path_grouping_policy failover
}
}

[root@vrh1 ~]# multipath
[root@vrh1 ~]# multipath -ll
mpath2 (SATA_VBOX_HARDDISK_VB3a556907-2b72391d_) dm-9 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 8:0:0:0 sdi 8:128 active ready running
mpath1 (SATA_VBOX_HARDDISK_VBc0115ef6-a48bc15d_) dm-8 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 7:0:0:0 sdh 8:112 active ready running
ocr3 (SATA_VBOX_HARDDISK_VB512c8f75-37f4a0e9_) dm-7 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 6:0:0:0 sdg 8:96 active ready running
ocr2 (SATA_VBOX_HARDDISK_VB4915e6e3-737b312e_) dm-6 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 5:0:0:0 sdf 8:80 active ready running
ocr1 (SATA_VBOX_HARDDISK_VB5a531910-25f4eb9a_) dm-5 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 4:0:0:0 sde 8:64 active ready running
voting3 (SATA_VBOX_HARDDISK_VBf6b74ff7-871d1de8_) dm-4 ATA,VBOX HARDDISK
size=40G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 3:0:0:0 sdd 8:48 active ready running
voting2 (SATA_VBOX_HARDDISK_VBa56f2571-0dd27b33_) dm-3 ATA,VBOX HARDDISK
size=40G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 2:0:0:0 sdc 8:32 active ready running
voting1 (SATA_VBOX_HARDDISK_VB0db2f233-269850e0_) dm-2 ATA,VBOX HARDDISK
size=40G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 1:0:0:0 sdb 8:16 active ready running
mpath4 (SATA_VBOX_HARDDISK_VB743e1567-d0009678_) dm-11 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 10:0:0:0 sdk 8:160 active ready running
mpath3 (SATA_VBOX_HARDDISK_VB7ec8476c-08641bd4_) dm-10 ATA,VBOX HARDDISK
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 9:0:0:0 sdj 8:144 active ready running

[root@vrh1 ~]# dmsetup ls | sort
mpath1 (253, 8)
mpath2 (253, 9)
mpath3 (253, 10)
mpath4 (253, 11)
ocr1 (253, 5)
ocr2 (253, 6)
ocr3 (253, 7)
VolGroup00-LogVol00 (253, 0)
VolGroup00-LogVol01 (253, 1)
voting1 (253, 2)
voting2 (253, 3)
voting3 (253, 4)
[root@vrh1 ~]# ls -l /dev/mapper/*
crw------- 1 root root 10, 62 Oct 17 09:58 /dev/mapper/control
brw-rw---- 1 root disk 253, 8 Oct 19 00:11 /dev/mapper/mpath1
brw-rw---- 1 root disk 253, 9 Oct 19 00:11 /dev/mapper/mpath2
brw-rw---- 1 root disk 253, 10 Oct 19 00:11 /dev/mapper/mpath3
brw-rw---- 1 root disk 253, 11 Oct 19 00:11 /dev/mapper/mpath4
brw-rw---- 1 root disk 253, 5 Oct 19 00:11 /dev/mapper/ocr1
brw-rw---- 1 root disk 253, 6 Oct 19 00:11 /dev/mapper/ocr2
brw-rw---- 1 root disk 253, 7 Oct 19 00:11 /dev/mapper/ocr3
brw-rw---- 1 root disk 253, 0 Oct 17 09:58 /dev/mapper/VolGroup00-LogVol00
brw-rw---- 1 root disk 253, 1 Oct 17 09:58 /dev/mapper/VolGroup00-LogVol01
brw-rw---- 1 root disk 253, 2 Oct 19 00:11 /dev/mapper/voting1
brw-rw---- 1 root disk 253, 3 Oct 19 00:11 /dev/mapper/voting2
brw-rw---- 1 root disk 253, 4 Oct 19 00:11 /dev/mapper/voting3
[root@vrh1 ~]# ls -l /dev/dm*
brw-rw---- 1 root root 253, 0 Oct 17 09:58 /dev/dm-0
brw-rw---- 1 root root 253, 1 Oct 17 09:58 /dev/dm-1
brw-rw---- 1 root root 253, 10 Oct 19 00:11 /dev/dm-10
brw-rw---- 1 root root 253, 11 Oct 19 00:11 /dev/dm-11
brw-rw---- 1 root root 253, 2 Oct 19 00:11 /dev/dm-2
brw-rw---- 1 root root 253, 3 Oct 19 00:11 /dev/dm-3
brw-rw---- 1 root root 253, 4 Oct 19 00:11 /dev/dm-4
brw-rw---- 1 root root 253, 5 Oct 19 00:11 /dev/dm-5
brw-rw---- 1 root root 253, 6 Oct 19 00:11 /dev/dm-6
brw-rw---- 1 root root 253, 7 Oct 19 00:11 /dev/dm-7
brw-rw---- 1 root root 253, 8 Oct 19 00:11 /dev/dm-8
brw-rw---- 1 root root 253, 9 Oct 19 00:11 /dev/dm-9
[root@vrh1 ~]# ls -l /dev/disk/by-id/
total 0
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB0db2f233-269850e0 -> ../../asm-diskb
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB3a556907-2b72391d -> ../../asm-diski
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB4915e6e3-737b312e -> ../../asm-diskf
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB512c8f75-37f4a0e9 -> ../../asm-diskg
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB5a531910-25f4eb9a -> ../../asm-diske
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB743e1567-d0009678 -> ../../asm-diskk
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB7ec8476c-08641bd4 -> ../../asm-diskj
lrwxrwxrwx 1 root root 9 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB83d4445f-b8790695 -> ../../sda
lrwxrwxrwx 1 root root 10 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB83d4445f-b8790695-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VB83d4445f-b8790695-part2 -> ../../sda2
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VBa56f2571-0dd27b33 -> ../../asm-diskc
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VBc0115ef6-a48bc15d -> ../../asm-diskh
lrwxrwxrwx 1 root root 15 Oct 17 09:58 scsi-SATA_VBOX_HARDDISK_VBf6b74ff7-871d1de8 -> ../../asm-diskd
Re: ASM disks using multipath (aggregated) device names get only 1/6 the I/O performance of the non-aggregated devices!
LiuMaclean(劉相兵), Jul 21, 2013 11:09 AM (in response to 13628):
step 1:
[oracle@vrh8 mapper]$ cat /etc/multipath.conf

multipaths {
multipath {
wwid SATA_VBOX_HARDDISK_VBf6b74ff7-871d1de8_
alias asm-disk1
mode 660
uid 501
gid 503
}

multipath {
wwid SATA_VBOX_HARDDISK_VB0db2f233-269850e0_
alias asm-disk2
mode 660
uid 501
gid 503
}

multipath {
wwid SATA_VBOX_HARDDISK_VBa56f2571-0dd27b33_
alias asm-disk3
mode 660
uid 501
gid 503
}
}

step 2:

reboot or service multipathd restart

step 3:
[oracle@vrh8 mapper]$ ls -l /dev/mapper/asm-disk*
brw-rw---- 1 grid asmadmin 253, 4 Jul 21 07:02 /dev/mapper/asm-disk1
brw-rw---- 1 grid asmadmin 253, 2 Jul 21 07:02 /dev/mapper/asm-disk2
brw-rw---- 1 grid asmadmin 253, 3 Jul 21 07:02 /dev/mapper/asm-disk3
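As a quick check (a sketch, not from the original thread; it assumes the grid user owns the disks as shown above):
# ls -l /dev/mapper/asm-disk* #must still show grid:asmadmin and mode 660 after a reboot
# su - grid -c "dd if=/dev/mapper/asm-disk1 of=/dev/null bs=1M count=1" #grid can read the disk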

10. Has anyone configured raw devices over multipath with udev on Linux?

Configuring raw devices over multipath with udev on Linux:
udev is not multipathing; it is the device manager of the Linux 2.6 kernel series. Its main job is managing the device nodes under /dev. It also takes over the roles of devfs and hotplug, which means it handles the /dev directory and all user-space actions when hardware is added or removed, including firmware loading. The latest udev versions depend on the updated uevent interface of Linux kernel 2.6.13; a system using a new udev cannot boot with kernels older than 2.6.13 unless udev is disabled with the noudev parameter and the traditional /dev is used for device access.
Linux traditionally used static device creation: a large number of device nodes (sometimes thousands) were created under /dev whether or not the corresponding hardware actually existed. This was usually done by a MAKEDEV script containing calls to mknod for every conceivable major and minor device number pair in the world (a touch of humor there). With udev, device nodes are created only for devices the kernel actually detects. Because these nodes are created at each boot, they are stored in a ramfs (an in-memory filesystem that takes no disk space); device nodes need very little space, so the memory they use is negligible.
