DRBD (Distributed Replicated Block Device)
What is DRBD
DRBD (Distributed Replicated Block Device) is a distributed storage system for the GNU/Linux platform. It consists of a kernel module, several userspace management applications and some shell scripts, and is normally used on high availability (HA) clusters. DRBD bears similarities to RAID 1, except that it runs over a network.
DRBD Installation
There are two options for DRBD installation. The first is to download the source and compile it against your kernel; you can build plain binaries or RPMs. RPMs are the preferred method as they allow you to upgrade without having to recompile and reinstall the code. The second is to download RPMs that are already compatible with your kernel from a CentOS repository, though due to the variety of Linux kernels this is not always an option.
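For the repository route, installation is usually a single yum transaction. A minimal sketch, assuming a CentOS 5 system with the extras or ELRepo repository enabled (the exact package names vary with your kernel and DRBD version):
yum install -y drbd83 kmod-drbd83 - userspace tools plus the matching kernel module
modprobe drbd - load the module to confirm it matches the running kernel
lsmod | grep drbd - verify the module is loaded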
Configuring the Filesystem for DRBD
http://www.drbd.org/docs/install/
- Select partitions that are the same size on each server to be your DRBD partition.
- fdisk /dev/cciss/c0d1 - create the partition; the device name will vary depending on your server hardware
- partprobe - rescan the bus without a reboot
- ls /dev/cciss/c0d1* - confirm the new partition exists:
c0d1  c0d1p1
- pvcreate /dev/cciss/c0d1p1
Physical volume "/dev/cciss/c0d1p1" successfully created
- vgcreate vg0 /dev/cciss/c0d1p1
Volume group "vg0" successfully created
- vgdisplay vg0 | grep "Total PE" - used to determine the total number of Physical Extents, so the entire disk can be used for the Logical Volume:
Total PE 104986
- lvcreate -l 104986 vg0 -n lvol0 (an equivalent percentage-based form is shown after this list)
Logical volume "lvol0" created
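If you would rather not count Physical Extents by hand, LVM2 also accepts a percentage, which on an otherwise empty volume group is equivalent to the lvcreate call above:
lvcreate -l 100%FREE -n lvol0 vg0
lvs vg0 - confirm lvol0 now spans the whole volume group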
- It's important that these are new, unformatted partitions; DRBD will give an error when initializing its disks if the partition already contains a filesystem.
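If a backing device has been used before and the meta-data creation step complains about an existing filesystem, one common way to clear the old signature is to zero the start of the device. This is a destructive sketch; double-check the device path before running it:
dd if=/dev/zero of=/dev/mapper/vg0-lvol0 bs=1M count=128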
- Write the DRBD configuration file.
- By default there is a /etc/drbd.conf created on install, containing:
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
- Any additional config files should be placed in /etc/drbd.d/ with a .res extension so that they are picked up by the include above.
- The only change you should make in drbd.d/global_common.conf is:
syncer {
rate 110M;
# rate after al-extents use-rle cpu-mask verify-alg csums-alg
}
- This limits the speed of the syncer to 110MB/s, about the practical maximum for a dedicated 1Gbit connection (a 1Gbit/s link tops out at roughly 125MB/s in theory, so 110M leaves headroom for protocol overhead).
- Here is a sample config for a resource. These files need to be identical on each peer. If you have more than one resource, each one must specify a different port in its resource configuration file.
resource mysqldata0 {
    on lhradobcndb01p.ood.ops {
        device    /dev/drbd1;
        disk      /dev/mapper/vg0-lvol0;
        address   10.120.111.21:7789;
        meta-disk internal;
    }
    on lhradobcndb02p.ood.ops {
        device    /dev/drbd1;
        disk      /dev/mapper/vg0-lvol0;
        address   10.120.111.22:7789;
        meta-disk internal;
    }
}
- This configuration names the resource "mysqldata0", and the DRBD device is "/dev/drbd1"; this is the device that gets mounted. disk is the backing partition to be used for DRBD, address is the dedicated network IP (if applicable) and port, and meta-disk internal means that the DRBD meta-data is stored on the backing device itself (/dev/mapper/vg0-lvol0 here). You can locate your meta-data elsewhere, but internal is recommended. If you want to turn an existing disk with data on it into a DRBD disk, you can keep the data intact by putting the meta-data elsewhere, but that is beyond the scope of this document.
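As noted above, each additional resource needs its own port, DRBD device, and backing disk. A hypothetical second resource might look like the following; lvol1, /dev/drbd2 and port 7790 are illustrative names only, and the file would be saved as /etc/drbd.d/mysqldata1.res on both peers:
resource mysqldata1 {
    on lhradobcndb01p.ood.ops {
        device    /dev/drbd2;
        disk      /dev/mapper/vg0-lvol1;
        address   10.120.111.21:7790;
        meta-disk internal;
    }
    on lhradobcndb02p.ood.ops {
        device    /dev/drbd2;
        disk      /dev/mapper/vg0-lvol1;
        address   10.120.111.22:7790;
        meta-disk internal;
    }
}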
- Once your resource configuration is in place on each server, you can begin initializing DRBD. This can be done with one command, drbdadm up, which runs the four commands below, and must be completed on each server independently. NOTE: drbdadm up has been found to be inconsistent from server to server, so the recommended approach is to run each command individually.
In this case, resource = mysqldata0
drbdadm up resource
- First, the meta-data needs to be created:
drbdadm create-md resource
- Second, the DRBD resource needs to be attached:
drbdadm attach resource
- Third, set up synchronization:
drbdadm syncer resource
- Fourth, connect to the other peer:
drbdadm connect resource
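As an optional sanity check, drbdadm can parse and echo the configuration, which is a quick way to catch typos in the .res file; the role check should report Secondary/Secondary until one node is promoted:
drbdadm dump mysqldata0
drbdadm role mysqldata0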
- You can check the status of DRBD to see if everything was successful
cat /proc/drbd
version: 8.3.0 (api:88/proto:86-89)
GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by buildsystem@linbit, 2008-12-18 16:02:26
 1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r---
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:200768
- You can now begin disk synchronization. You will need to decide which disk will be the initial source. If you're trying to preserve data, it's important that you select the server that has the data to preserve. On the server that is to be the syncer source, run the following command:
drbdadm -- --overwrite-data-of-peer primary resource
- You can monitor the syncer progress by using cat /proc/drbd. While syncing you can format, mount, and begin working with the resource on the primary node though you will have reduced performance until syncing is completed.
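To watch the resync counters update in place rather than re-running cat by hand, watch (part of the procps package on CentOS) works well:
watch -n1 cat /proc/drbd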
- Once the resource is Primary on one node, you can create a filesystem and test replication and failover between the nodes:
- mkfs.ext3 /dev/drbd1
- Node1 - mount /dev/drbd1 /data
- Node1 - for i in $(seq 1 5) ; do dd if=/dev/zero of=/data/file$i bs=1M count=100;done
- Node1 - umount /data ; drbdadm secondary mysqldata0
- Node2 - drbdadm primary mysqldata0 ; mount /dev/drbd1 /data
- Node2 - ls /data/ - should output 'file1 file2 file3 file4 file5' - If so, the data was replicated. Next step:
- Node2 - rm /data/file2 ; dd if=/dev/zero of=/data/file6 bs=100M count=2
- Node2 - umount /data ; drbdadm secondary mysqldata0
- Node1 - drbdadm primary mysqldata0 ; mount /dev/drbd1 /data
- Node1 - ls /data - should output 'file1 file3 file4 file5 file6' - If so, DRBD is working
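Once the initial synchronization has finished, a quick way to confirm the final state on Node1 is to filter /proc/drbd for the role and disk-state fields; at this point in the test the status line should show ro:Primary/Secondary and ds:UpToDate/UpToDate:
grep 'ro:' /proc/drbd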
- chkconfig drbd on - enable the DRBD service at boot on both nodes
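To confirm the service is registered for the usual runlevels and is currently running, the standard SysV tools can be used on both nodes:
chkconfig --list drbd
service drbd status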