rebuild/reconfigure PHX storage to improve performance

Description

Currently the storage setup is not well suited for CI workloads:
1) production and builder VMs share the same RAID volume, so there is no resource tiering
2) all storage is DRBD-backed, even data that doesn't need to be synced
3) the RAID50 layout is slow on random writes
4) there is no dedicated DRBD sync network

This ticket tracks changes to optimize the storage for better performance without losing reliability.
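As a sketch of the direction: issues 1) and 2) could be addressed by keeping only the data that genuinely needs replication on DRBD and carving the builder scratch space out as a plain, non-replicated volume. A minimal DRBD 8.4 resource definition along those lines might look like the following; the resource name (prod), hostnames (storage01/storage02), backing LV path, and the 10.10.10.x sync-network addresses are all placeholders, not values from the live setup:

  resource prod {
    net {
      protocol C;                     # fully synchronous replication, for production data only
    }
    device    /dev/drbd0;
    disk      /dev/vg_prod/lv_prod;   # placeholder backing LV; builder storage stays off DRBD
    meta-disk internal;
    on storage01 {                    # placeholder hostnames
      address 10.10.10.1:7788;        # addresses on a dedicated sync network (issue 4)
    }
    on storage02 {
      address 10.10.10.2:7788;
    }
  }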

Activity


Former user March 3, 2017 at 3:29 PM

Archiving this: we moved CI workloads off the central storage. The issue is no longer relevant as our storage is now doing what it was initially intended to do.

Former user July 22, 2016 at 8:21 AM
Edited

For now I'll start by replicating the storage setup in the lab to work out how to safely disable DRBD mirroring so the other storage host can be rebuilt.
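Roughly the sequence I expect to validate there, run on the node being rebuilt (r0 is a placeholder resource name; the exact steps depend on how Pacemaker manages the resources):

  # hand everything to the peer before touching DRBD
  crm node standby                  # or: pcs cluster standby
  # once resources have moved, break the mirror
  drbdadm disconnect r0             # stop replication; the peer keeps serving standalone
  drbdadm down r0                   # detach the backing disk and deactivate the resource locally
  # the peer keeps UpToDate data and can resync the rebuilt node later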

Storage setup currently:
kernel-2.6.32-431.17.1.el6.x86_64
kmod-drbd84-8.4.4-1.el6.elrepo.x86_64
drbd84-utils-8.4.4-2.el6.elrepo.x86_64
pacemaker-1.1.10-14.el6_5.3.x86_64
corosync-1.4.1-17.el6_5.1.x86_64
cman-3.0.12.1-59.el6_5.2.x86_64

/dev/sda (12TB RAID50 volume)
used as the backing device for /dev/drbd0
mounted at /srv/ovirt_storage as ext4

There's a cluster set up using corosync and pacemaker to provide an NFS export from the above mountpoint via a floating IP.
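For reproducing that in the lab, the Pacemaker side presumably looks something like this crm configure sketch; the resource names, the nfsserver shared-info directory, and the floating IP (192.0.2.10) below are placeholders rather than the live values:

  primitive p_drbd ocf:linbit:drbd \
      params drbd_resource=r0 \
      op monitor interval=29s role=Master \
      op monitor interval=31s role=Slave
  ms ms_drbd p_drbd meta master-max=1 clone-max=2 notify=true
  primitive p_fs ocf:heartbeat:Filesystem \
      params device=/dev/drbd0 directory=/srv/ovirt_storage fstype=ext4
  primitive p_nfs ocf:heartbeat:nfsserver \
      params nfs_shared_infodir=/srv/ovirt_storage/.nfsinfo
  primitive p_ip ocf:heartbeat:IPaddr2 \
      params ip=192.0.2.10 cidr_netmask=24
  group g_nfs p_fs p_nfs p_ip
  colocation c_nfs_on_master inf: g_nfs ms_drbd:Master
  order o_drbd_then_nfs inf: ms_drbd:promote g_nfs:start

The colocation and order constraints pin the filesystem, NFS server, and floating IP to whichever node holds the DRBD Master role, so the export fails over as a unit.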

Won't Do

Details

Created July 21, 2016 at 11:49 AM
Updated April 2, 2017 at 12:51 PM
Resolved March 3, 2017 at 3:29 PM