Today's outage was a clear reminder that our current storage configuration does not serve us well. We hardly know how to debug it, it seems not to be resilient to the very issues it was supposed to protect against, and it introduces potential failure scenarios of its own.
I suggest we implement a new storage layout that meets the following criteria:
Ultimate simplicity at the lower levels of the stack. More specifically:
The storage servers should be simple NFS or iSCSI servers. No DRBD and no exotic filesystems.
Only simple storage will be presented to oVirt for use as storage domains.
Separation of resources between critical services - the "Jenkins" master, for example, should not share resources with the "resources" server or anything else. The separation should hold true down to the physical spindle level.
Duplication of services and use of local storage where possible - this is a longer-term effort, but we have some low-hanging fruit here, like artifactory, where simple DNS/LB-based fail-over between two identical hosts would probably suffice.
Complexity only where needed, and up the stack. For example, we can just have the storage for Jenkins be mirrored at the VM level, with fail-over to a backup VM.
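To illustrate how simple the storage-server side could be under these criteria, a plain NFS setup is only a few lines. The paths and subnet below are hypothetical examples, not our actual layout:

```shell
# /etc/exports on a storage server - paths and subnet are examples only.
# Each critical service gets its own export (and, ideally, its own spindles),
# which keeps the separation-of-resources criterion visible at the export level.
/exports/jenkins    10.0.0.0/24(rw,sync,no_subtree_check)
/exports/resources  10.0.0.0/24(rw,sync,no_subtree_check)
```

After editing the file, `exportfs -ra` applies the export table and `exportfs -v` shows what is actually being served; oVirt can then consume each export as a separate storage domain.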
Can you give an update on this? What is the plan for reprovisioning the next storage server?
Let's target it for post-4.2.3, after we upgrade HE to the latest 4.2.3 and it works well.
NFS migration complete; storage01 is shut down and can be rebuilt.
Here's the partitioning from storage02 - I will likely repeat it unless we need some other specifics:
- OS plus NFS shares
- prod systems tier 1 (e.g. the resources server)
- prod systems tier 2
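A hedged sketch of what that three-way split might look like with parted - the device name and partition sizes are assumptions for illustration, not the actual storage02 values:

```shell
# Hypothetical GPT layout mirroring the list above (sizes are made up)
parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart os_nfs    1MiB  500GiB   # OS plus NFS shares
parted -s /dev/sda mkpart tier1   500GiB 1500GiB   # prod systems tier 1
parted -s /dev/sda mkpart tier2  1500GiB    100%   # prod systems tier 2
parted -s /dev/sda print                           # verify the result
```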
Need to have VLANs configured on it, as well as to patch the BIOS and enable PXE.
Both storage servers have been rebuilt to offer block storage via iSCSI, plus NFS for some use cases. Closing the tracker ticket.
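For reference, exposing a block device over iSCSI on such a server could look roughly like this with targetcli - the IQN, backstore name, and device path are made up for illustration:

```shell
# Create a block backstore from an existing device (hypothetical path)
targetcli /backstores/block create name=ovirt_data dev=/dev/vg_storage/ovirt_data

# Create an iSCSI target and attach the backstore as a LUN
targetcli /iscsi create iqn.2018-06.org.example:storage01
targetcli /iscsi/iqn.2018-06.org.example:storage01/tpg1/luns \
    create /backstores/block/ovirt_data

# Persist the configuration across reboots
targetcli saveconfig
```

oVirt would then discover the target during storage-domain creation and use the LUN as a block storage domain.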