4

Along the lines of this MSDN article, how would you set up a pair of Ubuntu or CentOS (or RHEL) servers in a cluster so that they appear to hosted applications as a single server, but continue to work even if a node in the cluster becomes inoperable (or needs maintenance, etc.)?

I presume this is possible.

Braiam
  • 35,991
warren
  • 1,848
  • Of course it is possible, even easy depending on what you want to do. Unfortunately, your linked MSDN article only mentions the two broad concepts and you have to drill down through dozens of pages to see how to cluster the most basic services. This question is unanswerable without considerable additional detail. – msw Sep 18 '13 at 00:51
  • 1
    Here's a recent 200 page primer about how to cluster with RHEL 6, but the concepts are broadly applicable. – msw Sep 18 '13 at 00:54
  • @msw - how would you suggest narrowing this question to allow it to reopen? – warren Sep 18 '13 at 15:10
  • @warren Let me know more details about what you want, so that I can extend my answer. – Rahul Patil Sep 18 '13 at 19:05
  • @RahulPatil - I'm looking for the fundamentals of setting up a cluster for a generic service to run on .. could be mail, could be Apache, could be another app. – warren Sep 18 '13 at 20:55
  • The answer highly depends on your needs and on the type of service you want to run. Is a failover-cluster enough? Should session-information be replicated? – Nils Sep 18 '13 at 21:10
  • @Nils - I'm trying to mimic what Windows clustering can do with a nearly-live failover from one host to another in the event one server has issues (maintenance window, hardware failure, etc) – warren Sep 20 '13 at 14:40
  • @warren your MSDN article links rather to a failover-cluster. Whether that is "nearly live" depends on how fast a startup can be done on the "passive" side of the cluster. – Nils Sep 20 '13 at 21:04

2 Answers

3

You can use Red Hat Cluster Suite for this.

Let's understand a little bit about clustering

There is a different cluster for every problem. Generally speaking, though, there are two main problems that clusters try to solve: performance and high availability. Given your requirement (continuing to work even if a node in the cluster becomes inoperable or needs maintenance), high-availability clustering is what you want to set up.

High Availability clustering

The cluster will provide a shared file system and high availability. You will be able to live-migrate services during planned node outages, and services will automatically restart on a surviving node when the original host node fails.
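For example, with rgmanager (the resource-group manager in Red Hat Cluster Suite), a planned outage can be handled by relocating the managed service by hand. A minimal sketch, assuming an already-configured cluster service named webservice and member nodes node1/node2 (all placeholder names):

    # Show cluster membership and where each managed service is running
    clustat

    # Move the service to node2 before taking node1 down for maintenance
    clusvcadm -r webservice -m node2

    # Stop, and later restart, a service around risky changes
    clusvcadm -d webservice
    clusvcadm -e webservice

When a node fails unexpectedly instead, rgmanager performs the same relocation automatically, which is the "automatic restart on a surviving node" described above.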

Some Practical points

  • You can set up a staging environment in VMware ESXi or VMware Workstation to test your application
  • A minimum of 2 nodes (with shared storage, plus fencing to avoid a split-brain situation) and a maximum of 16 nodes are supported, per the Red Hat documentation; a setup sketch follows this list
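As a sketch of the command-line setup on RHEL 6.1+ with the ccs tool (assuming the ricci agent is running on both machines; democluster, node1, and node2 are placeholder names):

    # Create a two-node cluster definition via ricci on node1
    ccs -h node1 --createcluster democluster
    ccs -h node1 --addnode node1
    ccs -h node1 --addnode node2

    # Two-node clusters need special quorum handling: either node may
    # hold quorum alone, which is exactly why fencing is mandatory
    ccs -h node1 --setcman two_node=1 expected_votes=1

    # Push the configuration to the other node and start the cluster
    ccs -h node1 --sync --activate
    ccs -h node1 --startall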

Cluster Management tools

In RHEL 5.x/CentOS 5.x there are three tools:

  • Conga (manages the cluster from a web GUI)
  • system-config-cluster (a cluster administration graphical user interface (GUI))
  • ccs_tool (a command-line tool, though not all options are available)

In RHEL 6.x/CentOS 6.x the system-config-cluster tool has been deprecated and removed; Conga remains, and the ccs command (available since RHEL 6.1) takes over command-line configuration.
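To illustrate the ccs_tool route on RHEL 5, here is a minimal sketch; the cluster, node, and fence-device names are placeholders, and fence_manual is only suitable for testing, never production:

    # Write a skeleton /etc/cluster/cluster.conf for a new cluster
    ccs_tool create democluster

    # Register a fence device, then add each node attached to it
    ccs_tool addfence manualfence fence_manual
    ccs_tool addnode node1 -n 1 -f manualfence
    ccs_tool addnode node2 -n 2 -f manualfence

    # Verify the node and fence-device lists
    ccs_tool lsnode
    ccs_tool lsfence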

Rahul Patil
  • 24,711
-2

The setup is a little time consuming (as any clustering project would be, haha). I have set this up a few times at home for testing, so I know it works pretty well. I use the method described in the guide below:

Linux Virtual Servers

Let me know if you have any questions about the implementation. I ran into a few problems that required some hard thinking, but I cannot quite remember what they were (one of the side effects of getting old).
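To give a flavour of what the guide sets up: LVS is driven by ipvsadm on the director node. A minimal sketch, using documentation addresses (192.0.2.x) as placeholders for the virtual IP and two real servers:

    # Define a virtual TCP service on the VIP with round-robin scheduling
    ipvsadm -A -t 192.0.2.10:80 -s rr

    # Attach two real servers in direct-routing (gatewaying) mode
    ipvsadm -a -t 192.0.2.10:80 -r 192.0.2.11:80 -g
    ipvsadm -a -t 192.0.2.10:80 -r 192.0.2.12:80 -g

    # Show the virtual server table and live connection counts
    ipvsadm -L -n

Note that this only balances traffic across the real servers; making the director itself redundant takes something like keepalived or heartbeat on top, so treat this as a starting point rather than the whole failover story.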

  • 3
    Instead of that, why not just go ahead and explain in detail before he bumps into a problem. – Braiam Sep 18 '13 at 01:21
  • 1
    Welcome to Stack Exchange! Here it is not considered good etiquette to simply post a link. Linking is fine as long as you reproduce the content that you're linking to - summaries or easier wording is best, but exact reproductions are usually fine! – strugee Sep 18 '13 at 06:39