
I am on a RHEL7 system and I am new to HAProxy. I think I have some sort of problem. Here is the address that I'd like to use for the stats page:

[root@haproxy-el7-001 haproxy]# grep 1936 /etc/haproxy/haproxy.cfg 
  bind 10.29.103.39:1936 

Here is what my haproxy.cfg looks like around that ...

listen haproxy_stats
  bind 10.29.103.39:1936  
  mode  http   
  stats  enable
  stats  hide-version
  stats  realm Haproxy\ Statistics
  stats  uri / 
  stats  auth xxxxx:xxxxx

I have not yet set up the other services that I will be load balancing, but I would still like to be able to look at the stats page, like so ...

[root@haproxy-el7-001 haproxy]# wget http://10.29.103.39:1936
--2015-02-17 19:11:33--  http://10.29.103.39:1936/
Connecting to 10.29.103.39:1936... failed: No route to host.
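
In case it helps, here are the quick checks I have been running on the box itself (plain ss/iproute2 commands, shown without their output):

# is anything listening on port 1936?
ss -tlnp | grep 1936

# is the address configured on any interface?
ip addr show | grep 10.29.103.39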

The haproxy service is running:

[root@haproxy-el7-001 ~]# systemctl -l status haproxy
haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled)
   Active: active (running) since Tue 2015-02-17 18:47:57 EST; 16s ago
 Main PID: 16448 (haproxy-systemd)
   CGroup: /system.slice/haproxy.service
           ├─16448 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
           ├─16449 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
           ├─16450 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
           └─16451 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds

Feb 17 18:48:10 haproxy-el7-001 haproxy[16451]: Server heat_api_cluster/mgmt-el7-001 is DOWN, reason: Layer4 timeout, check duration: 10001ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 17 18:48:10 haproxy-el7-001 haproxy[16451]: backend heat_api_cluster has no server available!
Feb 17 18:48:10 haproxy-el7-001 haproxy[16450]: Server heat_api_cluster/mgmt-el7-001 is DOWN, reason: Layer4 timeout, check duration: 10001ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 17 18:48:10 haproxy-el7-001 haproxy[16450]: backend heat_api_cluster has no server available!
Feb 17 18:48:12 haproxy-el7-001 haproxy[16450]: Server keystone-admin-api/mgmt-el7-001 is DOWN, reason: Layer4 timeout, check duration: 10000ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 17 18:48:12 haproxy-el7-001 haproxy[16450]: backend keystone-admin-api has no server available!
Feb 17 18:48:12 haproxy-el7-001 haproxy[16451]: Server keystone-admin-api/mgmt-el7-001 is DOWN, reason: Layer4 timeout, check duration: 10001ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 17 18:48:12 haproxy-el7-001 haproxy[16451]: backend keystone-admin-api has no server available!
Feb 17 18:48:13 haproxy-el7-001 haproxy[16451]: Server keystone-public-api/mgmt-el7-001 is DOWN, reason: Layer4 timeout, check duration: 10001ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb 17 18:48:13 haproxy-el7-001 haproxy[16451]: backend keystone-public-api has no server available!
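
Haproxy comes up even though 10.29.103.39 is apparently not reachable, so I assume the kernel is letting it bind to a non-local address. This is the sysctl I would check for that, though I am only guessing that it is relevant here:

# 1 means processes may bind to addresses not assigned to this host
sysctl net.ipv4.ip_nonlocal_bind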

Here is the output of ip a; I do not see the VIP, 10.29.103.39, listed:

[root@haproxy-el7-001 haproxy]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:a4:77:2b brd ff:ff:ff:ff:ff:ff
    inet 10.29.103.37/26 brd 10.29.103.63 scope global ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fea4:772b/64 scope link 
       valid_lft forever preferred_lft forever
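
Do I need to add that address to an interface myself before I can reach it? I am guessing something like the following would do it by hand for a quick test (the /26 prefix is my assumption, copied from ens160), but I don't know if that is the right approach:

# temporarily add the VIP to the primary NIC (does not survive a reboot)
ip addr add 10.29.103.39/26 dev ens160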

What am I doing wrong here?

  • How did you configure the virtual interface? Did you bring it up correctly? – muru Feb 18 '15 at 00:41
  • I am not sure. I guess that is what I am doing wrong. I was unaware that I had to do anything to configure the virtual interface other than list it in haproxy.cfg. Please enlighten me. – Red Cricket Feb 18 '15 at 00:43
  • Try http://unix.stackexchange.com/q/88354/70524 or http://unix.stackexchange.com/a/152339/70524 – muru Feb 18 '15 at 00:45
  • Thanks muru! I will take a look. I just realized that I am using the puppet module "PuppetLabs Module for haproxy"; I wonder why it didn't take care of the virtual IP for me. – Red Cricket Feb 18 '15 at 02:17
  • Thanks, muru. In RHEL7 there is no eth0 anymore. The interface is now called ens160. I had to make a change in my puppet code. – Red Cricket Feb 18 '15 at 16:46
  • Could you post that as an answer, along with the modification you made? It may help others using the module. – muru Feb 18 '15 at 19:20

1 Answer


I'm using an in-house Puppet module that uses the Puppet Labs HAProxy module. The Puppet code originally looked like this ...

  keepalived::instance { 'haproxy-vip':
    advert_int        => '1',
    priority          => "$priority",
    state             => "$state",
    virtual_router_id => "$vrouter_id",
    interface         => 'eth0',
    virtual_ips       => [ $controller_vip, $swift_vip ],
    track_script      => [ 'check_haproxy' ],
  }

... and this code was not tested on RHEL7, where the interface names are different: on my RHEL7 system the primary NIC is called 'ens160' rather than 'eth0'. I changed the Puppet code to use the "interfaces" fact, like so ...

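  # the "interfaces" fact is a comma-separated list of NIC names; use the first one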
  $allnics = split( $interfaces, "," )
  $interface = $allnics[0]

  keepalived::instance { 'haproxy-vip':
    advert_int        => '1',
    priority          => "$priority",
    state             => "$state",
    virtual_router_id => "$vrouter_id",
    interface         => "$interface",
    virtual_ips       => [ $controller_vip, $swift_vip ],
    track_script      => [ 'check_haproxy' ],
  }
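
One caveat: taking element 0 of the split assumes the primary NIC comes first in the "interfaces" fact, which is worth double-checking on each box. The fact can be inspected from the shell, for example:

# print the comma-separated list of NIC names that facter reports
facter interfaces

Once keepalived is managing the VIP, 10.29.103.39 should show up in ip a on the active node and the stats URL above should answer.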