Monday, 17 April 2017

Comparing fabio and traefik

I was tasked with comparing two modern dynamic load balancers: Traefik and Fabio.

Consul support

Both balancers can be configured to store their configuration in Consul and to build their routing tables from the Consul catalog.
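
For example, fabio keeps its manual routing overrides under a Consul KV path, while Traefik can read its dynamic configuration from a KV prefix. A sketch (addresses and paths are assumptions):

# fabio.properties: manual overrides live under this Consul KV path
registry.consul.addr = 127.0.0.1:8500
registry.consul.kvpath = /fabio/config

# traefik.toml: dynamic configuration from Consul KV
[consul]
endpoint = "127.0.0.1:8500"
prefix = "traefik"
watch = true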

Proxy protocol

Fabio supports the PROXY protocol.
There’s a PR in Traefik’s repo requesting this feature.
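
For context, the backend has to accept the PROXY protocol header. A minimal nginx sketch (the trusted address range is an assumption):

server {
    listen 80 proxy_protocol;        # accept PROXY protocol from fabio
    set_real_ip_from 10.0.0.0/8;     # fabio's address range (assumed)
    real_ip_header proxy_protocol;   # take the client IP from the header
}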

Let’s Encrypt

Of the two, only Traefik allows you to request certificates from Let’s Encrypt.
There’s an issue in Fabio’s repo requesting this feature.
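
A minimal sketch of what that looks like in traefik.toml (email and domain are placeholders; exact option names depend on the Traefik version):

[acme]
email = "admin@fasten.com"
storage = "acme.json"
entryPoint = "https"

[[acme.domains]]
main = "fasten.com"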

Websocket

Both balancers support WebSockets.
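
A quick way to check that the Upgrade handshake survives the proxy (host, port and path taken from the benchmark setup below):

curl -ik https://52.23.178.71:8443/nginx/ \
  -H 'Connection: Upgrade' -H 'Upgrade: websocket' \
  -H 'Sec-WebSocket-Version: 13' -H 'Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ=='
# expect "HTTP/1.1 101 Switching Protocols" if the backend speaks websocket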

TLS ciphers

Only Traefik lets you configure TLS ciphers and the minimum TLS version.
There’s an issue in Fabio’s repo requesting this.
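
In traefik.toml this is configured on the entry point. A sketch (the cipher list is just an example; check the supported names for your Traefik version):

[entryPoints]
  [entryPoints.https]
  address = ":8443"
    [entryPoints.https.tls]
    minVersion = "VersionTLS12"
    cipherSuites = ["TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"]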

Auth for admin UI

Both Traefik and Fabio provide an admin web UI, but only Traefik’s web UI can be “secured” with basic auth.
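
A sketch for traefik.toml (the user entry is a placeholder; generate a real one with htpasswd):

[web]
address = ":8888"
  [web.auth.basic]
  users = ["admin:$apr1$placeholder$hash"]  # htpasswd -nbm admin <password>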

Keep-alive

Only Fabio can be configured with keep-alive for outgoing connections.
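
In fabio.properties this appears to be a single property; treat the option name and the value below as assumptions and check the fabio documentation for your version:

proxy.keepalivetimeout = 30s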

Dropping root privileges

Neither Fabio nor Traefik drops privileges after start.
setcap 'cap_net_bind_service=+ep' $(which traefik) can be used as a workaround to bind privileged ports without running as root.
There’s an issue in Fabio’s repo.
There’s an issue in Traefik’s repo.
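
The same trick works for fabio; granting and verifying the capability looks like this:

sudo setcap 'cap_net_bind_service=+ep' $(which fabio)
getcap $(which fabio)   # fabio = cap_net_bind_service+ep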

Basic configurations

Fabio and Traefik take different approaches to exposing services to the outside world.
With Fabio, a service has to be explicitly configured to be exposed:

{
    "services": [
        {
            "name": "nginx", 
            "port": 80, 
            "tags": [
                "urlprefix-/nginx"
            ]
        }
    ]
}
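
Once that definition is dropped into Consul’s config directory (path assumed) and Consul is reloaded, the route appears and can be exercised through fabio’s HTTPS listener (port taken from the fabio config further down):

# assuming the definition above is saved as /etc/consul.d/nginx.json
consul reload
curl -sk https://localhost:9999/nginx/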

If a service is not tagged with a urlprefix- tag, it will not be exposed.
Traefik, by contrast, exposes every service it finds in Consul’s catalog with a default frontend.rule.
That may or may not be what you want, for example when there are only a couple of services you want to expose to the outside world.
To keep the rest of the services from being exposed, Traefik provides constraints:

{
    "services": [
        {
            "name": "nginx", 
            "port": 80, 
            "tags": [
                "traefik.tags=trk", 
                "traefik.frontend.rule=PathPrefix:/nginx", 
            ]
        }
    ]
}
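
With the constraint in place, only services tagged trk get a frontend. A quick check (addresses assumed, port taken from the Traefik config below):

curl -sk https://localhost:8443/nginx/   # tagged trk: routed to nginx
curl -sk https://localhost:8443/other/   # untagged service: 404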

Fabio

fabio.properties:

proxy.cs = cs=fasten;type=file;cert=fasten.com.c;key=fasten.com.key
proxy.addr = :9999;cs=fasten

Run it:

./fabio-1.4.2-go1.8.1-linux_amd64 -cfg fabio.properties

Traefik

traefik.toml:

checkNewVersion = false
defaultEntryPoints = ["https"]

[entryPoints]
  [entryPoints.https]
  address = ":8443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
      CertFile = "fasten.com.c"
      KeyFile = "fasten.com.key"

[consulCatalog]
endpoint = "127.0.0.1:8500"
constraints = ["tag==trk"]
domain = "fasten.com"

[web]
address = ":8888"
ReadOnly = true

Run it:

./traefik_linux-amd64 -c traefik.toml --debug

Benchmarks

Each balancer was given a t2.micro instance and configured like this:

sudo sysctl -w fs.file-max="9999999"
sudo sysctl -w fs.nr_open="9999999"
cat > /etc/security/limits.d/95-nofile.conf <<EOF
kostyrev soft nofile 102400
kostyrev hard nofile 102400
EOF
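
After logging back in, the new limit should be in effect:

ulimit -n   # should now print 102400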

Behind each balancer there were two t2.medium instances with nginx installed.
ab and wrk were used to perform the benchmarks.

AB

Fabio

$ ab -c 1000 -t 60 https://52.23.178.71:8443/nginx/
This is ApacheBench, Version 2.3 <$Revision: 1757674 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 52.23.178.71 (be patient)
Completed 5000 requests
Completed 10000 requests
Finished 11091 requests


Server Software:        nginx/1.10.2
Server Hostname:        52.23.178.71
Server Port:            8443
SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES128-GCM-SHA256,2048,128

Document Path:          /nginx/
Document Length:        3770 bytes

Concurrency Level:      1000
Time taken for tests:   60.024 seconds
Complete requests:      11091
Failed requests:        0
Total transferred:      44429898 bytes
HTML transferred:       42013728 bytes
Requests per second:    184.78 [#/sec] (mean)
Time per request:       5411.977 [ms] (mean)
Time per request:       5.412 [ms] (mean, across all concurrent requests)
Transfer rate:          722.85 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:      580 3121 4384.5   1736   49182
Processing:   144 1180 1945.7    631   44713
Waiting:      139  753 956.5    456   27828
Total:        828 4300 4876.4   2897   49507

Percentage of the requests served within a certain time (ms)
  50%   2897
  66%   3667
  75%   4487
  80%   5365
  90%   8403
  95%  12731
  98%  20592
  99%  28657
 100%  49507 (longest request)

Traefik

$ ab -c 1000 -t 60 https://54.144.22.55:8443/nginx/
This is ApacheBench, Version 2.3 <$Revision: 1757674 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 54.144.22.55 (be patient)
Completed 5000 requests
Completed 10000 requests
Finished 10727 requests


Server Software:        nginx/1.10.2
Server Hostname:        54.144.22.55
Server Port:            8443
SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES128-GCM-SHA256,2048,128

Document Path:          /nginx/
Document Length:        3770 bytes

Concurrency Level:      1000
Time taken for tests:   60.004 seconds
Complete requests:      10727
Failed requests:        0
Total transferred:      42949883 bytes
HTML transferred:       40616058 bytes
Requests per second:    178.77 [#/sec] (mean)
Time per request:       5593.777 [ms] (mean)
Time per request:       5.594 [ms] (mean, across all concurrent requests)
Transfer rate:          699.00 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:      547 2988 4769.6   1652   57481
Processing:   136 1037 1664.9    564   37961
Waiting:      133  729 811.3    477   23277
Total:        786 4025 5242.2   2567   59809

Percentage of the requests served within a certain time (ms)
  50%   2567
  66%   3199
  75%   3817
  80%   4378
  90%   7032
  95%  11544
  98%  21974
  99%  34017
 100%  59809 (longest request)

wrk

Fabio

$ ./wrk -t20 -c1000 -d60s --latency https://52.23.178.71:8443/nginx/
Running 1m test @ https://52.23.178.71:8443/nginx/
  20 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   416.75ms  192.19ms   1.99s    86.01%
    Req/Sec   106.37     46.31   333.00     67.31%
  Latency Distribution
     50%  365.10ms
     75%  408.54ms
     90%  646.54ms
     99%    1.21s 
  121651 requests in 1.00m, 462.32MB read
  Socket errors: connect 0, read 0, write 0, timeout 289
Requests/sec:   2026.39
Transfer/sec:      7.70MB

Traefik

$ ./wrk -t20 -c1000 -d60s --latency https://54.144.22.55:8443/nginx/
Running 1m test @ https://54.144.22.55:8443/nginx/
  20 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   478.76ms  239.89ms   2.00s    85.90%
    Req/Sec    95.34     44.76   470.00     68.26%
  Latency Distribution
     50%  385.02ms
     75%  490.87ms
     90%  768.83ms
     99%    1.47s 
  106907 requests in 1.00m, 406.29MB read
  Socket errors: connect 0, read 0, write 0, timeout 359
Requests/sec:   1779.92
Transfer/sec:      6.76MB

                    fabio    traefik
proxy protocol        +         -
letsencrypt           -         +
consul backend        +         +
websockets            +         +
tls ciphers           -         +
basic auth            -         +
config in consul      +         +
community             -         +
keepalive             +         -
benchmarks            +         -


Tuesday, 11 October 2016

Configuring Google Cloud SDK and kubectl

Quick and dirty steps:
  1. install the SDK
  2. configure kubectl:
gcloud config set container/use_client_certificate True

gcloud container clusters get-credentials cluster-name
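
After that, kubectl should be able to talk to the cluster; a quick sanity check:

kubectl cluster-info
kubectl get nodes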

Saturday, 13 August 2016

Koji build system

There are many ways you can create RPM packages for your code.
If you run an open source project you can always use Copr. It is very easy to get started with.
But when you want to build RPM packages for a company’s closed-source projects, you don’t have much of a choice:
- OBS
- Koji

OBS

OBS forces you to download an appliance with openSUSE on board, so you have to learn how openSUSE operates. For over seven years I’ve been working in RHEL/CentOS environments, which is why OBS was not an option for me.
On the other hand, OBS can build for many platforms.

Luckily for me, the team I currently work in needs to build packages just for CentOS 6/7.

Koji

Koji is the software that builds RPM packages for the Fedora project.
Koji heavily uses existing tools:
- mock
- yum
- rpmbuild
- createrepo
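
For reference, a typical build submission from the CLI looks like this (the target name is an assumption):

koji build centos7 my-package-1.0-1.el7.src.rpm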

Koji is not that easy to deploy, and its terminology takes some effort to understand.
I found these blog posts very useful.
There are also some very useful videos:
- Building RPMS: How Fedora’s Koji Works by Dennis Gilmore, Fedora Release Engineering Lead
- CentOS: Community build service by Thomas Oulevey, System Engineer at CERN

Because Ansible is the new sexy, I’ve developed a bunch of Ansible roles to deploy an all-in-one PoC setup of Koji.

For creating SRPMs and submitting tasks to Koji, we use tito configured to use its KojiReleaser.
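
A minimal sketch of the releaser wiring in .tito/releasers.conf (the Koji build tag is an assumption):

[koji]
releaser = tito.release.KojiReleaser
autobuild_tags = el7-candidate

After that, tito release koji submits the build.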

Friday, 19 February 2016

IBM: disable PXE booting on NICs

ssh USERID@X.X.X.X

system> asu set PXE.NicPortPxeMode.1 "UEFI and Legacy Support"
system> asu set PXE.NicPortPxeMode.2 "Disabled"
system> asu set PXE.NicPortPxeMode.3 "Disabled"
system> asu set PXE.NicPortPxeMode.4 "Disabled"

In my case, the settings I needed to change were the per-NIC LegacyBootProtocol entries:

system> asu show BroadcomGigabitEthernet*.LegacyBootProtocol
BroadcomGigabitEthernetBCM5719-40F2E9BA7038.LegacyBootProtocol=PXE
BroadcomGigabitEthernetBCM5719-40F2E9BA7039.LegacyBootProtocol=NONE
BroadcomGigabitEthernetBCM5719-40F2E9BA703A.LegacyBootProtocol=NONE
BroadcomGigabitEthernetBCM5719-40F2E9BA703B.LegacyBootProtocol=NONE
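
The set syntax follows the same pattern, e.g. to disable legacy PXE on one of the ports (device name copied from the output above):

system> asu set BroadcomGigabitEthernetBCM5719-40F2E9BA7039.LegacyBootProtocol "NONE"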

IBM: Force PXE booting on next reboot

ssh USERID@X.X.X.X

system> pxeboot -en enabled

Thursday, 15 January 2015

PXELinux global default

LABEL discovery
MENU LABEL Foreman Discovery
MENU DEFAULT
KERNEL boot/fdi-image/vmlinuz0
APPEND rootflags=loop initrd=boot/fdi-image/initrd0.img root=live:/fdi.iso foreman.url=https://url rootfstype=auto ro rd.live.image rd.lvm=0 rootflags=ro crashkernel=128M elevator=deadline max_loop=256 rd.luks=0 rd.md=0 rd.dm=0 nomodeset selinux=0 stateless
IPAPPEND 2

Saturday, 13 December 2014

foreman rhev-h autoinstall

DEFAULT ovirt
TIMEOUT 20
PROMPT 0
LABEL ovirt
KERNEL boot/vmlinuz0
APPEND rootflags=loop initrd=boot/initrd0.img root=live:/rhevh-6.5-20140930.1.el6ev.iso BOOTIF=link storage_init rootfstype=auto ro liveimg check local_boot_trigger=<%= foreman_url("built") %>  management_server=rhevm.example.com:443 rhevm_admin_password=$1$1OIs7Iry$7iD0YeFzWMlphfu7ar1 adminpw=$1$1OIs7Iry$7iD0YeFMlphf7Or1 ssh_pwauth=1 hostname=<%= @host %> ip=<%=@host.ip %> netmask=<%=@host.subnet.mask %> gateway=<%=@host.subnet.gateway %> dns=<%=[@host.subnet.dns_primary,@host.subnet.dns_secondary].reject{|n| n.blank?}.join(',')%> ntp=ntp.ix.ru RD_NO_LVM rd_NO_MULTIPATH rootflags=ro crashkernel=128M elevator=deadline reinstall max_loop=256 rd_NO_LUKS rd_NO_MD rd_NO_DM