Saturday, June 30, 2018

Proxy Limiting


Introduction
------------

- Nginx and Nginx Plus can limit the following:
    a. number of connections per key value (e.g. per IP address)
    b. request rate per key value (requests per second or per minute)
    c. download speed (bandwidth) per connection

Limiting Number of Connections
------------------------------

setup
http {
    limit_conn_zone $binary_remote_addr zone=addr:10m;
    # $binary_remote_addr -> key (client IP address in binary form)
    # zone                -> shared memory used by worker processes to keep counters per key
    # addr                -> zone name
    # 10m                 -> zone size
    [...]

    server {
        [...]
        # limits each client IP to 1 concurrent connection on this location
        location /download/ {
            limit_conn addr 1;
        }
    }
}
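
The key doesn't have to be the client address; keying the zone on
`$server_name` instead caps the total number of concurrent connections to a
virtual server. A minimal sketch (the 1000-connection cap is an arbitrary
choice, not from the notes above):
http {
    limit_conn_zone $server_name zone=servers:10m;

    server {
        # caps concurrent connections to this virtual server at 1000
        limit_conn servers 1000;
    }
}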

Limiting Request Rate
---------------------

setup
http {
    [...]
    # limits each client IP to 1 request per second (use r/m for requests per minute)
    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
    [...]

    server {
        [...]
        location /download/ {
            # Limits requests to this location to 1r/s per client IP. Requests
            # that exceed the rate are put into a queue; `burst` is the maximum
            # number of queued requests awaiting processing. Requests that
            # exceed the `burst` value are rejected with a 503 error.
            limit_req zone=one burst=5;

            # Use this instead to serve queued requests immediately rather
            # than delaying them to the configured rate
            # limit_req zone=one burst=5 nodelay;
        }
    }
}
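
A quick way to watch the queue and 503 behaviour is to fire a burst of
requests and look at the status codes; a rough check, assuming the server
above answers on server.home.net and a file.bin exists under /download/:
# with rate=1r/s and burst=5, the first request is served immediately,
# the next 5 are queued (and delayed), the rest get 503
$ for i in $(seq 1 10); do curl -s -o /dev/null -w "%{http_code}\n" http://server.home.net/download/file.bin; done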

Limiting bandwidth
------------------

limits download speed to 50k/s
but multiple connections are
still allowed
  location ~ \.bin {
    root /data;
    limit_rate 50k;
  }
limits download speed and, at the
same time, allows only 1 connection
per IP address
http {
  [...]
  limit_conn_zone $binary_remote_addr zone=addr:10m;
  [...]
}

server {
  [...]
  location ~ \.bin {
    root /data;
    limit_conn addr 1;
    limit_rate 50k;
  }
  [...]
}
limits bandwidth only after a certain
amount has been downloaded (e.g.
useful when the client needs the
file header at full speed)
location ~ \.bin {
  [...]
  limit_rate_after 500k;
  limit_rate 20k;
  [...]
}
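
To confirm the throttle, downloading one of the .bin files and watching the
reported speed is usually enough; a rough check, assuming a hypothetical
big.bin exists under /data and the server answers on server.home.net:
# wget prints the transfer rate; after the first 500k it should
# settle towards the 20k/s cap set by limit_rate
$ wget -O /dev/null http://server.home.net/big.bin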

Friday, June 29, 2018

HTTP Authentication (Nginx)


Introduction
------------

- available in both Nginx Open Source and Nginx Plus
- requires a password file creation tool (e.g. apache2-utils / httpd-tools)
- can be combined with access restrictions based on:
    a. IP address
    b. geographical location (a rough sketch follows this list)
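
For the geographical case, one possible sketch uses the geoip module
(requires a build with `--with-http_geoip_module` and a GeoIP country
database; the database path, country codes, and location name below are
assumptions, not part of these notes):
http {
    # resolves the client's country code from the GeoIP database
    geoip_country /usr/share/GeoIP/GeoIP.dat;   # path depends on your distro

    # maps the country code to a yes/no flag
    map $geoip_country_code $allowed_country {
        default no;
        US      yes;
        PH      yes;
    }

    server {
        location /payroll/ {
            # reject clients outside the allowed countries
            if ($allowed_country = no) {
                return 403;
            }
            auth_basic "restricted area";
            auth_basic_user_file /etc/nginx/.htpasswd_users;
        }
    }
}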

Setting up authentication
-------------------------

1. Install the htpasswd utility (package httpd-tools on RHEL/CentOS,
   apache2-utils on Debian/Ubuntu)
     # yum install httpd-tools

2. Create a password file containing the 1st user
     # htpasswd -c /etc/nginx/.htpasswd_users user1

3. Add another user if needed
     # htpasswd /etc/nginx/.htpasswd_users user2

4. Make sure the SELinux context (if SELinux is enabled) and the permissions
   on the password file are correct
     # chown nginx /etc/nginx/.htpasswd_users
     # restorecon -Rv /etc/nginx

5. Add the following directives on the location you wish to protect
  location ~ \.(pdf|PDF) {
    root /payroll;
    auth_basic "restricted area";
    auth_basic_user_file /etc/nginx/.htpasswd_users;
  }

6. Restart nginx
     # systemctl restart nginx

7. Try downloading a file from that location
     # wget http://server.home.net/01-01-1970.pdf --user=user1 --password=pass123
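
The same check can be done with curl; without credentials the server should
answer 401, with valid credentials 200 (the URL reuses the hypothetical one
from the wget example above):
# expect 401 without credentials
$ curl -s -o /dev/null -w "%{http_code}\n" http://server.home.net/01-01-1970.pdf
# expect 200 with valid credentials
$ curl -s -o /dev/null -w "%{http_code}\n" -u user1:pass123 http://server.home.net/01-01-1970.pdf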

Common configurations
---------------------

limiting access to the whole website
server {
    ...
    # Restrict access to all locations below
    auth_basic  "My personal files";
    auth_basic_user_file /etc/nginx/.htpasswd_users;

    location ~ \.(mp3|mp4) {
      root /music;
    }

    location ~ \.(jpg|png) {
      root /pictures;
    }
}
bypassing `server` level authentication
server {
    ...
    # Restrict access to all locations below except for 1 location
    auth_basic  "My personal files";
    auth_basic_user_file /etc/nginx/.htpasswd_users;

    location ~ \.(mp3|mp4) {  # this will require credentials
      root /music;
    }

    location ~ \.(jpg|png) {  # this will require credentials
      root /pictures;
    }

    location /public {
      root /downloads;
      auth_basic off;  # this wouldn't require credentials
    }
}
Restricting by either source IP or
credentials
location ~ \.txt {
  satisfy any;         # either a matching source IP or valid credentials is enough
  allow 192.168.1.11;  # Nginx will allow this source IP only
  deny all;            # everyone else is denied

  root /files/public;

  auth_basic "my personal files";
  auth_basic_user_file /etc/nginx/.htpasswd_users;
}
Restricting by both source IP and
credentials
location ~ \.txt {
  satisfy all;         # both the source IP and the credentials must match
  allow 192.168.1.11;  # Nginx will allow this source IP only
  deny all;            # everyone else is denied

  root /files/public;

  auth_basic "my personal files";
  auth_basic_user_file /etc/nginx/.htpasswd_users;
}

Thursday, June 28, 2018

Securing Proxy HTTP/TCP Traffic


Secure HTTP
-----------

- traffic between the Nginx proxy server and the upstream server can also be encrypted

CLIENT -- SSL --> PROXY SERVER (nginx) -- SSL --> UPSTREAM SERVER (Apache, Nginx, etc..)

1. Get an SSL client certificate for the proxy server
   (self-signed or signed by a CA)
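
If a self-signed certificate is enough for testing, something like this
generates the client certificate/key pair referenced in step 2 (the CN is an
arbitrary example, not from these notes):
# 365-day self-signed certificate and unencrypted key for the proxy
$ openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -keyout /etc/nginx/client.key -out /etc/nginx/client.pem \
    -subj "/CN=proxy.example.com"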
2. Configure proxy server
location /upstream {
    proxy_pass https://backend.example.com;
    # client certificate/key that the proxy presents to the upstream server
    proxy_ssl_certificate     /etc/nginx/client.pem;
    proxy_ssl_certificate_key /etc/nginx/client.key;

    # must be in PEM format
    proxy_ssl_trusted_certificate /etc/nginx/trusted_ca_cert.crt;

    # verify the upstream server's certificate against the trusted CA
    proxy_ssl_verify       on;
    proxy_ssl_verify_depth 2;

    # reuses previous SSL sessions, which reduces the number of handshakes
    proxy_ssl_session_reuse on;

    proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    proxy_ssl_ciphers   HIGH:!aNULL:!MD5;
}
3. Configure upstream server
server {
    listen              443 ssl;
    server_name         backend1.example.com;

    # the upstream server's own certificate and key,
    # presented to the proxy during the SSL handshake
    ssl_certificate     /etc/ssl/certs/server.crt;
    ssl_certificate_key /etc/ssl/certs/server.key;

    # CA certificate used to verify the proxy's client certificate
    # (client.pem above); set ssl_verify_client to `on` to require it
    ssl_client_certificate /etc/ssl/certs/ca.crt;
    ssl_verify_client      off;

    location /yourapp {
        ...
    }
}

Secure TCP
----------

- TCP connections to upstream servers (a group of proxied/backend servers) can
  also be secured using SSL
- requirements:
    a. Nginx PLUS R6 and later, or Nginx Open Source compiled with
       `--with-stream` and `--with-stream_ssl_module`
    b. an upstream group of servers / proxied TCP servers
    c. an SSL certificate and a private key
- setup is similar to securing HTTP traffic to upstream servers, but the
  `stream` context is used instead

stream {

    upstream backend {
        server backend1.example.com:12345;
        server backend2.example.com:12345;
        server backend3.example.com:12345;
    }

    server {
        listen     12345;
        proxy_pass backend;
        proxy_ssl  on;

        proxy_ssl_certificate         /etc/ssl/certs/backend.crt;
        proxy_ssl_certificate_key     /etc/ssl/certs/backend.key;
        proxy_ssl_protocols           TLSv1 TLSv1.1 TLSv1.2;
        proxy_ssl_ciphers             HIGH:!aNULL:!MD5;
        proxy_ssl_trusted_certificate /etc/ssl/certs/trusted_ca_cert.crt;

        proxy_ssl_verify        on;
        proxy_ssl_verify_depth  2;
        proxy_ssl_session_reuse on;
    }
}
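
To confirm the binary was actually built with the stream modules listed in the
requirements above, a quick check of the configure arguments helps (a sketch;
output depends on how your Nginx was built):
# the --with-stream and --with-stream_ssl_module flags should appear
# if the modules were compiled in
$ nginx -V 2>&1 | grep -o -- '--with-stream[a-z_]*'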


Wednesday, June 27, 2018

SSL basics in Nginx


Setting up an HTTPS Server
--------------------------

Standard settings
server {
    listen              443 ssl;
    server_name         www.example.com;
    ssl_certificate     www.example.com.crt;  # server certificate (public, sent to clients)
    ssl_certificate_key www.example.com.key;  # private key (must be kept secret)
    ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers         HIGH:!aNULL:!MD5;
    ...
}

HTTPS Server Optimization
-------------------------

- SSL operations consume extra CPU resources
- the most CPU-intensive operation is the SSL handshake
- ways to minimize number of operations per client
    a. enable keepalive connections to send several requests via one connection
    b. reuse SSL session parameters to avoid SSL handshakes for parallel and
       subsequent connections
- sessions are stored in the SSL session cache shared between worker
  processes and configured by the `ssl_session_cache` directive
- one megabyte of cache contains about 4000 sessions
- default cache timeout is 5 minutes
- timeout can be increased using the `ssl_session_timeout` directive

Sample config
worker_processes auto;

http {
    ssl_session_cache   shared:SSL:10m;  # 10 MB ssl cache size
    ssl_session_timeout 10m;  # 10 minute timeout

    server {
        listen              443 ssl;
        server_name         www.example.com;
        keepalive_timeout   70;

        ssl_certificate     www.example.com.crt;
        ssl_certificate_key www.example.com.key;
        ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers         HIGH:!aNULL:!MD5;
        ...
    }
}

Optimizing SSL Session Cache
----------------------------

- the SSL session cache reduces the number of SSL handshakes (improves performance)
- it is set via the `ssl_session_cache` directive
- by default, Nginx uses the built-in session cache of the SSL library
    * not optimal
    * only 1 worker process can use it
    * causes memory fragmentation
- the `shared` cache type can be used instead to share the cache among all
  worker processes
    * speeds up subsequent connections
    * the session parameters are already known
- 1 MB of cache can hold about 4,000 sessions
- by default, Nginx retains cached session parameters for 5 minutes
    * can be changed using `ssl_session_timeout`
    * increasing the value reduces the number of time-consuming handshakes
    * when the timeout is increased, the cache size should also be increased

simple setup (built-in cache)
ssl_session_cache builtin;
cache shared among workers
ssl_session_cache shared:SSL:1m;
4 hour SSL session timeout
ssl_session_timeout 4h;
combined config
server {
   
    ssl_session_cache   shared:SSL:20m;
    ssl_session_timeout 4h;
}
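
One way to verify that sessions are actually being reused is openssl's
-reconnect switch, which reopens the connection several times with the same
session; a rough check against the example host above:
# "Reused" lines indicate the session cache is working;
# "New" on every attempt means it is not
$ openssl s_client -connect www.example.com:443 -reconnect < /dev/null 2>/dev/null | grep -E '^(New|Reused)'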

SSL Certificate Chains
----------------------

- a collection of certificates concatenated into a single certificate file
- order is important; nginx will report an error if it is incorrect
- the chained certificate file is passed to the `ssl_certificate` directive
- some certificate authorities sign server certificates using an intermediate
  certificate and in this case the CA provides a bundle of chained certificates
  that should be concatenated to the signed server certificate

creating a certificate chain
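a minimal sketch, assuming the CA handed you a bundle file named bundle.crt
(the server certificate must come first, the intermediates after it):
$ cat www.example.com.crt bundle.crt > www.example.com.chained.crt

# then point ssl_certificate at the combined file
ssl_certificate www.example.com.chained.crt;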
error message received when
the certificate chain is in
the wrong order
SSL_CTX_use_PrivateKey_file(" ... /www.example.com.key") failed
   (SSL: error:0B080074:x509 certificate routines:
    X509_check_private_key:key values mismatch)
checks to see if your HTTP
server sends the complete
certificate chain to your
clients

NOTE: if the bundle cert has
      not been added, only
      certificate #0 will be
      shown
$ openssl s_client -connect www.godaddy.com:443
...
Certificate chain
 0 s:/C=US/ST=Arizona/L=Scottsdale/1.3.6.1.4.1.311.60.2.1.3=US
     /1.3.6.1.4.1.311.60.2.1.2=AZ/O=GoDaddy.com, Inc
     /OU=MIS Department/CN=www.GoDaddy.com
     /serialNumber=0796928-7/2.5.4.15=V1.0, Clause 5.(b)
   i:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc.
     /OU=http://certificates.godaddy.com/repository
     /CN=Go Daddy Secure Certification Authority
     /serialNumber=07969287
 1 s:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc.
     /OU=http://certificates.godaddy.com/repository
     /CN=Go Daddy Secure Certification Authority
     /serialNumber=07969287
   i:/C=US/O=The Go Daddy Group, Inc.
     /OU=Go Daddy Class 2 Certification Authority
 2 s:/C=US/O=The Go Daddy Group, Inc.
     /OU=Go Daddy Class 2 Certification Authority
   i:/L=ValiCert Validation Network/O=ValiCert, Inc.
     /OU=ValiCert Class 2 Policy Validation Authority
     /CN=http://www.valicert.com//emailAddress=info@valicert.com

Setting up an HTTP/HTTPS server
-------------------------------

- Nginx can be set up to act as both an HTTP and HTTPS server
- the config below works on version 0.7.14 and above (when the `ssl`
  parameter of the `listen` directive was introduced)

setup
server {
    listen              80;
    listen              443 ssl;
    server_name         www.example.com;
    ssl_certificate     www.example.com.crt;
    ssl_certificate_key www.example.com.key;
    ...
}

Name-based HTTPS Servers
------------------------

- if 2 or more HTTPS servers are configured to listen on 1 IP address, the
  browser receives the default server's certificate regardless of the
  requested server name
- this happens because the SSL handshake occurs before the client browser
  sends an HTTP request; thus, Nginx doesn't know the server_name requested
  by the client and can only offer the default server's certificate

server {
    listen          443 ssl;
    server_name     www.example.com;
    ssl_certificate www.example.com.crt;
    ...
}

server {
    listen          443 ssl;
    server_name     www.example.org;
    ssl_certificate www.example.org.crt;
    ...
}

- the most robust way to resolve that issue is to assign a separate IP address
  to every HTTPS server

server {
    listen          192.168.1.1:443 ssl;
    server_name     www.example.com;
    ssl_certificate www.example.com.crt;
    ...
}

server {
    listen          192.168.1.2:443 ssl;
    server_name     www.example.org;
    ssl_certificate www.example.org.crt;
    ...
}

SSL Certificate with Several names
----------------------------------

Ways of sharing a single IP address among several HTTPS servers:

1. certificate with several names in the `SubjectAltName` certificate field
    - e.g: www.example.com and www.example.org
    - this field has a limited length

2. certificate with a wildcard name
    - e.g: *.example.org
    - a wildcard matches only one subdomain level
    - matches: www.example.org
    - doesn't match: example.org or www.sub.example.org

3. Using TLS Server Name Indication (SNI, RFC 6066)
    - allows a browser to pass the requested server name during the SSL handshake
    - supported by most modern browsers, starting from these versions:
        * Opera 8.0
        * MSIE 7.0 (only on Windows Vista or higher)
        * Firefox 2.0 and other browsers using Mozilla Platform rv:1.8.1
        * Safari 3.2.1 (Windows version supports SNI on Vista or higher)
        * Chrome (Windows version supports SNI on Vista or higher, too)
    - only domain names can be passed
    - requirements:
        * OpenSSL 0.9.8f or later built with the `--enable-tlsext` config option
        * since OpenSSL 0.9.8j, this option is enabled by default


It's better to place a certificate
with several names, together with
its private key, at the `http` level
so that all servers inherit a single
memory copy
ssl_certificate     common.crt;
ssl_certificate_key common.key;

server {
    listen          443 ssl;
    server_name     www.example.com;
    ...
}

server {
    listen          443 ssl;
    server_name     www.example.org;
    ...
}
Nginx w/ SNI support
$ nginx -V
...
TLS SNI support enabled
...
warning when the OpenSSL library in
use has no tlsext (SNI) support
NGINX was built with SNI support, however, now it is linked
dynamically to an OpenSSL library which has no tlsext support,
therefore SNI is not available
Other SNI compatibility notes (from the Nginx docs)

- The SNI support status has been shown by the “-V” switch since versions 0.8.21 and 0.7.62.
- The ssl parameter of the listen directive has been supported since version 0.7.14. Prior to version 0.8.21 it
  could only be specified along with the default parameter.
- SNI has been supported since version 0.5.32.
- The shared SSL session cache has been supported since version 0.5.6.
- From versions 0.7.65, 0.8.19 and later the default SSL protocols are SSLv3, TLSv1, TLSv1.1, and TLSv1.2 (if
  supported by the OpenSSL library).
- For versions 0.7.64, 0.8.18 and earlier the default SSL protocols are SSLv2, SSLv3, and TLSv1.
- From version 1.0.5 and later the default SSL ciphers are HIGH:!aNULL:!MD5.
- From versions 0.7.65, 0.8.20 and later the default SSL ciphers are HIGH:!ADH:!MD5.
- From version 0.8.19 the default SSL ciphers are ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM.
- For versions 0.7.64, 0.8.18 and earlier the default SSL ciphers are ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP.

Tuesday, June 26, 2018

Serving Static Content


Overview on `root` directive
----------------------------

- specifies the directory to search for files (similar to DocumentRoot in Apache)
- can be placed at the http, server, or location level


server {
    root /www/data;
      # determines the root directory for the first 2 locations

    location / {
    }

    location /images/ {
    }

    location ~ \.(mp3|mp4) {
        root /www/media;
          # this location specifies its own root directory
    }
}

Using index files
-----------------

- if the requested URI ends with `/`, such as http://webserver.com/images/,
  nginx treats it as a request for a directory and looks for an index file
- the `index` directive determines the filename of the index file
  (default: index.html)
- if nginx cannot find an index file, it returns HTTP code 404


returns a directory listing instead
of a 404 error when no index
file is found
location /images/ {
    autoindex on;
}
lists more than 1 filename for
index file (searches them in order)
location / {
    index index.$geo.html index.htm index.html;
      # `$geo` value depends on client's address
}
if index.html is not found but
index.php exists, the request is
internally redirected to /index.php
and handled by the second location
location / {
    root /data;
    index index.html index.php;
}

location ~ \.php {
    fastcgi_pass localhost:8000;
    ...
}


Trying Several Options
----------------------

- this is done using the `try_files` directive

serves default.gif if the file
requested by URI is not found
server {
    root /www/data;
    location /images/ {
        try_files $uri /images/default.gif;
    }
}
returns a 404 error if none of
the given options resolves to an
existing file or directory
location / {
    try_files $uri $uri/ $uri.html =404;
}
passes the request to a backend
server when the requested file
is not found
location / {
    try_files $uri $uri/ @backend;
}
location @backend {
    proxy_pass http://backend.example.com;
}


Optimizing Speed for Static Content
-----------------------------------

Enabling sendfile

 - by default, nginx handles file transmission itself and copies the
   file into a buffer before sending
 - enabling sendfile eliminates this extra copying step
 - sendfile copies data directly from one file descriptor to another
   (inside the kernel)
location /mp3 {
    sendfile           on;
      # enables sendfile

    sendfile_max_chunk 1m;
      # limits the data transferred in a single sendfile() call, so one
      # fast connection cannot monopolize the worker process
    ...
}
Enabling tcp_nopush

 - enables nginx to send HTTP response
   headers in one packet right after the
   chunk of data has been obtained by
   sendfile
location /mp3 {
    sendfile   on;
    tcp_nopush on;
    ...
}
Enabling tcp_nodelay

 - this disables Nagle's algorithm
 - Nagle's algorithm was designed for slow
   connections; it consolidates a number of
   small packets into a larger one and sends
   it with up to a 200 ms delay
 - `tcp_nodelay` is on by default, meaning
   Nagle's algorithm is disabled
location /mp3  {
    tcp_nodelay       on;
    keepalive_timeout 65;
    ...
}
Optimizing backlog queue

 - when a connection is established, it is
   first placed in the "listen" (backlog)
   queue of the listening socket until the
   server accepts it

to measure current listen queue:
netstat -Lan

example output:
Current listen queue sizes (qlen/incqlen/maxqlen)
Listen         Local Address        
0/0/128        *.12345           
10/0/128        *.80      
0/0/128        *.8080

the output above tells us that there are 10 unaccepted connections on port 80,
with a queue limit of 128 connections

if the output is like this:
Current listen queue sizes (qlen/incqlen/maxqlen)
Listen         Local Address        
0/0/128        *.12345           
192/0/128        *.80      
0/0/128        *.8080

the number of unaccepted connections exceeds the limit, which is
common when the website is experiencing heavy traffic
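
note: the `-L` flag to netstat is a BSD option; on Linux a roughly equivalent
view comes from iproute2, where (for listening sockets) Recv-Q is the current
queue length and Send-Q is the configured backlog:
$ ss -lnt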

to increase the limit on the OS side (note the append, `>>`, so the existing
sysctl settings are kept):
echo 'net.core.somaxconn = 4096' >> /etc/sysctl.conf
sysctl -p

then increase the limit on the Nginx side afterwards:
server {
    listen 80 backlog=4096;
    # reload nginx after this change
 ...
}