Optimize Nginx for performance

Optimization needs differ between real-life setups and not every technique here will suit yours, but I hope this is a good starting point.

Also, you shouldn’t copy-paste the examples and simply trust them to make your server fly 🙂 Back your decisions with extensive testing and the help of a monitoring system (e.g. Grafana).

Cache static and dynamic content

A caching strategy for static and dynamic content can offload your server by eliminating repeated downloads of the same rarely updated files. It will also make your site load faster for frequent visitors.

Example configuration:

location ~* ^.+\.(?:jpg|png|css|gif|jpeg|js|swf|m4v)$ {
    access_log off; log_not_found off;

    tcp_nodelay off;

    open_file_cache max=500 inactive=120s;
    open_file_cache_valid 45s;
    open_file_cache_min_uses 2;
    open_file_cache_errors off;

    expires max;
}

For additional performance gain, you may:

  • disable logging for static files,
  • disable the tcp_nodelay option – it’s useful when sending lots of small files (ideally smaller than a single TCP packet, about 1.5 KB), but images are rather large files, and sending them in full batches gives better performance,
  • play with open_file_cache – it will take some load off I/O,
  • add a long expires header.

Caching dynamic content is a harder case. Some articles are rarely updated and may sit in the cache forever, while other pages are very dynamic and shouldn’t be cached for long. Even if caching dynamic content sounds scary, it isn’t. So-called microcaching (caching for a short period of time, e.g. 1 s) is a great remedy for the Digg effect or slashdotting.

Let’s say your page gets ten views per second and you cache every page for 1 s: you will then be able to serve 90% of requests from the cache, leaving precious CPU cycles for other tasks.
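The microcaching idea can be sketched with nginx’s proxy cache. The upstream address, cache path and zone name below are assumptions, not part of the original setup:

```nginx
# Microcaching sketch: cache successful responses for just 1 second.
proxy_cache_path /var/cache/nginx/micro keys_zone=micro:10m max_size=100m;

server {
    location / {
        proxy_cache micro;
        proxy_cache_valid 200 1s;        # even 1 s absorbs traffic spikes
        proxy_cache_use_stale updating;  # serve a stale copy while refreshing
        proxy_pass http://127.0.0.1:8080;
    }
}
```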

Compress data

On your pages you should use file types that are already efficiently compressed, like JPEG, PNG or MP3. But HTML, CSS and JS can also be compressed on the fly by the web server – just enable options like these globally:

gzip on;
gzip_vary on;
gzip_disable "msie6";
gzip_comp_level 1;
gzip_proxied any;
gzip_buffers 16 8k;
gzip_min_length 50;
gzip_types text/plain text/css application/json application/x-javascript application/javascript text/javascript application/atom+xml application/xml application/xml+rss text/xml image/x-icon text/x-js application/xhtml+xml image/svg+xml;

You may also precompress these files more aggressively during the build/deploy process and use the gzip_static module to serve them without the runtime compression overhead, e.g.:

gzip_static on;

Then use script like this to compress files:

find /var/www -iname '*.js' -print0 | xargs -0 -I'{}' sh -c 'gzip -c9 "{}" > "{}.gz" && touch -r "{}" "{}.gz"'
find /var/www -iname '*.css' -print0 | xargs -0 -I'{}' sh -c 'gzip -c9 "{}" > "{}.gz" && touch -r "{}" "{}.gz"'

The compressed files must have the same timestamp as the original (uncompressed) files for Nginx to use them.
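The `touch -r` step above is what keeps the timestamps in sync; here is a quick sanity check using a throwaway file (the file name and content are illustrative):

```shell
# Create a sample file, compress it, and copy the original's timestamp.
tmpdir=$(mktemp -d)
echo "body { color: black; }" > "$tmpdir/style.css"
gzip -c9 "$tmpdir/style.css" > "$tmpdir/style.css.gz"
touch -r "$tmpdir/style.css" "$tmpdir/style.css.gz"

# Both files now report the same modification time (epoch seconds):
stat -c %Y "$tmpdir/style.css"
stat -c %Y "$tmpdir/style.css.gz"
```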

Optimize SSL/TLS

New, optimized HTTP protocol versions like HTTP/2 and SPDY require HTTPS (at least in the browser implementations). The high cost of every new SSL/TLS handshake then becomes a crucial target for further optimization.

A few steps are required for improved SSL/TLS performance.

Enable SSL session caching

Use the ssl_session_cache directive to cache the parameters used when securing each new connection, e.g.:

ssl_session_cache builtin:1000 shared:SSL:10m;

Enable SSL session tickets

Tickets store information about a specific SSL/TLS connection, so the connection can be resumed without a new full handshake, e.g.:

ssl_session_tickets on;

Configure OCSP stapling for SSL

This lowers handshake time by caching the certificate’s OCSP (revocation status) response. This is a per-site/per-certificate configuration, e.g.:

  ssl_stapling on;
  ssl_stapling_verify on;
  ssl_certificate /etc/ssl/certs/my_site_cert.crt;
  ssl_certificate_key /etc/ssl/private/my_site_key.key;
  ssl_trusted_certificate /etc/ssl/certs/authority_cert.pem;

The ssl_trusted_certificate file has to point to the trusted certificate chain – root plus intermediate certificates. It can be downloaded from your certificate provider’s site (sometimes you have to merge the files yourself).

An extensive article on this topic can be found here: https://raymii.org/s/tutorials/OCSP_Stapling_on_nginx.html

Implement HTTP/2 or SPDY

If you already have HTTPS configured, the only thing you have to do is add two options to the listen directive, e.g.:

listen 443 ssl http2; # currently http2 is preferred over spdy

# on SSL enabled vhost
ssl on;

You may also advertise, on plain-HTTP connections, that a newer protocol is available by sending this header:

add_header Alternate-Protocol 443:npn-spdy/3;

SPDY and HTTP/2 use:

  • header compression,
  • a single, multiplexed connection (carrying pieces of multiple requests and responses at the same time) rather than a separate connection for every piece of a web page.

After implementing SPDY or HTTP/2 you no longer need typical HTTP/1.1 optimizations like:

  • domain sharding,
  • resource (JS/CSS) merging,
  • image sprites.

Tune other nginx performance options

Access logs

Disable access logs where you don’t need them, e.g. for static files. You may also use the buffer and flush options of the access_log directive, e.g.:

access_log /var/log/nginx/access.log buffer=1m flush=10s;

With buffer, Nginx holds that much log data in memory before writing it to disk; flush tells Nginx how often the gathered logs should be written out.

Proxy buffering

Enabling proxy buffering can improve the performance of your reverse proxy.

When buffering is disabled, Nginx passes each response synchronously, straight to the client.

When buffering is enabled, Nginx stores the response in memory buffers (the first part sized by the proxy_buffer_size option), and if the response is too big, it is written to a temporary file.

proxy_buffering on;
proxy_buffer_size 16k;

Keepalive for client and upstream connections

Every new connection costs some handshake time and adds latency to requests. With keepalive, connections are reused without this overhead.

For client connections:

keepalive_timeout 120s;

For upstream connections:

upstream web_backend {
    server 127.0.0.1:80;
    server 10.0.0.2:80;

    keepalive 32;
}
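For upstream keepalive to actually take effect, the proxied connection has to use HTTP/1.1 with the Connection header cleared. A minimal sketch, reusing the web_backend upstream from above:

```nginx
location / {
    proxy_pass http://web_backend;
    proxy_http_version 1.1;         # keepalive requires HTTP/1.1 to the upstream
    proxy_set_header Connection ""; # strip the client's "Connection: close"
}
```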

Limit connections to some resources

Sometimes users or bots overload your service by querying it too fast. You may limit allowed connections to protect your service in such cases, e.g.:

 limit_conn_zone $binary_remote_addr zone=owncloud:1m;

server {
    # ...
    limit_conn owncloud 10;
    # ...
}

Adjust worker count

Normally Nginx starts with only one worker process; you should raise this to at least the number of CPUs. For a quad-core CPU, set in the main section:

worker_processes 4;
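A quick way to find the right number is to ask the kernel (newer nginx versions also accept worker_processes auto;, which does this for you):

```shell
# Number of CPUs the kernel reports; use this for worker_processes.
cpus=$(nproc)
echo "worker_processes $cpus;"
```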

Use socket sharding

In recent kernel and Nginx versions (at least 1.9.1) there is a new socket sharding feature. It offloads the distribution of new connections to the kernel: each worker creates its own listening socket, and the kernel assigns new connections across them as they come in.

listen 80 reuseport;

Thread pools

Thread pools are a solution for long, blocking I/O operations that could otherwise stall the whole Nginx event loop (e.g. when serving big files or using slow storage).

location / {
    root /storage;
    aio threads;
}

This helps a lot if you see many Nginx processes in the D state with high I/O wait times.
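aio threads uses a default pool; a dedicated pool can also be declared in the main context and referenced by name. The pool name and sizes below are illustrative:

```nginx
# main (top-level) context
thread_pool storage threads=32 max_queue=65536;

# then, in the location shown above:
# aio threads=storage;
```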

Tune Linux for performance

Backlog queue

If connections on your system appear to be stalling, you have to increase net.core.somaxconn. This kernel parameter sets the maximum backlog of pending sockets. The default is 128, so raising it to 1024 should be no big deal on any decent machine.

echo "net.core.somaxconn=1024" >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf
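Note that nginx also requests its own per-socket backlog (511 by default on Linux), so to benefit from the larger net.core.somaxconn, the listen directive should ask for it too, e.g.:

```nginx
listen 80 backlog=1024;
```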

File descriptors

If your system serves a lot of connections, you may hit the system-wide open file descriptor limit. Nginx uses up to two descriptors per connection, so you may have to increase fs.file-max.

echo "sys.fs.fs_max=3191256" >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf
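The per-process limit matters too; nginx can raise the limit for its own workers in the main section of the config (the value below is illustrative):

```nginx
worker_rlimit_nofile 65536;
```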

Ephemeral ports

Nginx used as a proxy opens a temporary (ephemeral) port for each upstream connection. On busy proxy servers this results in many connections in the TIME_WAIT state.
The solution is to increase the range of available ports via net.ipv4.ip_local_port_range. You may also benefit from lowering net.ipv4.tcp_fin_timeout (connections will be released faster, but be careful with that).
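In /etc/sysctl.conf the two settings could look like this (the values are examples; tune them for your traffic):

```
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_fin_timeout = 30
```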

Use reverse-proxy

This, together with the microcaching technique, deserves a separate article; I will add a link here when it is ready.

Source:
http://www.fromdual.com/huge-amount-of-time-wait-connections
https://www.nginx.com/blog/10-tips-for-10x-application-performance/
https://www.nginx.com/blog/socket-sharding-nginx-release-1-9-1/
https://www.nginx.com/blog/thread-pools-boost-performance-9x/
https://tweaked.io/guide/kernel/
https://t37.net/nginx-optimization-understanding-sendfile-tcp_nodelay-and-tcp_nopush.html

fail2ban – block wp-login.php brute force attacks

Lately I have had a lot of brute-force attacks on my WordPress blog. I used basic auth on the /wp-admin location in the nginx configuration to block them, and as a better solution I wanted to block the source IPs entirely on the firewall.

To do this, place this filter code in /etc/fail2ban/filter.d/wp-login.conf:

# WordPress brute force wp-login.php filter:
#
# Block IPs trying to authenticate in WordPress blog
#
# Matches e.g.
# 178.218.54.109 - - [31/Dec/2015:10:39:34 +0100] "POST /wp-login.php HTTP/1.1" 401 188 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0"
#
[Definition]
failregex = ^<HOST> .* "POST /wp-login.php
ignoreregex =
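fail2ban substitutes &lt;HOST&gt; with a host-matching group; we can approximate the failregex with grep to check that it catches the sample log line (the log line is shortened from the comment above):

```shell
# Simulate the wp-login failregex against the sample access-log entry.
logline='178.218.54.109 - - [31/Dec/2015:10:39:34 +0100] "POST /wp-login.php HTTP/1.1" 401 188 "-" "Mozilla/5.0"'
echo "$logline" | grep -E '^[0-9.]+ .* "POST /wp-login\.php' && echo "filter matches"
```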

Then edit your /etc/fail2ban/jail.local and add:

[wp-login]
enabled   = true
port      = http,https
filter    = wp-login
logpath   = /var/log/nginx/access.log
maxretry  = 3

Now restart fail2ban:

service fail2ban restart

All done 🙂

Nginx – enabling SPDY with freeware certificate

I was thinking about allowing access to my website over the SPDY protocol for better performance and security (and for fun, of course 🙂 ). But SPDY has one disadvantage – you need an SSL certificate signed by a known authority that will verify in common browsers. So you can’t use self-signed certificates, because everyone would see a warning when entering your site. Certs are quite expensive, so I started searching for a free one, and to my surprise I found some!

I found these two sites where you can generate free certificates for your website:

I wouldn’t trust these certification authorities enough to use them for access to my mail or other private data, but I’m fine with using them for my public websites (like my blog) to gain speed from SPDY.

Configuring cert

Fetch the Root CA and Class 1 Intermediate Server CA certificates:

wget http://www.startssl.com/certs/ca.pem
wget http://www.startssl.com/certs/sub.class1.server.ca.pem

Create a unified certificate from your certificate and the CA certificates:

cat ssl.crt sub.class1.server.ca.pem ca.pem > /etc/nginx/conf/ssl-unified.crt

Enable SPDY

Configure your nginx server to use the new key and certificate (in the global settings or a server section):

ssl on;
ssl_certificate /etc/nginx/conf/ssl-unified.crt;
ssl_certificate_key /etc/nginx/conf/ssl.key;

Then enable SPDY like that:

server {
    listen your_ip:80;
    listen your_ip:443 default_server ssl spdy;

    # other stuff
}

Advertise SPDY protocol

Now advertise SPDY with Alternate-Protocol header – add this clause in main location:

add_header Alternate-Protocol "443:npn-spdy/2";

Have fun with SPDY on your site 🙂

Preparing video files for streaming on website in MP4 and WEBM format

Some time ago I prepared a PC responsible for batch encoding of movies into formats suitable for web players (such as Video.js, JW Player, Flowplayer, etc.).

I used HandBrake for conversion to MP4 (because it was the fastest tool) and ffmpeg (aka avconv in newer versions) for two-pass encoding to WEBM.

Below are the commands I used for the conversion:

  • MP4
    HandBrakeCLI -e x264  -q 20.0 -a 1 -E faac -B 64 -6 mono -R 44.1 -D 0.0 -f mp4 --strict-anamorphic -m -x ref=1:weightp=1:subq=2:rc-lookahead=10:trellis=0:8x8dct=0 -O -i "input_file.avi" -o "output_file.mp4"
  • WEBM
    avconv -y -i "input_file.avi" -codec:v libvpx -b:v 600k -qmin 10 -qmax 42 -maxrate 500k -bufsize 1000k -threads 4 -an -pass 1 -f webm /dev/null
    avconv -y -i "input_file.avi" -codec:v libvpx -b:v 600k -qmin 10 -qmax 42 -maxrate 500k -bufsize 1000k -threads 4 -codec:a libvorbis -b:a 96k -pass 2 -f webm "output_file.webm"

Nginx configuration for MP4

I used a configuration similar to the one below for MP4 pseudo-streaming and to protect direct video URLs from being hot-linked on other sites (the links expire after some time). There is also an example of the limit_rate clause, which slows down file downloads (the limit is still twice the video streaming speed, so it should be enough).

location ~ \.m(p4|4v)$ {
  ## This must match the URI part related to the MD5 hash and expiration time.
  secure_link $arg_ticket,$arg_e;

  ## The MD5 hash is built from our secret token, the URI($path in PHP) and our expiration time.
  secure_link_md5 somerandomtext$uri$arg_e;

  ## If the hash is incorrect then $secure_link is a null string.
  if ($secure_link = "") {
    return 403;
  }

  ## The current local time is greater than the specified expiration time.
  if ($secure_link = "0") {
    return 403;
  }

  ## If everything is ok $secure_link is 1.
  mp4;
  mp4_buffer_size     10m;
  mp4_max_buffer_size 1024m;

  limit_rate          1024k;
  limit_rate_after    5m;
}
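Per the ngx_http_secure_link_module documentation, the value passed in the request is the base64url-encoded MD5 of the secure_link_md5 expression. A sketch for generating a matching link (the path is hypothetical; the secret must equal the one in the config above):

```shell
# Build a matching "ticket" for the secure_link/secure_link_md5 pair above.
secret='somerandomtext'
uri='/videos/clip.mp4'                # hypothetical video path
expires=$(( $(date +%s) + 3600 ))     # link valid for one hour

# MD5 of secret+uri+expiry, base64url-encoded without padding:
ticket=$(printf '%s' "${secret}${uri}${expires}" \
  | openssl md5 -binary | openssl base64 | tr '+/' '-_' | tr -d '=')

echo "${uri}?ticket=${ticket}&e=${expires}"
```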

Source:
http://nginx.org/en/docs/http/ngx_http_mp4_module.html
http://wiki.nginx.org/HttpSecureLinkModule

Nginx – useful rewrites and various tricks

I have grown to like Nginx and I use it in more and more ways. I have managed to configure a few things quite nicely with it, and I decided to collect these examples so that the next time I reach for them I don’t have to dig through configs across servers 🙂

A word of introduction

Some rewrites end with a ? character – why?
Nginx automatically appends query parameters to the end of a rewritten address. However, if we use the $request_uri variable, it already contains the query parameters (everything in the URI after the ? character), and adding a question mark right after this variable prevents the arguments from being duplicated.
This also applies when we want a rewrite to point, for example, at the home page without any additional arguments (they will be cut off).
More on this can be found in the Nginx documentation.

Another remark worth mentioning concerns a small optimization to keep in mind when writing rewrites (you can find plenty of poor examples on the web): it is best to first build something that works (and with low traffic that may be enough) and optimize later – I have tried to optimize my examples according to recommended practices.
So instead of writing:

rewrite ^(.*)$ $scheme://www.domain.com$1 permanent;

it is better to write:

rewrite ^ $scheme://www.domain.com$request_uri? permanent;

(we don’t store the matched value – lower memory use and a lighter regexp evaluation).
And better still:

return 301 $scheme://www.domain.com$request_uri;

(we don’t use regexps at all, so there is practically zero processing overhead) – thanks for the tip: lukasamd.

Redirecting an old domain to a new one

server {
    listen 80;
    server_name old-domain.com www.old-domain.com;
    return 301 $scheme://www.new-domain.com$request_uri;
    # rewrite ^ $scheme://www.new-domain.com$request_uri? permanent;
    # or
    # rewrite ^ $scheme://www.new-domain.com? permanent;
}

Using return here is slightly more optimal, because it doesn’t engage the regexp engine at all, and in this situation it is sufficient.
The first commented rewrite line, with $request_uri, also carries the request parameters over to the new location, which makes perfect sense when, despite the domain change, the site structure hasn’t changed much.
If the site has changed, we may decide to redirect without parameters – simply to the home page – which is what the second commented line does.
In both cases the permanent parameter forces the HTTP 301 (Moved Permanently) redirect code, which helps crawlers realize the change is permanent.

Adding WWW at the beginning of a domain

server {
    listen 80;
    server_name domain.com;
    return 301 $scheme://www.domain.com$request_uri;
    #rewrite ^ $scheme://www.domain.com$request_uri? permanent;
    # or
    #rewrite ^(.*)$ $scheme://www.domain.com$1 permanent;
}

The commented-out example is, according to the documentation, less optimal, but it will also work. The rest is simple and self-describing 🙂
And here is an even more generic version for multiple domains:

server {
    listen 195.117.254.80:80;
    server_name domain.pl domain.eu domain.com;

    return 301 $scheme://www.$http_host$request_uri;
    #rewrite ^ $scheme://www.$http_host$request_uri? permanent;
}

This version uses the $http_host variable to redirect to the domain from the request ($http_host also contains the port number if it is non-standard, e.g. 8080, unlike $host, which contains only the domain).

Removing WWW from the beginning of a domain

server {
    listen 80;
    server_name www.domain.com;
    return 301 $scheme://domain.com$request_uri;
    #rewrite ^ $scheme://domain.com$request_uri? permanent;
}

Sometimes yet another snippet comes in handy, when a site runs on many domains and we want to redirect them all:

server {
    server_name www.domain.com _ ;
    # server_name www.domain1.com www.domain2.com www.domain3.eu www.domain.etc.com;

    if ($host ~* www\.(.*)) {
        set $pure_host $1;
        return 301 $scheme://$pure_host$request_uri;
        #rewrite ^ $scheme://$pure_host$request_uri? permanent;
        #rewrite ^(.*)$ $scheme://$pure_host$1 permanent;
    }
}

Although this approach is not recommended (despite its brevity). It is better to define two server blocks – one for the www.* domains and one for the bare domains. But on the other hand it is damn convenient… 😉

Redirecting “remaining” requests to a default domain

server {
    listen 80 default;
    server_name _;
    rewrite ^ $scheme://www.domena.com;
    #rewrite ^ $scheme://www.domena.com/search/$host;
}

This is a very useful example – a default vhost that catches all requests for domains not defined in the configuration and redirects them to our “main site”.
The commented-out example is a bit over-engineered, because it tries to use the site’s own search engine to find “something” helpful – be careful with it, because if the server receives many bad requests, it can get overloaded with nonsense searches.

Redirecting certain subpages after a site restructuring

server {
    listen 80;
    server_name www.domain.com;

    location / {
        try_files $uri $uri/ @rewrites;
    }

    location @rewrites {
        rewrite /tag/something  $scheme://new.domain.com permanent;
        rewrite /category/hobby /category/painting permanent;
        # etc ...

        rewrite ^ /index.php last;
    }
}

The older the site, the more links accumulate that simply cannot be dropped but which, after the changes, have no place in the new layout. It is worth redirecting them to new locations, or to the closest matching ones. The growing list of redirects can soon become a problem, cluttering the configuration.
The approach above organizes such redirects fairly optimally – first it checks whether we are requesting existing files; if not, it drops us into the redirect list; and if nothing matches there either, the request is passed on to the site’s main script.

Redirecting based on the value of a URI parameter

if ($args ~ producent=toyota){
    rewrite ^ $scheme://toyota.domena.com$request_uri? permanent;
}

This redirect is rarely used, and on top of that it is not very readable and reportedly not very efficient… But it can be very handy when we want to rewrite an address depending on a parameter’s value, e.g. when some subpage grows into a whole new site, or when we want to neatly redirect addresses from an old site to a new one.
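A somewhat cleaner variant of the same redirect uses the built-in $arg_producent variable with a map, keeping the lookup table out of the rewrite logic (domain names as in the example above; the map block belongs in the http context):

```nginx
map $arg_producent $producent_host {
    default "";
    toyota  "toyota.domena.com";
}

server {
    # ...
    if ($producent_host != "") {
        return 301 $scheme://$producent_host$request_uri;
    }
}
```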

Blocking access to hidden files

location ~ /\. { access_log off; log_not_found off; deny all; }

I admit – this is not a rewrite… But this line is just as useful – it blocks downloading of hidden files (e.g. .htaccess files left over from an Apache configuration).

Disabling logging for robots.txt and favicon.ico

location = /favicon.ico { try_files /favicon.ico =204; access_log off; log_not_found off; }
location = /robots.txt  { try_files /robots.txt =204; access_log off; log_not_found off; }

Not a rewrite either – but it handles both cases nicely, whether the two files exist or not. It disables logging and serves the files when they are available. When they don’t exist, it serves empty responses (code 204), so 404s don’t bother us 🙂

Blocking access to images from unknown referers

location ~* ^.+\.(?:jpg|png|css|gif|jpeg|js|swf)$ {
    # define the valid referers
    valid_referers none blocked *.domain.com domain.com;
    if ($invalid_referer)  {
        return 444;
    }
    expires max;
    break;
}

This protection is worth next to nothing, because it is trivial to bypass – but if someone decides to use graphics from our site, e.g. in an online auction or their own shop, this simple trick can cut them off, and usually that is enough.
I must also point out the special meaning of error code 444 in Nginx – it drops the connection without sending any response. If we don’t want to be so cruel, we can use another code, e.g. 403 or 402 🙂

Redirecting the curious into a “dark hole”

location ~* ^/(wp-)?admin(istrator)?/?  {
    rewrite ^ http://whatismyipaddress.com/ip/$remote_addr redirect;
}

This simple redirect dissuades many amateurs from probing our site too deeply… And it will surely amuse the rest 🙂

Other configuration examples on my site:
Nginx – hide server version and name in Server header and error pages
Nginx – compressing files for gzip_static
Nginx – WordPress configuration
Nginx – setting the default vhost
Nginx – my default config

Sources:
http://www.engineyard.com/blog/2011/useful-rewrites-for-nginx/
http://wiki.nginx.org/HttpRewriteModule