Nginx provides three load-balancing methods, and each of them lets you assign weights so that servers with better performance handle a larger share of the connections. The methods (each sketched briefly after this list) are:
- round-robin — requests are distributed to the servers in turn (the default)
- least-connected — new connections are sent to the web application with the fewest active connections
- ip-hash — requests are distributed according to the client's IP address
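
As a quick reference, here is a minimal sketch of how each method is declared in an `upstream` block; the upstream names and backend addresses are placeholders, and full working configurations follow in the scenarios below.

```nginx
# round-robin (default): no extra directive needed; weights are optional
upstream app_round_robin {
    server localhost:5000 weight=2;
    server localhost:5010 weight=1;
}

# least-connected: add the least_conn directive
upstream app_least_conn {
    least_conn;
    server localhost:5000;
    server localhost:5010;
}

# ip-hash: add the ip_hash directive
upstream app_ip_hash {
    ip_hash;
    server localhost:5000;
    server localhost:5010;
}
```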
As a starting point, let's take the `your_app_name.conf` configuration file from my earlier article [How to deploy a .NET Core application on Ubuntu with Nginx](https://linmasaki09.blogspot.com/2020/08/linux-ubuntu1604-nginx-aspnet-core.html). With the unrelated settings removed and the rest simplified, it looks like this:
```nginx
server {
    listen 80;
    listen [::]:80;
    server_name www.sample.com;  # your domain name

    location / {
        proxy_pass http://localhost:5000;  # adjust to the port your app listens on
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
## **Scenario 1**

Originally we had a single web application deployed under the **www.sample.com** domain, listening on `localhost:5000`; let's call it Web A. Now we add a second web application on `localhost:5010` and call it Web B, then modify the `your_app_name.conf` configuration file so that the Nginx server also works as a load balancer.

```nginx
# Add an upstream block and give it a custom name, here my_domain_com
upstream my_domain_com {
    server localhost:5000 weight=2;  # Web A
    server localhost:5010 weight=1;  # Web B
}

server {
    listen 80;
    listen [::]:80;
    server_name www.sample.com;  # domain name

    location / {
        proxy_pass http://my_domain_com;  # my_domain_com matches the upstream name above
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

The example above uses `weight` to assign weights (this is optional). Since the default method is *round-robin*, out of every 3 requests, 2 are sent to Web A and 1 to Web B. To switch to *least-connected* or *ip-hash* instead, just add the corresponding directive inside the upstream block, as shown below.

### **♦ least-connected**

```nginx
upstream my_domain_com {
    least_conn;  # add this line
    server localhost:5000 weight=2;
    server localhost:5010 weight=1;
}

server {
    listen 80;
    listen [::]:80;
    server_name www.sample.com;

    location / {
        proxy_pass http://my_domain_com;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

### **♦ ip-hash**

```nginx
upstream my_domain_com {
    ip_hash;  # add this line
    server localhost:5000 weight=2;
    server localhost:5010 weight=1;
}

server {
    listen 80;
    listen [::]:80;
    server_name www.sample.com;

    location / {
        proxy_pass http://my_domain_com;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Finally, by checking the logs of the Web A and Web B applications themselves, you can confirm that incoming requests are indeed being shared between them.
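
If the application logs are not convenient to check, another way to confirm the distribution is to record which backend handled each request on the Nginx side. Below is a minimal sketch assuming the `log_format` is placed in the `http {}` context of `nginx.conf`; `$upstream_addr` and `$upstream_response_time` are built-in variables of the upstream module, while the format name and log path are just examples.

```nginx
# Example log format that records the backend address chosen for each request
log_format upstream_lb '$remote_addr -> $upstream_addr [$time_local] '
                       '"$request" $status $upstream_response_time';

server {
    listen 80;
    listen [::]:80;
    server_name www.sample.com;

    access_log /var/log/nginx/www.sample.com.lb.access.log upstream_lb;  # example path

    location / {
        proxy_pass http://my_domain_com;
        # ...the remaining proxy settings are the same as in the examples above...
    }
}
```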
## **Scenario 2**

When Web A and Web B each have their own domain or run on different machines, their configuration files look like this:

```nginx
# Web A
server {
    listen 80;
    listen [::]:80;
    server_name a.sample.com;  # domain name

    access_log /var/log/nginx/a.sample.com.access.log main;
    error_log /var/log/nginx/a.sample.com.error.log warn;

    location / {
        proxy_pass http://localhost:5000;  # adjust to the port Web A listens on
    }
}

# Web B
server {
    listen 80;
    listen [::]:80;
    server_name b.sample.com;  # domain name

    access_log /var/log/nginx/b.sample.com.access.log main;
    error_log /var/log/nginx/b.sample.com.error.log warn;

    location / {
        proxy_pass http://localhost:5010;  # adjust to the port Web B listens on
    }
}
```

The configuration file on the machine acting as the load balancer is shown below (the settings for forcing a redirect to HTTPS are included as well).

```nginx
upstream my_domain_com {
    server a.sample.com weight=2;  # Web A
    server b.sample.com weight=1;  # Web B
}

server {
    listen 80;
    listen [::]:80;
    server_name www.sample.com;  # domain name

    # redirect to https
    return 307 https://www.sample.com$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    ssl_certificate /etc/nginx/ssl/www_sample_com/ssl-bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/www_sample_com/private.key;
    server_name www.sample.com;  # domain name

    location / {
        proxy_pass http://my_domain_com;  # my_domain_com matches the upstream name above
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

The remaining settings are the same as in Scenario 1. Finally, use the `access.log` files of the two servers to verify that Web A and Web B are sharing the incoming requests, and you're done!
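
One side note: in this scenario the backends receive their requests from the load balancer, so by default their `access.log` records the load balancer's IP rather than the original client's. If you also want Web A and Web B to log the real client IP, one option is the standard realip module, sketched below; it assumes the module is available (it usually is in distribution packages), and `203.0.113.10` is a placeholder for the load balancer's IP address.

```nginx
# Sketch for the Web A server block; Web B would be configured the same way
server {
    listen 80;
    listen [::]:80;
    server_name a.sample.com;

    set_real_ip_from 203.0.113.10;   # only trust headers sent by the load balancer
    real_ip_header X-Forwarded-For;  # take the client IP from X-Forwarded-For

    access_log /var/log/nginx/a.sample.com.access.log main;
    error_log /var/log/nginx/a.sample.com.error.log warn;

    location / {
        proxy_pass http://localhost:5000;
    }
}
```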
## References

- [\[Nginx\] Using nginx as HTTP load balancer](http://nginx.org/en/docs/http/load_balancing.html "Introduction load balancing methods")
- [\[未知\] 使用 Nginx 做 Load Balancer](https://blog.dtask.idv.tw/Nginx/2018-07-31)