Sat Feb 08 2020
Article size is 2.4 kB and is a 2 min read
I use Nginx a lot. One issue I've never been able to figure out: when I use it as a proxy to some backend like Node or Java, restarting or redeploying that service makes Nginx think it is down for ~10 seconds.
Recently I realized that Nginx implicitly creates an "upstream" group when you use proxy_pass.
See the documentation for ngx_http_upstream_module: http://nginx.org/en/docs/http/ngx_http_upstream_module.html
The relevant piece of ngx_http_upstream_module is the fail_timeout parameter on the server directive (it works together with max_fails). If the upstream server fails max_fails times within fail_timeout, Nginx marks it unavailable and won't send it any traffic for the next fail_timeout. The defaults are 1 attempt and 10 seconds, which is exactly where the ~10 second outage comes from. The upstream module also lets you define many backends for a single proxy, which essentially makes it a load balancer.
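As a rough sketch (localhost:3001 is just a placeholder backend), here is what that default behavior looks like when written out explicitly:

    upstream backend {
        # Nginx's defaults made explicit: after 1 failed attempt,
        # the server is considered unavailable for 10 seconds.
        server localhost:3001 max_fails=1 fail_timeout=10s;
    }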
When you use proxy_pass without explicitly using ngx_http_upstream_module you can think of that as creating a load balancer with only one available node.
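For example, a plain proxy_pass like the sketch below (port 3001 is again a placeholder) behaves as if you had declared an upstream group containing a single server:

    server {
        listen 80;
        location / {
            # No explicit upstream block: Nginx treats localhost:3001 as a
            # one-server group with the default max_fails/fail_timeout.
            proxy_pass http://localhost:3001;
        }
    }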
So, if you restart your service and Nginx tries to proxy a request to it - and it fails - Nginx marks that server unavailable for fail_timeout. And since there is only one server in your "load balancer", Nginx has nowhere else to send the request, so it returns a 502.
What you can do is set fail_timeout to 0. Then Nginx will never consider the backend unavailable. I'd only do this if you have a single upstream server. Even then it is risky in a high-load environment: if the server becomes overwhelmed, it gets no grace period to recover.
However, if you're in a high load environment I trust you have more than one upstream node :).
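If you do have more than one node, a sketch like this (both ports are hypothetical) lets Nginx fail over instead of returning a 502:

    upstream backend {
        # Two hypothetical app instances; if one fails, Nginx can send
        # the request to the other while the failed one sits out fail_timeout.
        server localhost:3001 max_fails=1 fail_timeout=10s;
        server localhost:3002 max_fails=1 fail_timeout=10s;
    }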
So here's what you came for:
    upstream backend {
        server localhost:3001 fail_timeout=0;
    }

    server {
        location / {
            proxy_pass http://backend;
        }
    }
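After changing the config, you can validate it with nginx -t and apply it with nginx -s reload, which swaps in the new configuration without dropping in-flight connections.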