Introduction
In many Linux-based environments, applications are deployed on separate virtual machines or run on different ports of the same VM. While this approach offers flexibility, it also introduces operational challenges. End users often need to access services using non-standard ports, internal IP addresses, or long URLs that are not suitable for production use.
For example, a web application might be running on http://10.0.0.15:8080, while an API service runs on http://10.0.0.16:9000. Exposing these endpoints directly to users is not only inconvenient but also raises security and scalability concerns.
This is where a reverse proxy becomes essential. A reverse proxy acts as an intermediary between clients and backend services, providing a single, clean entry point while forwarding requests to the appropriate internal service.
Why a Reverse Proxy Is Needed
The root cause of the problem is direct service exposure. Without a reverse proxy, applications are typically accessed in one of the following suboptimal ways:
- Using IP addresses instead of domain names
- Exposing non-standard ports (e.g., :8080, :3000)
- Allowing backend services to be directly reachable from the internet
- Duplicating TLS/SSL configuration across multiple applications
These patterns create several risks and inefficiencies:
- Security risks: Backend services exposed directly are harder to protect. Each service needs its own firewall rules, TLS configuration, and monitoring.
- Operational complexity: Any change in backend IPs or ports requires client-side updates, which is not scalable.
- Poor user experience: Users expect clean URLs like https://app.example.com, not raw IPs and ports.
- Limited scalability: Load balancing, request routing, and traffic control become difficult without a centralized entry point.
A reverse proxy solves these issues by abstracting backend details from the client and centralizing traffic management.
How We Solve It
In this setup, we will configure NGINX as a reverse proxy on a Linux VM. NGINX will listen on standard HTTP/HTTPS ports and forward requests to internal backend services based on domain name or URL path.
Prerequisites
- A Linux VM (Ubuntu, RHEL, or similar)
- Root or sudo access
- Backend services already running (example ports used below)
- NGINX installed
Install NGINX (the example below uses yum for RHEL-based systems; on Ubuntu/Debian use apt), then enable and start the service:
sudo yum install nginx
sudo systemctl enable --now nginx
systemctl status nginx.service
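If a host firewall is active, HTTP traffic must also be allowed through it. A minimal sketch assuming firewalld on a RHEL-based system (on Ubuntu with ufw, sudo ufw allow 'Nginx HTTP' is the rough equivalent):
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --reload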
Example Scenario
- Frontend application running on: 127.0.0.1:3000
- API service running on: 127.0.0.1:8080
- Domain: app.example.com
NGINX will:
- Accept requests on port 80
- Route / to the frontend service
- Route /api to the backend API service
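Before touching the NGINX configuration, it is worth confirming that both backends respond locally. A quick check, assuming the example ports above:
curl http://127.0.0.1:3000
curl http://127.0.0.1:8080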
NGINX Reverse Proxy Configuration
Create a new configuration file:
sudo vi /etc/nginx/conf.d/app.example.com.conf
Basic Reverse Proxy Configuration
server {
    listen 80;
    server_name app.example.com;

    access_log /var/log/nginx/app_access.log;
    error_log  /var/log/nginx/app_error.log;

    # Frontend application
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # API service
    location /api/ {
        proxy_pass http://127.0.0.1:8080/;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
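The configuration above routes by URL path. To route by domain name instead, as mentioned earlier, a separate server block can front the API on its own hostname. A sketch, where api.example.com is an assumed name used only for illustration:

server {
    listen 80;
    server_name api.example.com;   # assumed hostname for domain-based routing

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}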
Explanation of Key Directives
- listen 80 – NGINX listens on the standard HTTP port.
- server_name – Defines the domain mapped to this application.
- proxy_pass – Forwards incoming requests to the backend service.
- proxy_set_header – Preserves original client and protocol information, which is critical for logging, authentication, and application logic.
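One detail worth noting: because location /api/ is paired with a proxy_pass URL that ends in a slash, NGINX replaces the matched /api/ prefix with / before forwarding, so the backend never sees the prefix. For example:
GET http://app.example.com/api/users  ->  proxied to http://127.0.0.1:8080/users
If the backend expects the /api prefix to be preserved, use proxy_pass http://127.0.0.1:8080; without the trailing slash instead.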
Validate and Reload NGINX
Before applying changes, validate the configuration:
sudo nginx -t && sudo systemctl reload nginx
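Once the reload succeeds, the proxy can be tested end to end. If DNS for app.example.com does not yet point at the VM, the Host header can be set explicitly; replace the placeholder address below with the VM's IP:
curl -H "Host: app.example.com" http://<vm-ip>/
curl -H "Host: app.example.com" http://<vm-ip>/api/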
Conclusion
Configuring a reverse proxy on a Linux VM using NGINX is a foundational practice for modern infrastructure. It simplifies application access, improves security, and provides flexibility to scale services without impacting users.
By introducing NGINX as a reverse proxy:
- Backend services remain isolated and protected
- URLs become clean and user-friendly
- TLS, logging, and routing are centralized
- Infrastructure changes are easier to manage
Whether you are running a single application or multiple microservices, a properly configured reverse proxy is not optional—it is a best practice that significantly improves reliability and maintainability.