Why Hostinger for Web App Hosting in 2026
The hosting landscape has shifted dramatically. While platforms like Vercel and Netlify offer frictionless deploys for frontend applications, they quickly become expensive at scale. If your Next.js app serves 500,000+ page views per month, or if you need backend services, databases, and cron jobs alongside your frontend, a Hostinger VPS deployment gives you full control at a fraction of the cost.
Hostinger's KVM-based VPS plans start at $9.99/month for 2 vCPUs, 8 GB RAM, and 100 GB NVMe storage. That is more compute than most startups need for their entire stack, and it comes with a dedicated IPv4 address, root SSH access, and data centers across North America, Europe, Asia, and South America.
Here is why agencies and startups are choosing Hostinger VPS in 2026:
- Cost predictability -- flat monthly pricing, no per-request or per-bandwidth billing surprises
- Full root access -- install Docker, configure firewalls, run multiple apps on one server
- KVM virtualization -- guaranteed resources, not shared containers
- Snapshot backups -- one-click server snapshots before risky deployments
- API access -- Hostinger now offers an API for DNS, VPS, and domain management, enabling Terraform-like automation
Deployment Methods: Choosing Your Path
Hostinger supports several deployment strategies depending on your project requirements:
1. Static Hosting (Web Hosting Plans)
For purely static sites built with Next.js static export, Astro, or plain HTML/CSS/JS. Hostinger's shared hosting plans start at $2.99/month and include a free domain, SSL, and email. Best for marketing sites, landing pages, and documentation portals. Simply upload your out/ directory via Git integration or the file manager.
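As a sketch, a static deploy can be a build plus an upload. The destination path (`public_html`) and SSH access are assumptions that depend on your plan; the file manager or Git integration work equally well:

```shell
# next.config.js must set: output: 'export'  (emits a fully static site into out/)
npm run build

# Upload the generated files; the destination path varies by hosting plan
scp -r out/* your-user@your-host:~/public_html/
```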
2. Node.js Hosting (Business and Cloud Plans)
Hostinger's business hosting plans support Node.js applications natively. You can deploy SSR Next.js apps directly through their hPanel interface. This works well for smaller apps but has limitations: no root access, restricted port binding, and limited process control.
3. VPS with Docker (Recommended for Production)
This is the approach we recommend for any serious production deployment. A Hostinger VPS gives you a blank Ubuntu or AlmaLinux server where you can install Docker, configure Nginx, set up SSL with Let's Encrypt, and run multiple applications. This is the method we will focus on in this guide.
Pro tip: If you are deploying a Next.js app that uses server-side rendering, API routes, or middleware, you need either the Node.js hosting or VPS approach. Static export will not work for dynamic features.
Step-by-Step: Deploy Next.js on Hostinger VPS
Let us walk through deploying a production-grade Next.js application on a Hostinger VPS. We will use Next.js standalone output mode, PM2 for process management, and Nginx as a reverse proxy with SSL.
Step 1: Prepare Your Next.js App for Standalone Output
Next.js standalone mode generates a minimal, self-contained Node.js server that does not require the full node_modules directory. This dramatically reduces deployment size and startup time.
```javascript
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: 'standalone',
  images: {
    // remotePatterns replaces the deprecated `domains` option
    remotePatterns: [{ protocol: 'https', hostname: 'your-cdn.com' }],
  },
  // Enable compression at the app level
  compress: true,
}

module.exports = nextConfig
```
After building with npm run build, your deployment artifacts live in .next/standalone/. Copy the public/ and .next/static/ directories into the standalone folder before deploying.
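Put together, the build-and-verify sequence looks like this (paths follow Next.js defaults):

```shell
npm run build

# Standalone output omits public/ and .next/static/ -- copy them in manually
cp -r public .next/standalone/public
cp -r .next/static .next/standalone/.next/static

# Local smoke test before shipping: the standalone server listens on port 3000 by default
node .next/standalone/server.js
```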
Step 2: Configure PM2 Process Manager
PM2 keeps your Node.js application running, restarts it on crashes, and handles log rotation. Create an ecosystem file:
```javascript
module.exports = {
  apps: [
    {
      name: 'nextjs-app',
      script: 'server.js',
      // The standalone output is deployed directly into this directory on the VPS
      cwd: '/var/www/myapp',
      instances: 'max',
      exec_mode: 'cluster',
      env: {
        NODE_ENV: 'production',
        PORT: 3000,
        HOSTNAME: '0.0.0.0',
      },
      max_memory_restart: '500M',
      log_date_format: 'YYYY-MM-DD HH:mm:ss Z',
      error_file: '/var/log/pm2/nextjs-error.log',
      out_file: '/var/log/pm2/nextjs-out.log',
      merge_logs: true,
    },
  ],
}
```
The cluster mode with instances: 'max' spawns one worker per CPU core, maximizing throughput on multi-core VPS plans. On a Hostinger KVM 2 plan (2 vCPUs), this means two workers handling requests in parallel.
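The ecosystem file is started once by hand; after that, PM2 can cycle workers one at a time. A first-boot sequence might look like:

```shell
pm2 start ecosystem.config.js   # launch one worker per vCPU in cluster mode
pm2 save                        # persist the process list across reboots
pm2 startup systemd             # print the command that registers the boot service
pm2 reload nextjs-app           # later: zero-downtime rolling restart
```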
Step 3: Configure Nginx Reverse Proxy
Nginx sits in front of your Node.js app, handling SSL termination, static file serving, gzip compression, and load balancing.
```nginx
upstream nextjs_upstream {
    server 127.0.0.1:3000;
    keepalive 64;
}

server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name yourdomain.com www.yourdomain.com;

    # SSL managed by Certbot
    ssl_certificate     /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Strict-Transport-Security "max-age=63072000" always;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css application/json application/javascript
               text/xml application/xml image/svg+xml;

    # Serve static files directly
    location /_next/static {
        alias /var/www/myapp/.next/static;
        expires 365d;
        access_log off;
        add_header Cache-Control "public, immutable";
    }

    location /public {
        alias /var/www/myapp/public;
        expires 30d;
        access_log off;
    }

    # Proxy to Next.js
    location / {
        proxy_pass http://nextjs_upstream;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}
```
Important: Run sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com to obtain free SSL certificates from Let's Encrypt before enabling the HTTPS server block. Let's Encrypt certificates are valid for 90 days; the Certbot package installs a systemd timer that renews them automatically before expiry.
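Renewal runs unattended, so it is worth confirming once that it will actually succeed:

```shell
# Simulate a renewal against the Let's Encrypt staging environment (no certificates are replaced)
sudo certbot renew --dry-run

# Confirm the renewal timer is registered
systemctl list-timers certbot.timer
```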
Introduction to Terraform for Infrastructure as Code
Manually SSHing into a server and running commands is fine for a single deployment. But what happens when you need to manage staging and production environments, replicate infrastructure across regions, or onboard new team members? This is where Terraform infrastructure as code transforms your workflow.
Terraform by HashiCorp lets you define your infrastructure in declarative configuration files. Instead of a runbook of manual steps, you write code that describes your desired state, and Terraform figures out how to get there.
Core Terraform Concepts
- Providers -- plugins that interact with cloud APIs (AWS, DigitalOcean, Cloudflare, or generic SSH/HTTP providers)
- Resources -- infrastructure components like servers, DNS records, firewalls, and load balancers
- State -- Terraform tracks what it has created in a state file, enabling incremental updates
- Modules -- reusable packages of Terraform configuration for common patterns
- Plan/Apply -- terraform plan shows what will change; terraform apply executes the changes
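In day-to-day use, that cycle looks like:

```shell
terraform init                # download providers, configure the backend
terraform plan -out=tfplan    # preview changes and save the reviewed plan
terraform apply tfplan        # execute exactly what the plan showed
```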
State Management Best Practices
Never commit your Terraform state file to version control. It contains sensitive information like IP addresses, credentials, and resource IDs. Use a remote backend like an S3 bucket or Terraform Cloud for team environments:
```hcl
terraform {
  backend "s3" {
    bucket         = "mycompany-terraform-state"
    key            = "hostinger/production/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-lock"
  }
}
```
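If you started with local state, Terraform can move it into the remote backend for you; it prompts before copying anything:

```shell
# Move existing local state into the newly configured backend
terraform init -migrate-state

# Verify the resources survived the move
terraform state list
```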
Sample Terraform Configuration for Server Provisioning
While Hostinger does not have a native Terraform provider, you can use the generic SSH provisioner along with Cloudflare or other DNS providers to automate the full setup. Here is a Terraform configuration that provisions and configures a server:
```hcl
terraform {
  required_version = ">= 1.7.0"

  required_providers {
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = "~> 4.0"
    }
    null = {
      source  = "hashicorp/null"
      version = "~> 3.2"
    }
  }
}

variable "server_ip" {
  description = "Hostinger VPS public IP address"
  type        = string
  sensitive   = true
}

variable "ssh_private_key_path" {
  description = "Path to SSH private key"
  type        = string
  default     = "~/.ssh/hostinger_vps"
}

variable "domain" {
  description = "Primary domain name"
  type        = string
}

variable "cloudflare_zone_id" {
  description = "Cloudflare zone ID for DNS"
  type        = string
}

# DNS record pointing to VPS
resource "cloudflare_record" "app" {
  zone_id = var.cloudflare_zone_id
  name    = var.domain
  content = var.server_ip
  type    = "A"
  ttl     = 1 # Cloudflare requires TTL 1 (automatic) when the record is proxied
  proxied = true
}

# Server provisioning via SSH
resource "null_resource" "server_setup" {
  depends_on = [cloudflare_record.app]

  connection {
    type        = "ssh"
    host        = var.server_ip
    user        = "root"
    private_key = file(var.ssh_private_key_path)
    timeout     = "5m"
  }

  # Install dependencies
  provisioner "remote-exec" {
    inline = [
      "apt-get update -y",
      "apt-get install -y nginx certbot python3-certbot-nginx",
      "curl -fsSL https://deb.nodesource.com/setup_22.x | bash -",
      "apt-get install -y nodejs",
      "npm install -g pm2",
      "mkdir -p /var/www/myapp /var/log/pm2",
      "ufw allow 'Nginx Full'",
      "ufw allow OpenSSH",
      "ufw --force enable",
    ]
  }

  # Copy Nginx configuration
  provisioner "file" {
    source      = "./nginx/myapp.conf"
    destination = "/etc/nginx/sites-available/myapp"
  }

  # Enable site and restart Nginx
  provisioner "remote-exec" {
    inline = [
      "ln -sf /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/",
      "rm -f /etc/nginx/sites-enabled/default",
      "nginx -t && systemctl reload nginx",
      "pm2 startup systemd -u root --hp /root",
    ]
  }
}

output "app_url" {
  value = "https://${var.domain}"
}

output "server_ip" {
  value     = var.server_ip
  sensitive = true
}
```
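The variables above can be supplied through TF_VAR_ environment variables rather than committed files. The values below are placeholders (203.0.113.10 is a documentation-only IP), and the Cloudflare provider reads its token from CLOUDFLARE_API_TOKEN:

```shell
# Placeholder values -- substitute your own
export TF_VAR_server_ip="203.0.113.10"
export TF_VAR_domain="example.com"
export TF_VAR_cloudflare_zone_id="<your-zone-id>"
export CLOUDFLARE_API_TOKEN="<token-with-dns-edit-scope>"

terraform init && terraform apply
```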
Note: Hostinger's API now supports VPS management, DNS records, and firewall configuration. A community Terraform provider for Hostinger may emerge in 2026. In the meantime, the null_resource + SSH approach works reliably for server provisioning.
CI/CD Pipeline with GitHub Actions
Automating deployments eliminates human error and ensures every push to your main branch goes through build, test, and deploy stages. Here is a production-ready GitHub Actions workflow for deploying to your Hostinger VPS:
```yaml
name: Deploy to Hostinger VPS

on:
  push:
    branches: [main]
  workflow_dispatch:

env:
  NODE_VERSION: '22'
  APP_DIR: /var/www/myapp

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run tests
        run: npm test

      - name: Build Next.js
        run: npm run build

      - name: Prepare standalone package
        run: |
          cp -r public .next/standalone/public
          cp -r .next/static .next/standalone/.next/static

      - name: Upload artifact
        uses: actions/upload-artifact@v4
        with:
          name: standalone-build
          path: .next/standalone
          retention-days: 1

  deploy:
    needs: build
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v4

      - name: Download artifact
        uses: actions/download-artifact@v4
        with:
          name: standalone-build
          path: standalone

      - name: Deploy to VPS
        uses: appleboy/scp-action@v0.1.7
        with:
          host: ${{ secrets.VPS_HOST }}
          username: ${{ secrets.VPS_USER }}
          key: ${{ secrets.VPS_SSH_KEY }}
          source: "standalone/*"
          target: ${{ env.APP_DIR }}
          strip_components: 1

      - name: Restart application
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.VPS_HOST }}
          username: ${{ secrets.VPS_USER }}
          key: ${{ secrets.VPS_SSH_KEY }}
          script: |
            cd ${{ env.APP_DIR }}
            pm2 reload ecosystem.config.js --env production
            pm2 save
            echo "Deployment complete: $(date)"
```
This workflow builds your Next.js app in GitHub's cloud runners, uploads the standalone artifact, copies it to your Hostinger VPS via SCP, and performs a zero-downtime reload using PM2's cluster mode.
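The three secrets the workflow expects are best backed by a dedicated deploy key. Using the GitHub CLI (gh), the one-time setup might look like this (the key path, user, and IP are placeholders):

```shell
# Generate a deploy-only key pair and authorize it on the VPS
ssh-keygen -t ed25519 -f ~/.ssh/hostinger_deploy -N "" -C "github-actions-deploy"
ssh-copy-id -i ~/.ssh/hostinger_deploy.pub root@203.0.113.10

# Register the repository secrets the workflow references
gh secret set VPS_HOST --body "203.0.113.10"
gh secret set VPS_USER --body "root"
gh secret set VPS_SSH_KEY < ~/.ssh/hostinger_deploy
```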
Monitoring and Scaling Considerations
Deploying is only half the battle. Production applications need observability and a scaling strategy.
Monitoring Setup
- PM2 Monitoring --
pm2 monitprovides real-time CPU, memory, and event loop metrics. For remote monitoring, use PM2 Plus ($14/month) or self-hosted alternatives. - Nginx access logs -- pipe to GoAccess for real-time web analytics from your terminal, or forward to a centralized logging stack.
- Uptime monitoring -- use UptimeRobot (free for 50 monitors) or Better Stack for HTTP health checks and alerting.
- Server metrics -- install Netdata (open source) for beautiful real-time dashboards showing CPU, memory, disk I/O, and network traffic.
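Most of the tools above are one-liners to stand up. The Netdata kickstart URL is taken from their install docs and may change; check it before piping to a shell:

```shell
# Live per-process dashboard in the terminal
pm2 monit

# One-off HTML analytics report from the Nginx access log
goaccess /var/log/nginx/access.log --log-format=COMBINED -o report.html

# Netdata one-line installer
wget -O /tmp/netdata-kickstart.sh https://get.netdata.cloud/kickstart.sh && sh /tmp/netdata-kickstart.sh
```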
Scaling Strategy
Hostinger VPS plans scale vertically. If your 2-vCPU plan is not enough, upgrade to 4 or 8 vCPUs through their panel with minimal downtime. For horizontal scaling, consider:
- Cloudflare CDN -- cache static assets and full pages at the edge, reducing server load by 60-80%
- Database separation -- move PostgreSQL to a managed service (Neon, Supabase, or a second VPS) to isolate compute
- Queue workers -- offload heavy tasks to background workers using BullMQ or similar, running on the same VPS under a separate PM2 process
- Load balancing -- for high-availability setups, deploy multiple VPS instances behind Cloudflare Load Balancing
Hostinger vs Vercel vs AWS: Cost Comparison
Here is a realistic cost breakdown for a Next.js application serving 500,000 page views per month with SSR, API routes, and image optimization:
| Feature | Hostinger VPS | Vercel Pro | AWS (EC2 + CloudFront) |
|---|---|---|---|
| Monthly cost | $9.99 - $15.99 | $20 + overages | $45 - $120 |
| Bandwidth included | 2 - 8 TB | 1 TB ($40/TB extra) | 100 GB ($0.09/GB extra) |
| SSR/API compute | Unlimited (VPS resources) | 100 GB-hours ($0.18/GB-hr extra) | Unlimited (EC2 instance) |
| SSL certificate | Free (Let's Encrypt) | Included | Free (ACM) |
| Custom server config | Full root access | Limited | Full access |
| Database hosting | Self-hosted (free) | Separate ($20+/mo) | RDS ($30+/mo) |
| Deploy complexity | Medium (one-time setup) | Low (git push) | High |
| Annual cost (est.) | $120 - $192 | $240 - $500+ | $540 - $1,440+ |
For startups and small-to-medium businesses, Hostinger VPS deployment offers the best value. You get predictable costs, full control, and enough performance for most applications. Vercel wins on developer experience if budget is not a constraint. AWS makes sense only for enterprise-scale applications that need global edge infrastructure.
Wrapping Up
Deploying web applications in 2026 does not have to mean choosing between expensive managed platforms and complex cloud setups. Hostinger VPS provides a middle ground: affordable, powerful, and fully customizable infrastructure that you can manage with modern DevOps practices.
By combining Terraform infrastructure as code with a well-structured CI/CD pipeline, you get the best of both worlds -- the cost savings of self-hosted infrastructure with the automation and reliability of platform-as-a-service offerings.
To recap the key steps:
- Build your Next.js app in standalone mode for minimal deployment footprint
- Use PM2 in cluster mode for zero-downtime restarts and process management
- Configure Nginx as a reverse proxy with SSL, compression, and static file caching
- Define your infrastructure with Terraform for repeatable, version-controlled provisioning
- Automate deployments with GitHub Actions for consistent, tested releases
- Monitor with PM2 Plus, Netdata, and UptimeRobot for production peace of mind