Rewriting nginx.conf when converting docker-compose to Kubernetes using Kompose? - docker

I am quite new to Kubernetes and I have been struggling to migrate my current docker-compose environment to Kubernetes...
I converted my docker-compose.yml to Kubernetes manifests using kompose.
So far, I can access each pod individually, but it seems I have some issues getting those pods to communicate with each other: my Nginx pod cannot access my app pod.
My docker-compose.yml looks something like this:
version: '3.3'
services:
  myapp:
    image: my-app
    build: ./docker/
    restart: unless-stopped
    container_name: app
    stdin_open: true
    volumes:
      - mnt:/mnt
    env_file:
      - .env
  mynginx:
    image: nginx:latest
    build: ./docker/
    container_name: nginx
    ports:
      - 80:80
    stdin_open: true
    restart: unless-stopped
    user: root
My nginx.conf looks something like this:
server {
    listen 80;
    index index.html index.htm;
    root /mnt/volumes/statics/;

    location /myapp {
        proxy_pass http://myapp/index;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
I understand that docker-compose enables containers to communicate with each other through service names (myapp and mynginx in this case). Could somebody tell me what I need to do to achieve the same thing in Kubernetes?

Kompose did create Services for me. It turned out that what I had missed was the docker-compose.override.yml file (apparently kompose just ignores override.yml).
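For anyone hitting the same wall: in Kubernetes, the service-name resolution that docker-compose gives you for free comes from Service objects plus the cluster DNS. A minimal sketch of what such a Service could look like for the app above (the label selector follows kompose's usual io.kompose.service labelling convention, and port 80 is an assumption about what the app container listens on):

apiVersion: v1
kind: Service
metadata:
  name: myapp                     # DNS name other pods resolve, e.g. http://myapp
spec:
  selector:
    io.kompose.service: myapp     # assumed to match the labels kompose put on the myapp pods
  ports:
    - port: 80                    # port exposed inside the cluster
      targetPort: 80              # port the app container actually listens on

With a Service named myapp in the same namespace, the proxy_pass http://myapp/index line in nginx.conf keeps working unchanged, because the cluster DNS resolves the Service name much like Docker's embedded DNS resolved the compose service name.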

Related

Nginx deployed in docker container doesn't expose nuxtjs deployed in another docker container (502 Bad Gateway)

I'm trying to run a Nuxt.js application using nginx as a proxy server in Docker containers. So, I have 2 containers: nginx and nuxt.
Here is how I'm building the Nuxt application:
FROM node:11.15
ENV APP_ROOT /src
RUN mkdir ${APP_ROOT}
WORKDIR ${APP_ROOT}
ADD . ${APP_ROOT}
RUN npm install
RUN npm run build
ENV host 0.0.0.0
The result seems to be fine
Next is the nginx config:
server {
    listen 80;
    server_name dev.iceik.com.ua;

    location / {
        proxy_pass http://nuxt:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
I've also tried this nginx config:
upstream nuxt {
    server nuxt:3000;
}

server {
    listen 80;
    server_name dev.iceik.com.ua;

    location / {
        proxy_pass http://nuxt;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
And finally, my docker-compose file:
version: "3"
services:
nuxt:
build: ./app/
container_name: nuxt
restart: always
ports:
- "3000:3000"
command:
"npm run start"
nginx:
image: nginx:1.17
container_name: nginx
ports:
- "80:80"
volumes:
- ./nginx:/etc/nginx/conf.d
depends_on:
- nuxt
I can ping the nuxt container from the nginx container.
I also checked which ports are open in both containers.
So, the expected result is that I can access my Nuxt application.
However, I'm getting 502 Bad Gateway.
Do you have any ideas why nginx doesn't expose my Nuxt application?
Thank you for any suggestions!
Node.js is listening on localhost:3000 instead of 0.0.0.0:3000.
Correct that and it will work.
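If you'd rather not touch the application code, one way to do that is through environment variables in the compose file. This is only a sketch, assuming a standard Nuxt setup that reads HOST and PORT from the environment:

  nuxt:
    build: ./app/
    container_name: nuxt
    restart: always
    environment:
      - HOST=0.0.0.0    # listen on all interfaces so the nginx container can reach it
      - PORT=3000
    ports:
      - "3000:3000"
    command:
      "npm run start"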
It's always good to put your containers on a network if they need to talk to each other; the other way is to use the host network (only works on Linux). Try the docker-compose.yml below; the containers should be able to reach each other by their service names.
version: "3"
services:
nuxt:
build: ./app/
container_name: nuxt
restart: always
ports:
- "3000:3000"
command:
"npm run start"
networks:
- my_net
nginx:
image: nginx:1.17
container_name: nginx
ports:
- "80:80"
volumes:
- ./nginx:/etc/nginx/conf.d
depends_on:
- nuxt
networks:
- my_net
networks:
my_net:
driver: "bridge"

NGINX and Docker-Compose: host not found in upstream

I'm trying to get docker-compose to run an NGINX reverse-proxy and I'm running into an issue. I know that what I am attempting appears possible as it is outlined here:
https://dev.to/domysee/setting-up-a-reverse-proxy-with-nginx-and-docker-compose-29jg
and here:
https://www.digitalocean.com/community/tutorials/how-to-secure-a-containerized-node-js-application-with-nginx-let-s-encrypt-and-docker-compose#step-2-%E2%80%94-defining-the-web-server-configuration
My application is very simple - it has a front end and a back end (nextjs and nodejs), which I've put in docker-compose along with an nginx instance.
Here is the docker-compose file:
version: '3'
services:
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
    container_name: nodejs
    restart: unless-stopped
  nextjs:
    build:
      context: ../.
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    container_name: nextjs
    restart: unless-stopped
  webserver:
    image: nginx:mainline-alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - web-root:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
    depends_on:
      - nodejs
      - nextjs
    networks:
      - app-network
volumes:
  certbot-etc:
  certbot-var:
  web-root:
    driver: local
    driver_opts:
      type: none
      device: /
      o: bind
networks:
  app-network:
    driver: bridge
And here is the nginx file:
server {
    listen 80;
    listen [::]:80;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;

    server_name patientplatypus.com www.patientplatypus.com localhost;

    location /back {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://nodejs:8000;
    }

    location / {
        proxy_pass http://nextjs:3000;
    }

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }
}
Both of these are very similar to the DigitalOcean example, and I can't think of how they would be different enough to cause errors. I run it with a simple docker-compose up -d --build.
When I go to localhost:80 I get "page could not be found", and here is the output of docker ps and docker logs:
patientplatypus:~/Documents/patientplatypus.com/forum/back:10:03:32$docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9c2e4e25e6d9 nginx:mainline-alpine "nginx -g 'daemon of…" 2 minutes ago Restarting (1) 14 seconds ago webserver
213e73495381 back_nodejs "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 0.0.0.0:8000->8000/tcp nodejs
03b6ae8f0ad4 back_nextjs "npm start" 2 minutes ago Up 2 minutes 0.0.0.0:3000->3000/tcp nextjs
patientplatypus:~/Documents/patientplatypus.com/forum/back:10:05:41$docker logs 9c2e4e25e6d9
2019/04/10 15:03:32 [emerg] 1#1: host not found in upstream "nodejs" in /etc/nginx/conf.d/nginx.conf:20
I'm pretty lost as to what could be going wrong. If anyone has any ideas please let me know. Thank you.
EDIT: SEE SOLUTION BELOW
The nginx webserver is on the network app-network, which is a different network from the other two services, which have no network defined. When no network is defined, docker-compose creates a default network for them to share.
Either copy the network setting to both of the other services or remove the network setting from the webserver service.
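A sketch of the first option, showing only the parts of the compose file from the question that change (everything else stays as it was):

services:
  nodejs:
    # ... build, ports, container_name, restart unchanged ...
    networks:
      - app-network    # join the same network as the webserver
  nextjs:
    # ... unchanged ...
    networks:
      - app-network
  webserver:
    # ... unchanged ...
    networks:
      - app-network
networks:
  app-network:
    driver: bridge

Once all three services share app-network, the hostnames nodejs and nextjs in nginx.conf resolve and the "host not found in upstream" error goes away.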

Nginx upstream doesn't work with docker deploy stack

I'm trying to deploy a stack with docker.
Here is how my stack works:
nginx-proxy (redirects user requests to the right container)
website (a simple nginx serving a website)
api (Django application, launched with gunicorn)
nginx-api (serves static files and uploaded files, and proxies to the api container when the request targets an API endpoint)
This is my docker-compose.yml:
version: '3.2'
services:
  website:
    container_name: nyl2pronos-website
    image: nyl2pronos-website
    restart: always
    build:
      context: nyl2pronos_webapp
      dockerfile: Dockerfile
    volumes:
      - ./logs/nginx-website:/var/log/nginx
    expose:
      - "80"
    deploy:
      replicas: 10
      update_config:
        parallelism: 5
        delay: 10s
  api:
    container_name: nyl2pronos-api
    build:
      context: nyl2pronos_api
      dockerfile: Dockerfile
    image: nyl2pronos-api
    restart: always
    ports:
      - 8001:80
    expose:
      - "80"
    depends_on:
      - db
      - memcached
    environment:
      - DJANGO_PRODUCTION=1
    volumes:
      - ./data/api/uploads:/code/uploads
      - ./data/api/static:/code/static
  nginx-api:
    image: nginx:latest
    container_name: nyl2pronos-nginx-api
    restart: always
    expose:
      - "80"
    volumes:
      - ./data/api/uploads:/uploads
      - ./data/api/static:/static
      - ./nyl2pronos_api/config:/etc/nginx/conf.d
      - ./logs/nginx-api:/var/log/nginx
    depends_on:
      - api
  nginx-proxy:
    image: nginx:latest
    container_name: nyl2pronos-proxy
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./proxy:/etc/nginx/conf.d
      - /etc/letsencrypt:/etc/letsencrypt
      - ./logs/nginx-proxy:/var/log/nginx
    deploy:
      placement:
        constraints: [node.role == manager]
    depends_on:
      - nginx-api
      - website
When I use docker-compose up, everything works fine.
But when I try to deploy with docker stack deploy --compose-file=docker-compose.yml prod, my nginx config files can't find the different upstreams.
This is the error provided by my service nginx-api:
2019/03/23 17:32:41 [emerg] 1#1: host not found in upstream "api" in /etc/nginx/conf.d/nginx.conf:2
See my nginx.conf below:
upstream docker-api {
    server api;
}

server {
    listen 80;
    server_name xxxxxxxxxxxxxx;

    location /static {
        autoindex on;
        alias /static/;
    }

    location /uploads {
        autoindex on;
        alias /uploads/;
    }

    location / {
        proxy_pass http://docker-api;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
If you see something wrong in my configuration or something I can do better, let me know!
This is happening because the nginx-api service comes up before the api service.
But I added the depends_on option?
You are right, and this option does work for the docker-compose up case, but unfortunately not for docker stack deploy, or, as the docs put it:
The depends_on option is ignored when deploying a stack in swarm mode with a version 3 Compose file.
OK, so what can I do now?
Nothing. It's actually not a bug:
docker swarm services are supposed to recover automatically on error (that's why you define the restart: always option), so it should work for you anyway.
If you are using the compose file only for deploying the stack and not with docker-compose up, you may remove the depends_on option completely; it means nothing to docker stack.
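If you want to be explicit about that recovery behaviour in swarm mode, the restart policy can also be spelled out under deploy. This is just a sketch with illustrative values, not something taken from the original stack file:

  nginx-api:
    image: nginx:latest
    deploy:
      restart_policy:
        condition: on-failure   # restart the task when nginx exits with "host not found in upstream"
        delay: 5s               # short pause between attempts while the api service finishes starting

Swarm keeps restarting the nginx-api task until the api hostname becomes resolvable, at which point nginx starts normally.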

Docker Nginx reverse proxy to separate container

I'm having trouble creating a reverse proxy and having it point at apps that are in other containers.
What I have now is a docker-compose for Nginx, and then I want to have separate Docker containers for several different apps and have Nginx direct traffic to those apps.
My Nginx docker-compose is:
version: "3"
services:
nginx:
image: nginx:alpine
volumes:
- ./default.conf:/etc/nginx/conf.d/default.conf
My default.conf is:
server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

server {
    listen 80;
    server_name www.mydomain.com;

    location /confluence {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://192.168.1.50:8090/confluence;
    }
}
I can access confluence directly at: http://192.168.1.50:8090/confluence
My compose for confluence is:
version: "3"
services:
db:
image: postgres:9.6
container_name: pg_confluence
env_file:
- env.list
ports:
- "5434:5432"
volumes:
- ./pg_conf.sql:/docker-entrypoint-initdb.d/pg_conf.sql
- dbdata:/var/lib/postgresql/data
confluence:
image: my_custom_image/confluence:6.11.0
container_name: confluence
volumes:
- confluencedata:/var/atlassian/application-data/confluence
- ./server.xml:/opt/atlassian/confluence/conf/server.xml
environment:
- JVM_MAXIMUM_MEMORY=2g
ports:
- "8090:8090"
depends_on:
- db
volumes:
confluencedata:
dbdata:
I am able to see the Nginx "Welcome" screen when I hit mydomain.com, but if I hit mydomain.com/confluence it gives a "not found".
So it looks like Nginx is running, just not sending the traffic to the other container properly.
========================
=== Update With Solution ===
========================
I ended up switching to Traefik instead of Nginx. When I take the next step and start learning k8s this will help as well.
Although these network settings are what you need even if you stick with Nginx, I just didn't test them against Nginx, so hopefully they are helpful no matter which one you end up using.
For the confluence docker-compose.yml I added:
networks:
  proxy:
    external: true
  internal:
    external: false
services:
  confluence:
    ...
    networks:
      - internal
      - proxy
  db:
    ...
    networks:
      - internal
And for the traefik docker-compose.yml I added:
networks:
  proxy:
    external: true
services:
  reverse-proxy:
    networks:
      - proxy
I had to create the network manually with:
docker network create proxy
That is not really the correct way to use Docker.
If you are in a production environment, use a real orchestration tool (nowadays Kubernetes is the way to go).
If you are on your own computer, you can reference the name of a container (or an alias) only if you use the same network AND this network is not the default one.
One way is to have only one docker-compose file.
Another way is to use the same network across your docker-compose files:
Create a network: docker network create --driver bridge my_network
Use it in each docker-compose file you have:
networks:
  default:
    external:
      name: my_network
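As an illustration, a second compose file on the same host can then join that network and be reachable by service name; the service and image names here are made up for the example:

version: "3"
services:
  webapp:                      # hypothetical service from another compose project
    image: my-webapp:latest
networks:
  default:
    external:
      name: my_network         # the network created above with docker network create

Containers from both projects now sit on my_network, so nginx in the first project can use proxy_pass http://webapp; Docker's embedded DNS resolves the name across projects.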

Multiple docker-compose sharing network with a not yet known host for nginx

I use multiple docker-compose files:
one for running postgres and nginx on the same network
=> this collection of containers is supposed to be always running
one for each asp core web site (each one on a specific port)
=> these containers are updated through a CI/CD pipeline (VSTS)
Because Nginx needs to know the hostname when defining the upstream, if the asp core container is not running then its hostname is not known, and nginx throws an error on the docker-compose up command:
nginx | 2018/01/04 15:59:17 [emerg] 1#1: host not found in upstream
"webportalstage:5001" in /etc/nginx/nginx.conf:9
nginx | nginx: [emerg] host not found in upstream
"webportalstage:5001" in /etc/nginx/nginx.conf:9
nginx exited with code 1
And obviously, if the asp core container is already running, then nginx knows the hostname webportalstage and everything works fine. But the starting sequence is not what I expect.
Is there any solution to start nginx with a not-yet-known hostname in the upstream?
Here is my nginx.conf file:
worker_processes 4;

events { worker_connections 1024; }

http {
    sendfile on;

    upstream webportalstage {
        server webportalstage:5001;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://webportalstage;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
And both docker-compose files:
Nginx + Postgres:
version: "3"
services:
proxy:
image: myPrivateRepo:latest
ports:
- "80:80"
container_name: nginx
networks:
aspcore:
aliases:
- nginx
postgres:
image: postgres:latest
environment:
- POSTGRES_PASSWORD=myPWD
- POSTGRES_USER=postgres
ports:
- "5432:5432"
container_name: postgres
networks:
aspcore:
aliases:
- postgres
networks:
aspcore:
driver: bridge
One of my asp core web sites:
version: "3"
services:
webportal:
image: myPrivateRepo:latest
environment:
- ASPNETCORE_ENVIRONMENT=Staging
container_name: webportal
networks:
common_aspcore:
aliases:
- webportal
networks:
common_aspcore:
external: true
Well, I use the following hack in a similar situation:
location / {
    set $docker_host "webportalstage";
    proxy_pass http://$docker_host:5001;
    ...
}
I'm not sure if it works with upstream, but it probably should.
I know this is not the best solution, but I didn't find a better one.
I finally used the extra_hosts feature to define a static IP within my nginx+postgres docker-compose.yml file:
extra_hosts:
  webportalstage: 10.5.0.20
And I set the same static IP in my asp core docker-compose file.
It works, but it's not as generic as I would like.
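For completeness, a sketch of what pinning that address can look like on the asp core side, assuming the shared network was created with a subnet that contains the address (for example docker network create --subnet=10.5.0.0/24 common_aspcore); the values are illustrative, not taken from the original files:

version: "3"
services:
  webportal:
    image: myPrivateRepo:latest
    networks:
      common_aspcore:
        ipv4_address: 10.5.0.20   # must match the IP hard-coded in extra_hosts on the nginx side
networks:
  common_aspcore:
    external: true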
