Unable to build docker images in Jenkins pipeline; docker not found [duplicate]

I have Jenkins running inside a docker container and docker running inside a different docker container.
I have mapped the /var/run/docker.sock file of the local machine into the docker container and am able to execute docker commands inside it. Both the docker container and the jenkins container are on the same network, but while connecting to the docker container from jenkins I get connection refused. I have given 666 permissions to the /var/run/docker.sock file but still cannot connect between the two. Both containers can ping each other successfully.

TL;DR
You can connect to the Docker-in-Docker environment over TCP or by sharing the docker socket between the containers.
In this example everything runs in docker, orchestrated with docker-compose.
.
├── docker-compose.yaml
├── Dockerfile
├── etc
│   └── nginx
│       └── conf.d
│           └── default.conf
└── plugins.txt
The docker-compose.yaml sets up jenkins behind nginx and a docker:20.10.5-dind service.
tcp
version: '3.7'
services:
  nginx:
    image: 'nginx:1.19'
    container_name: 'nginx'
    restart: 'always'
    depends_on:
      - 'jenkins'
    ports:
      - '80:80'
    volumes:
      - 'jenkins:/var/jenkins_home'
      - './etc/nginx/conf.d/default.conf:/etc/nginx/conf.d/default.conf'
  jenkins:
    build:
      context: '.'
    container_name: 'jenkins'
    restart: 'always'
    expose:
      - '50000'
      - '8080'
    environment:
      - 'DOCKER_HOST=tcp://docker:2376'
      - 'DOCKER_CERT_PATH=/certs/client'
      - 'DOCKER_TLS_VERIFY=1'
    volumes:
      - 'jenkins:/var/jenkins_home'
      - 'certs:/certs:ro'
  docker:
    image: 'docker:20.10.5-dind'
    container_name: 'docker'
    privileged: true
    volumes:
      - 'certs:/certs'
volumes:
  jenkins:
  certs:
Note: the docker client certificates are shared between the docker and the jenkins containers and the environment is set in the jenkins container to connect to the docker service.
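Once the stack is up, a quick sanity check (a sketch, assuming the compose file above): the docker client in the jenkins container should reach the dind daemon over TLS, since DOCKER_HOST, DOCKER_CERT_PATH and DOCKER_TLS_VERIFY are set in the container's environment:
docker exec jenkins docker version   # should print both Client and Server sections
docker exec jenkins docker ps        # lists containers running inside the dind service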
The nginx config is slightly modified from the doc:
upstream jenkins {
  keepalive 32;
  server jenkins:8080 max_fails=3;
}

map $http_upgrade $connection_upgrade {
  default upgrade;
  '' close;
}

server {
  listen *:80;
  listen [::]:80;
  server_name _;

  charset utf-8;
  ignore_invalid_headers off;

  error_page 500 502 503 504 /50x.html;
  location = /50x.html {
    root /usr/share/nginx/html;
  }

  location ~ "^/static/[0-9a-fA-F]{8}\/(.*)$" {
    rewrite "^/static/[0-9a-fA-F]{8}\/(.*)" /$1 last;
  }

  location /userContent {
    root /var/jenkins_home/;
    if (!-f $request_filename) {
      rewrite (.*) /$1 last;
      break;
    }
    sendfile on;
  }

  location / {
    sendfile off;
    proxy_pass http://jenkins;
    proxy_redirect default;
    proxy_http_version 1.1;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_max_temp_file_size 0;
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffering off;
    proxy_request_buffering off;
    proxy_set_header Connection "";
  }
}
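With everything up, a minimal smoke test of the proxy from the host (a sketch, assuming port 80 is published as in the compose file):
curl -sI http://localhost/login | head -n 1   # expect an HTTP status line from jenkins via nginx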
The jenkins service is a custom-built image pre-baked with the docker client and the default suggested jenkins plugins, plus the Docker and Docker Pipeline plugins:
FROM docker:20.10.5-dind as docker
FROM jenkins/jenkins:alpine
USER root
COPY --from=docker /usr/local/bin/docker /usr/local/bin/docker
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/plugins.txt
USER jenkins

And the plugins.txt:
github:1.33.1
pipeline-model-api:1.8.4
scm-api:2.6.4
mailer:1.32.1
workflow-support:3.8
font-awesome-api:5.15.2-2
pipeline-milestone-step:1.3.2
git:4.6.0
plain-credentials:1.7
resource-disposer:0.15
jackson2-api:2.12.1
jquery3-api:3.5.1-3
gradle:1.36
credentials:2.3.15
docker-workflow:1.26
workflow-scm-step:2.12
display-url-api:2.3.4
bootstrap4-api:4.6.0-2
antisamy-markup-formatter:2.1
command-launcher:1.5
pipeline-stage-tags-metadata:1.8.4
snakeyaml-api:1.27.0
pipeline-stage-view:2.19
script-security:1.76
okhttp-api:3.14.9
pipeline-stage-step:2.5
workflow-step-api:2.23
timestamper:1.11.8
pipeline-github-lib:1.0
token-macro:2.13
pam-auth:1.6
workflow-cps-global-lib:2.18
ws-cleanup:0.39
pipeline-model-definition:1.8.4
workflow-aggregator:2.6
jsch:0.1.55.2
matrix-auth:2.6.5
ssh-credentials:1.18.1
ant:1.11
jjwt-api:0.11.2-9.c8b45b8bb173
momentjs:1.1.1
trilead-api:1.0.13
durable-task:1.35
workflow-job:2.40
git-server:1.9
ssh-slaves:1.31.5
plugin-util-api:2.0.0
git-client:3.6.0
lockable-resources:2.10
checks-api:1.5.0
pipeline-input-step:2.12
cloudbees-folder:6.15
pipeline-build-step:2.13
popper-api:1.16.1-2
pipeline-graph-analysis:1.10
matrix-project:1.18
workflow-api:2.41
github-branch-source:2.9.7
workflow-basic-steps:2.23
apache-httpcomponents-client-4-api:4.5.13-1.0
workflow-multibranch:2.22
workflow-cps:2.90
ldap:1.26
build-timeout:1.20
echarts-api:5.0.1-1
pipeline-model-extensions:1.8.4
structs:1.22
junit:1.48
docker-java-api:3.1.5.2
docker-plugin:1.2.2
workflow-durable-task-step:2.38
credentials-binding:1.24
jdk-tool:1.5
bouncycastle-api:2.20
docker-commons:1.17
github-api:1.123
authentication-tokens:1.4
email-ext:2.82
branch-api:2.6.2
pipeline-rest-api:2.19
ace-editor:1.1
handlebars:1.1.1
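With the four files above in place, the stack can be built and brought up in one go (a sketch, run from the project root):
docker-compose build             # bakes the docker client and the plugins into the jenkins image
docker-compose up -d             # starts nginx, jenkins and the dind service
docker-compose logs -f jenkins   # the initial admin password shows up in these logs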
After the initial jenkins setup, create the X.509 Client Certificate Server Credentials, then configure the Docker Cloud with the docker service using tcp.
Note: you can get the client cert, client key and server CA cert for creating the X.509 Client Certificate Server Credentials with the commands below:
docker exec docker cat /certs/client/key.pem
docker exec docker cat /certs/client/cert.pem
docker exec docker cat /certs/server/ca.pem
socket
version: '3.7'
services:
  nginx:
    image: 'nginx:1.19'
    container_name: 'nginx'
    restart: 'always'
    depends_on:
      - 'jenkins'
    ports:
      - '80:80'
    volumes:
      - 'jenkins:/var/jenkins_home'
      - './etc/nginx/conf.d/default.conf:/etc/nginx/conf.d/default.conf'
  jenkins:
    build:
      context: '.'
    container_name: 'jenkins'
    restart: 'always'
    expose:
      - '50000'
      - '8080'
    volumes:
      - 'jenkins:/var/jenkins_home'
      - 'socket:/var/run'
  docker:
    image: 'docker:20.10.5-dind'
    container_name: 'docker'
    privileged: true
    volumes:
      - 'socket:/var/run'
volumes:
  jenkins:
  socket:
Note: the docker socket is shared between the docker and the jenkins containers in the socket volume.
By default the docker socket is owned by root:root, so the jenkins user cannot connect to the shared socket. You can change the socket's group ownership to the GID of the jenkins user: docker exec docker chown 0:1000 /var/run/docker.sock.
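A quick way to verify the ownership change and the connection (a sketch; the jenkins image runs as the jenkins user, so docker exec picks that user up by default):
docker exec docker ls -l /var/run/docker.sock   # group should now be 1000
docker exec jenkins docker ps                   # talks to the dind daemon over the shared socket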
After the initial jenkins setup, configure the Docker Cloud with the docker service using the shared unix socket.

Related

Nginx docker container exits with error “fopen:No such file or directory:fopen('/etc/nginx/ssl/live/test.example.dev/fullchain.pem'” in Ubuntu 20.04

Okay, so I'm learning Docker and I am trying to deploy a test app on a subdomain (whose domain was bought from another provider) which points to my server. The server already has a non-dockerized Nginx setup which serves a couple of other non-dockerized apps perfectly, which means Nginx is already using ports 80 and 443. It's also worth mentioning that the subdomain's main domain (example.dev) has a non-dockerized app with an active SSL cert from Let's Encrypt already running on the server, and the subdomain (test.example.dev) somehow shows the Nginx default page when visited. That is my server situation. Now let me explain what happens with Nginx and Certbot in a dockerized app.
The app is using 4 images to create 4 containers: Nodejs, Mongodb, Nginx and Certbot (for SSL). Before adding Certbot, I could perfectly access the app with :. But now I need to attach that subdomain (test.example.dev) to my app with Let's Encrypt SSL certificates.
So after the build is done with Docker Compose, I see that Nginx and Certbot have exited with errors.
This is my nginx/default.conf file:
server {
  listen 80;
  listen [::]:80;
  server_name test.example.dev;
  server_tokens off;

  location /.well-known/acme-challenge/ {
    root /var/www/certbot;
  }

  location / {
    return 301 https://test.example.dev$request_uri;
  }
}

server {
  listen 443 default_server ssl http2;
  listen [::]:443 ssl http2;
  server_name test.example.dev;

  ssl_certificate /etc/nginx/ssl/live/test.example.dev/fullchain.pem;
  ssl_certificate_key /etc/nginx/ssl/live/test.example.dev/privkey.pem;

  location /api {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://practice-app:3050;
    proxy_redirect off;
  }
}
And here’s my docker-compose.yml file:
version: '3'
services:
  practice-app:
    build:
      context: .
      args:
        NODE_ENV: production
    environment:
      - NODE_ENV=production
    command: node index.js
    depends_on:
      - mongo
  nginx:
    image: nginx:stable-alpine
    ports:
      - "4088:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./certbot/www:/var/www/certbot/:ro
      - ./certbot/conf/:/etc/nginx/ssl/:ro
  certbot:
    image: certbot/certbot:latest
    volumes:
      - ./certbot/www/:/var/www/certbot/:rw
      - ./certbot/conf/:/etc/letsencrypt/:rw
    depends_on:
      - nginx
  mongo:
    image: mongo:4.4.6
    environment:
      - MONGO_INITDB_ROOT_USERNAME=test
      - MONGO_INITDB_ROOT_PASSWORD=test
    volumes:
      - mongo-db:/data/db
volumes:
  mongo-db:
Nginx logs says:
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/01/31 13:42:28 [emerg] 1#1: cannot load certificate "/etc/nginx/ssl/live/test.example.dev/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/nginx/ssl/live/test.example.dev/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
nginx: [emerg] cannot load certificate "/etc/nginx/ssl/live/test.example.dev/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/nginx/ssl/live/test.example.dev/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
And Certbot logs says:
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Certbot doesn't know how to automatically configure the web server on this system. However, it can still get a certificate for you. Please run "certbot certonly" to do so. You'll need to manually configure your web server to use the resulting certificate.
But after adding the following code:
command: certonly --webroot -w /var/www/certbot --force-renewal --email example@gmail.com -d test.example.dev --agree-tos
under certbot service, the log changed to this:
[17:00] [server1.com test] # docker logs test_certbot_1
Requesting a certificate for test.example.dev
Certbot failed to authenticate some domains (authenticator: webroot). The Certificate Authority reported these problems:
Domain: test.example.dev
Type: unauthorized
Detail: Invalid response from http://test.example.dev/.well-known/acme-challenge/HCFXwB1BXb-provr8lr6mJCDG9LRoGbVV0e9BWiiwAo [63.250.33.76]: "<html>\r\n<head><title>404 Not Found</title></head>\r\n<body>\r\n<center><h1>404 Not Found</h1></center>\r\n<hr><center>nginx</center>\r\n"
Hint: The Certificate Authority failed to download the temporary challenge files created by Certbot. Ensure that the listed domains serve their content from the provided --webroot-path/-w and that files created there can be downloaded from the internet.
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Some challenges have failed.
Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /var/log/letsencrypt/letsencrypt.log or re-run Certbot with -v for more details.
What am I doing wrong here? Please give me a beginner friendly solution as I am new to DevOps.
You have some mistakes in your docker-compose file: your nginx service should be linked to practice-app, and your practice app should open port 3050, as shown here.
version: '3'
services:
  practice-app:
    build:
      context: .
      args:
        NODE_ENV: production
    environment:
      - NODE_ENV=production
    command: node index.js
    ports:
      - "3050:3050"
    depends_on:
      - mongo
  nginx:
    image: nginx:stable-alpine
    ports:
      - "4088:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./certbot/www:/var/www/certbot/:ro
      - ./certbot/conf/:/etc/nginx/ssl/:ro
    links:
      - practice-app
  certbot:
    image: certbot/certbot:latest
    volumes:
      - ./certbot/www/:/var/www/certbot/:rw
      - ./certbot/conf/:/etc/letsencrypt/:rw
    depends_on:
      - nginx
  mongo:
    image: mongo:4.4.6
    environment:
      - MONGO_INITDB_ROOT_USERNAME=test
      - MONGO_INITDB_ROOT_PASSWORD=test
    volumes:
      - mongo-db:/data/db
volumes:
  mongo-db:
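Once the corrected stack is up, you can verify the ACME webroot wiring before re-running certbot (a sketch with a hypothetical test file; note that Let's Encrypt itself connects to port 80 of the domain, not to the published 4088):
mkdir -p ./certbot/www/.well-known/acme-challenge
echo ok > ./certbot/www/.well-known/acme-challenge/test
curl http://localhost:4088/.well-known/acme-challenge/test   # should print "ok"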

Docker Nginx on a non-docker host application

I have many docker containers which pass through the docker nginx combo (docker-compose.yml outlined below) and they work very well. I want the dockerized nginx to do the same for a non-docker app that is running on localhost:8080; that is, I want the nginx container to route connections for example.com to 127.0.0.1:8080, where 127.0.0.1:8080 is run by a non-docker app (code-server, to be specific, but that shouldn't matter).
version: '3'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /apps/proxy/conf:/etc/nginx/conf.d
      - /apps/proxy/vhost:/etc/nginx/vhost.d
      - /apps/proxy/html:/usr/share/nginx/html
      - /apps/proxy/dhparam:/etc/nginx/dhparam
      - /apps/proxy/certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: always
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-proxy-le
    depends_on:
      - nginx-proxy
    volumes:
      - /apps/proxy/vhost:/etc/nginx/vhost.d
      - /apps/proxy/html:/usr/share/nginx/html
      - /apps/proxy/dhparam:/etc/nginx/dhparam:ro
      - /apps/proxy/certs:/etc/nginx/certs
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
    restart: always
networks:
  default:
    external:
      name: nginx-proxy
and it's running well with docker containers; the moment I include "nginx-proxy" so that it can detect them, it works. Fantastic tool. But I can't simply paste something like this into default.conf (nginx's conf):
server {
  listen 80;
  listen [::]:80;
  server_name example.com;

  location / {
    proxy_pass http://localhost:8080/;
    proxy_set_header Host $host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection upgrade;
    proxy_set_header Accept-Encoding gzip;
  }
}
My guess is that what is "localhost" for me is not "localhost" for nginx, given that it is inside a docker container (I'm new to this docker stuff, so I might be talking shit). I saw the issue where they mentioned many approaches, none of which worked for me. In particular I tried running a dummy docker as CWempe suggested:
docker run -d \
-e VIRTUAL_HOST=foo.bar.com \
-e VIRTUAL_PORT=8080 \
-e UPSTREAM_NAME=webserver.local \
--rm \
cwempe/docker-dummy:latest
This didn't work; nginx didn't even detect it, which made me think it's probably because it's not on the nginx network, so I added that (and turned it into docker-compose for convenience):
version: '3.3'
services:
  docker-dummy:
    environment:
      - VIRTUAL_HOST=example.com
      - VIRTUAL_PORT=8080
      - UPSTREAM_NAME=127.0.0.1
    image: 'cwempe/docker-dummy:latest'
networks:
  default:
    external:
      name: nginx-proxy
Then looking at default.conf I get:
# mydomain.com
upstream mydomain.com {
  ## Can be connected with "nginx-proxy" network
  # code-server_docker-dummy_1
  server 172.25.0.7 down;
}

server {
  server_name mydomain.com;
  listen 80 ;
  access_log /var/log/nginx/access.log vhost;
  include /etc/nginx/vhost.d/default;
  location / {
    proxy_pass http://example.com;
  }
}

server {
  server_name example.com;
  listen 443 ssl http2 ;
  access_log /var/log/nginx/access.log vhost;
  return 500;
  ssl_certificate /etc/nginx/certs/default.crt;
  ssl_certificate_key /etc/nginx/certs/default.key;
}
So sure, it has seen it, but it also believes it is down, and it doesn't include the VIRTUAL_PORT at all; obviously the 172.25.0.7 IP doesn't make sense to me either. Changing 172.25.0.7 -> 127.0.0.1:8080 does nothing. Any idea how I can remedy this? Thank you for your input in advance.
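For what it's worth, the "localhost" guess above is easy to confirm (a sketch; busybox wget ships in the alpine image):
curl -I http://localhost:8080                                 # on the host: the app answers
docker run --rm alpine wget -qO- -T 3 http://localhost:8080   # inside a container, localhost is the container itself, so this fails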

Docker compose of nginx, express, letsencrypt SSL get 502 Bad gateway

I am trying to find a way to publish nginx, express, and letsencrypt's SSL all together using docker-compose. There are many documents about this, so I referenced these and tried to make my own configuration. I succeeded in configuring nginx + SSL from this: https://medium.com/@pentacent/nginx-and-lets-encrypt-with-docker-in-less-than-5-minutes-b4b8a60d3a71
So now I want to put a sample nodejs express app into the nginx + SSL docker-compose. But I don't know why, I get 502 Bad Gateway from nginx rather than express's initial page.
I am testing this app with a spare domain of mine, on an AWS EC2 Ubuntu 16 instance. I think there is no problem with the domain DNS and security rule settings; ports 80, 443 and 3000 are all open already, and when I tested it without the express app it showed the nginx default page fine.
nginx conf in /etc/nginx/conf.d
server {
  listen 80;
  server_name example.com;
  return 301 https://$server_name$request_uri;
}

server {
  listen 443 ssl;
  listen [::]:443 ssl;
  server_name example.com;
  server_tokens off;

  location / {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
  }

  ssl_certificate /etc/letsencrypt/live/sendpi.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/sendpi.com/privkey.pem;
  include /etc/letsencrypt/options-ssl-nginx.conf;
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
docker-compose.yml
version: '3'
services:
  app:
    container_name: express
    restart: always
    build: .
    ports:
      - '3000:3000'
  nginx:
    container_name: nginx
    image: nginx:1.15-alpine
    restart: unless-stopped
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    ports:
      - "80:80"
      - "443:443"
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
Dockerfile of express
FROM node:12.2-slim
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
I think SSL works fine, but there are some problems between the express app and nginx. How can I fix this?
proxy_pass http://localhost:3000
is proxying the request to port 3000 on the container that is running nginx. What you instead want is to connect to port 3000 of the container running express. For that, we need to do two things.
First, we make the express container visible to nginx container at a predefined hostname. We can use links in docker-compose.
nginx:
  links:
    - "app:expressapp"
Alternatively, since links are now considered a legacy feature, a better way is to use a user defined network. Define a network of your own with
docker network create my-network
and then connect your containers to that network in compose file by adding the following at the top level:
networks:
  default:
    external:
      name: my-network
All the services connected to a user defined network can access each other via name without explicitly setting up links.
Then in the nginx.conf, we proxy to the express container using that hostname:
location / {
  proxy_pass http://app:3000;
}
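A quick check that the hostname now resolves and the app answers from inside the nginx container (a sketch; busybox wget is available in the alpine nginx image):
docker-compose exec nginx wget -qO- http://app:3000   # should return the express welcome page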
Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link.
Define networks in your docker-compose.yml and configure your services with the appropriate network:
version: '3'
services:
  app:
    restart: always
    build: .
    networks:
      - backend
    expose:
      - "3000"
  nginx:
    image: nginx:1.15-alpine
    restart: unless-stopped
    depends_on:
      - app
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    networks:
      - frontend
      - backend
    ports:
      - "80:80"
      - "443:443"
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
networks:
  frontend:
  backend:
Note: the app service no longer publishes its ports to the host; it only exposes port 3000 (ref. exposing and publishing ports), so it is only available to services connected to the backend network. The nginx service has a foot in both the backend and frontend networks to accept incoming traffic from the frontend and proxy the connections to the app in the backend (ref. multi-host networking).
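The distinction is easy to confirm from the host (a sketch, assuming the stack above is up):
curl -sI http://localhost:3000 || echo "not published"   # expose alone does not publish the port on the host
docker-compose exec nginx wget -qO- http://app:3000      # but nginx reaches the app over the backend network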
With user-defined networks you can resolve the service name:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
  worker_connections 1024;
}

http {
  upstream app {
    server app:3000 max_fails=3;
  }

  server {
    listen 80;
    server_name example.com;
    return 301 https://$server_name$request_uri;
  }

  server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.com;
    server_tokens off;

    location / {
      proxy_pass http://app;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection 'upgrade';
      proxy_set_header Host $host;
      proxy_cache_bypass $http_upgrade;
    }

    ssl_certificate /etc/letsencrypt/live/sendpi.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/sendpi.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
  }
}
Removing the container_name from your services makes it possible to scale them: docker-compose up -d --scale nginx=1 --scale app=3; nginx will load balance the traffic round-robin across the 3 app containers.
I think maybe a source of confusion here is the way the "localhost" designation behaves among running services in docker-compose. The way docker-compose orchestrates your containers, each of the containers understands itself to be "localhost", so "localhost" does not refer to the host machine (and if I'm not mistaken, there is no way for a container running on the host to access a service exposed on a host port, apart from maybe some security exploits). To demonstrate:
services:
  app:
    container_name: express
    restart: always
    build: .
    ports:
      - '2999:3000' # expose app's port on host's 2999
Rebuild
docker-compose build
docker-compose up
Tell the container running the express app to curl against its own running service on port 3000:
$ docker-compose exec app /bin/bash -c "curl http://localhost:3000"
<!DOCTYPE html>
<html>
<head>
<title>Express</title>
<link rel='stylesheet' href='/stylesheets/style.css' />
</head>
<body>
<h1>Express</h1>
<p>Welcome to Express</p>
</body>
</html>
Tell app to try that same service, which we exposed on port 2999 of the host machine:
$ docker-compose exec app /bin/bash -c "curl http://localhost:2999"
curl: (7) Failed to connect to localhost port 2999: Connection refused
We will of course see this same behavior between running containers as well, so in your setup nginx was trying to proxy its own service running on localhost:3000 (but there wasn't one, as you know).
Tasks
- build a NodeJS app
- add SSL functionality out of the box (that can work automatically)
Solution
https://github.com/evertramos/docker-compose-letsencrypt-nginx-proxy-companion
/{path_to_the_project}/docker-compose.yml
version: '3.7'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy:alpine
    restart: always
    container_name: nginx-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs:ro
      - ./vhost.d:/etc/nginx/vhost.d
      - ./html:/usr/share/nginx/html
      - ./conf.d:/etc/nginx/conf.d
    ports:
      - "443:443"
      - "80:80"
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: letsencrypt
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./certs:/etc/nginx/certs:rw
      - ./vhost.d:/etc/nginx/vhost.d:rw
      - ./html:/usr/share/nginx/html:rw
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
  api:
    container_name: ${APP_NAME}
    build:
      context: .
      dockerfile: Dockerfile
    command: npm start --port ${APP_PORT}
    expose:
      - ${APP_PORT}
    # ports:
    #   - ${APP_PORT}:${APP_PORT}
    restart: always
    environment:
      VIRTUAL_PORT: ${APP_PORT}
      VIRTUAL_HOST: ${DOMAIN}
      LETSENCRYPT_HOST: ${DOMAIN}
      LETSENCRYPT_EMAIL: ${LETSENCRYPT_EMAIL}
      NODE_ENV: production
      PORT: ${APP_PORT}
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs:ro
/{path_to_the_project}/.env
APP_NAME=best_api
APP_PORT=3000
DOMAIN=api.site.com
LETSENCRYPT_EMAIL=myemail@gmail.com
Do not forget to point the DOMAIN to your server before you run the container there.
How does it work?
Just run docker-compose up --build -d
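To check that the companion actually obtained a certificate (a sketch; the container name and domain are taken from the files above):
docker logs letsencrypt       # the companion logs its ACME attempts here
curl -I https://api.site.com  # should answer with a valid Let's Encrypt certificate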

NGINX and Docker-Compose: host not found in upstream

I'm trying to get docker-compose to run an NGINX reverse-proxy and I'm running into an issue. I know that what I am attempting appears possible as it is outlined here:
https://dev.to/domysee/setting-up-a-reverse-proxy-with-nginx-and-docker-compose-29jg
and here:
https://www.digitalocean.com/community/tutorials/how-to-secure-a-containerized-node-js-application-with-nginx-let-s-encrypt-and-docker-compose#step-2-%E2%80%94-defining-the-web-server-configuration
My application is very simple - it has a front end and a back end (nextjs and nodejs), which I've put in docker-compose along with an nginx instance.
Here is the docker-compose file:
version: '3'
services:
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
    container_name: nodejs
    restart: unless-stopped
  nextjs:
    build:
      context: ../.
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    container_name: nextjs
    restart: unless-stopped
  webserver:
    image: nginx:mainline-alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - web-root:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
    depends_on:
      - nodejs
      - nextjs
    networks:
      - app-network
volumes:
  certbot-etc:
  certbot-var:
  web-root:
    driver: local
    driver_opts:
      type: none
      device: /
      o: bind
networks:
  app-network:
    driver: bridge
And here is the nginx file:
server {
  listen 80;
  listen [::]:80;

  root /var/www/html;
  index index.html index.htm index.nginx-debian.html;

  server_name patientplatypus.com www.patientplatypus.com localhost;

  location /back {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass http://nodejs:8000;
  }

  location / {
    proxy_pass http://nextjs:3000;
  }

  location ~ /.well-known/acme-challenge {
    allow all;
    root /var/www/html;
  }
}
Both of these are very similar to the DigitalOcean example and I can't think of how they would be different enough to cause errors. I run it with a simple docker-compose up -d --build.
When I go to localhost:80 I get "page could not be found", and here is the result of my docker logs:
patientplatypus:~/Documents/patientplatypus.com/forum/back:10:03:32$docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9c2e4e25e6d9 nginx:mainline-alpine "nginx -g 'daemon of…" 2 minutes ago Restarting (1) 14 seconds ago webserver
213e73495381 back_nodejs "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 0.0.0.0:8000->8000/tcp nodejs
03b6ae8f0ad4 back_nextjs "npm start" 2 minutes ago Up 2 minutes 0.0.0.0:3000->3000/tcp nextjs
patientplatypus:~/Documents/patientplatypus.com/forum/back:10:05:41$docker logs 9c2e4e25e6d9
2019/04/10 15:03:32 [emerg] 1#1: host not found in upstream "nodejs" in /etc/nginx/conf.d/nginx.conf:20
I'm pretty lost as to what could be going wrong. If anyone has any ideas please let me know. Thank you.
EDIT: SEE SOLUTION BELOW
The nginx webserver is on the network app-network, which is a different network from the other two services, which don't have a network defined. When no network is defined, docker-compose creates a default network for them to share.
Either copy the network setting to both of the other services, or remove the network setting from the webserver service.
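You can see which containers are actually attached to a network with docker network inspect (a sketch; compose prefixes the network name with the project directory, back in this case):
docker network ls
docker network inspect back_app-network --format '{{range .Containers}}{{.Name}} {{end}}'
# nginx can only resolve "nodejs" and "nextjs" if all three containers appear here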

Multiple docker-compose sharing network with a not yet known host for nginx

I use multiple docker-compose files:
- one for running postgres and nginx on the same network => this collection of containers is supposed to be always running
- one for each asp core web site (each on a specific port) => these containers are updated through a CI/CD pipeline (VSTS)
Because Nginx needs to know the hostname when defining the upstream, if the asp core container is not running then its hostname is not known, and nginx throws an error on the docker-compose up command:
nginx | 2018/01/04 15:59:17 [emerg] 1#1: host not found in upstream
"webportalstage:5001" in /etc/nginx/nginx.conf:9
nginx | nginx: [emerg] host not found in upstream
"webportalstage:5001" in /etc/nginx/nginx.conf:9
nginx exited with code 1
And obviously if the asp core container is running beforehand, then nginx knows the hostname webportalstage and everything works fine. But the starting sequence is not what I expect.
Is there any solution to start nginx with a not-yet-known hostname in the upstream?
Here is my nginx.conf file :
worker_processes 4;

events { worker_connections 1024; }

http {
  sendfile on;

  upstream webportalstage {
    server webportalstage:5001;
  }

  server {
    listen 80;
    location / {
      proxy_pass http://webportalstage;
      proxy_redirect off;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Host $server_name;
    }
  }
}
And both docker-compose files :
Nginx + Postgres :
version: "3"
services:
proxy:
image: myPrivateRepo:latest
ports:
- "80:80"
container_name: nginx
networks:
aspcore:
aliases:
- nginx
postgres:
image: postgres:latest
environment:
- POSTGRES_PASSWORD=myPWD
- POSTGRES_USER=postgres
ports:
- "5432:5432"
container_name: postgres
networks:
aspcore:
aliases:
- postgres
networks:
aspcore:
driver: bridge
One of my asp core web site :
version: "3"
services:
webportal:
image: myPrivateRepo:latest
environment:
- ASPNETCORE_ENVIRONMENT=Staging
container_name: webportal
networks:
common_aspcore:
aliases:
- webportal
networks:
common_aspcore:
external: true
Well, I use the following hack in similar situations:
location / {
  set $docker_host "webportalstage";
  proxy_pass http://$docker_host:5001;
  ...
}
I'm not sure if it works with upstream, but it probably should.
I know this is not the best solution, but I didn't find a better one.
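For what it's worth, the variable trick works because nginx defers DNS resolution to request time when the target is a variable; it then needs a resolver directive, and on a compose network that is Docker's embedded DNS at 127.0.0.11. You can check what the nginx container can resolve at any moment (a sketch; getent should be available in the Debian-based nginx image):
docker exec nginx getent hosts webportalstage   # empty until the webportal stack is up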
I finally used the extra_hosts feature to define a static IP within my nginx+postgres docker-compose.yml file:
extra_hosts:
  webportalstage: 10.5.0.20
and set the same static IP in my asp core docker-compose file.
It works, but it's not as generic as I would like.
