I am trying to copy a set of files (/etc/nginx/*) from the nginx container to my host (./nginx), but it doesn't work.
Every time I run my docker-compose, the folder on the host (/usr/development/nginx) is empty.
How can I do this?
Environment
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.4 LTS"
Docker Compose
docker-compose version 1.25.5, build 8a1c60f6
Docker
Docker version 19.03.6, build 369ce74a3c
My docker-compose.yml
version: '3.7'
services:
  nginx:
    container_name: 'nginx'
    hostname: nginx
    image: nginx
    ports:
      - "80:80"
      - "443:443"
    restart: unless-stopped
    command: /bin/sh -c "while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g 'daemon off;'"
    volumes:
      - ./nginx:/etc/nginx
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/www:/var/www/certbot
    networks:
      - docker-network
  portainer:
    image: portainer/portainer
    container_name: portainer
    hostname: portainer
    ports:
      - 9000:9000
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./portainer/data:/data
    networks:
      - docker-network
  certbot:
    container_name: 'certbot'
    hostname: certbot
    image: certbot/certbot
    volumes:
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/www:/var/www/certbot
    restart: unless-stopped
    entrypoint: /bin/sh -c "trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;"
    depends_on:
      - nginx
    networks:
      - docker-network
networks:
  docker-network:
    name: docker-network
    driver: bridge
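A bind mount of an empty host directory over /etc/nginx hides the configuration shipped in the image, which is why the host folder stays empty. One common workaround (not part of the original post; the container name nginx-seed below is just a placeholder) is to seed ./nginx from the image once before bringing the stack up:
mkdir -p ./nginx
docker create --name nginx-seed nginx       # create (but don't start) a container from the stock image
docker cp nginx-seed:/etc/nginx/. ./nginx/  # copy the default config out to the host directory
docker rm nginx-seed
docker-compose up -d                        # now the bind mount has content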
Related
I am dockerizing my existing application, but there's a strange issue. When I start my application with
docker-compose up
each service in the docker-compose runs successfully with no issues. But there are some services which I don't always want to run (celery, celery-beat, etc.). For that I run
docker-compose run nginx
The above command should run the nginx, web, and db services as configured in docker-compose.yml, but it only runs web and db, not nginx.
Here's my yml file
docker-compose.yml
version: '3'
services:
  db:
    image: postgres:12
    env_file: .env
    environment:
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    ports:
      - "5431:5432"
    volumes:
      - dbdata:/var/lib/postgresql/data
  nginx:
    image: nginx:1.14
    ports:
      - "443:443"
      - "80:80"
    volumes:
      - ./config/nginx/:/etc/nginx/conf.d
      - ./MyAPP/static:/var/www/MyAPP.me/static/
    depends_on:
      - web
  web:
    restart: always
    build: ./MyAPP
    command: bash -c "
        python manage.py collectstatic --noinput
        && python manage.py makemigrations
        && python manage.py migrate
        && gunicorn --certfile=/etc/certs/localhost.crt --keyfile=/etc/certs/localhost.key MyAPP.wsgi:application --bind 0.0.0.0:443 --reload"
    expose:
      - "443"
    depends_on:
      - db
    env_file:
      - .env
    volumes:
      - ./MyAPP:/opt/MyAPP
      - ./config/nginx/certs/:/etc/certs
      - ./MyAPP/static:/var/www/MyAPP.me/static/
  broker:
    image: redis:alpine
    expose:
      - "6379"
  celery:
    build: ./MyAPP
    command: celery -A MyAPP worker -l info
    env_file:
      - .env
    volumes:
      - ./MyAPP:/opt/MyAPP
    depends_on:
      - broker
      - db
  celery-beat:
    build: ./MyAPP
    command: celery -A MyAPP beat -l info
    env_file:
      - .env
    volumes:
      - ./MyAPP:/opt/MyAPP
    depends_on:
      - broker
      - db
  comment-classifier:
    image: codait/max-toxic-comment-classifier
volumes:
  dbdata:
TL;DR: docker-compose up nginx
There's a distinct difference between docker-compose up and docker-compose run. The first builds, (re)creates, starts, and attaches to the containers for a service. The second runs a one-time command against a service. When you do docker-compose run nginx, it starts db and web because nginx depends on them, then runs a single command in a one-off nginx container and exits. So you have to use docker-compose up nginx to get what you want.
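For concreteness, here is how the two invocations differ against the compose file above (the one-off command nginx -t is only an example):
# (Re)create and start the nginx service plus its dependencies (web, db) and keep them running:
docker-compose up nginx

# Start the dependencies, then run a single one-off command in a throwaway nginx container and exit:
docker-compose run --rm nginx nginx -t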
I am following the DigitalOcean tutorial to install WordPress via Docker:
https://www.digitalocean.com/community/tutorials/how-to-install-wordpress-with-docker-compose
It says that if the certbot exit status is anything other than 0 something went wrong, and that is what I get below; there are no log files where it says to look. Newish to Docker, thanks for helping, all!
Edit: I'm noting that none of the volumes from this docker-compose were created on the host.
Name Command State Ports
-------------------------------------------------------------------------
certbot certbot certonly --webroot ... Exit 1
db docker-entrypoint.sh --def ... Up 3306/tcp, 33060/tcp
webserver nginx -g daemon off; Up 0.0.0.0:80->80/tcp
wordpress docker-entrypoint.sh php-fpm Up 9000/tcp
Docker-compose.yml here
version: '3'
services:
  db:
    image: mysql:8.0
    container_name: db
    restart: unless-stopped
    env_file: .env
    environment:
      - MYSQL_DATABASE=wordpress
    volumes:
      - dbdata:/var/lib/mysql
    command: '--default-authentication-plugin=mysql_native_password'
    networks:
      - app-network
  wordpress:
    depends_on:
      - db
    image: wordpress:5.1.1-fpm-alpine
    container_name: wordpress
    restart: unless-stopped
    env_file: .env
    environment:
      - WORDPRESS_DB_HOST=db:3306
      - WORDPRESS_DB_USER=$MYSQL_USER
      - WORDPRESS_DB_PASSWORD=$MYSQL_PASSWORD
      - WORDPRESS_DB_NAME=wordpress
    volumes:
      - wordpress:/var/www/html
    networks:
      - app-network
  webserver:
    depends_on:
      - wordpress
    image: nginx:1.15.12-alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - wordpress:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
      - certbot-etc:/etc/letsencrypt
    networks:
      - app-network
  certbot:
    depends_on:
      - webserver
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - wordpress:/var/www/html
    command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --staging -d example.com -d www.example.com
volumes:
  certbot-etc:
  wordpress:
  dbdata:
networks:
  app-network:
    driver: bridge
The volumes being created here are named volumes.
To check named volumes, run:
docker volume ls
Also, per the comment above, you could check the certbot logs with:
docker-compose logs certbot
The volumes and container logs won't show up under the plain docker CLI unless you use the specific container and volume names (Compose prefixes volume names with the project name), which you can find with:
docker container ls and docker volume ls
or use the docker-compose equivalents (docker-compose ps, docker-compose logs) as above.
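Concretely (the listed names depend on your Compose project directory, so treat them as placeholders):
# List the named volumes; Compose prefixes them with the project name,
# e.g. <project>_certbot-etc, <project>_wordpress, <project>_dbdata
docker volume ls

# See why certbot exited with code 1
docker-compose logs certbot
# or, since container_name is set explicitly in the compose file:
docker logs certbot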
I have several Flask applications which I want to run on a server as separate Docker containers. On the server I already have several websites running behind a reverse proxy with the letsencrypt-nginx-proxy-companion. Unfortunately I can't get the containers to run. I think it is because of the port mapping: when I start the containers on port 80, I get the error message "[ERROR] Can't connect to ('', 80)" from gunicorn. On all other ports it starts successfully, but then I can't access it from outside.
What am I doing wrong?
docker-compose.yml
version: '3'
services:
  db:
    image: "mysql/mysql-server:5.7"
    env_file: .env-mysql
    restart: always
  app:
    build: .
    env_file: .env
    expose:
      - "8001"
    environment:
      - VIRTUAL_HOST:example.com
      - VIRTUAL_PORT:'8001'
      - LETSENCRYPT_HOST:example.com
      - LETSENCRYPT_EMAIL:foo@example.com
    links:
      - db:dbserver
    restart: always
networks:
  default:
    external:
      name: nginx-proxy
Dockerfile
FROM python:3.6-alpine
ARG CONTAINER_USER='flask-user'
ENV FLASK_APP run.py
ENV FLASK_CONFIG docker
RUN adduser -D ${CONTAINER_USER}
USER ${CONTAINER_USER}
WORKDIR /home/${CONTAINER_USER}
COPY requirements requirements
RUN python -m venv venv
RUN venv/bin/pip install -r requirements/docker.txt
COPY app app
COPY migrations migrations
COPY run.py config.py entrypoint.sh ./
# runtime configuration
EXPOSE 8001
ENTRYPOINT ["./entrypoint.sh"]
entrypoint.sh
#!/bin/sh
source venv/bin/activate
flask deploy
exec gunicorn -b :8001 --access-logfile - --error-logfile - run:app
reverse-proxy/docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    container_name: nginx
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /srv/www/nginx-proxy/conf.d:/etc/nginx/conf.d
      - /srv/www/nginx-proxy/vhost.d:/etc/nginx/vhost.d
      - /srv/www/nginx-proxy/html:/usr/share/nginx/html
      - /srv/www/nginx-proxy/certs:/etc/nginx/certs:ro
  nginx-gen:
    image: jwilder/docker-gen
    command: -notify-sighup nginx -watch -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
    container_name: nginx-gen
    restart: always
    volumes:
      - /srv/www/nginx-proxy/conf.d:/etc/nginx/conf.d
      - /srv/www/nginx-proxy/vhost.d:/etc/nginx/vhost.d
      - /srv/www/nginx-proxy/html:/usr/share/nginx/html
      - /srv/www/nginx-proxy/certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /srv/www/nginx-proxy/nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro
  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    restart: always
    volumes:
      - /srv/www/nginx-proxy/conf.d:/etc/nginx/conf.d
      - /srv/www/nginx-proxy/vhost.d:/etc/nginx/vhost.d
      - /srv/www/nginx-proxy/html:/usr/share/nginx/html
      - /srv/www/nginx-proxy/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      NGINX_DOCKER_GEN_CONTAINER: "nginx-gen"
      NGINX_PROXY_CONTAINER: "nginx"
      DEBUG: "true"
networks:
  default:
    external:
      name: nginx-proxy
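One generic check worth doing (not from the original post): confirm that VIRTUAL_HOST, VIRTUAL_PORT and the LETSENCRYPT_* variables actually end up in the running app container, since nginx-proxy routes purely on those values, and keep gunicorn listening on 8001 rather than 80 (only the proxy container publishes 80/443):
# Verify the proxy-related variables are really set in the container's environment
docker-compose exec app env | grep -E 'VIRTUAL|LETSENCRYPT'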
I'm trying to set up a Spark development environment with Zeppelin on Docker, but I'm having trouble connecting the Zeppelin and Spark containers.
I'm deploying a Docker stack with the following docker-compose:
version: '3'
services:
  spark-master:
    image: gettyimages/spark
    command: bin/spark-class org.apache.spark.deploy.master.Master -h spark-master
    hostname: spark-master
    environment:
      SPARK_CONF_DIR: /conf
      SPARK_PUBLIC_DNS: 10.129.34.90
    volumes:
      - spark-master-volume:/conf
      - spark-master-volume:/tmp/data
    ports:
      - 8000:8080
  spark-worker:
    image: gettyimages/spark
    command: bin/spark-class org.apache.spark.deploy.worker.Worker spark://spark-master:7077
    hostname: spark-worker
    environment:
      SPARK_MASTER_URL: spark-master:7077
      SPARK_CONF_DIR: /conf
      SPARK_PUBLIC_DNS: 10.129.34.90
      SPARK_WORKER_CORES: 2
      SPARK_WORKER_MEMORY: 2g
    volumes:
      - spark-worker-volume:/conf
      - spark-worker-volume:/tmp/data
    ports:
      - "8081-8100:8081-8100"
  zeppelin:
    image: apache/zeppelin:0.8.0
    ports:
      - 8080:8080
      - 8443:8443
    volumes:
      - spark-master-volume:/opt/zeppelin/logs
      - spark-master-volume:/opt/zeppelin/notebook
    environment:
      MASTER: "spark://spark-master:7077"
      SPARK_MASTER: "spark://spark-master:7077"
      SPARK_HOME: /usr/spark-2.4.1
    depends_on:
      - spark-master
volumes:
  spark-master-volume:
    driver: local
  spark-worker-volume:
    driver: local
It builds normally, but when I try to run Spark on Zeppelin, it throws:
java.lang.RuntimeException: /zeppelin/bin/interpreter.sh: line 231: /usr/spark-2.4.1/bin/spark-submit: No such file or directory
I think the problem is in the volumes, but I can't figure out how to set them up correctly.
You need to install Spark in your Zeppelin Docker image so that spark-submit is available, and update the Spark interpreter config to point it at your Spark cluster:
zeppelin_notebook_server:
  container_name: zeppelin_notebook_server
  build:
    context: zeppelin/
  restart: unless-stopped
  volumes:
    - ./zeppelin/config/interpreter.json:/zeppelin/conf/interpreter.json:rw
    - ./zeppelin/notebooks:/zeppelin/notebook
    - ../sample-data:/sample-data:ro
  ports:
    - "8085:8080"
  networks:
    - general
  labels:
    container_group: "notebook"
spark_base:
  container_name: spark-base
  build:
    context: spark/base
  image: spark-base:latest
spark_master:
  container_name: spark-master
  build:
    context: spark/master/
  networks:
    - general
  hostname: spark-master
  ports:
    - "3030:8080"
    - "7077:7077"
  environment:
    - "SPARK_LOCAL_IP=spark-master"
  depends_on:
    - spark_base
  volumes:
    - ./spark/apps/jars:/opt/spark-apps
    - ./spark/apps/data:/opt/spark-data
    - ../sample-data:/sample-data:ro
spark_worker_1:
  container_name: spark-worker-1
  build:
    context: spark/worker/
  networks:
    - general
  hostname: spark-worker-1
  ports:
    - "3031:8081"
  env_file: spark/spark-worker-env.sh
  environment:
    - "SPARK_LOCAL_IP=spark-worker-1"
  depends_on:
    - spark_master
  volumes:
    - ./spark/apps/jars:/opt/spark-apps
    - ./spark/apps/data:/opt/spark-data
    - ../sample-data:/sample-data:ro
spark_worker_2:
  container_name: spark-worker-2
  build:
    context: spark/worker/
  networks:
    - general
  hostname: spark-worker-2
  ports:
    - "3032:8082"
  env_file: spark/spark-worker-env.sh
  environment:
    - "SPARK_LOCAL_IP=spark-worker-2"
  depends_on:
    - spark_master
  volumes:
    - ./spark/apps/jars:/opt/spark-apps
    - ./spark/apps/data:/opt/spark-data
    - ../sample-data:/sample-data:ro
Zeppelin Dockerfile:
FROM "apache/zeppelin:0.8.1"
RUN wget http://apache.mirror.iphh.net/spark/spark-2.4.3/spark-2.4.3-bin-hadoop2.7.tgz --progress=bar:force && \
    tar xvf spark-2.4.3-bin-hadoop2.7.tgz && \
    mkdir -p /usr/local/spark && \
    mv spark-2.4.3-bin-hadoop2.7/* /usr/local/spark/. && \
    mkdir -p /sample-data
ENV SPARK_HOME "/usr/local/spark/"
Make sure your Zeppelin Spark interpreter config points at the same cluster, i.e. the master property set to spark://spark-master:7077 and SPARK_HOME to the path above.
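If you would rather not hand-edit interpreter.json (its exact schema varies between Zeppelin releases), the same two settings can also be supplied via conf/zeppelin-env.sh; a minimal sketch, assuming the layout of the Dockerfile above:
# zeppelin-env.sh (placed alongside interpreter.json in /zeppelin/conf/)
export MASTER=spark://spark-master:7077   # cluster the %spark interpreter submits to
export SPARK_HOME=/usr/local/spark        # where the Dockerfile above unpacked Spark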
Build a Dockerfile with the following content:
FROM gettyimages/spark
ENV APACHE_SPARK_VERSION 2.4.1
ENV APACHE_HADOOP_VERSION 2.8.0
ENV ZEPPELIN_VERSION 0.8.1
RUN apt-get update
RUN set -x \
    && curl -fSL "http://www-eu.apache.org/dist/zeppelin/zeppelin-0.8.1/zeppelin-0.8.1-bin-all.tgz" -o /tmp/zeppelin.tgz \
    && tar -xzvf /tmp/zeppelin.tgz -C /opt/ \
    && mv /opt/zeppelin-* /opt/zeppelin \
    && rm /tmp/zeppelin.tgz
ENV SPARK_SUBMIT_OPTIONS "--jars /opt/zeppelin/sansa-examples-spark-2016-12.jar"
ENV SPARK_HOME "/usr/spark-2.4.1/"
WORKDIR /opt/zeppelin
CMD ["/opt/zeppelin/bin/zeppelin.sh"]
and then define your service within your docker-compose.yml file, for example:
version: '3'
services:
  zeppelin:
    build: ./zeppelin
    image: zeppelin:0.8.1-hadoop-2.8.0-spark-2.4.1
    ...
Finally, use docker-compose -f docker-compose.yml build to build the customised image before docker stack deploy
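Put together, the build-then-deploy sequence looks roughly like this (the stack name my_stack is only a placeholder):
# docker stack deploy does not build images, so build the custom Zeppelin image first
docker-compose -f docker-compose.yml build
# then deploy the stack; the image must already exist locally or be pushed to a registry
docker stack deploy -c docker-compose.yml my_stack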
Below is my docker-compose.yml file with three services.
version: '3.0'
services:
  mongodb:
    build: ./mongodb
    image: mongo:1.0
    container_name: mongodb
    ports:
      - "27017:27017"
    networks:
      - appNetwork
  node-service:
    build: ./node-service
    image: node-service-live:1.0
    container_name: node-service
    command:
      sh -c "dockerize -wait tcp://mongodb:27017 -timeout 1m && npm start"
    expose:
      - "3031"
    ports:
      - "3031:3031"
    networks:
      - appNetwork
  angular-app:
    build: ./angular-app
    image: angular-app-live:1.0
    container_name: angular-app
    command: ng serve --host 0.0.0.0 --port 4201 --disable-host-check
    ports:
      - "4201:4201"
    networks:
      - appNetwork
networks:
  appNetwork:
    external: true
When I execute docker-compose up, node-service is able to connect to the mongodb service using this URL: mongodb://mongodb:27017/DBName?authMechanism=DEFAULT.
But the angular-app service can't communicate with node-service despite being on the same network. In angular-app I am using the following URL (http://node-service:3031) to connect to node-service.
What am I missing?
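A quick way to narrow this down (a generic check, not from the original post) is to test the service name from inside the Docker network, since a URL like http://node-service:3031 only resolves there, while code served to the browser has to use the host-published port instead:
# From another container on the same external network, the service name should resolve:
docker run --rm --network appNetwork busybox wget -qO- http://node-service:3031
# From the browser (outside Docker's DNS), use the published port on the host,
# e.g. http://localhost:3031 or http://<host-ip>:3031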