Here is the Compose file that I am using. It consists of one Postgres DB container and one Redis container. On top of them I have a Gunicorn/Django Python web server (Docker image: python-3.5). There is one nginx proxy server which is linked to the web container.
version: '2'
services:
  nginx:
    build: ./nginx/
    ports:
      - 80:80
    volumes:
      - /usr/src/app/static
    links:
      - web
  web:
    build: ./web
    ports:
      - 8000:8000
    volumes:
      - /usr/src/app/static
    env_file: .env
    environment:
      DEBUG: 'true'
      SECRET_KEY: 5(15ds+i2+%ik6z&!yer+ga9m=e%jcqiz_5wszg)r-z!2--b2d
      DB_NAME: postgres
      DB_USER: postgres
      DB_PASS: postgres
      DB_SERVICE: postgres
      DB_PORT: 5432
    command: /usr/local/bin/gunicorn messanger.wsgi:application -w 2 -b :8000
    links:
      - redis
      - postgres
  postgres:
    image: postgres
    ports:
      - 5432
    volumes:
      - pgdata:/var/lib/postgresql/data/
  redis:
    image: redis
    ports:
      - 6379
    volumes:
      - redisdata:/data
I am facing an issue while starting the Docker containers via
convox start -f docker-compose.yml
The problem is that ideally the postgres/redis servers should start first, then the web server, and the nginx server last, according to their link order. But actually the web server starts first, and it fails because the DB/cache is not up. See the error logs below:
web │ running: docker run -i --rm --name python-app-web -e DB_NAME=postgres -e DB_USER=postgres -e DB_PASS=postgres -e DB_SERVICE=postgres -e DB_PORT=5432 -e DEBUG=true -e SECRET_KEY=5(15ds+i2+%ik6z&!yer+ga9m=e%jcqiz_5wszg)r-z!2--b2d --add-host redis: -e REDIS_SCHEME=tcp -e REDIS_HOST= -e REDIS_PORT=
-e REDIS_PATH= -e REDIS_USERNAME= -e REDIS_PASSWORD= -e REDIS_URL=tcp://:%0A --add-host postgres: -e POSTGRES_SCHEME=tcp -e POSTGRES_HOST= -e POSTGRES_PORT=
-e POSTGRES_PATH= -e POSTGRES_USERNAME= -e POSTGRES_PASSWORD= -e POSTGRES_URL=tcp://:%0A -p 0:8000 -v /Users/gaurav/.convox/volumes/python-app/web/usr/src/app/static:/usr/src/app/static python-app/web sh -c /usr/local/bin/gunicorn messanger.wsgi:application -w 2 -b :8000
web │ invalid argument "redis:" for --add-host: invalid IP address in add-host: ""
web │ See 'docker run --help'.
postgres │ running: docker run -i --rm --name python-app-postgres -p 5432 -v pgdata:/var/lib/postgresql/data/ python-app/postgres
redis │ running: docker run -i --rm --name python-app-redis -p 6379 -v redisdata:/data python-app/redis
But it works fine when I completely remove the nginx server; in that case the web server is kicked off after postgres/redis.
I am not able to understand the actual error.
The complete code can be found here on GitHub.
[Note] I found something very strange that I never expected. The problem was with the name of the container. If I rename the 'nginx' container to anything like server/webserver/xyz/mynginx/nginxxxx etc., it all works as expected and in order. But it does not work with the name 'nginx'! Strange, isn't it?
Just add the depends_on directive to your docker-compose.yml:
version: '2'
services:
  nginx:
    build: ./nginx/
    depends_on:
      - web
    ports:
      - 80:80
    volumes:
      - /usr/src/app/static
  web:
    build: ./web
    depends_on:
      - postgres
      - redis
    ports:
      - 8000:8000
    volumes:
      - /usr/src/app/static
    env_file: .env
    environment:
      DEBUG: 'true'
      SECRET_KEY: 5(15ds+i2+%ik6z&!yer+ga9m=e%jcqiz_5wszg)r-z!2--b2d
      DB_NAME: postgres
      DB_USER: postgres
      DB_PASS: postgres
      DB_SERVICE: postgres
      DB_PORT: 5432
    command: /usr/local/bin/gunicorn messanger.wsgi:application -w 2 -b :8000
  postgres:
    image: postgres
    ports:
      - 5432
    volumes:
      - pgdata:/var/lib/postgresql/data/
  redis:
    image: redis
    ports:
      - 6379
    volumes:
      - redisdata:/data
The order in which the containers start will then be correct. But that doesn't mean the services will actually be ready when the web app tries to connect to Redis: depends_on only waits for the container to start, not for the process inside it to accept connections. If you need that, you want something like wait-for-it or a docker-compose healthcheck.
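A minimal sketch of the healthcheck approach, using the service names from the file above (note this is an assumption about your setup: the condition form of depends_on needs compose file format 2.1+ or the newer Compose Specification, and pg_isready ships in the official postgres image):

```yaml
services:
  postgres:
    image: postgres
    healthcheck:
      # Succeeds only once the server accepts connections,
      # not merely once the container process has started.
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
  web:
    build: ./web
    depends_on:
      postgres:
        condition: service_healthy
```

With this, Compose delays starting web until the postgres healthcheck passes, rather than merely until the container exists.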
Related
Due to a hardware problem, I had to replace my small home server with a new one.
Several self-hosted services were running on the server in Docker. I tried to back up their volumes following the instructions on the official Docker website, plus those in a YouTube video and a cheat sheet. Now, following the same documentation, I am trying to restore the backups, but without success. The first one I'm trying is a stack for Nginx Proxy Manager built with Docker Compose from this docker-compose.yaml file:
version: "3.6"
services:
  app:
    image: jc21/nginx-proxy-manager:latest
    restart: always
    ports:
      - "80:80"
      - "443:443"
      - "81:81"
    environment:
      DB_MYSQL_HOST: "db"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_NAME: "db_name"
      DB_MYSQL_USER: "db_user"
      DB_MYSQL_PASSWORD: "db_password"
    volumes:
      - data:/data
      - ./letsencrypt:/etc/letsencrypt
    depends_on:
      - db
  db:
    image: jc21/mariadb-aria:latest
    restart: always
    environment:
      MYSQL_DATABASE: "db_name"
      MYSQL_ROOT_PASSWORD: "root_password"
      MYSQL_USER: "db_user"
      MYSQL_PASSWORD: "db_password"
    volumes:
      - db:/var/lib/mysql
volumes:
  db:
  data:
After starting the stack with the docker compose up -d command, I try to restore the db and data volumes with:
docker run --rm --volumes-from nginx-proxy-manager-db-1 -v $(pwd):/backup ubuntu bash -c "cd / && tar xvf /backup/nginx-proxy-manager_db_20220717-082200.tar --strip 1"
docker run --rm --volumes-from nginx-proxy-manager-app-1 -v $(pwd):/backup ubuntu bash -c "cd / && tar xvf /backup/nginx-proxy-manager_data_20220717-082200.tar --strip 1"
What's wrong?
I am trying to make a quick connection setup using the following commands.
Copy & paste to recreate the issue:
docker rm -f mariadb && docker run --detach --name mariadb --env MARIADB_USER=user --env MARIADB_PASSWORD=secret --env MARIADB_ROOT_PASSWORD=secret -p 3306:3306 mariadb:latest
docker rm -f phpmyadd && docker run --name phpmyadd -d -e PMA_HOST=host -e PMA_PORT=3306 -p 8080:80 phpmyadmin
docker exec -it mariadb bash
I can log in to the mariadb container and access MariaDB with
mysql -uroot -psecret
I can also access the phpMyAdmin container at http://localhost:8080
However, when I try to log in to MariaDB through phpMyAdmin, I get the following:
It shows that the port is exposed, but I cannot access it with telnet.
Any idea what is missing here?
For the two containers to be able to talk to each other, you should set up a docker-compose file instead. Something like this should work:
version: '3.8'
volumes:
  mariadb:
    driver: local
services:
  mariadb:
    image: mariadb:10.6
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: YOUR_ROOT_PASSWORD_HERE
      MYSQL_USER: YOUR_MYSQL_USER_HERE
      MYSQL_PASSWORD: YOUR_USER_PW_HERE
    ports:
      - "40000:3306"
    volumes:
      - mariadb:/var/lib/mysql
  phpmyadmin:
    image: phpmyadmin
    restart: always
    ports:
      - "40001:80"
    environment:
      - PMA_HOST=mariadb
      - PMA_PORT=3306
And you would start everything with docker-compose up.
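If you would rather keep plain docker run commands, the same name-based connectivity can be sketched with a user-defined bridge network instead of Compose (the network name dbnet is arbitrary; note PMA_HOST must point at the mariadb container name rather than "host"):

```shell
# Containers on the same user-defined bridge network can resolve
# each other by container name.
docker network create dbnet

docker rm -f mariadb && docker run --detach --name mariadb \
  --network dbnet \
  --env MARIADB_USER=user --env MARIADB_PASSWORD=secret \
  --env MARIADB_ROOT_PASSWORD=secret -p 3306:3306 mariadb:latest

docker rm -f phpmyadd && docker run --name phpmyadd -d \
  --network dbnet \
  -e PMA_HOST=mariadb -e PMA_PORT=3306 -p 8080:80 phpmyadmin
```

The default bridge network does not provide this name resolution, which is why the original two docker run commands cannot reach each other by name.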
I am attempting to use the docker-compose.yml from the Rails example on the Docker site. This is a Windows (WSL2/Ubuntu/Docker Desktop) machine, so any files created in the Docker container are owned by root. I am trying to pass my user ID and group ID as build args, but I can't figure out a syntax that will let me:
version: "3.9"
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: password
  web:
    build:
      context: .
      args:
        - USER_ID=$(id -u)
        - GROUP_ID=$(id -g)
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
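For context on why the args lines fail: Compose does not perform shell command substitution, so $(id -u) is passed through literally; it only interpolates environment variables with ${VAR} syntax. A sketch of that route, assuming the IDs are exported in the calling shell first:

```yaml
# Run as:
#   export USER_ID=$(id -u) GROUP_ID=$(id -g)
#   docker compose up --build
web:
  build:
    context: .
    args:
      - USER_ID=${USER_ID}
      - GROUP_ID=${GROUP_ID}
```

The same variables can also be placed in a .env file next to docker-compose.yml, which Compose reads automatically for interpolation.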
I am trying to set up a Shopware Docker container for development. I set up a Dockerfile for the Shopware initialization process, but every time I run the build process Shopware returns this error message:
mysql -u 'root' -p'root' -h 'dbs' --port='3306' -e "DROP DATABASE IF EXISTS `shopware6dev`"
ERROR 2005 (HY000): Unknown MySQL server host 'dbs' (-2)
I think Docker sets up the default network only after all build processes are done, but I need to connect before all containers are ready. The depends_on option doesn't help here. I hope someone has an idea how to solve this problem.
This is my docker-compose file:
version: '3'
services:
  shopwaredev:
    build:
      context: ./docker/web
      dockerfile: Dockerfile
    volumes:
      - ./log:/var/log/apache2
    environment:
      - VIRTUAL_HOST=shopware6dev.test,www.shopware6dev.test
      - HTTPS_METHOD=noredirect
    restart: on-failure:10
    depends_on:
      - dbs
  adminer:
    image: adminer
    restart: on-failure:10
    ports:
      - 8080:8080
  dbs:
    image: "mysql:5.7"
    volumes:
      - ./mysql57:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=shopware6dev
    restart: on-failure:10
  nginx-proxy:
    image: solution360/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./ssl:/etc/nginx/certs
    restart: on-failure:10
And this is my Dockerfile for the shopwaredev web container:
FROM solution360/apache24-php74-shopware6
WORKDIR /var/www/html
RUN rm index.html
RUN git clone https://github.com/shopware/development.git .
RUN cp .psh.yaml.dist .psh.yaml
RUN sed -i 's|DB_USER: "app"|DB_USER: "root"|g' .psh.yaml
RUN sed -i 's|DB_PASSWORD: "app"|DB_PASSWORD: "root"|g' .psh.yaml
RUN sed -i 's|DB_HOST: "mysql"|DB_HOST: "dbs"|g' .psh.yaml
RUN sed -i 's|DB_NAME: "shopware"|DB_NAME: "shopware6dev"|g' .psh.yaml
RUN sed -i 's|APP_URL: "http://localhost:8000"|APP_URL: "http://shopware6dev.test"|g' .psh.yaml
RUN ./psh.phar install
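This is consistent with the error above: RUN steps execute at image-build time, before the Compose network or the dbs container exists, so the hostname dbs cannot resolve during the build. A common workaround is to defer the database-dependent install to container startup; a sketch (docker-entrypoint.sh is a hypothetical script that waits for the DB, runs ./psh.phar install once, then starts Apache):

```dockerfile
FROM solution360/apache24-php74-shopware6
WORKDIR /var/www/html
RUN rm index.html
RUN git clone https://github.com/shopware/development.git .
RUN cp .psh.yaml.dist .psh.yaml
# ... same sed edits to .psh.yaml as above ...
# Do NOT run ./psh.phar install here: at build time the dbs
# service is unreachable. Run it from an entrypoint instead.
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
```

That keeps the image build network-independent and lets depends_on plus a retry loop in the entrypoint handle startup ordering.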
I started my keycloak service using the command:
docker run -d -p 8180:8080 -e KEYCLOAK_USER=admin -e \
KEYCLOAK_PASSWORD=admin -v $(pwd):/tmp --name kc \
jboss/keycloak:8.0.2
I created a new realm on Keycloak, only giving it a name, nothing else. I exported it by running:
docker exec -it kc keycloak/bin/standalone.sh \
-Djboss.socket.binding.port-offset=100 -Dkeycloak.migration.action=export \
-Dkeycloak.migration.provider=singleFile \
-Dkeycloak.migration.realmName=my_realm \
-Dkeycloak.migration.usersExportStrategy=REALM_FILE \
-Dkeycloak.migration.file=/tmp/my_realm.json
I now have the realm in the my_realm.json file. I then start a new Keycloak using Docker Compose to set up my entire test environment. I build a new Docker image with this Dockerfile:
FROM jboss/keycloak:8.0.2
COPY my_realm.json /tmp/my_realm.json
ENV KEYCLOAK_IMPORT /tmp/my_realm.json
ENV KEYCLOAK_MIGRATION_STRATEGY OVERWRITE_EXISTING
Docker compose:
version: '3.4'
volumes:
  postgres_kc_data:
    driver: local
services:
  kc_postgresql:
    image: postgres:11.5
    volumes:
      - postgres_kc_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: kc
      POSTGRES_USER: kc
      POSTGRES_DB: kcdb
    ports:
      - 50009:5432
  keycloak:
    build: "./keycloak/" # ref to folder with above Dockerfile
    environment:
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
      DB_VENDOR: POSTGRES
      DB_ADDR: kc_postgresql
      DB_DATABASE: kcdb
      DB_SCHEMA: public
      DB_USER: kc
      DB_PASSWORD: kc
    depends_on:
      - kc_postgresql
    ports:
      - 8080:8080
The log output from running Docker Compose indicates that it is not able to import the realm, and suggests something about validating the clients. I added no clients, so these are the default ones.
08:39:46,713 WARN [org.keycloak.services] (ServerService Thread Pool -- 67) KC-SERVICES0005: Unable to import realm Demo from file /tmp/my_realm.json.: java.lang.NullPointerException
at org.keycloak.keycloak-services#8.0.2//org.keycloak.url.DefaultHostnameProvider.resolveUri(DefaultHostnameProvider.java:83)
at org.keycloak.keycloak-services#8.0.2//org.keycloak.url.DefaultHostnameProvider.getScheme(DefaultHostnameProvider.java:38)
at org.keycloak.keycloak-server-spi#8.0.2//org.keycloak.models.KeycloakUriInfo.<init>(KeycloakUriInfo.java:46)
at org.keycloak.keycloak-services#8.0.2//org.keycloak.services.DefaultKeycloakContext.getUri(DefaultKeycloakContext.java:79)
at org.keycloak.keycloak-services#8.0.2//org.keycloak.services.util.ResolveRelative.resolveRootUrl(ResolveRelative.java:45)
at org.keycloak.keycloak-services#8.0.2//org.keycloak.validation.DefaultClientValidationProvider.validate(DefaultClientValidationProvider.java:44)
at org.keycloak.keycloak-services#8.0.2//org.keycloak.validation.DefaultClientValidationProvider.validate(DefaultClientValidationProvider.java:37)
at org.keycloak.keycloak-server-spi-private#8.0.2//org.keycloak.validation.ClientValidationUtil.validate(ClientValidationUtil.java:30)
at org.keycloak.keycloak-server-spi-private#8.0.2//org.keycloak.models.utils.RepresentationToModel.createClients(RepresentationToModel.java:1224)
at org.keycloak.keycloak-server-spi-private#8.0.2//org.keycloak.models.utils.RepresentationToModel.importRealm(RepresentationToModel.java:362)
at org.keycloak.keycloak-services#8.0.2//org.keycloak.services.managers.RealmManager.importRealm(RealmManager.java:506)
Any pointers are welcome!
Upgrading to 9.0.0 fixes it for me.
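Applied to the Dockerfile above, that is just a change to the base image tag (everything else stays the same; per the answer, the same my_realm.json then imports cleanly):

```dockerfile
FROM jboss/keycloak:9.0.0
COPY my_realm.json /tmp/my_realm.json
ENV KEYCLOAK_IMPORT /tmp/my_realm.json
ENV KEYCLOAK_MIGRATION_STRATEGY OVERWRITE_EXISTING
```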