How to push docker-compose images to Docker Hub

I have several services in my docker-compose file; it looks like this:
version: '3.7'
services:
  web:
    build: ./
    command: gunicorn --bind 0.0.0.0:5000 --workers 2 --worker-connections 5000 --timeout 6000 manage:app
    volumes:
      - ./:/usr/src/app/
      - static_volume:/usr/src/app/static_files
    expose:
      - 5000
    env_file:
      - ./.env.prod
    depends_on:
      - mongodb
  mongodb:
    image: mongo:4.4.1
    restart: unless-stopped
    command: mongod
    ports:
      - '27017:27017'
    environment:
      MONGODB_DATA_DIR: /data/db
      MONDODB_LOG_DIR: /dev/null
      MONGO_INITDB_ROOT_USERNAME:
      MONGO_INITDB_ROOT_PASSWORD:
    volumes:
      - mongodbdata:/data/db
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/usr/src/app/static_files
    ports:
      - 5001:8000
    depends_on:
      - web
volumes:
  mongodbdata:
  static_volume:
I also have a public repository on my Docker Hub account, and I want to push all the images in my app to that repo. Can anyone help?

You should add image names to your services, including your Docker Hub ID, e.g.:
services:
  web:
    build: ./
    image: docker-hub-id/web:latest
    ...
Now, you can just call docker-compose push.
See docker-compose push
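For example, a minimal sketch (the image names myapp-web and myapp-nginx are placeholders, not from the question): give every service that has a build: section an image: key pointing at your Docker Hub namespace,
services:
  web:
    build: ./
    image: docker-hub-id/myapp-web:latest
  nginx:
    build: ./nginx
    image: docker-hub-id/myapp-nginx:latest
  # mongodb runs the stock mongo:4.4.1 image, so there is nothing of yours to push for it
then, from the project directory:
docker login
docker-compose build
docker-compose push
Keep in mind that Docker Hub treats each image name as its own repository, so web and nginx will normally end up as two repositories under your account rather than in a single shared repo.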

Related

Docker Compose not reading multiple files

Using the docker compose files below, I am unable to bring up my app correctly. Docker says my LAPIS_ENV environment variable is not set, but I am setting it in my second compose file, which I expect to be merged into the first one. I have tried including them in reverse order, to no avail.
docker-compose.yml:
version: '2.4'
services:
  backend:
    mem_limit: 50mb
    memswap_limit: 50mb
    build:
      context: ./backend
      dockerfile: Dockerfile
    depends_on:
      - postgres
    volumes:
      - ./backend:/var/www
      - ./data:/var/data
    restart: unless-stopped
    command: bash -c "/usr/local/bin/docker-entrypoint.sh ${LAPIS_ENV}"
  postgres:
    build:
      context: ./postgres
      dockerfile: Dockerfile
    environment:
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_HOST_AUTH_METHOD: trust
    volumes:
      - postgres:/var/lib/postgresql/data
      - ./postgres/pg_hba.conf:/var/lib/postgres/data/pg_hba.conf
      - ./data/backup:/pgbackup
    restart: unless-stopped
volumes:
  postgres:
docker-compose.dev.yml:
version: '2.4'
services:
  backend:
    environment:
      LAPIS_ENV: development
    ports:
      - 8080:80
#!/usr/bin/env bash
docker compose -f docker-compose.yml -f docker-compose.dev.yml up
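As an aside (this is an assumption about the cause, not something stated in the question): Compose substitutes ${LAPIS_ENV} in command: from the shell environment or a .env file at the moment the files are parsed, not from the environment: section of the merged dev file. A sketch of a fix would be either of:
# .env next to the compose files (picked up automatically), or export the variable in the shell
LAPIS_ENV=development
# or escape the variable so the container's shell expands it at run time instead of Compose
command: bash -c "/usr/local/bin/docker-entrypoint.sh $${LAPIS_ENV}"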

Docker Volume is Empty after Mounting

I am trying to set up a Docker Compose file for my application(s), including a service based on the nginx image. I want to be able to simply access the config from my host. But when I mount the volume with
volumes:
  - ./nginxConf:/etc/nginx
this volume is empty and the container crashes.
Full docker-compose.yml
version: '3'
services:
  frontend:
    image: myFrontend
    restart: always
    environment:
      - API_URL=http://localhost:3000/api/v1
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - "api"
    volumes:
      - ./nginxConf:/etc/nginx
  api:
    image: myApi
    restart: always
    command: bash -c "npm run build && npm run start"
    ports:
      - "3000:3000"
    links:
      - mongo
    depends_on:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    ports:
      - "27017:27017"

docker container port map with remote IP (54.xxx.xxx.23) on local development

I'm working as a DevOps engineer on some projects where I am facing an issue.
I have one docker-compose.yml which works fine with a local IP like 192.168.0.38, but I want to map it to my AWS IP (54.xxx.xxx.23) instead of the local host IP.
version: '3'
services:
  api:
    build: ./api
    image: api
    environment:
      - PYTHONUNBUFFERED=1
    expose:
      - ${scikiqapiport}
    ports:
      - ${scikiqapiport}:${scikiqapiport}
    command:
      "python3 manage.py makemigrations"
    command:
      "chmod -R 777 ./scikiq/scikiq/static:rw"
    command:
      "python3 manage.py migrate"
    command: "gunicorn --workers=3 --bind=0.0.0.0:${scikiqapiport} wsgi"
    restart: on-failure
    depends_on:
      - base
    volumes:
      - "../compressfile:/home/data/arun/compressfile"
      - "static:/home/data/arun/scikiq/scikiq/static:rw"
  scikiqweb:
    build: ./web
    image: web
    ports:
      - ${scikiqwebport}
    command:
      "gunicorn --workers=3 --bind=0.0.0.0:${scikiqwebport} wsgi"
    restart: on-failure
    depends_on:
      - base
  nginx:
    image: nginx
    ports:
      - ${scikiqwebport}:80
    volumes:
      - ./nginx:/etc/nginx/conf.d
    depends_on:
      - scikiqweb1
  base:
    build: ./base-image
    image: scikiq_base
volumes:
  compressfile:
  static:
Your help will be appreciated.
Thank You
Put the public IP wherever the local IP is used; it works.
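As a sketch of what that looks like in Compose syntax (the host-ip:host-port:container-port form; 54.xxx.xxx.23 is left masked exactly as in the question):
services:
  api:
    ports:
      - "54.xxx.xxx.23:${scikiqapiport}:${scikiqapiport}"
One caveat worth hedging: on an EC2 instance the public address is usually NATed to the private one, so if binding to the public IP fails, binding to 0.0.0.0 (or the private IP) and opening the port in the security group achieves the same result.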

Why Dockerfile doesn't run multiple commands

I want to use Docker to run my project (React + Node.js + MongoDB).
Dockerfile:
FROM node:8.9-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . .
CMD nohup sh -c 'npm start && node ./server/server.js'
docker-compose.yml:
version: '2.1'
services:
  chat:
    image: chat
    container_name: chat
    build: .
    environment:
      NODE_ENV: production
    ports:
      - "3000:3000"
      - "8080:8080"
    volumes:
      - ./:/usr/src/app
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    ports:
      - "27017:27017"
When I run docker-compose up --build, port 3000 works, but port 8080 is dead:
localhost:3000
localhost:8080
I would suggest creating a separate container for the server, keeping it apart from the "chat" container. It's best to have each container do one thing and one thing only (much like the philosophy behind Unix commands).
In any case, here are some modifications I would make to the compose file.
version: '2.1'
services:
  chat:
    image: chat
    container_name: chat
    build: .
    environment:
      NODE_ENV: production
    ports:
      - "3000:3000"
      - "8080:8080"
    volumes:
      - ./:/usr/src/app
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    # You don't need to expose this port to the outside world. Because you linked the two containers,
    # the chat app will be able to connect to mongodb using the hostname mongo inside the container network.
    # ports:
    #   - "27017:27017"
By the way, what happens if you run:
$ docker-compose down
and then
$ docker-compose up
$ docker ps
Can you see the ports exposed in the docker ps output?
Your chat service depends on mongo, so you also need to have this in your chat service:
depends_on:
  - mongo
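To make that concrete, a small sketch (the MONGO_URL variable name and the chat database name are assumptions, not from the question): with both services on the same Compose network, the app reaches the database through the service name:
services:
  chat:
    depends_on:
      - mongo
    environment:
      NODE_ENV: production
      # the app reads this and connects to the "mongo" service by name
      MONGO_URL: mongodb://mongo:27017/chat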
This docker-compose file works for me. Note that I am saving the data from the database to a local directory. You should add this directory to .gitignore.
version: "3.2"
services:
mongo:
container_name: mongo
image: mongo:latest
environment:
- MONGO_INITDB_ROOT_USERNAME=root
- MONGO_INITDB_ROOT_PASSWORD=password
- NODE_ENV=production
ports:
- "28017:27017"
expose:
- 28017 # you can connect to this mongodb with studio3t
volumes:
- ./mongodb-data:/data/db
restart: always
networks:
- docker-network
express:
container_name: express
environment:
- NODE_ENV=development
restart: always
build:
context: .
args:
buildno: 1
expose:
- 3000
ports:
- "3000:3000"
links:
- mongo # link this service to the database service
depends_on:
- mongo
command: "npm start" # override the default command to use nodemon in dev
networks:
- docker-network
networks:
docker-network:
driver: bridge
You may also find that using node you have to wait for the mongodb container to be ready before you can connect to the database.
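One way to handle that wait on the Compose side is a healthcheck plus a conditional depends_on, sketched below (this assumes a Compose version that supports condition: service_healthy and a mongo image new enough to ship mongosh; older images use the mongo shell instead):
services:
  mongo:
    image: mongo:latest
    healthcheck:
      # succeeds once mongod answers a ping
      test: ["CMD", "mongosh", "--quiet", "--eval", "db.adminCommand('ping')"]
      interval: 5s
      timeout: 5s
      retries: 12
  express:
    depends_on:
      mongo:
        condition: service_healthy  # start express only after mongo reports healthy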

Multiple services with different ports and the same domain using jwilder/nginx-proxy

I have some services in docker-compose:
version: "3"
services:
site:
volumes:
- .:/app
build:
dockerfile: Dockerfile.dev
context: docker
ports:
- "80:80"
webpack:
image: node:6.12.0
ports:
- "8080:8080"
volumes:
- .:/app
working_dir: /app
command: bash -c "yarn install; yarn run gulp server"
db:
image: mysql:5.7.20
ports:
- "3306:3306"
environment:
MYSQL_DATABASE: ${DB_NAME}
MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
And I can connect to exposed ports of services:
Site -- localhost:80
Webpack -- localhost:8080
MySQL: -- localhost:3306
How can I use nginx-proxy to expose multiple ports of different services on the same domain?
Site -- example.dev:80
Webpack -- example.dev:8080
MySQL: -- example.dev:3306
This works:
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  site:
    volumes:
      - .:/app
    build:
      dockerfile: Dockerfile.dev
      context: docker
    expose:
      - 80
    environment:
      VIRTUAL_HOST: ${VIRTUAL_HOST}
But this does not:
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  site:
    volumes:
      - .:/app
    build:
      dockerfile: Dockerfile.dev
      context: docker
    expose:
      - 80
    environment:
      VIRTUAL_HOST: ${VIRTUAL_HOST}
  webpack:
    image: node:6.12.0
    expose:
      - 8080
    environment:
      VIRTUAL_HOST: ${VIRTUAL_HOST}
      VIRTUAL_PORT: 8080
    volumes:
      - .:/app
    working_dir: /app
    command: bash -c "yarn install; yarn run gulp server"
What am I doing wrong? How can I solve this problem?
Update:
This is just an example. In the future I'll make the proxy an external network and connect services to it. And I want to run two docker-compose files on the same host (VPS). The purpose: production and test versions on the same host, using the same ports BUT different domains. For example:
example.com -- Web Site
example.com:81 -- PhpMyAdmin
test.example.com -- Web Site for testing
test.example.com:81 -- PhpMyAdmin for testing
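For the external-network plan in the update, a rough sketch (the network name proxy is my own placeholder): create the network once with docker network create proxy, run one nginx-proxy stack attached to it, and let each compose project (production and test) join that same network:
# proxy stack
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - proxy
networks:
  proxy:
    external: true
# each app stack, in its own compose file
services:
  site:
    expose:
      - 80
    environment:
      VIRTUAL_HOST: example.com   # the test stack would set test.example.com here
    networks:
      - proxy
networks:
  proxy:
    external: true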
