How can I implement the docker-compose file below using plain docker run commands? I am specifically interested in the depends_on part.
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
depends_on: doesn't map to a docker run option. When you have your two docker run commands, you need to make sure you run them in the right order.
docker build -t web_image .
docker network create some_network
docker run --name db --net some_network postgres
# because web has depends_on: [db], it must be started second
docker run --name web --net some_network ... web_image ...
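For reference, the elided options for web can be filled in from the compose file above; a sketch (run from the project directory, since the compose file mounts .):
docker run --name web --net some_network \
  -v "$PWD:/code" \
  -p 8000:8000 \
  web_image \
  python manage.py runserver 0.0.0.0:8000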
depends_on means:
Compose implementations MUST guarantee dependency services have been started before starting a dependent service. Compose implementations MAY wait for dependency services to be “ready” before starting a dependent service.
Hence depends_on is not only about start order; Compose may additionally wait for dependencies to be ready.
Also note that you can use docker-compose instead of docker run: every option you can pass to docker run has an equivalent in a docker-compose file.
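Because readiness is not guaranteed, with plain docker run you may also want to poll the database before starting the dependent container. A minimal sketch, assuming the some_network/db names used above and the pg_isready tool that ships in the postgres image:
# block until postgres accepts connections, then start the dependent container
until docker run --rm --net some_network postgres pg_isready -h db; do
  sleep 1
done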
I'm taking over a website https://www.funfun.io. Unfortunately, I cannot reach the previous developer anymore.
This is an AngularJS+Node+Express+MongoDB application. He decided to use bitnami+docker+nginx on the server. Here is the docker-compose.yml:
version: "3"
services:
funfun-node:
image: funfun
restart: always
build: .
environment:
- MONGODB_URI=mongodb://mongodb:27017/news
env_file:
- ./.env
depends_on:
- mongodb
funfun-nginx:
image: funfun-nginx
restart: always
build:
context: .
dockerfile: Dockerfile.nginx
ports:
- "3000:8443"
depends_on:
- funfun-node
mongodb:
image: mongo:3.4
restart: always
volumes:
- "10studio-mongo:/data/db"
ports:
- "27018:27017"
networks:
default:
external:
name: 10studio
volumes:
10studio-mongo:
driver: local
Dockerfile.nginx:
FROM bitnami/nginx:1.16
COPY ./funfun.io /opt/bitnami/nginx/conf/server_blocks/default.conf
COPY ./ssl/MyCompanyLocalhost.cer /opt/MyCompanyLocalhost.cer
COPY ./ssl/MyCompanyLocalhost.pvk /opt/MyCompanyLocalhost.pvk
Dockerfile:
FROM node:12
RUN npm install -g yarn nrm --registry=https://registry.npm.taobao.org && nrm use cnpm
COPY ./package.json /opt/funfun/package.json
WORKDIR /opt/funfun
RUN yarn
COPY ./ /opt/funfun/
CMD yarn start
On my local machine, I could use npm start to test the website in a web browser.
I have access to the Ubuntu server, but I'm new to bitnami+docker+nginx, so I have the following questions:
In the command line of Ubuntu server, how could I check if the service is running (besides launching the website in a browser)?
How could I shut down and restart the service?
Previously, without docker, we could start mongodb by sudo systemctl enable mongod. Now, with docker, how could we start mongodb?
First of all, to deploy the services in the compose file locally, run one of the commands below:
docker-compose up
docker-compose up -d # in the background
After running the above command, the Docker containers will be created and available on your machine.
To list the running containers
docker ps
docker-compose ps
To stop containers
docker stop ${container name}
docker-compose stop
mongodb is part of the docker-compose file, and it will be running once you start the other services. It will also be restarted automatically if it crashes or you restart your machine (because of restart: always).
One final note: since you are using an external network, you may need to create the network before starting the services.
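For this compose file, the network name comes from its networks: section:
docker network create 10studio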
1.
docker-compose ps will give you the state of your containers
2.
docker-compose stop will stop your containers but keep their state, so you can start them again later with docker-compose start (or docker-compose up)
docker-compose down will stop and delete your containers (docker-compose kill merely force-stops them)
docker-compose restart will restart your containers
3.
Because your mongodb service is declared from the official mongo image, its container starts when you run docker-compose up, without any other intervention.
Or you can add command: mongod --auth directly into your docker-compose.yml
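For example, a sketch of how that looks in the existing service definition:
services:
  mongodb:
    image: mongo:3.4
    command: mongod --auth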
The official Docker documentation is very detailed and helps a lot with all of this; keep referring to it: https://docs.docker.com/compose/
I have a docker compose file that links my server to a redis image:
version: '3'
services:
  api:
    build: .
    command: npm run dev
    environment:
      NODE_ENV: development
    volumes:
      - .:/home/node/code
      - /home/node/code/node_modules
      - /home/node/code/build/Release
    ports:
      - "1389:1389"
    depends_on:
      - redis
  redis:
    image: redis:alpine
I am wondering how I could open a redis-cli session against the Redis container started by docker-compose, to directly modify key/value pairs. I tried docker attach, but it does not open any shell.
Use docker exec -it your_container_name /bin/bash to get a shell inside the Redis container, then run redis-cli to modify key/value pairs.
See https://docs.docker.com/engine/reference/commandline/exec/
Install the Redis CLI on your host, and edit the YAML file to publish Redis's port:
services:
  redis:
    image: redis:alpine
    ports: ["6379:6379"]
Then run docker-compose up to redeploy the container, and you can run redis-cli from the host without needing to directly interact with Docker.
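For example, with the port published as above (assuming the default Redis port on a local Docker host):
redis-cli -h 127.0.0.1 -p 6379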
Using /bin/bash as the command (as suggested in the accepted solution) doesn't work for me with the latest redis:alpine image on Linux, since Alpine-based images don't ship bash.
Instead, this worked:
docker exec -it your_container_name redis-cli
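Since redis-cli forwards extra arguments as a Redis command, you can also modify keys without an interactive session (mykey/hello are just example values):
docker exec -it your_container_name redis-cli set mykey hello
docker exec -it your_container_name redis-cli get mykey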
I am quite new to Docker, and I need to run 8 apache2.0 servers in different Docker containers, giving each container a port number, using Compose.
I found an apache2.0 image and created a container with this command:
docker create -t -i lamsley/apache2.0
How can I create many web servers and give each one a port number so that I can access them over the internet?
With just Docker you can run:
docker run --name server1 -d -p 8000:80 lamsley/apache2.0
docker run --name server2 -d -p 8001:80 lamsley/apache2.0
...
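Since the question asks for 8 servers, a shell loop can generate those commands (a sketch mapping ports 8000-8007; the names server1 through server8 are illustrative):
for i in $(seq 0 7); do
  docker run --name "server$((i + 1))" -d -p "$((8000 + i)):80" lamsley/apache2.0
done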
It's easier with Docker Compose:
version: '2'
services:
  httpd1:
    image: lamsley/apache2.0
    container_name: httpd1
    ports:
      - "8000:80"
  httpd2:
    image: lamsley/apache2.0
    container_name: httpd2
    ports:
      - "8001:80"
  ...
But I strongly suggest you learn Docker first, because these snippets are simplistic. You need to know about volumes to pass in the content to be served, and so on. Also, why use lamsley/apache2.0 when you can use the official httpd image, or build your own custom image? The possibilities are endless, and it is fun.
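As a sketch of the volumes idea with the official image (the htdocs path is the documented default for httpd; ./site1 is a hypothetical local directory holding the content):
docker run --name server1 -d -p 8000:80 \
  -v "$PWD/site1:/usr/local/apache2/htdocs:ro" \
  httpd:2.4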
To learn about Docker Compose:
https://docs.docker.com/compose/
To learn about volumes:
https://docs.docker.com/engine/tutorials/dockervolumes/
I am trying to use Docker Compose (with Docker Machine on Windows) to launch a group of Docker containers.
My docker-compose.yml:
version: '2'
services:
  postgres:
    build: ./postgres
    environment:
      - POSTGRES_PASSWORD=mysecretpassword
  frontend:
    build: ./frontend
    ports:
      - "4567:4567"
    depends_on:
      - postgres
  backend:
    build: ./backend
    ports:
      - "5000:5000"
    depends_on:
      - postgres
docker-compose build runs successfully. When I run docker-compose start I get the following output:
Starting postgres ... done
Starting frontend ... done
Starting backend ... done
ERROR: No containers to start
I did confirm that the docker containers are not running. How do I get my containers to start?
The issue here is that you haven't actually created the containers; you have to create them before you can start them. You could use docker-compose up instead, which will create the containers and then start them.
Or you could run docker-compose create to create the containers, and then docker-compose start to start them.
The reason why you saw the error is that docker-compose start and docker-compose restart assume that the containers already exist.
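For example:
docker-compose create   # create the containers without starting them
docker-compose start    # start the now-existing containers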
If you want to build and start containers, use
docker-compose up
If you only want to build the images and create the containers without starting them, use
docker-compose up --no-start
Afterwards, docker-compose {start,restart,stop} should work as expected.
There used to be a docker-compose create command, but it is now deprecated in favor of docker-compose up --no-start.
How do I run Celery and RabbitMQ in Docker containers? Can you point me to a sample Dockerfile or compose file?
This is what I have:
Dockerfile:
FROM python:3.4
ENV PYTHONUNBUFFERED 1
WORKDIR /tasker
ADD requirements.txt /tasker/
RUN pip install -r requirements.txt
ADD . /tasker/
docker-compose.yml
rabbitmq:
  image: tutum/rabbitmq
  environment:
    - RABBITMQ_PASS=mypass
  ports:
    - "5672:5672"
    - "15672:15672"
celery:
  build: .
  command: celery worker --app=tasker.tasks
  volumes:
    - .:/tasker
  links:
    - rabbitmq:rabbit
The issue I'm having is that I can't get Celery to stay alive or running; it keeps exiting.
I had a similar problem with Celery exiting while dockerizing my application. You should use the rabbit service name (in your case rabbitmq) as the host name in your Celery configuration. That is, use broker_url = 'amqp://guest:guest@rabbitmq:5672//' instead of broker_url = 'amqp://guest:guest@localhost:5672//'. In my case, the major components are Flask, Celery and Redis.
Update 2018: as commented below by Floran Gmehlin, the celery image is now officially deprecated in favor of the official python image.
As commented in celery/issue 1:
Using this image seems ridiculous. If you have an application container, as you usually have with Django, you need all dependencies (things you import in tasks.py) installed in this container again.
That's why other projects (e.g. cookiecutter-django) reuse the application container for Celery, and only run a different command (command: celery ... worker) against it with docker-compose.
Note: the docker-compose.yml is now called local.yml and uses start.sh.
Original answer:
You can try and emulate the official celery Dockerfile, which does a bit more setup before the CMD ["celery", "worker"].
See the usage of that image to run it properly.
start a celery worker (RabbitMQ Broker)
$ docker run --link some-rabbit:rabbit --name some-celery -d celery
check the status of the cluster
$ docker run --link some-rabbit:rabbit --rm celery celery status
If you can use that image in your docker-compose, then you can try building your own starting FROM celery instead of FROM python.
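A hypothetical sketch of that idea, rebasing the question's Dockerfile on the (now-deprecated) celery image, which already ships Python and Celery:
FROM celery
WORKDIR /tasker
ADD requirements.txt /tasker/
RUN pip install -r requirements.txt
ADD . /tasker/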
Here is something I used in my docker-compose.yml; it works for me. Check the details in this Medium post:
version: '2'
services:
  rabbit:
    hostname: rabbit
    image: rabbitmq:latest
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=mypass
    ports:
      - "5672:5672"
  worker:
    build:
      context: .
      dockerfile: dockerfile
    volumes:
      - .:/app
    links:
      - rabbit
    depends_on:
      - rabbit
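With this file, the worker reaches the broker by the service name rabbit. If the broker URL is not set in your code, one option is to pass it on the command line via Celery's --broker option (a sketch, assuming the credentials above and the tasker.tasks app from the question):
command: celery worker --app=tasker.tasks --broker=amqp://admin:mypass@rabbit:5672//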