I'm taking over a website https://www.funfun.io. Unfortunately, I cannot reach the previous developer anymore.
This is an AngularJS+Node+Express+MongoDB application. The previous developer set it up with Bitnami+Docker+nginx on the server. Here is the docker-compose.yml:
version: "3"
services:
funfun-node:
image: funfun
restart: always
build: .
environment:
- MONGODB_URI=mongodb://mongodb:27017/news
env_file:
- ./.env
depends_on:
- mongodb
funfun-nginx:
image: funfun-nginx
restart: always
build:
context: .
dockerfile: Dockerfile.nginx
ports:
- "3000:8443"
depends_on:
- funfun-node
mongodb:
image: mongo:3.4
restart: always
volumes:
- "10studio-mongo:/data/db"
ports:
- "27018:27017"
networks:
default:
external:
name: 10studio
volumes:
10studio-mongo:
driver: local
Dockerfile.nginx:
FROM bitnami/nginx:1.16
COPY ./funfun.io /opt/bitnami/nginx/conf/server_blocks/default.conf
COPY ./ssl/MyCompanyLocalhost.cer /opt/MyCompanyLocalhost.cer
COPY ./ssl/MyCompanyLocalhost.pvk /opt/MyCompanyLocalhost.pvk
Dockerfile:
FROM node:12
RUN npm install -g yarn nrm --registry=https://registry.npm.taobao.org && nrm use cnpm
COPY ./package.json /opt/funfun/package.json
WORKDIR /opt/funfun
RUN yarn
COPY ./ /opt/funfun/
CMD yarn start
On my local machine, I can use npm start to test the website in a web browser.
I have access to the Ubuntu server, but I'm new to Bitnami+Docker+nginx, so I have the following questions:
From the command line of the Ubuntu server, how can I check whether the service is running (besides opening the website in a browser)?
How can I shut down and restart the service?
Previously, without Docker, we could start mongodb with sudo systemctl enable mongod. Now, with Docker, how do we start mongodb?
First of all, to deploy the services defined in the compose file, run one of the commands below:
docker-compose up
docker-compose up -d # in the background
After running one of these commands, the Docker containers will be created and available on your machine.
To list the running containers:
docker ps
docker-compose ps
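To see what a specific service is actually doing, you can also tail its logs (the service names, e.g. funfun-node, come from the compose file above):
docker-compose logs -f funfun-node
docker-compose logs -f funfun-nginx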
To stop containers:
docker stop <container name>
docker-compose stop
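To shut the whole stack down and bring it back up again (your second question):
docker-compose down    # stop and remove the containers; named volumes like 10studio-mongo are kept
docker-compose up -d   # recreate and start everything in the background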
mongodb is part of the compose file, so it will be running once you start the other services. Thanks to restart: always, it will also be restarted automatically if it crashes or the machine reboots.
One final note: since the compose file uses an external network, you may need to create that network before starting the services.
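For example, using the network name from your compose file:
docker network create 10studio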
1.
docker-compose ps will give you the state of your containers
2.
docker-compose stop will stop your containers but keep them, so you can start them again with docker-compose up (or docker-compose start)
docker-compose down will stop and delete your containers (docker-compose kill only force-stops them with SIGKILL)
docker-compose restart will restart your containers
3.
Since your mongodb service is declared from the official mongo image, its container starts when you run docker-compose up, without any other intervention.
Or, if you need authentication, you can add command: mongod --auth directly to the mongodb service in your docker-compose.yml.
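A minimal sketch of that change, based on the mongodb service from your compose file (keep in mind --auth is only useful once database users have been created):
  mongodb:
    image: mongo:3.4
    restart: always
    command: mongod --auth
    volumes:
      - "10studio-mongo:/data/db"
    ports:
      - "27018:27017"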
The official Docker documentation is very detailed and helps a lot with all of this; keep referring to it: https://docs.docker.com/compose/
I am working on my django + celery + docker-compose project.
Problem
I changed the Django code.
The update only takes effect after docker-compose up --build.
How can I enable code updates without a rebuild?
I found this answer, Developing with celery and docker, but didn't understand how to apply it.
docker-compose.yml
version: '3.9'
services:
django:
build: ./project # path to Dockerfile
command: sh -c "
gunicorn --bind 0.0.0.0:8000 core_app.wsgi"
volumes:
- ./project:/project
- ./project/static:/project/static
- media-volume:/project/media
expose:
- 8000
celery:
build: ./project
command: celery -A documents_app worker --loglevel=info
volumes:
- ./project:/usr/src/app
- media-volume:/project/media
depends_on:
- django
- redis
.........
volumes:
pg_data:
static:
media-volume:
Updating code without a rebuild is achievable and is best practice when working with containers; otherwise it takes too much time and effort to create a new image every time you change the code.
The most popular way of doing this is to mount your code directory into the container using one of the two methods below.
In your docker-compose.yml
services:
web:
volumes:
      - ./codedir:/app/codedir # where 'codedir' is your code directory
In the CLI, when starting a new container
$ docker run -it --mount "type=bind,source=$(pwd)/codedir,target=/app/codedir" celery bash
So you're effectively mounting the directory that your code lives in on your computer into /app/codedir inside the celery container. Now you can change your code and...
the local directory overwrites the one from the image when the container is started. You only need to build the image once and use it until the installed dependencies or OS-level package versions need to be changed. Not every time your code is modified. - Quoted from this awesome article
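Applied to your compose file, the django service already bind-mounts ./project into /project, so the code inside the container is updated as soon as you save; what the mount alone does not give you is a process that picks up the change. A minimal sketch, assuming the paths and core_app.wsgi from your file, is to add gunicorn's --reload flag, and to make sure the celery service mounts the code to the path the worker actually imports from (in your file it is mounted at /usr/src/app while django uses /project). Celery workers do not reload themselves, so restart that container after code changes with docker-compose restart celery.
  django:
    build: ./project
    command: sh -c "gunicorn --reload --bind 0.0.0.0:8000 core_app.wsgi"
    volumes:
      - ./project:/project
      - ./project/static:/project/static
      - media-volume:/project/media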
Here is my docker-compose file, mysql.yml:
# Use root/example as user/password credentials
version: '3'
services:
db:
image: mysql
tty: true
stdin_open: true
command: --default-authentication-plugin=mysql_native_password
container_name: db
restart: always
networks:
- db
ports:
- 3306:3306
environment:
MYSQL_ROOT_PASSWORD: example1
command: bash -c "apt update"
adminer:
image: adminer
restart: always
container_name: web
networks:
- db
ports:
- 8080:8080
volumes:
- ./data/db:/var/lib/mysql
networks:
db:
external: true
When I run this file with docker-compose -f mysql.yml up -d, it starts working, but after 5 or 10 seconds the db container dies with exit code 0. Then it restarts because of the restart: always parameter.
I searched the internet for my problem and found some suggested solutions:
First one,
tty: true
stdin_open: true
parameters, but they do not help; the container dies anyway.
Second one,
entrypoint:
- bash
- -c
command:
- |
tail -f /dev/null
This solution works, but it overrides the default entrypoint, so my MySQL service does not actually run in the end.
Yes, I could chain entrypoints or create a Dockerfile (I actually want to keep all of this in a single file), but I don't think that's the right way, and I need some advice.
Thanks in advance!
When your Compose setup says:
command: bash -c "apt update"
This is the only thing the container does; this runs instead of the normal container process. Once that command completes (successfully) the container will exit (with status code 0).
In normal operation you shouldn't need to specify the command: for a container; the Dockerfile will have a CMD line that provides a useful default. (The notable exception is a setup where you have both a Web server and a background worker sharing substantial code, so you can set CMD to run, say, the Flask application but override command: to run a Celery worker.)
Many of the other options you include in the docker-compose.yml file are unnecessary. You can safely remove tty:, stdin_open:, container_name:, and networks: with no ill effects. (You can configure the Compose-provided default network if you specifically need containers running on a pre-created network.)
The comments hint at trying to run package updates at container startup time. I'd echo @xdhmoore's comment here: you should only run APT or similar package managers during an image build, never on a running container. (You don't want your application startup to fail because a Debian mirror is down, or because an incompatible update has gotten deployed.)
For the standard Docker Hub images, in general they update somewhat frequently, especially if you're not pinning to a specific patch release. If you run
docker-compose pull
docker-compose up
it will ask Docker Hub for a newer version of the image, and recreate the container on it if needed.
The standard Docker Hub packages also frequently download and install the thing they're packaging outside their distribution's package manager system, so running an upgrade isn't necessarily useful.
If you must, though, the best way to do this is to write a minimal Dockerfile
FROM mysql
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get upgrade --assume-yes
and reference it in the docker-compose.yml file
services:
db:
build: .
# replacing the image: line
# do NOT leave `image: mysql` behind
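If you don't actually need the apt upgrade, a minimal cleaned-up mysql.yml along the lines of this answer might look like the sketch below (this also assumes the ./data/db bind mount was meant to persist MySQL's data, so it moves to the db service):
version: '3'
services:
  db:
    image: mysql
    restart: always
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_ROOT_PASSWORD: example1
    ports:
      - 3306:3306
    volumes:
      - ./data/db:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080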
I have a very basic node/express app with a dockerfile and a docker-compose file. When I run the docker container using
docker run -p 3000:3000 service:0.0.1 npm run dev
I can go to localhost:3000 and see my service. However, when I do:
docker-compose run server npm run dev
I can't see anything on localhost:3000. Below are my files:
Dockerfile
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 3000
docker-compose.yml
version: "3.7"
services:
server:
build: .
ports:
- "3000:3000"
image: service:0.0.1
environment:
- LOGLEVEL=debug
depends_on:
- db
db:
container_name: "website_service__db"
image: postgres
environment:
- POSTGRES_USER=user
- POSTGRES_PASSWORD=pass
- POSTGRES_DB=website_service
Also, everything is working fine from the terminal/Docker side: there are no errors and the services are running fine. I just can't access the Node endpoints.
tl;dr
docker-compose run --service-ports server npm run dev
// the part that changed is the new '--service-ports' argument
The issue was a missing docker-compose run argument, --service-ports.
From the docs:
The second difference is that the docker-compose run command does not create any of the ports specified in the service configuration. This prevents port collisions with already-open ports. If you do want the service’s ports to be created and mapped to the host, specify the --service-ports flag:
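For comparison, ports declared in the compose file are published by docker-compose up but not by a plain docker-compose run:
docker-compose up -d server                               # ports: "3000:3000" is honoured
docker-compose run server npm run dev                     # ports are NOT published
docker-compose run --service-ports server npm run dev     # ports are published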
I am trying to use Docker Compose (with Docker Machine on Windows) to launch a group of Docker containers.
My docker-compose.yml:
version: '2'
services:
postgres:
build: ./postgres
environment:
- POSTGRES_PASSWORD=mysecretpassword
frontend:
build: ./frontend
ports:
- "4567:4567"
depends_on:
- postgres
backend:
build: ./backend
ports:
- "5000:5000"
depends_on:
- postgres
docker-compose build runs successfully. When I run docker-compose start I get the following output:
Starting postgres ... done
Starting frontend ... done
Starting backend ... done
ERROR: No containers to start
I did confirm that the docker containers are not running. How do I get my containers to start?
The issue here is that you haven't actually created the containers. You have to create them before you can start them. You could use docker-compose up instead, which creates the containers and then starts them.
Or you could run docker-compose create to create the containers and then docker-compose start to start them.
The reason why you saw the error is that docker-compose start and docker-compose restart assume that the containers already exist.
If you want to build and start containers, use
docker-compose up
If you only want to build the containers, use
docker-compose up --no-start
Afterwards, docker-compose {start,restart,stop} should work as expected.
There used to be a docker-compose create command, but it is now deprecated in favor of docker-compose up --no-start.
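A typical sequence then looks like this:
docker-compose up --no-start   # build images if needed and create the containers
docker-compose start           # start the created containers
docker-compose stop            # stop them again, keeping the containers around
docker-compose restart         # stop and start in one step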
How do I run Celery and RabbitMQ in Docker containers? Can you point me to a sample Dockerfile or compose file?
This is what I have:
Dockerfile:
FROM python:3.4
ENV PYTHONUNBUFFERED 1
WORKDIR /tasker
ADD requirements.txt /tasker/
RUN pip install -r requirements.txt
ADD . /tasker/
docker-compose.yml
rabbitmq:
image: tutum/rabbitmq
environment:
- RABBITMQ_PASS=mypass
ports:
- "5672:5672"
- "15672:15672"
celery:
build: .
command: celery worker --app=tasker.tasks
volumes:
- .:/tasker
links:
- rabbitmq:rabbit
The issue I'm having is that I can't keep Celery alive and running; it keeps exiting.
I had a similar Celery exiting problem while dockerizing my application. You should use the rabbit service name (in your case rabbitmq) as the host name in your Celery configuration. That is, use broker_url = 'amqp://guest:guest@rabbitmq:5672//' instead of broker_url = 'amqp://guest:guest@localhost:5672//'. In my case the major components are Flask, Celery and Redis. My problem is HERE, please check the link, you may find it useful.
Update 2018: as commented below by Floran Gmehlin, the celery image is now officially deprecated in favor of the official python image.
As commented in celery/issue 1:
Using this image seems ridiculous. If you have an application container, as you usually have with Django, you need all dependencies (things you import in tasks.py) installed in this container again.
That's why other projects (e.g. cookiecutter-django) reuse the application container for Celery, and only run a different command (command: celery ... worker) against it with docker-compose.
Note that the docker-compose.yml is now called local.yml and uses start.sh.
Original answer:
You can try and emulate the official celery Dockerfile, which does a bit more setup before the CMD ["celery", "worker"].
See the usage of that image to run it properly.
start a celery worker (RabbitMQ Broker)
$ docker run --link some-rabbit:rabbit --name some-celery -d celery
check the status of the cluster
$ docker run --link some-rabbit:rabbit --rm celery celery status
If you can use that image in your docker-compose, then you can try building your own starting FROM celery instead of FROM python.
Here is something I used in my docker-compose.yml; it works for me. Check the details in this Medium post.
version: '2'
services:
rabbit:
hostname: rabbit
image: rabbitmq:latest
environment:
- RABBITMQ_DEFAULT_USER=admin
- RABBITMQ_DEFAULT_PASS=mypass
ports:
- "5672:5672"
worker:
build:
context: .
dockerfile: dockerfile
volumes:
- .:/app
links:
- rabbit
depends_on:
- rabbit
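One thing to watch with this setup: the worker has to reach the broker by the service name, not localhost. Assuming your Celery app reads a broker URL, with the credentials above that would be something like:
broker_url = 'amqp://admin:mypass@rabbit:5672//'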