docker-compose as a production environment without internet - docker

I'm a beginner with Docker. I created a docker-compose file that provides our production environment, and I want to use it on our clients' servers as the production environment, as well as locally and without internet access.
I now have the Docker and Docker Compose binaries plus saved images that I want to load onto a server without internet. This is my init bash script on Linux:
#!/bin/sh -e
#docker
tar xzvf docker-18.09.0.tgz
sudo cp docker/* /usr/bin/
sudo dockerd &
#docker-compose
cp docker-compose-Linux-x86_64 /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
#load images
docker load --input images.tar
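As an aside, the images.tar loaded above is typically produced beforehand on a machine that does have internet access; a rough sketch (myproject_php stands in for whatever name docker-compose build gave the locally built php image, so it is an assumption):
# on a machine with internet access: pull the public images used by the compose file
docker pull nginx:1.15.6
docker pull postgres:10.1
# bundle them, together with the locally built php image, into one archive
docker save -o images.tar nginx:1.15.6 postgres:10.1 myproject_php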
My structure:
code/*
nginx/
    site.conf
    logs/
phpfpm/
    custom.ini
postgres/
    data/
.env
docker-compose.yml
docker-compose file:
version: '3'
services:
  web:
    image: nginx:1.15.6
    ports:
      - "8080:80"
    volumes:
      - ./code:/code
      - ./nginx/site.conf:/etc/nginx/conf.d/default.conf
      - ./nginx/logs:/var/log/nginx
    restart: always
    depends_on:
      - php
  php:
    build: ./phpfpm
    restart: always
    volumes:
      - ./phpfpm/custom.ini:/opt/bitnami/php/etc/conf.d/custom.ini
      - ./code:/code
  db:
    image: postgres:10.1
    volumes:
      - ./postgres/data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    ports:
      - 5400:5432
I have some questions:
Why doesn't Docker appear in the list of Linux services, even though it does when I install Docker with apt-get? How can I register Docker as a service and enable it to load on startup?
How can I set up docker-compose as a Linux service so it runs when the system starts up?

Install Docker from a package with sudo dpkg -i /path/to/package.deb; you can download the package from https://download.docker.com/linux/ubuntu/dists/.
Then, as a post-install step, run sudo systemctl enable docker. This starts Docker at system boot; combined with restart: always, your previous Compose containers will be restarted automatically.
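If you also want the Compose project itself managed as a service instead of relying only on restart: always, one option is a small systemd unit. A minimal sketch, assuming the compose project lives in /opt/myapp and docker-compose sits at /usr/local/bin/docker-compose (both paths are assumptions):
sudo tee /etc/systemd/system/docker-compose-app.service <<'EOF' > /dev/null
[Unit]
Description=docker-compose application (assumed project in /opt/myapp)
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/myapp
ExecStart=/usr/local/bin/docker-compose up -d
ExecStop=/usr/local/bin/docker-compose down
EOF
sudo systemctl enable docker-compose-app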

Running dockerd starts the daemon, but you still have to enable it as a service so it comes up on boot:
$ sudo systemctl enable docker
Add restart: always to your db container.
How the docker restart policies work
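To confirm that the daemon is registered with systemd and will come up on boot (this assumes the package-based install suggested above rather than the bare binaries), something like:
systemctl is-enabled docker      # should print "enabled"
systemctl status docker --no-pager
docker ps                        # verifies the daemon is answering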

Related

Issue with docker not acknowledging docker-compose.override.yml

I'm fairly new to Docker. I was trying to containerize a project with separate development and production configurations. I came up with a very basic docker-compose setup and then tried the override feature, which doesn't seem to work.
I added volume overrides to the web and celery services, but they are not actually mounted in the containers; I can confirm this by looking at the inspect output of both containers.
Contents of the compose files:
docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    depends_on:
      - redis
  redis:
    image: redis:5.0.9-alpine
  celery:
    build: .
    command: celery worker -A facedetect.celeryapp -l INFO --concurrency=1 --without-gossip --without-heartbeat
    depends_on:
      - redis
    environment:
      - C_FORCE_ROOT=true
docker-compose.override.yml
version: '3'
services:
  web:
    volumes:
      - .:/code
    ports:
      - "8000:8000"
  celery:
    volumes:
      - .:/code
I use Docker with PyCharm on Windows 10.
Command executed to deploy the compose configuration:
"C:\Program Files\Docker Toolbox\docker-compose.exe" -f <full-path>/docker-compose.yml up -d
Command executed to inspect one of the containers:
docker container inspect <container_id>
Any help would be appreciated! :)
Just figured out that I had provided only the docker-compose.yml file to the Run Configuration created in PyCharm, since it was mandatory to provide at least one of these files.
The command used by PyCharm explicitly lists the .yml files with the -f option when running the configuration. Adding the docker-compose.override.yml file to the Run Configuration changed the command to
"C:\Program Files\Docker Toolbox\docker-compose.exe" -f <full_path>\docker-compose.yml -f <full_path>\docker-compose.override.yml up -d
This solved the issue. Thanks to Exadra37 for pointing me at the command that was being executed.
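When in doubt about which files are being merged, docker-compose can print the effective, merged configuration; a quick check from the project directory (a sketch of the same command pattern):
docker-compose -f docker-compose.yml -f docker-compose.override.yml config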

How do I run a website in bitnami+docker+nginx

I'm taking over a website https://www.funfun.io. Unfortunately, I cannot reach the previous developer anymore.
This is an AngularJS+Node+Express+MongoDB application. He decided to use bitnami+docker+nginx on the server. Here is the docker-compose.yml:
version: "3"
services:
  funfun-node:
    image: funfun
    restart: always
    build: .
    environment:
      - MONGODB_URI=mongodb://mongodb:27017/news
    env_file:
      - ./.env
    depends_on:
      - mongodb
  funfun-nginx:
    image: funfun-nginx
    restart: always
    build:
      context: .
      dockerfile: Dockerfile.nginx
    ports:
      - "3000:8443"
    depends_on:
      - funfun-node
  mongodb:
    image: mongo:3.4
    restart: always
    volumes:
      - "10studio-mongo:/data/db"
    ports:
      - "27018:27017"
networks:
  default:
    external:
      name: 10studio
volumes:
  10studio-mongo:
    driver: local
Dockerfile.nginx:
FROM bitnami/nginx:1.16
COPY ./funfun.io /opt/bitnami/nginx/conf/server_blocks/default.conf
COPY ./ssl/MyCompanyLocalhost.cer /opt/MyCompanyLocalhost.cer
COPY ./ssl/MyCompanyLocalhost.pvk /opt/MyCompanyLocalhost.pvk
Dockerfile:
FROM node:12
RUN npm install -g yarn nrm --registry=https://registry.npm.taobao.org && nrm use cnpm
COPY ./package.json /opt/funfun/package.json
WORKDIR /opt/funfun
RUN yarn
COPY ./ /opt/funfun/
CMD yarn start
On my local machine, I could use npm start to test the website in a web browser.
I have access to the Ubuntu server, but I'm new to bitnami+docker+nginx, so I have the following questions:
In the command line of the Ubuntu server, how can I check whether the service is running (besides opening the website in a browser)?
How can I shut down and restart the service?
Previously, without Docker, we could start mongodb with sudo systemctl enable mongod. Now, with Docker, how can we start mongodb?
First of all, to deploy the services defined in the compose file locally, you should run the command below:
docker-compose up
docker-compose up -d # in the background
After running the above command, the containers will be created and available on your machine.
To list the running containers
docker ps
docker-compose ps
To stop containers
docker stop ${container name}
docker-compose stop
mongodb is part of the docker-compose file, so it will be running once you start the other services. It will also be restarted automatically if it crashes or you restart your machine.
One final note: since you are using an external network, you may need to create the network before starting the services.
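A minimal sketch, based on the external network name 10studio declared in the compose file above:
docker network create 10studio
docker-compose up -d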
1.
docker-compose ps will give you the state of your containers
2.
docker-compose stop will stop your containers, keeping their state; you can then start them again with docker-compose up
docker-compose kill will force-stop your containers (use docker-compose down to remove them)
docker-compose restart will restart your containers
3.
Because your mongodb service uses an official mongo image, its container starts when you run docker-compose up, without any other intervention.
Or you can add command: mongod --auth directly to your docker-compose.yml.
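To verify from the server that mongodb is actually reachable, one option is to ping it through the service name from the compose file (a sketch; run it from the compose project directory):
docker-compose exec mongodb mongo --eval "db.adminCommand('ping')"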
The official Docker documentation is very detailed and helps a lot with all of this; keep referring to it: https://docs.docker.com/compose/

How to upload extra modules to odoo container in digital ocean

I have a docker-compose.yml with the configuration for an Odoo container, and I have some custom modules.
version: '2'
services:
  web:
    image: odoo:11.0
    restart: always
    depends_on:
      - db
    ports:
      - "8069:8069"
    volumes:
      - ./custom-modules:/mnt/extra-addons
  db:
    image: postgres:10
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_PASSWORD=odoo
      - POSTGRES_USER=odoo
I want to deploy these containers to DigitalOcean, so I create a Docker droplet:
$ docker-machine create --driver=digitalocean --digitalocean-access-token=$DO_TOKEN --digitalocean-size=s-1vcpu-1gb odoo
$ eval $(docker-machine env odoo)
$ docker-compose up -d
I was expecting Docker to upload my custom-modules/ folder or something like that, but the folder is not available on the Docker machine. Any idea how to do this? Of course, I know how to install Odoo from scratch on a normal Ubuntu droplet, but I want to do this with Docker, and I am new to this technology.
Have you added the relative paths of your extra modules to the addons_path in your odoo.conf configuration file?
Your .yml file looks correct.
Don't forget to update the list of applications in the Apps menu, and install/update your custom module.
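One more thing worth keeping in mind: bind mounts such as ./custom-modules are resolved on the Docker host, which with docker-machine is the droplet rather than your laptop. A hedged sketch for checking and, if needed, copying the modules up first (the remote path /home/custom-modules is an assumption; adjust the volume in docker-compose.yml to match):
# check what actually ended up inside the container on the droplet
docker-compose exec web ls /mnt/extra-addons
# if it is empty, copy the local modules to the droplet (machine name "odoo" from above)
docker-machine scp -r ./custom-modules odoo:/home/custom-modules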

Error starting docker postgres on Travis CI

I'm having an issue with my travis-ci before_script while trying to connect to my docker postgres container:
Error starting userland proxy: listen tcp 0.0.0.0:5432: bind: address already in use
I've seen this problem raised but never fully addressed across SO and GitHub issues, and I'm not clear whether it is specific to Docker or Travis. One linked issue (below) works around it by using 5433 as the host Postgres port, but I'd like to know for sure what is going on before I jump into something.
my travis.yml:
sudo: required
services:
  - docker
env:
  DOCKER_COMPOSE_VERSION: 1.7.1
  DOCKER_VERSION: 1.11.1-0~trusty
before_install:
  # list docker-engine versions
  - apt-cache madison docker-engine
  # upgrade docker-engine to specific version
  - sudo apt-get -o Dpkg::Options::="--force-confnew" install -y docker-engine=${DOCKER_VERSION}
  # upgrade docker-compose
  - sudo rm /usr/local/bin/docker-compose
  - curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > docker-compose
  - chmod +x docker-compose
  - sudo mv docker-compose /usr/local/bin
before_script:
  - echo "Before Script:"
  - docker-compose -f docker-compose.ci.yml build
  - docker-compose -f docker-compose.ci.yml run app rake db:setup
  - docker-compose -f docker-compose.ci.yml run app /bin/sh
script:
  - echo "Running Specs:"
  - rake spec
my docker-compose.yml for ci:
postgres:
  image: postgres:9.4.5
  environment:
    POSTGRES_USER: web
    POSTGRES_PASSWORD: yourpassword
  expose:
    - '5432' # added this as an attempt to open the port
  ports:
    - '5432:5432'
  volumes:
    - web-postgres:/var/lib/postgresql/data
redis:
  image: redis:3.0.5
  ports:
    - '6379:6379'
  volumes:
    - web-redis:/var/lib/redis/data
web:
  build: .
  links:
    - postgres
    - redis
  volumes:
    - ./code:/app
  ports:
    - '8000:8000'
  # env_file: # setting these directly in the environment
  #   - .docker.env # (they work fine locally)
sidekiq:
  build: .
  command: bundle exec sidekiq -C code/config/sidekiq.yml
  links:
    - postgres
    - redis
  volumes:
    - ./code:/app
Docker & Postgres: Failed to bind tcp 0.0.0.0:5432 address already in use
How to get Docker host IP on Travis CI?
It seems that the Postgres service is enabled by default in Travis CI.
So you could:
Try to disable the Postgres service in your Travis config. See How to stop services on Travis CI running by default?. See also https://docs.travis-ci.com/user/database-setup/#PostgreSQL .
Or
Map your postgres container to another host port (!= 5432). Like -p 5455:5432.
It could also be useful to check if the service is already running : Check If a Particular Service Is Running on Ubuntu
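For example, before bringing the containers up you could stop the Postgres service that the Travis image starts by default and confirm the port is free (a sketch; the exact service name can vary by image):
sudo service postgresql stop
# confirm nothing is listening on 5432 any more
sudo lsof -i :5432 || echo "port 5432 is free"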
Do you use Travis' Postgres?
services:
  - postgresql
It would be easier if you provided your travis.yml.

Docker Compose for Rails

I'm trying to replicate this docker command in a docker-compose.yml file:
docker run --name rails -d -p 80:3000 -v "$PWD"/app:/www -w /www -ti rails
My docker-compose.yml file:
rails:
  image: rails
  container_name: rails
  ports:
    - 80:3000
  volumes:
    - ./app:/wwww
When I run docker-compose up -d, the container is created but it does not start.
When I add tty: true to my docker-compose.yml file, the container starts fine, but my volume is not mounted.
How can I replicate exactly my docker command in a docker-compose.yml?
There are some ways to solve your problem.
Solution 1: If you want to use the rails image in your docker-compose.yml, you need to set the command and working directory for it like
rails:
  image: rails
  container_name: rails
  command: bash -c "bundle install && rails server -b 0.0.0.0"
  ports:
    - 80:3000
  volumes:
    - ./app:/www
  working_dir: /www
This will create a new container from the rails image every time you run docker-compose up.
Solution 2: Move your docker-compose.yml to the same directory as your Gemfile, and create a Dockerfile in that directory in order to build a Docker image in advance (to avoid running bundle install every time):
#Dockerfile
FROM rails:onbuild
I use rails:onbuild here for simplicity reasons (about the differences between rails:onbuild and rails:<version>, please see the documentation).
After that, modify the docker-compose.yml to
rails:
  build: .
  container_name: rails
  ports:
    - 80:3000
  volumes:
    - .:/www
  working_dir: /www
Run docker-compose up and this should work!
If you modify your Gemfile, you may also need to rebuild your container by docker-compose build before running docker-compose up.
Thanks for your answer. It helped me find the solution.
It was actually a volume problem. I wanted to mount the volume at the directory /www, but it was not possible.
So I used the directory that the rails image uses by default:
/usr/src/app
rails:
  image: rails
  container_name: rails
  ports:
    - 80:3000
  working_dir: /usr/src/app
  volumes:
    - ./app:/usr/src/app
  tty: true
Now my docker-compose up -d command works.
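To double-check that the container is running and that the volume really is mounted, something along these lines can help (a sketch, relying on the container_name rails from the compose file):
docker-compose ps
docker inspect -f '{{ json .Mounts }}' rails
docker-compose logs rails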
