I have a Django app that I want to move to docker. A redis dump.rdb file is in the root directory of the project, and contains data needed for the app to work. I normally start that by running redis-server while in the same directory. How can I move this configuration to docker? I know I can use volumes and suspect I need to mount my code folder as one, but will that cause other issues? Here is my current docker setup:
Dockerfile
FROM python:2.7.14
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD . /code/
ADD requirements /requirements
RUN pip install -r /requirements/local.txt
docker-compose.yml
version: '3'
services:
db:
image: postgres:9.6.3
expose:
- "5432"
volumes:
- pgdata:/var/lib/postgresql/data
redis:
image: redis:3.2.6
expose:
- "6379"
volumes:
- ./code
redis_cache:
image: redis:3.2.6
expose:
- "6379"
elasticsearch:
image: elasticsearch:5.6.6
expose:
- "9200"
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
ports:
- "8000:8000"
environment:
- DATABASE_URL=postgresql://postgres@db/postgres
- ENVIRONMENT=development
- REDIS_URL=redis://redis:6379
- REDIS_CACHE_URL=redis://redis_cache:6379
- ELASTIC_ENDPOINT=elasticsearch:9200
env_file: docker.env
depends_on:
- db
- redis
- elasticsearch
volumes:
- .:/code
volumes:
pgdata: {}
There are several ways. Which one to prefer depends on your project and on what kind of information is stored in the dump.rdb file.
You can create a custom Redis image with the dump.rdb file baked in, then push it to your image repository.
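A minimal sketch of such an image, assuming dump.rdb sits next to the Dockerfile (the official redis image keeps its data in /data, and redis-server loads an existing dump.rdb from there on start):
FROM redis:3.2.6
# bake the existing dump into the image so a fresh container starts with the data
COPY dump.rdb /data/dump.rdb
You could then build it, e.g. docker build -t my-redis-with-data . (the tag is just illustrative), and point the redis service at that image instead of redis:3.2.6.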
You can, as you mention above, mount a volume from your source code. But rather than mounting the whole code directory, I prefer to mount only a directory that holds the data intended for Redis.
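As a sketch, assuming you move dump.rdb into a redis/ subdirectory of the project instead of the project root:
redis:
  image: redis:3.2.6
  expose:
    - "6379"
  volumes:
    - ./redis:/data
Redis will load dump.rdb from /data on startup and write its saves back to the same host directory.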
Also, you can create a migration/seed script in the web container that populates data in the redis container as well as in the db container.
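A rough sketch of that last option, where seed_redis is a hypothetical Django management command you would write yourself to push the needed data into Redis via REDIS_URL:
web:
  build: .
  command: sh -c "python manage.py migrate && python manage.py seed_redis && python manage.py runserver 0.0.0.0:8000"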
I am trying to start three docker containers using docker-compose; a postgresql database, a rest api and a frontend web app.
Previously I had the rest api and the database working perfectly (ran migrations and started the server) but when I wanted to send requests to it from my web app I had trouble connecting to the docker network. All the discussion on the internet was centered around connecting a web app in a container to an api in another container and I didn't find any promising method to connect to it (besides enabling port forwarding in the kernel and exposing myself to the network?) so I just decided to package the web app in a container as well.
My directory structure:
ProjectName
|-> projectapi
|-> |-> api.docker
|-> |-> api_start.sh
|-> projectapp
|-> |-> front.docker
|-> |-> front_start.sh
|-> docker-compose.yml
The problem is running these startup scripts in the right context (tbh I only need to run an npm start and a python manage.py runserver).
I can't think of anything I haven't tried, but most of my efforts have been centered around mucking around with paths, because for the longest time the issue was not finding the file, which I think I have now fixed.
This is the docker-compose file; I have tried having all sorts of command entries in here to run the desired startup script. I have also used entrypoint.
version: '3.7'
services:
db:
container_name: projectdb
image: postgres:9.6-alpine
restart: always
volumes:
- projectdb:/var/lib/postgresql/data/
environment:
POSTGRES_DB: projectdb
POSTGRES_PASSWORD: root
ports:
- "8001:5432"
api:
container_name: projectapi
build:
context: projectapi/
dockerfile: api.docker
ports:
- "8000:8000"
expose:
- "8000"
depends_on:
- db
restart: always
environment:
POSTGRES_DB: 'projectdb'
POSTGRES_USER: 'postgres'
POSTGRES_PASSWORD: 'root'
POSTGRES_HOST: 'db'
front:
container_name: projectapp
build:
context: projectapp/
dockerfile: front.docker
ports:
- "3000:3000"
restart:
always
volumes:
projectdb:
Then there are the Dockerfiles:
front
FROM node:8
RUN mkdir /projectapp
COPY $HOSTDIR/package*.json /projectapp/
RUN npm install /projectapp
COPY $HOSTDIR/* /projectapp/
ENTRYPOINT ["npm", "start", "/projectapp"]
back
FROM python:3.6-slim
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN mkdir /projectapi
COPY $HOSTDIR/requirements.txt /projectapi/
RUN pip install -r /projectapi/requirements.txt
COPY $HOSTDIR/* /projectapi/
CMD ["python", "manage.py", "migrate"]
That last line could be runserver as well. These are just some examples of the permutations I've gone through, but at this point I feel the problem is some conceptual misunderstanding; I've read the docs.
The error messages are all different permutations of not finding the startup script. I think at one point I managed to run a startup script and the error became that it couldn't find manage.py, at which point I started to look into how to write the script better than just python manage.py runserver, but didn't get very far.
Try using a WORKDIR. Your current run path doesn't contain manage.py. Try changing it, for the moment, to something like this:
WORKDIR /projectapi
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "manage.py", "runserver"]
I have solved this issue, but still haven't gotten the network to work; however, that's another question. While I solved the problem with the startup scripts, it turned out not to be needed. Here's the current state of things:
Directory structure is the same as before.
docker-compose.yml
version: '3.7'
services:
db:
container_name: compdb
image: postgres:9.6-alpine
restart: always
volumes:
- compdb:/var/lib/postgresql/data/
environment:
POSTGRES_DB: compdb
POSTGRES_PASSWORD: root
networks:
- internal
ports:
- "8001:5432"
api:
container_name: back
build:
context: back/
dockerfile: api.docker
entrypoint: ["python", "/back/manage.py", "runserver", "0.0.0.0:8000"]
networks:
- internal
ports:
- "8000:8000"
expose:
- "8000"
depends_on:
- db
restart: always
environment:
POSTGRES_DB: 'compdb'
POSTGRES_USER: 'postgres'
POSTGRES_PASSWORD: 'root'
POSTGRES_HOST: 'db'
front:
container_name: front
build:
context: front/
dockerfile: front.docker
entrypoint: ["npm","start", "--prefix", "/front/"]
networks:
- internal
ports:
- "3000:3000"
expose:
- "3000"
depends_on:
- api
restart:
always
staff:
container_name: staff
build:
context: staff/
dockerfile: staff.docker
entrypoint: ["npm","start","--prefix","/staff/"]
networks:
- internal
ports:
- "3006:3006"
expose:
- "3006"
depends_on:
- api
restart:
always
volumes:
compdb:
networks:
internal:
front
FROM node:8
RUN mkdir /front
COPY package*.json /front/
RUN npm install /front
COPY . /front/
back
FROM python:3.6-slim
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN mkdir /back
COPY requirements.txt /back/
RUN pip install -r /back/requirements.txt
COPY . /back/
staff is similar to front.
The problem was solved by moving the build context into each directory in docker-compose. Running startup scripts can be done by changing the entrypoint; however, for local development, attaching to the container to run migrations or similar is more convenient, as in the example below.
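For example, with the service names above, migrations could be run against the running api container with something like:
docker-compose exec api python /back/manage.py migrate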
I'm using nestjs for my backend and using typeorm as ORM.
I tried to define my database and my application in a docker-compose file.
If I'm running my database as a container and my application from my local machine, it works well. My program connects and creates the tables etc.
But if I try to connect to the database from within my container, or start everything with docker-compose up, it fails.
I always get an ECONNREFUSED error.
Where is my mistake?
docker-compose.yml
version: '3.1'
volumes:
dbdata:
services:
db:
image: postgres:10
volumes:
- ./dbData/:/var/lib/postgresql/data
restart: always
environment:
- POSTGRES_PASSWORD=${TYPEORM_PASSWORD}
- POSTGRES_USER=${TYPEORM_USERNAME}
- POSTGRES_DB=${TYPEORM_DATABASE}
ports:
- ${TYPEORM_PORT}:5432
backend:
build: .
ports:
- "3001:3000"
command: npm run start
volumes:
- .:/src
Dockerfile
FROM node:10.5
WORKDIR /home
# Bundle app source
COPY . /home
# Install app dependencies
#RUN npm install -g nodemon
# If you are building your code for production
# RUN npm install --only=production
RUN npm i -g @nestjs/cli
RUN npm install
EXPOSE 3000
.env
# .env
HOST=localhost
PORT=3000
NODE_ENV=development
LOG_LEVEL=debug
TYPEORM_CONNECTION=postgres
TYPEORM_HOST=localhost
TYPEORM_USERNAME=postgres
TYPEORM_PASSWORD=postgres
TYPEORM_DATABASE=mariokart
TYPEORM_PORT=5432
TYPEORM_SYNCHRONIZE=true
TYPEORM_DROP_SCHEMA=true
TYPEORM_LOGGING=all
TYPEORM_ENTITIES=src/database/entity/*.ts
TYPEORM_MIGRATIONS=src/database/migrations/**/*.ts
TYPEORM_SUBSCRIBERS=src/database/subscribers/**/*.ts
I tried to use links but it doesn't work in the container.
Take a look at your /etc/hosts inside the backend container. You will see
192.0.18.1 dir_db_1
or something like that. The IP will be different, and dir will be the directory you're in. Therefore, you must change TYPEORM_HOST=localhost to TYPEORM_HOST=dir_db_1.
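You can check that from the host, for example (assuming the backend service is named backend in your compose file):
docker-compose exec backend cat /etc/hosts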
That said, I suggest you give your containers static names.
services:
db:
container_name: project_db
...
backend:
container_name: project_backend
In this case you can always be sure that your container will have a static name, so you can set TYPEORM_HOST=project_db and never worry about the name again.
You can create a network and share it between the two services.
Create a network for the db and backend services:
networks:
common-net: {}
and add the network to these two services. So your .yml file would look like the below after the edit:
version: '3.1'
volumes:
dbdata:
services:
db:
image: postgres:10
volumes:
- ./dbData/:/var/lib/postgresql/data
restart: always
environment:
- POSTGRES_PASSWORD=${TYPEORM_PASSWORD}
- POSTGRES_USER=${TYPEORM_USERNAME}
- POSTGRES_DB=${TYPEORM_DATABASE}
ports:
- ${TYPEORM_PORT}:5432
networks:
- common-net
backend:
build: .
ports:
- "3001:3000"
command: npm run start
volumes:
- .:/src
networks:
- common-net
networks:
common-net: {}
Note 1: After this change, there is no need to expose the Postgres port externally unless you have a reason for it. You can remove that section.
Note 2: TYPEORM_HOST should be set to db. Docker will resolve the IP address of the db service by itself.
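In other words, in the .env file shown above, the host line simply becomes:
TYPEORM_HOST=db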
In my docker-compose.yml I placed init.sql into a volume.
version: '3'
services:
mysqldb:
image: mysql:5.7.22
container_name: mysql
restart: always
ports:
- "3306:3306"
volumes:
- ./init.sql:/docker-entrypoint-initdb.d/1-init.sql
I know that I should run this script via a Dockerfile. How can I achieve this?
The official Docker mysql image will run everything present in /docker-entrypoint-initdb.d when the database is first initialized (see "Initializing a fresh instance" on that page). Since you're injecting it into the container using a volume, if the database doesn't already exist, your script will be run automatically as you have it.
That page also suggests creating a custom Docker image. The Dockerfile would be very short
FROM mysql:5.7.22
COPY init.sql /docker-entrypoint-initdb.d/1-init.sql
and then, once you've built the modified image, you wouldn't need a copy of the script locally to have it run at first start.
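For example, you could build and tag it once with docker build -t my-mysql-init . (my-mysql-init is just an illustrative name), push it, and then the compose service only needs to reference the image:
services:
  mysqldb:
    image: my-mysql-init
    container_name: mysql
    restart: always
    ports:
      - "3306:3306"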
If you want to run an init script every time you run the container, you could write it as below:
services:
mysqldb:
image: mysql:5.7.22
container_name: mysql
restart: always
command: --init-file /docker-entrypoint-initdb.d/1-init.sql
ports:
- "3306:3306"
environment:
- MYSQL_ROOT_PASSWORD=secret
- MYSQL_DATABASE=homestead
- MYSQL_USER=root
- MYSQL_PASSWORD=secret
volumes:
- dbdata:/var/lib/mysql
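Note that the dbdata named volume referenced above also needs a top-level declaration for the compose file to be valid:
volumes:
  dbdata: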
I'm currently attempting to use Docker to make our local dev experience involving two services easier, but I'm struggling to use host and container ports in the right way. Here's the situation:
One repo containing a Rails API, running on 127.0.0.1:3000 (let's call this backend)
One repo containing an isomorphic React/Redux frontend app, running on 127.0.0.1:8080 (let's call this frontend)
Both have their own Dockerfile and docker-compose.yml files as they are in separate repos, and both start with docker-compose up fine.
Currently not using Docker at all for CI or deployment, planning to in the future.
The issue I'm having is that in local development the frontend app is looking for the API backend on 127.0.0.1:3000 from within the frontend container, which isn't there - it's only available to the host and the backend container actually running the Rails app.
Is it possible to forward the backend container's port 3000 to the frontend container? Or at the very least the host's port 3000, as I can see the Rails app on localhost on my computer. I've tried 127.0.0.1:3000:3000 within the frontend docker-compose, but I can't do that while the Rails app is running, as the port is in use and it fails to connect. I'm thinking maybe I've misunderstood the point or am missing something obvious?
Files:
frontend Dockerfile
FROM node:8.7.0
RUN npm install --global --silent webpack yarn
RUN mkdir /app
WORKDIR /app
COPY package.json /app/package.json
COPY yarn.lock /app/yarn.lock
RUN yarn install
COPY . /app
frontend docker-compose.yml
version: '3'
services:
web:
build: .
command: yarn start:dev
volumes:
- .:/app
ports:
- '8080:8080'
- '127.0.0.1:3000:3000' # rails backend exposed to localhost within container
backend Dockerfile
FROM ruby:2.4.2
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /app
WORKDIR /app
COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock
RUN bundle install
COPY . /app
backend docker-compose.yml
version: '3'
volumes:
postgres-data:
driver: local
services:
postgres:
image: postgres:9.6
volumes:
- postgres-data:/var/lib/postgresql/data
web:
build: .
command: bundle exec rails s -p 3000 -b '0.0.0.0'
volumes:
- .:/app
ports:
- '3000:3000'
depends_on:
- postgres
You have to join the containers to one shared network. Do it in your docker-compose.yml files.
Check the docs to learn about networks in Docker.
frontend docker-compose.yml
version: '3'
services:
gui:
build: .
command: yarn start:dev
volumes:
- .:/app
ports:
- '8080:8080'
- '127.0.0.1:3000:3000'
networks:
- webnet
networks:
webnet:
backend docker-compose.yml
version: '3'
volumes:
postgres-data:
driver: local
services:
postgres:
image: postgres:9.6
volumes:
- postgres-data:/var/lib/postgresql/data
back:
build: .
command: bundle exec rails s -p 3000 -b '0.0.0.0'
volumes:
- .:/app
ports:
- '3000:3000'
depends_on:
- postgres
networks:
- webnet
networks:
webnet:
Docker has its own DNS resolution, so after you do this you will be able to connect to your backend by setting the address to: http://back:3000
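On the frontend side you would then point the app at that address, for example through an environment variable (API_HOST is just an illustrative name; use whatever your app actually reads):
gui:
  build: .
  command: yarn start:dev
  environment:
    - API_HOST=http://back:3000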
Managed to solve this using external links in the frontend app to link to the default network of the backend app like so:
version: '3'
services:
web:
build: .
command: yarn start:dev
environment:
- API_HOST=http://backend_web_1:3000
external_links:
- backend_default
networks:
- default
- backend_default
ports:
- '8080:8080'
volumes:
- .:/app
networks:
backend_default: # share with backend app
external: true
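If you're unsure what the backend's default network is actually called (Compose prefixes it with the project directory name, hence backend_default here), you can check with:
docker network ls
docker network inspect backend_default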
I am new to Docker and developing a project using docker-compose. From the documentation I have learned that I should be using data-only containers to keep data persistent, but I am unable to do so using docker-compose.
Whenever I do docker-compose down it removes the data from the db, but with docker-compose stop the data is not removed. Maybe this is because I am not creating a named data volume, and docker-compose down removes all the containers. So I tried naming it, but it threw me errors.
Please have a look at my yml file:
version: '2'
services:
data_container:
build: ./data
#volumes:
# - dataVolume:/data
db:
build: ./db
ports:
- "5445:5432"
environment:
- POSTGRES_USER=postgres
- POSTGRES_DB=postgres
# - PGDATA=/var/lib/postgresql/data/pgdata
volumes_from:
# - container:db_bus
- data_container
geoserver:
build: ./geoserver
depends_on:
- db
ports:
- "8004:8080"
volumes:
- ./geoserver/data:/opt/geoserverdata_dir
web:
build: ./web
volumes:
- ./web:/code
ports:
- "8000:8000"
depends_on:
- db
command: python manage.py runserver 0.0.0.0:8000
nginx:
build: ./nginx
ports:
- "83:80"
depends_on:
- web
The Dockerfile for the data_container is:
FROM stackbrew/busybox:latest
MAINTAINER Tom Offermann <tom@offermann.us>
# Create data directory
RUN mkdir /data
# Create /data volume
VOLUME /data
I tried this, but with docker-compose down the data is lost. I tried naming the volume for the data_container (see the commented lines), and it threw me this error:
ERROR: Named volume "dataVolume:/data:rw" is used in service "data_container" but no declaration was found in the volumes section.
So right now what I am doing is: I created a standalone, data-only, named container and put that in the volumes_from value of the db. It worked fine and didn't remove any data, even after doing docker-compose down.
My queries:
What is the best approach to make containers that can store a database's data using docker-compose, and to use them properly?
I'm not entirely comfortable with the approach I have opted for, the standalone data container. Any thoughts?
docker-compose down
does the following
Stops containers and removes containers, networks, volumes, and images
created by up
So the behaviour you are experiencing is expected.
Use docker-compose stop to shut down the containers created from the docker-compose file without removing their volumes.
Secondly, you don't need the data-container pattern in version 2 of docker-compose. So remove that and just use:
db:
...
volumes:
- /var/lib/postgresql/data
docker-compose down stops containers but also removes them (with everything: networks, ...).
Use docker-compose stop instead.
I think the best approach to make containers that can store database's data with docker-compose is to use named volumes:
version: '2'
services:
db: #https://hub.docker.com/_/mysql/
image: mysql
volumes:
- "wp-db:/var/lib/mysql:rw"
env_file:
- "./conf/db/mysql.env"
volumes:
wp-db: {}
Here, it will create a named volume called "wp-db" (if it doesn't exist) and mount it in /var/lib/mysql (in read-write mode, the default). This is where the database stores its data (for the mysql image).
If the named volume already exists, it will be used without creating it.
When starting, the mysql image looks for existing databases in /var/lib/mysql (your volume) in order to use them.
You can find more information in the docker-compose file reference here:
https://docs.docker.com/compose/compose-file/#/volumes-volume-driver
To store database data, make sure your docker-compose.yml looks like the following if you want to use a Dockerfile:
version: '3.1'
services:
php:
build:
context: .
dockerfile: Dockerfile
ports:
- 80:80
volumes:
- ./src:/var/www/html/
db:
image: mysql
command: --default-authentication-plugin=mysql_native_password
restart: always
environment:
MYSQL_ROOT_PASSWORD: example
volumes:
- mysql-data:/var/lib/mysql
adminer:
image: adminer
restart: always
ports:
- 8080:8080
volumes:
mysql-data:
Your docker-compose.yml will look like the following if you want to use an image instead of a Dockerfile:
version: '3.1'
services:
php:
image: php:7.4-apache
ports:
- 80:80
volumes:
- ./src:/var/www/html/
db:
image: mysql
command: --default-authentication-plugin=mysql_native_password
restart: always
environment:
MYSQL_ROOT_PASSWORD: example
volumes:
- mysql-data:/var/lib/mysql
adminer:
image: adminer
restart: always
ports:
- 8080:8080
volumes:
mysql-data:
If you want to store or preserve the MySQL data, you must remember to add these two sections to your docker-compose.yml:
volumes:
- mysql-data:/var/lib/mysql
and
volumes:
mysql-data:
After that, use this command:
docker-compose up -d
Now your data will persist and will not be deleted, even after using this command:
docker-compose down
Extra: if you want to delete all data (including the volumes), use
docker-compose down -v
To verify, you can list the database volumes with this command:
docker volume ls
DRIVER VOLUME NAME
local 35c819179d883cf8a4355ae2ce391844fcaa534cb71dc9a3fd5c6a4ed862b0d4
local 133db2cc48919575fc35457d104cb126b1e7eb3792b8e69249c1cfd20826aac4
local 483d7b8fe09d9e96b483295c6e7e4a9d58443b2321e0862818159ba8cf0e1d39
local 725aa19ad0e864688788576c5f46e1f62dfc8cdf154f243d68fa186da04bc5ec
local de265ce8fc271fc0ae49850650f9d3bf0492b6f58162698c26fce35694e6231c
local phphelloworld_mysql-data