I have a Rails app configured to run in Docker for development. When I run docker-compose up, the app container starts, and so do all the associated containers specified in my docker-compose.yml. Everything works fine, apart from the fact that the app container takes ages to start; once it has started, it's perfect. I'm unsure where to start looking for the cause of this delay, and it could be either a Rails or a Docker issue. I don't have this problem in my other Docker/Rails applications, just this one.
I know this isn't much to go on, but I'm hoping people can give me some pointers as to where to look to find where this delay comes from, or what's happening during that time, so I can post more information to help narrow it down.
Thanks
Dockerfile:
FROM starefossen/ruby-node:2-8-stretch
RUN apt-get update && apt-get install -y build-essential
WORKDIR /app
COPY Gemfile* ./
RUN bundle install
COPY . .
CMD ["rails", "s", "-b", "0.0.0.0"]
docker-compose.yml:
version: '3.7'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 3000:3000
      - 35729:35729
      - 5000:5000
      - 9200:9200
    env_file:
      - '.env'
    volumes:
      - .:/app
      - type: tmpfs
        target: /app/tmp/pids/
    depends_on:
      - database
      - elasticsearch
  database:
    image: postgres:9.6-alpine
    volumes:
      - pg-data:/var/lib/postgresql/data
  webpacker:
    build: .
    command: ./bin/webpack-dev-server
    volumes:
      - .:/app
    ports:
      - '3035:3035'
  adminer:
    image: adminer
    restart: always
    ports:
      - "8080:8080"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.1
    volumes:
      - es-data:/usr/share/elasticsearch/data
volumes:
  pg-data:
  es-data:
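A few commands that can help narrow down where the time goes (using the service name app from the compose file above, and assuming rails runner is available in the app, which it normally is in a Rails project):

# follow the app container's output with timestamps to see which step is slow
docker-compose logs -f -t app

# time a full Rails boot in a one-off container, separately from the web server
time docker-compose run --rm app rails runner 'puts Rails.env'

# watch CPU/memory while the stack starts
docker stats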
Related
Hello, I have a project in Elixir, and I'm not sure how to link my local files into the container so that whenever I update a file locally the change shows up in Docker, without having to run docker-compose up again every time something changes.
My Dockerfile:
FROM elixir:alpine
RUN apk add --update --no-cache curl py-pip
RUN apk add --no-cache build-base git
WORKDIR /app
RUN mix local.hex --force && \
mix local.rebar --force
COPY mix.exs mix.lock ./
COPY config config
RUN mix do deps.get, deps.compile
COPY priv priv
COPY lib lib
COPY numbers.csv numbers.csv
COPY docker-entrypoint.sh docker-entrypoint.sh
EXPOSE 4000
docker-compose.yml:
version: "3.7"
services:
app:
restart: on-failure
build: .
command: /bin/sh docker-entrypoint.sh
ports:
- "4000:4000"
depends_on:
- postgres-db
links:
- postgres-db
env_file:
- .env
postgres-db:
image: "postgres:12"
restart: always
container_name: "postgres-db"
environment:
POSTGRES_PASSWORD: ${DB_PASS}
POSTGRES_USER: ${DB_USER}
POSTGRES_DB: ${DB_NAME}
ports:
- "5432:5432"
Folder structure: (screenshot not included here)
You should have another docker-compose file, called docker-compose.override.yml, which holds your setup for local development. In that file you can use volumes so that local file updates are reflected in the Docker container while it's running.
It will look something like this (look at the volumes part):
version: "3.8"
services:
db:
image: postgres:13.0
env_file:
- ./docker/dev.env
restart: always
ports:
- "5432:5432"
volumes:
- db-data:/var/lib/postgresql/data
spiritpay:
image: spiritpay:local
build:
context: .
dockerfile: ./Dockerfile
depends_on:
- db
stdin_open: true
tty: true
env_file:
- ./docker/dev.env
ports:
- "4000:4000"
- "4002:4002"
volumes:
- /opt/spiritpay/assets/node_modules
- ./assets:/opt/spiritpay/assets
- ./config:/opt/spiritpay/config:ro
- ./lib:/opt/spiritpay/lib:ro
- ./priv:/opt/spiritpay/priv
- ./test:/opt/spiritpay/test:ro
- ./mix.exs:/opt/spiritpay/mix.exs:ro
- ./mix.lock:/opt/spiritpay/mix.lock:ro
volumes:
db-data:
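With that override file sitting next to docker-compose.yml, Compose merges the two automatically, so the usual commands keep working unchanged; docker-compose config prints the merged result if you want to verify what is actually applied:

docker-compose up --build
# show the effective, merged configuration
docker-compose config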
I was using docker-compose, but when I tried to build it again this error showed up (I have built this docker-compose setup many times before):
ERROR: Service 'api' failed to build: max depth exceeded
I tried to execute docker system prune to clean my containers, but it didn't work.
docker-compose.yml
version: "3"
services:
client:
container_name: my_client
image: mhart/alpine-node:12
build: ./client
restart: always
ports:
- "3000:3000"
working_dir: /client
volumes:
- ./client:/client
entrypoint: ["npm", "start"]
links:
- api
networks:
- my_network
api:
container_name: my_api
build: ./api
restart: always
ports:
- "9000:9000"
environment:
DB_HOSTNAME: mysql
working_dir: /api
volumes:
- ./api:/api
depends_on:
- mysql
networks:
- my_network
mysql:
container_name: my_mysql
build: ./db
restart: always
volumes:
- /var/lib/mysql
- ./db:/db
ports:
- "3307:3306"
environment:
- MYSQL_ROOT_PASSWORD=n
- MYSQL_USER=n
- MYSQL_PASSWORD=n
- MYSQL_DATABASE=n
networks:
- my_network
command: '--default-authentication-plugin=mysql_native_password'
networks:
my_network:
driver: bridge
This is the Dockerfile for the api service:
FROM mhart/alpine-node:12
WORKDIR /api
COPY package*.json /api/
RUN npm i -g nodemon
RUN npm install
COPY . /api/
EXPOSE 9000
CMD ["npm", "run", "dev"]
Any help is appreciated.
So, I figured it out: I just needed to execute docker system prune -a to remove any stopped containers and unused images. Now --build is working again.
That command deleted all the local Docker images related to my Dockerfile. After building it so many times, my local storage had reached a limit, hence the max depth exceeded error.
Max depth doesn't indicate an out-of-storage-capacity error (though a prune could accidentally fix it).
Rather it indicates that the api image that you were building had too many layers.
A plausible theory is that you have a recursion caused by having this in your compose file:
image: mhart/alpine-node:12
build: ./client
and this in a Dockerfile
FROM mhart/alpine-node:12
(I'm assuming the Dockerfile in ./client is also FROM the same image).
Your build is essentially adding a few layers onto your local mhart/alpine-node:12 image every time you run it (you can confirm by running docker history mhart/alpine-node:12).
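For example, the layer list makes this visible; a long run of recent layers on top of the official base image means your own builds have been retagging it:

docker history mhart/alpine-node:12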
If so, you should probably rename the image in your compose file.
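A minimal sketch of that rename (my_client:local is just an illustrative tag, not something from the original setup):

services:
  client:
    image: my_client:local
    build: ./client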
I am trying to start three Docker containers using docker-compose: a PostgreSQL database, a REST API, and a frontend web app.
Previously I had the REST API and the database working perfectly (migrations ran and the server started), but when I wanted to send requests to the API from my web app I had trouble connecting to the Docker network. All the discussion I found online centered on connecting a web app in one container to an API in another container, and I didn't find any promising way to connect to it (besides enabling port forwarding in the kernel and exposing myself to the network?), so I just decided to package the web app in a container as well.
My directory structure:
ProjectName
|-> projectapi
|-> |-> api.docker
|-> |-> api_start.sh
|-> projectapp
|-> |-> front.docker
|-> |-> front_start.sh
|-> docker-compose.yml
The problem is running these startup scripts in the right context (to be honest, I only need to run an npm start and a python manage.py runserver).
I can't think of anything I haven't tried. Most of my efforts have centered on fiddling with paths, because for the longest time the issue was that the file wasn't being found, which I think I have fixed.
This is the docker-compose file. I have tried all sorts of command entries in here to run the desired startup script, and I have also used entrypoint.
version: '3.7'
services:
  db:
    container_name: projectdb
    image: postgres:9.6-alpine
    restart: always
    volumes:
      - projectdb:/var/lib/postgresql/data/
    environment:
      POSTGRES_DB: projectdb
      POSTGRES_PASSWORD: root
    ports:
      - "8001:5432"
  api:
    container_name: projectapi
    build:
      context: projectapi/
      dockerfile: api.docker
    ports:
      - "8000:8000"
    expose:
      - "8000"
    depends_on:
      - db
    restart: always
    environment:
      POSTGRES_DB: 'projectdb'
      POSTGRES_USER: 'postgres'
      POSTGRES_PASSWORD: 'root'
      POSTGRES_HOST: 'db'
  front:
    container_name: projectapp
    build:
      context: projectapp/
      dockerfile: front.docker
    ports:
      - "3000:3000"
    restart: always
volumes:
  projectdb:
Then there are the Dockerfiles:
front
FROM node:8
RUN mkdir /projectapp
COPY $HOSTDIR/package*.json /projectapp/
RUN npm install /projectapp
COPY $HOSTDIR/* /projectapp/
ENTRYPOINT ["npm", "start", "/projectapp"]
back
FROM python:3.6-slim
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN mkdir /projectapi
COPY $HOSTDIR/requirements.txt /projectapi/
RUN pip install -r /projectapi/requirements.txt
COPY $HOSTDIR/* /projectapi/
CMD ["python", "manage.py", "migrate"]
That last line could be runserver as well. These are just some of the permutations I've gone through, but at this point I feel the problem is some conceptual misunderstanding on my part; I have read the docs.
The error messages are all different variations on not finding the startup script. I think at one point I managed to run a startup script and the error became that it couldn't find manage.py, at which point I started looking into how to write the script better than just python manage.py runserver, but I didn't get very far.
Try using a WORKDIR; your current run path doesn't contain manage.py.
I tried changing it momentarily to:
WORKDIR /projectapi
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "manage.py", "runserver"]
I have solved this issue, but I still haven't gotten the network to work; that's another question, though. While I did solve the problem with the startup scripts, it turned out they weren't needed. Here's the current state of things:
Directory structure is the same as before.
docker-compose.yml
version: '3.7'
services:
  db:
    container_name: compdb
    image: postgres:9.6-alpine
    restart: always
    volumes:
      - compdb:/var/lib/postgresql/data/
    environment:
      POSTGRES_DB: compdb
      POSTGRES_PASSWORD: root
    networks:
      - internal
    ports:
      - "8001:5432"
  api:
    container_name: back
    build:
      context: back/
      dockerfile: api.docker
    entrypoint: ["python", "/back/manage.py", "runserver", "0.0.0.0:8000"]
    networks:
      - internal
    ports:
      - "8000:8000"
    expose:
      - "8000"
    depends_on:
      - db
    restart: always
    environment:
      POSTGRES_DB: 'compdb'
      POSTGRES_USER: 'postgres'
      POSTGRES_PASSWORD: 'root'
      POSTGRES_HOST: 'db'
  front:
    container_name: front
    build:
      context: front/
      dockerfile: front.docker
    entrypoint: ["npm", "start", "--prefix", "/front/"]
    networks:
      - internal
    ports:
      - "3000:3000"
    expose:
      - "3000"
    depends_on:
      - api
    restart: always
  staff:
    container_name: staff
    build:
      context: staff/
      dockerfile: staff.docker
    entrypoint: ["npm", "start", "--prefix", "/staff/"]
    networks:
      - internal
    ports:
      - "3006:3006"
    expose:
      - "3006"
    depends_on:
      - api
    restart: always
volumes:
  compdb:
networks:
  internal:
front
FROM node:8
RUN mkdir /front
COPY package*.json /front/
RUN npm install /front
COPY . /front/
back
FROM python:3.6-slim
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN mkdir /back
COPY requirements.txt /back/
RUN pip install -r /back/requirements.txt
COPY . /back/
The staff Dockerfile is similar to front.
The problem was solved by moving the build context into each directory with docker-compose. Running startup scripts can be done by changing the entrypoint, but for local development it is more convenient to attach to the container and run migrations or similar by hand.
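For example, with the service names above, migrations can be run inside the already-running api container like this (docker-compose exec runs a command in the existing container; docker-compose run --rm would spin up a fresh one instead):

docker-compose exec api python manage.py migrate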
I want to use Docker to run my project (React + Node.js + MongoDB).
Dockerfile:
FROM node:8.9-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . .
CMD nohup sh -c 'npm start && node ./server/server.js'
docker-compose.yml:
version: '2.1'
services:
  chat:
    image: chat
    container_name: chat
    build: .
    environment:
      NODE_ENV: production
    ports:
      - "3000:3000"
      - "8080:8080"
    volumes:
      - ./:/usr/src/app
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    ports:
      - "27017:27017"
When I run docker-compose up --build, port 3000 works, but port 8080 is dead:
localhost:3000 responds
localhost:8080 does not
I would suggest creating a separate container for the server and keeping it apart from the "chat" container. It's best to have each container do one thing and one thing only (much like the philosophy behind Unix commands).
In any case, here are some modifications I would make to the compose file:
version: '2.1'
services:
  chat:
    image: chat
    container_name: chat
    build: .
    environment:
      NODE_ENV: production
    ports:
      - "3000:3000"
      - "8080:8080"
    volumes:
      - ./:/usr/src/app
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    # You don't need to expose this port to the outside world. Because you linked the two containers,
    # the chat app can connect to MongoDB using the hostname mongo inside the container network.
    # ports:
    #   - "27017:27017"
Btw what happens if you run:
$ docker-compose down
and then
$ docker-compose up
$ docker ps
Can you see the ports exposed in the docker ps output?
Your chat service depends on mongo, so you also need to have this in your chat service:
depends_on:
  - mongo
This docker-compose file works for me. Note that I am saving the database data to a local directory (./mongodb-data); you should add that directory to your .gitignore.
version: "3.2"
services:
mongo:
container_name: mongo
image: mongo:latest
environment:
- MONGO_INITDB_ROOT_USERNAME=root
- MONGO_INITDB_ROOT_PASSWORD=password
- NODE_ENV=production
ports:
- "28017:27017"
expose:
- 28017 # you can connect to this mongodb with studio3t
volumes:
- ./mongodb-data:/data/db
restart: always
networks:
- docker-network
express:
container_name: express
environment:
- NODE_ENV=development
restart: always
build:
context: .
args:
buildno: 1
expose:
- 3000
ports:
- "3000:3000"
links:
- mongo # link this service to the database service
depends_on:
- mongo
command: "npm start" # override the default command to use nodemon in dev
networks:
- docker-network
networks:
docker-network:
driver: bridge
You may also find that, with Node, you have to wait for the MongoDB container to be ready before you can connect to the database.
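A minimal sketch of such a wait, assuming the image ships an nc (netcat) that supports -z (if not, any wait-for-it style script or a retry loop in the Node code does the same job):

# block until mongo accepts TCP connections on 27017, then start the app
until nc -z mongo 27017; do
  echo "waiting for mongo..."
  sleep 1
done
npm start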
I'm new to Docker, so I don't know if it's a programming mistake or something else. One thing I found strange is that on a Mac it worked fine, but on Windows it doesn't.
docker-compose.yml
version: '2.1'
services:
  db:
    build: ./backend
    restart: always
    ports:
      - "3306:3306"
    volumes:
      - /var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=123
      - MYSQL_DATABASE=demo
      - MYSQL_USER=user
      - MYSQL_PASSWORD=123
  php:
    build: ./frontend
    ports:
      - "80:80"
    volumes:
      - ./frontend:/var/www/html
    links:
      - db
Dockerfile inside ./frontend:
FROM php:7.2-apache
# Enable mysqli to connect to database
RUN docker-php-ext-install mysqli
# Document root
WORKDIR /var/www/html
COPY . /var/www/html/
Dockerfile inside ./backend
FROM mysql:5.7
COPY ./demo.sql /docker-entrypoint-initdb.d
Console:
$ docker-compose up
Creating phpsampleapp_db_1 ... done
Creating phpsampleapp_db_1 ...
Creating phpsampleapp_php_1 ...
It stays like that forever; I've tried a bunch of things.
I'm using Docker version 17.12.0-ce with Linux container mode enabled.
I think I don't need the "version" and "services" keys, but anyway.
Thanks.
In my case, the fix was simply to restart Docker Desktop. After that, everything went smoothly.