Trouble running startup scripts for containers - docker

I am trying to start three Docker containers using docker-compose: a PostgreSQL database, a REST API, and a frontend web app.
Previously I had the REST API and the database working perfectly (migrations ran and the server started), but when I wanted to send requests to the API from my web app, I had trouble connecting to the Docker network. All the discussion on the internet centered around connecting a web app in one container to an API in another container, and I didn't find any promising method for connecting from the outside (besides enabling port forwarding in the kernel and exposing myself to the network?), so I decided to package the web app in a container as well.
My directory structure:
ProjectName
|-> projectapi
|   |-> api.docker
|   |-> api_start.sh
|-> projectapp
|   |-> front.docker
|   |-> front_start.sh
|-> docker-compose.yml
The problem is running these startup scripts in the right context (honestly I only need to run npm start and python manage.py runserver).
I can't think of anything I haven't tried; most of my efforts have centered around mucking about with paths, because for the longest time the issue was that the file wasn't found, which I think I have now fixed.
This is the docker-compose file; I have tried all sorts of command entries in here to run the desired startup script. I have also tried entrypoint.
version: '3.7'
services:
  db:
    container_name: projectdb
    image: postgres:9.6-alpine
    restart: always
    volumes:
      - projectdb:/var/lib/postgresql/data/
    environment:
      POSTGRES_DB: projectdb
      POSTGRES_PASSWORD: root
    ports:
      - "8001:5432"
  api:
    container_name: projectapi
    build:
      context: projectapi/
      dockerfile: api.docker
    ports:
      - "8000:8000"
    expose:
      - "8000"
    depends_on:
      - db
    restart: always
    environment:
      POSTGRES_DB: 'projectdb'
      POSTGRES_USER: 'postgres'
      POSTGRES_PASSWORD: 'root'
      POSTGRES_HOST: 'db'
  front:
    container_name: projectapp
    build:
      context: projectapp/
      dockerfile: front.docker
    ports:
      - "3000:3000"
    restart: always

volumes:
  projectdb:
Then there are the Dockerfiles:
front
FROM node:8
RUN mkdir /projectapp
COPY $HOSTDIR/package*.json /projectapp/
RUN npm install /projectapp
COPY $HOSTDIR/* /projectapp/
ENTRYPOINT ["npm", "start", "/projectapp"]
back
FROM python:3.6-slim
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN mkdir /projectapi
COPY $HOSTDIR/requirements.txt /projectapi/
RUN pip install -r /projectapi/requirements.txt
COPY $HOSTDIR/* /projectapi/
CMD ["python", "manage.py", "migrate"]
That last line could be runserver as well. These are just some examples of the permutations I've gone through, but at this point I suspect the problem is some conceptual misunderstanding; I have read the docs.
The error messages are all different permutations of not finding the startup script. I think at one point I managed to run a startup script and the error became that it couldn't find manage.py, at which point I started looking into how to write the script better than just python manage.py runserver, but I didn't get very far.

Try using a WORKDIR; your current working directory doesn't contain manage.py.
I tried changing it momentarily to:
WORKDIR /projectapi
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "manage.py", "runserver"]
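One caveat with that CMD: runserver binds to 127.0.0.1 by default, which is unreachable from outside the container, so it needs an explicit 0.0.0.0 binding, e.g.:
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]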

I have solved this issue, though I still haven't gotten the network to work; that, however, is another question. While I did solve the problem with the startup scripts, it turned out a script wasn't needed at all. Here's the current state of things:
Directory structure is the same as before.
docker-compose.yml
version: '3.7'
services:
  db:
    container_name: compdb
    image: postgres:9.6-alpine
    restart: always
    volumes:
      - compdb:/var/lib/postgresql/data/
    environment:
      POSTGRES_DB: compdb
      POSTGRES_PASSWORD: root
    networks:
      - internal
    ports:
      - "8001:5432"
  api:
    container_name: back
    build:
      context: back/
      dockerfile: api.docker
    entrypoint: ["python", "/back/manage.py", "runserver", "0.0.0.0:8000"]
    networks:
      - internal
    ports:
      - "8000:8000"
    expose:
      - "8000"
    depends_on:
      - db
    restart: always
    environment:
      POSTGRES_DB: 'compdb'
      POSTGRES_USER: 'postgres'
      POSTGRES_PASSWORD: 'root'
      POSTGRES_HOST: 'db'
  front:
    container_name: front
    build:
      context: front/
      dockerfile: front.docker
    entrypoint: ["npm", "start", "--prefix", "/front/"]
    networks:
      - internal
    ports:
      - "3000:3000"
    expose:
      - "3000"
    depends_on:
      - api
    restart: always
  staff:
    container_name: staff
    build:
      context: staff/
      dockerfile: staff.docker
    entrypoint: ["npm", "start", "--prefix", "/staff/"]
    networks:
      - internal
    ports:
      - "3006:3006"
    expose:
      - "3006"
    depends_on:
      - api
    restart: always

volumes:
  compdb:

networks:
  internal:
front
FROM node:8
RUN mkdir /front
COPY package*.json /front/
RUN npm install /front
COPY . /front/
back
FROM python:3.6-slim
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN mkdir /back
COPY requirements.txt /back/
RUN pip install -r /back/requirements.txt
COPY . /back/
staff is similar to front.
The problem was solved by moving the build context into each service's own directory for docker-compose. Running startup scripts can be done by changing the entrypoint, but for local development it is more convenient to attach to the running container for migrations and the like, as sketched below.
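For example, with the stack up, migrations can be run as a one-off command inside the running api container (a sketch using the service and container names from the compose file above):
docker-compose exec api python /back/manage.py migrate
Since the service also has a fixed container_name, docker exec -it back python /back/manage.py migrate works just as well.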

Related

Docker Prisma Error P1001: Dockerizing a NestJS, Prisma Postgres app

I have made a NestJS app with npx, and I am using Prisma and Postgres.
Below is my Dockerfile:
FROM node:16
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
COPY ./ ./
RUN npm i
EXPOSE 8080
CMD ["npm", "run", "start", "api"]
And my .env:
DATABASE_URL="postgres://myuser:mypassword@todo-db:5432/todoapp-db?schema=public?connection_timeout=300"
And my docker-compose.yml
version: '3.8'
services:
  nest-api:
    container_name: nest-api
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 3333:3333
    depends_on:
      - todo-db
      - prisma-postgres-api
    env_file:
      - .env
  todo-db:
    image: postgres:13
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: todoapp-db
  prisma-postgres-api:
    stdin_open: true
    build:
      context: .
      dockerfile: Dockerfile
    container_name: prisma-postgres-api
    depends_on:
      - todo-db
    ports:
      - '3000:3000'
    restart: always
    command: npx prisma migrate dev
The error I get is the following:
prisma-postgres-api | Error: P1001: Can't reach database server at `todo-db`:`5432`
prisma-postgres-api | Please make sure your database server is running at `todo-db`:`5432`.
I have tried every solution I could find online, but none seem to work, and I can't figure out where I am going wrong. I'd really appreciate some help; I've been stuck here for quite some time now.

Docker: Node server is not running after starting the containers

I have a Dockerfile and a docker-compose.yml file.
If I execute docker-compose up, it returns:
Creating network "demoapi_webnet" with the default driver
Creating demoapi_web_1 ... done
Creating d2c_postgres ... done
Attaching to demoapi_web_1, d2c_postgres
...
d2c_postgres | 2020-07-28 00:47:48.772 UTC [1] LOG: database system is ready to accept connections
But my node server is not starting.
These are my docker configuration files:
Dockerfile
FROM node:12.13-alpine As development
WORKDIR /app
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY dist .
COPY wait-for-it.sh .
CMD ["npm", "run", "start"]
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    networks:
      - webnet
    container_name: "d2c_postgres"
    environment:
      POSTGRES_PASSWORD: 010203
      POSTGRES_USER: postgres
      POSTGRES_DB: demo
    ports:
      - "5432:5432"
  web:
    image: nest-app
    ports:
      - "3000:3000"
    networks:
      - webnet
    environment:
      DB_HOST: db
    command: ["./wait-for-it.sh", "db:5432", "--", "npm", "run", "start"]

networks:
  webnet:
My only clue is this line:
env: can't execute 'bash': No such file or directory
I can establish a connection to pgadmin/postgres with that configuration, but the Node server is not starting. What am I doing wrong, and how can I solve it?
wait-for-it is based on Bash, and it is not compatible with Alpine, since Alpine images are based on ash/sh; that is why you are seeing can't execute 'bash': No such file or directory. You can look at the open issue for Alpine support:
Can you make an /bin/sh version for use with alpine linux
For Alpine, you can use wait-for instead.
./wait-for is a script designed to synchronize services like Docker containers. It is sh- and Alpine-compatible.
services:
  db:
    image: postgres:9.4
  backend:
    build: backend
    command: sh -c './wait-for db:5432 -- npm start'
    depends_on:
      - db
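For that command to work, the script has to be present and executable inside the image. A minimal sketch of the questioner's Dockerfile adapted for wait-for (assuming the script has been downloaded next to the Dockerfile):
FROM node:12.13-alpine
WORKDIR /app
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY dist .
# wait-for is plain POSIX sh (unlike wait-for-it, which needs bash),
# so it runs fine on Alpine's busybox shell
COPY wait-for .
RUN chmod +x ./wait-for
CMD ["npm", "run", "start"]
The compose command for the web service then becomes sh -c './wait-for db:5432 -- npm run start'.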
After a lot of research, I found a similar issue here:
docker-compose: nodejs container not communicating with Postgres container
For some reason wait-for-it wasn't working for me (not sure if it is a Windows issue). That sh file is not mandatory for waiting until the database starts; you can use depends_on to indicate that the server should start after a specified service (with one caveat, sketched after the compose file):
version: '3'
services:
  db:
    image: postgres
    networks:
      - webnet
    container_name: "node_postgres"
    environment:
      POSTGRES_PASSWORD: 010203
      POSTGRES_USER: postgres
      POSTGRES_DB: demo
    ports:
      - "5432:5432"
  web:
    image: nest-app
    depends_on:
      - db
    ports:
      - "3000:3000"
    networks:
      - webnet
    environment:
      DB_HOST: db
    command: ["npm", "run", "start"]

networks:
  webnet:
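One caveat: depends_on only controls start order; it does not wait for Postgres to actually accept connections, so the app can still race the database on a slow machine. Newer Compose releases let depends_on wait on a healthcheck; a sketch, assuming a Compose version that accepts the long depends_on form:
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: "010203"
    # report healthy only once Postgres accepts connections
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
  web:
    image: nest-app
    depends_on:
      db:
        condition: service_healthy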

docker-compose: max depth exceeded

I was using docker-compose, but when I tried to build again this error appeared, even though I had built this docker-compose setup multiple times before:
ERROR: Service 'api' failed to build: max depth exceeded
I tried to execute docker system prune to clean my containers, but it didn't work.
docker-compose.yml
version: "3"
services:
  client:
    container_name: my_client
    image: mhart/alpine-node:12
    build: ./client
    restart: always
    ports:
      - "3000:3000"
    working_dir: /client
    volumes:
      - ./client:/client
    entrypoint: ["npm", "start"]
    links:
      - api
    networks:
      - my_network
  api:
    container_name: my_api
    build: ./api
    restart: always
    ports:
      - "9000:9000"
    environment:
      DB_HOSTNAME: mysql
    working_dir: /api
    volumes:
      - ./api:/api
    depends_on:
      - mysql
    networks:
      - my_network
  mysql:
    container_name: my_mysql
    build: ./db
    restart: always
    volumes:
      - /var/lib/mysql
      - ./db:/db
    ports:
      - "3307:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=n
      - MYSQL_USER=n
      - MYSQL_PASSWORD=n
      - MYSQL_DATABASE=n
    networks:
      - my_network
    command: '--default-authentication-plugin=mysql_native_password'

networks:
  my_network:
    driver: bridge
This is the Dockerfile:
FROM mhart/alpine-node:12
WORKDIR /api
COPY package*.json /api/
RUN npm i -g nodemon
RUN npm install
COPY . /api/
EXPOSE 9000
CMD ["npm", "run", "dev"]
Any help is appreciated.
So, I figured it out: I just needed to execute docker system prune -a to remove any stopped containers and unused images. Now --build is working again.
This command deleted all my local Docker images related to my Dockerfile. After building it so many times, my local storage had reached a limit, thus the error max depth exceeded.
Max depth doesn't indicate an out-of-storage-capacity error (though a prune could accidentally fix it).
Rather, it indicates that the api image you were building had too many layers.
A plausible theory is that you have a recursion caused by having this in your compose file:
image: mhart/alpine-node:12
build: ./client
and this in a Dockerfile
FROM mhart/alpine-node:12
(I'm assuming the Dockerfile in ./client is also FROM the same image).
Your build is essentially adding a few layers onto your local mhart/alpine-node:12 image every time you run it (you can confirm by running docker history mhart/alpine-node:12).
If so, you should probably rename the image in your compose file.
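A sketch of that fix, assuming ./client/Dockerfile does start FROM mhart/alpine-node:12: either drop the image: line entirely or give the built image a name of its own (my_client:latest below is a hypothetical tag), so it stops overwriting the base image:
client:
  container_name: my_client
  # a tag of our own, so the build no longer shadows
  # the mhart/alpine-node:12 base image used in FROM
  image: my_client:latest
  build: ./client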

Rails in Docker long waits

I have a Rails app configured in Docker for development. When I run docker-compose up, the container starts, and all the associated containers specified in my docker-compose.yml file do as well. It all works fine, apart from the fact that my app container takes ages to start; once started, it is perfect. I'm rather unsure where to begin finding out what is causing this delay. It could be a Rails or a Docker issue. I do not have this problem in other Docker/Rails applications, just this one.
I know this isn't much to go on, but I'm hoping people can give me some pointers as to where to look to find where this delay comes from, or what's happening during that time; then I can post more information to help narrow it down.
Thanks
Dockerfile:
FROM starefossen/ruby-node:2-8-stretch
RUN apt-get update && apt-get install -y build-essential
WORKDIR /app
COPY Gemfile* ./
RUN bundle install
COPY . .
CMD ["rails", "s", "-b", "0.0.0.0"]
docker-compose.yml:
version: '3.7'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 3000:3000
      - 35729:35729
      - 5000:5000
      - 9200:9200
    env_file:
      - '.env'
    volumes:
      - .:/app
      - type: tmpfs
        target: /app/tmp/pids/
    depends_on:
      - database
      - elasticsearch
  database:
    image: postgres:9.6-alpine
    volumes:
      - pg-data:/var/lib/postgresql/data
  webpacker:
    build: .
    command: ./bin/webpack-dev-server
    volumes:
      - .:/app
    ports:
      - '3035:3035'
  adminer:
    image: adminer
    restart: always
    ports:
      - "8080:8080"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.1
    volumes:
      - es-data:/usr/share/elasticsearch/data

volumes:
  pg-data:
  es-data:

Access redis database in docker compose

I have a Django app that I want to move to Docker. A Redis dump.rdb file is in the root directory of the project and contains data needed for the app to work. I normally start Redis by running redis-server from that directory. How can I move this setup to Docker? I know I can use volumes, and I suspect I need to mount my code folder as one, but will that cause other issues? Here is my current Docker setup:
Dockerfile
FROM python:2.7.14
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD . /code/
ADD requirements /requirements
RUN pip install -r /requirements/local.txt
docker-compose.yml
version: '3'
services:
  db:
    image: postgres:9.6.3
    expose:
      - "5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:3.2.6
    expose:
      - "6379"
    volumes:
      - ./code
  redis_cache:
    image: redis:3.2.6
    expose:
      - "6379"
  elasticsearch:
    image: elasticsearch:5.6.6
    expose:
      - "9200"
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    environment:
      - DATABASE_URL=postgresql://postgres@db/postgres
      - ENVIRONMENT=development
      - REDIS_URL=redis://redis:6379
      - REDIS_CACHE_URL=redis://redis_cache:6379
      - ELASTIC_ENDPOINT=elasticsearch:9200
    env_file: docker.env
    depends_on:
      - db
      - redis
      - elasticsearch
    volumes:
      - .:/code

volumes:
  pgdata: {}
There are several ways; which one to prefer depends on your project and on what kind of information is stored in the dump.rdb file.
You can create your own custom Redis image with the dump.rdb file inside, then push it to your repository.
You can, as you mention above, mount a volume from the source code. But I would prefer to mount not the whole code directory, only a redis directory that holds just the data intended for Redis.
Also, you can create a migration script in the web container; it could create data in the redis container as well as in the db container.
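A minimal sketch of the first option, assuming the dump sits next to a hypothetical redis.docker file:
FROM redis:3.2.6
# the official image uses /data as both its working directory and its
# data directory, so Redis will load this dump on startup
COPY dump.rdb /data/dump.rdb
Then build the redis service from it in docker-compose.yml instead of pulling the stock image:
redis:
  build:
    context: .
    dockerfile: redis.docker
  expose:
    - "6379"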
