Docker: ./entrypoint.sh not found - docker

I am trying to set up a Django project and dockerize it.
I'm having trouble running the container.
As far as I can tell, it builds successfully but fails to run.
This is the error I get:
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"./entrpoint.sh\": stat ./entrpoint.sh: no such file or directory": unknown
ERROR: Encountered errors while bringing up the project.
This is the Dockerfile:
FROM python:3.6
RUN mkdir /backend
WORKDIR /backend
ADD . /backend/
RUN pip install -r requirements.txt
RUN apt-get update \
    && apt-get install -yyq netcat
RUN chmod 755 entrypoint.sh
ENTRYPOINT ["./entrpoint.sh"]
This is the compose file:
version: '3.7'
services:
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=django
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=database
  web:
    restart: on-failure
    build: .
    container_name: backend
    volumes:
      - .:/backend
    env_file:
      - ./api/.env
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    hostname: web
    depends_on:
      - db
volumes:
  postgres_data:
And there is an entrypoint file which runs automatic migrations, if there are any. Here is the script:
#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
      sleep 0.1
    done
    echo "PostgreSQL started"
fi

python manage.py migrate
exec "$@"
Where am I going wrong?

The problem is that it's not entrypoint.sh that's missing but the nc command.
To solve this you have to install the netcat package.
Since python:3.6 is based on Debian Buster, you can simply add the following command after the FROM directive:
RUN apt-get update \
    && apt-get install -yyq netcat
EDIT for further improvements:
copy only requirements.txt, install the packages, then copy the rest. This improves cache usage, so every build after the first will be faster (unless you touch requirements.txt).
replace ADD with COPY unless you're exploding a tarball.
The result should look like this:
FROM python:3.6
RUN apt-get update \
    && apt-get install -yyq netcat
RUN mkdir /backend
WORKDIR /backend
COPY requirements.txt /backend/
RUN pip install -r requirements.txt
COPY . /backend/
ENTRYPOINT ["./entrypoint.sh"]
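One detail worth double-checking in the entrypoint itself: the hand-off line must be exec "$@" (all arguments), not exec "$#" (the argument count), or the compose command: will never run. A minimal sketch of the difference, using a made-up simulate_handoff helper rather than a real container:

```shell
# Sketch of the hand-off at the end of an entrypoint script.
# "$@" expands to every argument Docker passed in (i.e. the compose
# `command:`); "$#" is only the number of arguments, so `exec "$#"`
# would try to run a program literally named "4".
simulate_handoff() {
    # Pretend Docker invoked the entrypoint with the compose command:
    set -- python manage.py runserver 0.0.0.0:8000
    echo "exec would run: $@"
    echo "argument count: $#"
}
simulate_handoff
```

The same reasoning applies to any wait-for-db wrapper that ends with an exec line.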

Related

Permission denied when running docker-compose up on Mac OS

I'm having some issues with permissions in my docker-compose and Dockerfile scripts.
This is the error I have when running docker-compose up:
As you can see I have a "Permission denied" error that prevents my API from starting.
This is what my docker-compose.yml file looks like (I skipped the database part because it's not relevant to the problem here):
version: '3'
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "1338:1337"
    links:
      - postgres
    environment:
      - DATABASE_URL=postgres://postgres:postgres@postgres:5432/postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - ./:/usr/src/app
      - /usr/src/app/node_modules
    command: [
      "docker/api/wait-for-postgres.sh",
      "postgres",
      "docker/api/start.sh"
    ]
And my Dockerfile:
FROM node:14
RUN apt-get update && apt-get install -y postgresql-client
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
RUN npm install -g nodemon
COPY . /usr/src/app
EXPOSE 1337
What I've tried so far is changing the permissions and switching to the root user inside my container, but it didn't change a thing (I still have the same error as the one shown in the screenshot above).
FROM node:14
RUN apt-get update && apt-get install -y postgresql-client
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
RUN npm install -g nodemon
COPY . /usr/src/app
USER root
RUN chmod +x docker/api/start.sh
RUN chmod +x docker/api/wait-for-postgres.sh
EXPOSE 1337
EDIT:
Content of the wait-for-postgres.sh script:
#!/bin/sh
# wait-for-postgres.sh

set -e

host="$1"
shift

until PGPASSWORD=$POSTGRES_PASSWORD psql -h "$host" -U "postgres" -c '\q'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 10
done

>&2 echo "Postgres is up - executing command"
exec "$@"
Any thoughts on this? Thanks for your help!

File created in image by docker not reflecting in container run by docker compose

I have a Dockerfile which has the command RUN python3 manage.py dumpdata --natural-foreign --exclude=auth.permission --exclude=contenttypes --indent=4 > data.json; this creates a JSON file.
When I build the Dockerfile it creates an image with a specific name, and when I run it using the command below and open a bash shell, I am able to see the data.json file that was created:
docker run -it --rm vijeth11/fassionplaza bash
[screenshot: files in the Docker container created via the above command]
When I use the same image and run docker compose run web bash,
I am not able to see the data.json file, while other files are present in the container.
[screenshot: files in the Docker container created via Docker compose]
Is there anything wrong in my Docker commands?
Command used to build:
docker build --no-cache -t vijeth11/fassionplaza .
Docker-compose.yml
version: "3"
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=fashionplaza
    ports:
      - "5432:5432"
  web:
    image: vijeth11/fassionplaza
    command: >
      sh -c "ls -l && python3 manage.py makemigrations && python3 manage.py migrate && python3 manage.py loaddata data.json && gunicorn --bind :8000 --workers 3 FashionPlaza.wsgi"
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Dockerfile
FROM python:3.7
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY ./Backend /code/Backend
COPY ./frontEnd /code/frontEnd
WORKDIR /code/Backend
RUN pip3 install -r requirements.txt
WORKDIR /code/Backend/FashionPlaza
RUN python3 manage.py dumpdata --natural-foreign \
    --exclude=auth.permission --exclude=contenttypes \
    --indent=4 > data.json
RUN chmod 755 data.json
WORKDIR /code/frontEnd/FashionPlaza
RUN apt-get update -y
RUN apt -y install curl dirmngr apt-transport-https lsb-release ca-certificates
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash
RUN apt install nodejs -y
RUN npm i
RUN npm run prod
ARG buildtime_variable=PROD
ENV server_type=$buildtime_variable
WORKDIR /code/Backend/FashionPlaza
Thank you in advance.
You map your current directory to /code when you run the container, with these lines in your docker-compose file:
volumes:
  - .:/code
That hides all existing files in /code and replaces them with the mapped directory.
Since your data.json file is located in /code/Backend/FashionPlaza in the image, it becomes hidden and inaccessible.
The best thing to do is to map your volumes to empty directories in the image, so you don't inadvertently hide anything.
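As a sketch of that advice (service and image names taken from the question; the mounted subdirectory is hypothetical and would need to be one that is actually empty in the image), the bind mount could be narrowed so the baked-in /code tree, including data.json, stays visible:

```yaml
# Sketch: don't bind-mount the project root over /code, which hides
# the generated data.json. Mount only a directory that is empty in
# the image ("media" here is a made-up example).
web:
  image: vijeth11/fassionplaza
  volumes:
    - ./media:/code/Backend/FashionPlaza/media
```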

Postgres database, Error : NetworkError when attempting to fetch resource

I am trying to build a Docker image but I have some problems. Here is my docker-compose.yml:
version: '3.7'
services:
  web:
    container_name: web
    build:
      context: .
      dockerfile: Dockerfile
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/usr/src/web/
    ports:
      - 8000:8000
      - 3000:3000
      - 35729:35729
    stdin_open: true
    depends_on:
      - db
  db:
    restart: always
    environment:
      - POSTGRES_USER=admin
      - POSTGRES_PASS=pass
      - POSTGRES_DB=mydb
      - POSTGRES_PORT=5432
      - POSTGRES_HOST=localhost
      - POSTGRES_HOST_AUTH_METHOD=trust
    container_name: db
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
volumes:
  postgres_data:
And here is my Dockerfile:
# pull official base image
FROM python:3.8.3-alpine
# set work directory
WORKDIR /usr/src/web
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
    && apk add postgresql-dev gcc python3-dev musl-dev
# install nodejs
RUN apk add --update nodejs nodejs-npm
RUN apk add zlib-dev jpeg-dev gcc musl-dev
# copy project
COPY . .
RUN python -m pip install -U --force-reinstall pip
RUN python -m pip install Pillow
# install dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
RUN pip install Pillow
# run entrypoint.sh
ENTRYPOINT ["sh", "./entrypoint.sh"]
And finally my entrypoint.sh:
#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
      sleep 0.1
    done
    echo "PostgreSQL started"
fi

exec "$@"
When I do this:
docker-compose up -d --build
It works perfectly. Then I type this:
docker-compose exec web npm start --prefix ./front/
It looks ok, but when I type http://localhost:3000/ in my browser
I get this kind of message: Error: NetworkError when attempting to fetch resource.
I think the front end is ok, but I am not able to communicate with the back end and so with the database.
Could you help me please?
Thank you very much!
As I can see in the docker-compose.yml file, you did not define the environment variables for Postgres in the web container. Please define the environment variables below:
DATABASE
SQL_HOST
SQL_PORT
Then bring the stack down and back up; hopefully that will help you.
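For example (a sketch, not the asker's exact file: the values must match the db service's settings, and SQL_HOST is the compose service name of the database, not localhost):

```yaml
# Sketch: give the web container the variables entrypoint.sh reads.
web:
  environment:
    - DATABASE=postgres
    - SQL_HOST=db
    - SQL_PORT=5432
```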

Docker: bash: bundle: command not found

I am trying to Dockerize my Rails 6 app but seem to be falling at the last hurdle. When running docker-compose up everything runs fine until I get to "Attaching to rdd-ruby_db_1, rdd-ruby_web_1" in the console, and then I get the error bash: bundle: command not found.
I am aware of the other answers on Stack Overflow for the same issue, but I have tried them all before posting this.
My Dockerfile:
FROM ruby:2.7
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN cd /usr/bin/
RUN bundle install
FROM node:6.7.0
RUN npm install -g yarn
COPY . /myapp
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["/usr/bin/entrypoint.sh"]
EXPOSE 3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
My docker-compose
version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: xxx
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
I originally followed the guide in the Docker documentation at https://docs.docker.com/compose/rails/, thinking this would work.
Thanks.

ERROR: Service 'redis' failed to build. When building redis image by docker-compose

I'm dockerizing an application which is based on Node.js, Redis and MySQL. I already installed the Redis server and it's running fine, but I'm unable to dockerize all three using docker-compose.yml.
$ docker-compose up --build
Building redis
Step 1/11 : FROM node:alpine
---> e079048502ec
Step 2/11 : FROM redis:alpine
---> da2b86c1900b
Step 3/11 : RUN mkdir -p /usr/src/app
---> Using cache
---> 28b2f837b54c
Step 4/11 : WORKDIR /usr/src/app
---> Using cache
---> d1147321eec4
Step 5/11 : RUN apt-get install redis-server
---> Running in 2dccd5689663
/bin/sh: apt-get: not found
ERROR: Service 'redis' failed to build: The command '/bin/sh -c apt-get install redis-server' returned a non-zero code: 127
This is my Dockerfile:
FROM node:alpine
FROM redis:alpine
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
## Install Redis ##
RUN apt-get install redis-server
## Install nodejs on ubuntu ##
RUN sudo apt-get update && wget http://nodejs.org/dist/v0.6.9/node-v0.6.9.tar.gz \
    && tar -xvzf node-v0.6.9.tar.gz \
    && cd node-v0.6.9 \
    && ./configure && make && sudo make install \
    && mkdir myapp && cd myapp \
    && npm init \
    && npm install express --save \
    && npm install express \
    && npm install --save path serve-favicon morgan cookie-parser body-parser \
    && npm install --save express jade \
    && npm install --save debug \
COPY package.json /usr/src/app/
COPY redis.conf /usr/local/etc/redis/redis.conf
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 3000
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf", "npm", "start" ]
This is my docker-compose.yml file:
version: '2'
services:
  db:
    build: ./docker/mysql
    # image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
      #- ./mysql:/docker-entrypoint-initdb.d
    # restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
      # MYSQL_DATABASE: cg_apiserver
      # MYSQL_USER: root
      # MYSQL_PASSWORD: root
  redis:
    build: ./docker/redis
    image: "redis:alpine"
  node:
    build: ./docker/node
    ports:
      - '3000:80'
    restart: always
    volumes:
      - .:/usr/src/app
    depends_on:
      - db
      - redis
    command: npm start
volumes:
  db_data:
It seems that you have tried to merge two Dockerfiles into one.
First, multiple FROM statements make no sense here. The basic concept is to base an image on only one FROM image. See this.
Second, your docker-compose looks good, but the Dockerfile shows that you are trying to build both applications (Redis and the node app) in the same image.
So take the Redis stuff out of ./docker/node/Dockerfile:
FROM node:alpine
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
## Install nodejs on ubuntu ##
RUN wget http://nodejs.org/dist/v0.6.9/node-v0.6.9.tar.gz \
    && tar -xvzf node-v0.6.9.tar.gz \
    && cd node-v0.6.9 \
    && ./configure && make && sudo make install \
    && mkdir myapp && cd myapp \
    && npm init \
    && npm install express --save \
    && npm install express \
    && npm install --save path serve-favicon morgan cookie-parser body-parser \
    && npm install --save express jade \
    && npm install --save debug
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 3000
CMD ["npm", "start" ]
Use this ./docker/redis/Dockerfile:
FROM redis:alpine
COPY redis.conf /usr/local/etc/redis/redis.conf
# No need to set a custom CMD
And I recommend removing the "image:" line from redis in docker-compose.yml. It is not necessary:
redis:
  build: ./docker/redis
  image: "redis:alpine" <----
Edit: also, you don't need apt-get update anymore; I've removed the sudo apt-get update && from the chain.
It is working now after making the changes below:
Create a folder docker in the project root.
Inside docker, create a folder redis.
Create a Dockerfile with the contents below:
docker >> redis >> Dockerfile
FROM smebberson/alpine-base:1.0.0
# MAINTAINER Scott Mebberson <scott@scottmebberson.com>
VOLUME ["/data"]
# Expose the ports for redis
EXPOSE 6379
There was no change in the docker-compose.yml file.
Run the commands below and check the output.
Run this command to build the containers:
sudo docker-compose up --build -d
Run this command to check the running containers:
sudo docker ps
Run these commands to inspect the network and get the IPs:
sudo docker inspect redis_container_name
sudo docker inspect node_container_name
I've solved this problem (COPY doesn't work) easily in my project: just add "context" - the path to the Dockerfile directory - in your YML file (version 3), for example:
build:
  context: Starkman.Backend.Storage/Redis
  dockerfile: Dockerfile
"Starkman.Backend.Storage/Redis" is the path to the directory. The temporary build directory used by the "COPY" command will be inside your "context".
This is my Dockerfile:
FROM redis
COPY redis.conf /usr/local/etc/redis/redis.conf
EXPOSE 6379
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]