I am trying to build a Docker image but I am having some problems. Here is my docker-compose.yml:
version: '3.7'
services:
  web:
    container_name: web
    build:
      context: .
      dockerfile: Dockerfile
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/usr/src/web/
    ports:
      - 8000:8000
      - 3000:3000
      - 35729:35729
    stdin_open: true
    depends_on:
      - db
  db:
    restart: always
    environment:
      - POSTGRES_USER=admin
      - POSTGRES_PASS=pass
      - POSTGRES_DB=mydb
      - POSTGRES_PORT=5432
      - POSTGRES_HOST=localhost
      - POSTGRES_HOST_AUTH_METHOD=trust
    container_name: db
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
volumes:
  postgres_data:
And here is my Dockerfile:
# pull official base image
FROM python:3.8.3-alpine
# set work directory
WORKDIR /usr/src/web
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev
# install nodejs
RUN apk add --update nodejs nodejs-npm
RUN apk add zlib-dev jpeg-dev gcc musl-dev
# copy project
COPY . .
RUN python -m pip install -U --force-reinstall pip
RUN python -m pip install Pillow
# install dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
RUN pip install Pillow
# run entrypoint.sh
ENTRYPOINT ["sh", "./entrypoint.sh"]
And finally my entrypoint.sh:
#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
echo "Waiting for postgres..."
while ! nc -z $SQL_HOST $SQL_PORT; do
sleep 0.1
done
echo "PostgreSQL started"
fi
exec "$@"
When I run:
docker-compose up -d --build
It works perfectly. Then I run:
docker-compose exec web npm start --prefix ./front/
It looks OK, but when I open http://localhost:3000/ in my browser
I get this kind of message: Error: NetworkError when attempting to fetch resource.
I think the front end is OK, but it is not able to communicate with the back end, and therefore with the database.
Could you help me, please?
Thank you very much!
As I can see in the docker-compose.yml file, you did not define the environment variables for Postgres in the web container. Please define the following environment variables:
DATABASE
SQL_HOST
SQL_PORT
Then bring the containers down and back up; hopefully that will help.
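For example, a minimal sketch of what that could look like under the web service, assuming the db service name from your compose file (the exact values are illustrative and must match what your Django settings and entrypoint.sh expect):
  web:
    # ... existing configuration ...
    environment:
      - DATABASE=postgres
      - SQL_HOST=db      # the Postgres service name, not localhost
      - SQL_PORT=5432
Note that inside the Compose network the database is reachable through its service name (db), not localhost.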
Assumption
I'm using docker-compose and wanted to use the whenever gem to run a cron process that performs a deletion at a certain time in Rails, but upon research I found that I have to install and run cron inside Docker. So I looked into it, but I can't find any information about cron processing for Rails on Alpine. Can anyone tell me how to do this?
What I want to achieve
I want to execute a specific process once a day.
Code
Here is my Dockerfile:
FROM ruby:2.7.1-alpine
ARG WORKDIR
ENV RUNTIME_PACKAGES="linux-headers libxml2-dev make gcc libc-dev nodejs tzdata postgresql-dev postgresql git" \
DEV_PACKAGES="build-base curl-dev" \
HOME=/${WORKDIR} \
LANG=C.UTF-8 \
TZ=Asia/Tokyo
RUN echo ${HOME}
WORKDIR ${HOME}
COPY Gemfile* ./
RUN apk update && \
apk upgrade && \
apk add --no-cache ${RUNTIME_PACKAGES} && \
apk add --virtual build-dependencies --no-cache ${DEV_PACKAGES} && \
bundle install -j4 && \
apk del build-dependencies
COPY . .
CMD ["rails", "server", "-b", "0.0.0.0"]
Here is my Docker Compose file:
version: '3.8'
services:
  db:
    image: postgres:12.3-alpine
    environment:
      TZ: UTC
      PGTZ: UTC
      POSTGRES_PASSWORD: $POSTGRES_PASSWORD
    volumes:
      - ./api/tmp/db:/var/lib/postgresql/data
  api:
    build:
      context: ./api
      args:
        WORKDIR: $WORKDIR
    command: /bin/sh -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    environment:
      POSTGRES_PASSWORD: $POSTGRES_PASSWORD
      API_DOMAIN: "localhost:$FRONT_PORT"
      APP_URL: "http://localhost:$API_PORT"
    volumes:
      - ./api:/$WORKDIR
    depends_on:
      - db
    ports:
      - "$API_PORT:$CONTAINER_PORT"
  mailcatcher:
    image: schickling/mailcatcher
    ports:
      - "1080:1080"
      - "1025:1025"
  front:
    build:
      context: ./front
      args:
        WORKDIR: $WORKDIR
        CONTAINER_PORT: $CONTAINER_PORT
        API_URL: "http://localhost:$API_PORT"
    command: yarn run dev
    volumes:
      - ./front:/$WORKDIR
    ports:
      - "$FRONT_PORT:$CONTAINER_PORT"
    depends_on:
      - api
Actual processing
/config/schedule.rb
require File.expand_path(File.dirname(__FILE__) + "/environment")
ENV.each { |k, v| env(k, v) }
set :output, "#{Rails.root}/log/cron.log"
set :environment, :development
every 1.days do
runner "User.guest_reset"
end
What I tried
I did a lot of research and found a lot of information on using cron with apt, but could not find any information on using apk.
Separate cron into another service in your docker-compose.yml by using the same image as your Rails app image (the one built by your Dockerfile). Then run cron and whenever --update-crontab in that service's command.
docker-compose.yml
version: '3'
services:
  app:
    image: myapp
    depends_on:
      - 'db'
    build:
      context: .
    command: bash -c "rm -f tmp/pids/server.pid &&
      bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - ".:/myapp"
  cron:
    image: myapp
    command: bash -c "touch log/cron.log && cron && whenever --update-crontab &&
      crontab -l && tail -f log/cron.log"
    volumes:
      - '.:/myapp'
  db:
    image: postgres:13
    ports: # 127.0.0.1 to only expose the port to loopback
      - '127.0.0.1:5432:5432'
    volumes:
      - 'postgres_dev:/var/lib/postgresql/data'
Dockerfile
FROM ruby:3.0.1
RUN apt-get update -qq && apt-get install -y postgresql-client cron vim \
&& mkdir /myapp
WORKDIR /myapp
ENV BUNDLE_WITHOUT=development:test
COPY Gemfile Gemfile.lock ./
RUN bundle install --jobs 20 --retry 5
COPY package.json ./
RUN npm install --check-files --production
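If you would rather stay on the Alpine image from the question, the same idea should work: on Alpine the cron daemon comes from busybox (or from the dcron package, installed with apk add --no-cache dcron) and is started with crond instead of cron. A rough, untested sketch of the cron service under that assumption, mirroring the one above:
  cron:
    image: myapp
    command: sh -c "touch log/cron.log && crond && whenever --update-crontab &&
      crontab -l && tail -f log/cron.log"
    volumes:
      - '.:/myapp'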
I'm having some issues with permissions in my docker-compose and Dockerfile scripts.
This is the error I get when running docker-compose up: a "Permission denied" error that prevents my API from being up and running.
This is what my docker-compose.yml file looks like (I skipped the database part because it's not relevant to the problem I have here):
version: '3'
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "1338:1337"
    links:
      - postgres
    environment:
      - DATABASE_URL=postgres://postgres:postgres@postgres:5432/postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - ./:/usr/src/app
      - /usr/src/app/node_modules
    command: [
      "docker/api/wait-for-postgres.sh",
      "postgres",
      "docker/api/start.sh"
    ]
And my Dockerfile:
FROM node:14
RUN apt-get update && apt-get install -y postgresql-client
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
RUN npm install -g nodemon
COPY . /usr/src/app
EXPOSE 1337
What I've tried so far is changing the permissions and switching to the root user inside my container, but it didn't change a thing (I still get the same "Permission denied" error).
FROM node:14
RUN apt-get update && apt-get install -y postgresql-client
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
RUN npm install -g nodemon
COPY . /usr/src/app
USER root
RUN chmod +x docker/api/start.sh
RUN chmod +x docker/api/wait-for-postgres.sh
EXPOSE 1337
EDIT:
Content of wait-for-postgres.sh script:
#!/bin/sh
# wait-for-postgres.sh
set -e
host="$1"
shift
until PGPASSWORD=$POSTGRES_PASSWORD psql -h "$host" -U "postgres" -c '\q'; do
>&2 echo "Postgres is unavailable - sleeping"
sleep 10
done
>&2 echo "Postgres is up - executing command"
exec "$@"
Any thoughts on this? Thanks for your help!
I am trying to set up a Django project and dockerize it.
I'm having trouble running the container.
As far as I know, it builds successfully, but it fails to run.
This is the error I get:
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"./entrpoint.sh\": stat ./entrpoint.sh: no such file or directory": unknown
ERROR: Encountered errors while bringing up the project.
This is the dockerfile:
FROM python:3.6
RUN mkdir /backend
WORKDIR /backend
ADD . /backend/
RUN pip install -r requirements.txt
RUN apt-get update \
&& apt-get install -yyq netcat
RUN chmod 755 entrypoint.sh
ENTRYPOINT ["./entrpoint.sh"]
This is the compose file:
version: '3.7'
services:
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=django
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=database
  web:
    restart: on-failure
    build: .
    container_name: backend
    volumes:
      - .:/backend
    env_file:
      - ./api/.env
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    hostname: web
    depends_on:
      - db
volumes:
  postgres_data:
And there is an entrypoint file which runs automatic migrations, if any:
Here is the script:
#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
echo "Waiting for postgres..."
while ! nc -z $SQL_HOST $SQL_PORT; do
sleep 0.1
done
echo "PostgreSQL started"
fi
python manage.py migrate
exec "$@"
Where am I going wrong?
The problem is that it's not entrypoint.sh that is missing, but the nc command.
To solve this, you have to install the netcat package.
Since python:3.6 is based on Debian Buster, you can simply add the following command after the FROM directive:
RUN apt-get update \
&& apt-get install -yyq netcat
EDIT for further improvements:
copy only the requirements.txt, install the packages, then copy the rest. This will improve cache usage, and every build (after the first) will be faster (unless you touch requirements.txt)
replace the ADD with COPY unless you're exploding a tarball
The result should look like this:
FROM python:3.6
RUN apt-get update \
&& apt-get install -yyq netcat
RUN mkdir /backend
WORKDIR /backend
COPY requirements.txt /backend/
RUN pip install -r requirements.txt
COPY . /backend/
ENTRYPOINT ["./entrypoint.sh"]
I am using Docker with the open source BI tool Apache Superset. I have added a new file, specifically a .geojson file in the CountryMap directory. Now, when I try to build using docker-compose up --build or make changes in the frontend, Docker is not fully updated, and I get a file not found error when trying to run a query. When I look inside the container via docker exec -it container_id bash, the new file is there.
Dockerfile:
FROM python:3.6-jessie
RUN useradd --user-group --create-home --no-log-init --shell /bin/bash superset
# Configure environment
ENV LANG=C.UTF-8 \
LC_ALL=C.UTF-8
RUN apt-get update -y
# Install dependencies to fix `curl https support error` and `delaying package configuration` warning
RUN apt-get install -y apt-transport-https apt-utils
# Install superset dependencies
# https://superset.incubator.apache.org/installation.html#os-dependencies
RUN apt-get install -y build-essential libssl-dev \
libffi-dev python3-dev libsasl2-dev libldap2-dev libxi-dev
# Install extra useful tool for development
RUN apt-get install -y vim less postgresql-client redis-tools
# Install nodejs for custom build
# https://superset.incubator.apache.org/installation.html#making-your-own-build
# https://nodejs.org/en/download/package-manager/
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash - \
&& apt-get install -y nodejs
WORKDIR /home/superset
COPY requirements.txt .
COPY requirements-dev.txt .
COPY contrib/docker/requirements-extra.txt .
RUN pip install --upgrade setuptools pip \
&& pip install -r requirements.txt -r requirements-dev.txt -r requirements-extra.txt \
&& rm -rf /root/.cache/pip
RUN pip install gevent
COPY --chown=superset:superset superset superset
ENV PATH=/home/superset/superset/bin:$PATH \
PYTHONPATH=/home/superset/superset/:$PYTHONPATH
USER superset
RUN cd superset/assets \
&& npm ci \
&& npm run build \
&& rm -rf node_modules
COPY contrib/docker/docker-init.sh .
COPY contrib/docker/docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
HEALTHCHECK CMD ["curl", "-f", "http://localhost:8088/health"]
EXPOSE 8088
docker-compose.yml:
version: '2'
services:
  redis:
    image: redis:3.2
    restart: unless-stopped
    ports:
      - "127.0.0.1:6379:6379"
    volumes:
      - redis:/data
  postgres:
    image: postgres:10
    restart: unless-stopped
    environment:
      POSTGRES_DB: superset
      POSTGRES_PASSWORD: superset
      POSTGRES_USER: superset
    ports:
      - "127.0.0.1:5432:5432"
    volumes:
      - postgres:/var/lib/postgresql/data
  superset:
    build:
      context: ../../
      dockerfile: contrib/docker/Dockerfile
    restart: unless-stopped
    environment:
      POSTGRES_DB: superset
      POSTGRES_USER: superset
      POSTGRES_PASSWORD: superset
      POSTGRES_HOST: postgres
      POSTGRES_PORT: 5432
      REDIS_HOST: redis
      REDIS_PORT: 6379
      # If using production, comment development volume below
      #SUPERSET_ENV: production
      SUPERSET_ENV: development
      # PYTHONUNBUFFERED: 1
    user: root:root
    ports:
      - 8088:8088
    depends_on:
      - postgres
      - redis
    volumes:
      # this is needed to communicate with the postgres and redis services
      - ./superset_config.py:/home/superset/superset/superset_config.py
      # this is needed for development, remove with SUPERSET_ENV=production
      - ../../superset:/home/superset/superset
volumes:
  postgres:
    external: false
  redis:
    external: false
Why is there a not found error?
Try to use absolute paths in volumes:
volumes:
  - /home/me/my_project/superset_config.py:/home/superset/superset/superset_config.py
  - /home/me/my_project/superset:/home/superset/superset
It is because docker-compose is using its cache. If the Dockerfile and the docker-compose.yml have not changed, it does not recreate the container. To avoid this, you should use the following flag:
--force-recreate
--force-recreate
Recreate containers even if their configuration and image haven't
changed.
For development purposes I like to use the following switch as well:
-V, --renew-anon-volumes
Recreate anonymous volumes instead of retrieving data from the previous containers.
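So, for development, the full command could look like this (combining the flags documented above with the --build flag from the question):
docker-compose up --build --force-recreate --renew-anon-volumes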
I'm dockerizing an application which is based on Node.js, Redis and MySQL. I already installed the Redis server and it's running fine, but I'm unable to dockerize all three using docker-compose.yml.
$ docker-compose up --build
Building redis
Step 1/11 : FROM node:alpine
---> e079048502ec
Step 2/11 : FROM redis:alpine
---> da2b86c1900b
Step 3/11 : RUN mkdir -p /usr/src/app
---> Using cache
---> 28b2f837b54c
Step 4/11 : WORKDIR /usr/src/app
---> Using cache
---> d1147321eec4
Step 5/11 : RUN apt-get install redis-server
---> Running in 2dccd5689663
/bin/sh: apt-get: not found
ERROR: Service 'redis' failed to build: The command '/bin/sh -c apt-get install redis-server' returned a non-zero code: 127
This is my Dockerfile.
Dockerfile:
FROM node:alpine
FROM redis:alpine
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
## Install Redis ##
RUN apt-get install redis-server
## Install nodejs on ubuntu ##
RUN sudo apt-get update && wget http://nodejs.org/dist/v0.6.9/node-v0.6.9.tar.gz \
&& tar -xvzf node-v0.6.9.tar.gz \
&& cd node-v0.6.9 \
&& ./configure && make && sudo make install \
&& mkdir myapp && cd myapp \
&& npm init \
&& npm install express --save \
&& npm install express \
&& npm install --save path serve-favicon morgan cookie-parser body-parser \
&& npm install --save express jade \
&& npm install --save debug \
COPY package.json /usr/src/app/
COPY redis.conf /usr/local/etc/redis/redis.conf
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 3000
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf", "npm", "start" ]
This is docker-compose.yml file
docker-compose.yml
version: '2'
services:
  db:
    build: ./docker/mysql
    # image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
      #- ./mysql:/docker-entrypoint-initdb.d
    # restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
      # MYSQL_DATABASE: cg_apiserver
      # MYSQL_USER: root
      # MYSQL_PASSWORD: root
  redis:
    build: ./docker/redis
    image: "redis:alpine"
  node:
    build: ./docker/node
    ports:
      - '3000:80'
    restart: always
    volumes:
      - .:/usr/src/app
    depends_on:
      - db
      - redis
    command: npm start
volumes:
  db_data:
It seems that you have tried to merge two Dockerfiles into one.
First, using multiple FROM statements makes no sense here. The basic concept is to build FROM only one base image. See this.
Second, your docker-compose file looks good, but the Dockerfile shows that you are trying to build both applications (Redis and the Node app) in the same image.
So take the Redis stuff out of ./docker/node/Dockerfile:
FROM node:alpine
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
## Install nodejs on ubuntu ##
RUN wget http://nodejs.org/dist/v0.6.9/node-v0.6.9.tar.gz \
&& tar -xvzf node-v0.6.9.tar.gz \
&& cd node-v0.6.9 \
&& ./configure && make && sudo make install \
&& mkdir myapp && cd myapp \
&& npm init \
&& npm install express --save \
&& npm install express \
&& npm install --save path serve-favicon morgan cookie-parser body-parser \
&& npm install --save express jade \
&& npm install --save debug
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 3000
CMD ["npm", "start" ]
Use this ./docker/redis/Dockerfile:
FROM redis:alpine
COPY redis.conf /usr/local/etc/redis/redis.conf
# No need to set a custom CMD
Also, I recommend removing the "image:" line from the redis service in docker-compose.yml. It is not necessary:
redis:
  build: ./docker/redis
  image: "redis:alpine" <---- remove this line
Edit: also, you don't need apt-get anymore; I've removed the sudo apt-get update && part from the RUN line above.
It is working now after making the changes below:
Create a folder named docker in the project root.
Inside docker, create a folder named redis.
Create a Dockerfile with the contents below:
docker >> redis >> Dockerfile
FROM smebberson/alpine-base:1.0.0
# MAINTAINER Scott Mebberson <scott@scottmebberson.com>
VOLUME ["/data"]
# Expose the ports for redis
EXPOSE 6379
There was no change in the docker-compose.yml file.
Run the commands below and check the output.
Run this command to build the container
sudo docker-compose up --build -d
Run this command to check the running container
sudo docker ps
Run this command to check the network and get the IP
sudo docker inspect redis_container_name
sudo docker inspect node_container_name
I've solved this problem (COPY doesn't work) easily in my project: just add "context" (the path to the Dockerfile directory) in your YML file (version 3), for example:
build:
  context: Starkman.Backend.Storage/Redis
  dockerfile: Dockerfile
"Starkman.Backend.Storage/Redis" is the path to the directory; the temporary build directory that the COPY command uses will be inside this "context".
This is my Dockerfile:
FROM redis
COPY redis.conf /usr/local/etc/redis/redis.conf
EXPOSE 6379
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]