How do I rewrite a Dockerfile (and run commands) as docker-compose.yaml

How do I translate a docker run command into its docker-compose equivalent?
Using the docker command, I can see my app working at http://localhost:9000.
But with docker-compose, it doesn't work.
I suspect it is caused by the ports mapping, but I can't figure it out.
What's the reason?
Below are the files and commands I've tried.
Dockerfile
FROM node:8.11.3-alpine
WORKDIR /app
RUN apk update \
    && npm install -g npm @vue/cli \
    && npm install
EXPOSE 8080
CMD ["npm", "run", "serve"]
docker-compose.yaml
version: '3'
services:
  service:
    build: .
    ports:
      - "9000:8080"
    volumes:
      - ./:/app
docker command (works):
sudo docker run -it -v $(pwd):/app -p 9000:8080 dockerimage
docker-compose command (doesn't work):
sudo docker-compose run service

Related

File created in image by docker not reflecting in container run by docker compose

I have a Dockerfile with the command RUN python3 manage.py dumpdata --natural-foreign --exclude=auth.permission --exclude=contenttypes --indent=4 > data.json, which creates a JSON file.
When I build the Dockerfile it creates an image with a specific name, and when I run that image using the command below and open a bash shell, I can see the data.json file:
docker run -it --rm vijeth11/fassionplaza bash
(screenshot: files in the container created via the above command)
But when I use the same image and run docker compose run web bash,
I am not able to see the data.json file, while the other files are present in the container.
(screenshot: files in the container created via Docker Compose)
Is there anything wrong with my Docker commands?
Command used to build:
docker build --no-cache -t vijeth11/fassionplaza .
docker-compose.yml
version: "3"
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=fashionplaza
    ports:
      - "5432:5432"
  web:
    image: vijeth11/fassionplaza
    command: >
      sh -c "ls -l && python3 manage.py makemigrations && python3 manage.py migrate && python3 manage.py loaddata data.json && gunicorn --bind :8000 --workers 3 FashionPlaza.wsgi"
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Dockerfile
FROM python:3.7
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY ./Backend /code/Backend
COPY ./frontEnd /code/frontEnd
WORKDIR /code/Backend
RUN pip3 install -r requirements.txt
WORKDIR /code/Backend/FashionPlaza
RUN python3 manage.py dumpdata --natural-foreign \
    --exclude=auth.permission --exclude=contenttypes \
    --indent=4 > data.json
RUN chmod 755 data.json
WORKDIR /code/frontEnd/FashionPlaza
RUN apt-get update -y
RUN apt -y install curl dirmngr apt-transport-https lsb-release ca-certificates
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash
RUN apt install nodejs -y
RUN npm i
RUN npm run prod
ARG buildtime_variable=PROD
ENV server_type=$buildtime_variable
WORKDIR /code/Backend/FashionPlaza
Thank you in advance.
You map your current directory to /code with these lines in your docker-compose file:
volumes:
  - .:/code
That hides all existing files in /code in the image and replaces them with the contents of the mapped host directory.
Since your data.json file is located at /code/Backend/FashionPlaza in the image, it becomes hidden and inaccessible.
The best thing to do is to map your volumes to empty directories in the image, so you don't inadvertently hide anything.
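As a rough sketch of that idea (the media/ mount path and the media-data volume name below are hypothetical, not taken from the question): drop the .:/code bind mount so the image's files, including data.json, stay visible, and point any volume you still need at a directory the image leaves empty:
version: "3"
services:
  web:
    image: vijeth11/fassionplaza
    volumes:
      # hypothetical named volume mounted on an empty path,
      # instead of shadowing all of /code with the host directory
      - media-data:/code/Backend/FashionPlaza/media
volumes:
  media-data: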

Docker: ./entrypoint.sh not found

I am trying to set up a Django project and dockerize it.
I'm having trouble running the container.
As far as I can tell, the image builds successfully, but the container fails to run.
This is the error I get:
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"./entrpoint.sh\": stat ./entrpoint.sh: no such file or directory": unknown
ERROR: Encountered errors while bringing up the project.
This is the Dockerfile:
FROM python:3.6
RUN mkdir /backend
WORKDIR /backend
ADD . /backend/
RUN pip install -r requirements.txt
RUN apt-get update \
    && apt-get install -yyq netcat
RUN chmod 755 entrypoint.sh
ENTRYPOINT ["./entrpoint.sh"]
This is the compose file:
version: '3.7'
services:
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=django
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=database
  web:
    restart: on-failure
    build: .
    container_name: backend
    volumes:
      - .:/backend
    env_file:
      - ./api/.env
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    hostname: web
    depends_on:
      - db
volumes:
  postgres_data:
And there is an entrypoint script that runs any pending migrations:
#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi

python manage.py migrate
exec "$@"
Where am I going wrong?
The problem is that it's not entrypoint.sh that's missing, but the nc command.
To solve this you have to install the netcat package.
Since python:3.6 is based on Debian Buster, you can simply add the following command after the FROM directive:
RUN apt-get update \
    && apt-get install -yyq netcat
EDIT, further improvements:
- Copy only requirements.txt, install the packages, then copy the rest. This improves cache usage: every build after the first will be faster, unless you touch requirements.txt.
- Replace ADD with COPY unless you're exploding a tarball.
The result should look like this:
FROM python:3.6
RUN apt-get update \
&& apt-get install -yyq netcat
RUN mkdir /backend
WORKDIR /backend
COPY requirements.txt /backend/
RUN pip install -r requirements.txt
COPY . /backend/
ENTRYPOINT ["./entrypoint.sh"]

Creating image with docker and docker compose

I am a novice with Docker. I am trying to create a Docker image and use it in a container, so I did the following:
My Dockerfile is:
FROM ubuntu:16.04
# # Front stack
# RUN apt-get install -y npm && \
#     npm install -g @angular/cli
FROM python:3.6
RUN apt-get update
RUN apt-get install -y libpython-dev curl build-essential unzip python-dev libaio-dev libaio1 vim \
    rpm2cpio cpio python-pip dos2unix
RUN mkdir /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install -r /code/requirements.txt
RUN pip install --upgrade pip
COPY . /code/
WORKDIR /code
ENV PYTHONPATH=/code/py_lib
CMD ["bash", "-c", "tail -f /dev/null"]
My docker-compose file is:
version: '3.5'
services:
  testsample:
    image: toto/test-sample
    restart: unless-stopped
    env_file:
      - .env
    command: bash -c "pip3 install -r requirements.txt && tail -f /dev/null"
    # command: bash -c "tail -f /dev/null"
    volumes:
      - .:/code
I executed these commands:
docker build . -f Dockerfile
docker images
docker-compose up
This gave me an error:
Pulling testsample (toto/test-sample:)...
ERROR: The image for the service you're trying to recreate has been removed. If you continue, volume data could be lost. Consider backing up your data before continuing.
Continue with the new image? [yN]y
Pulling testsample (toto/test-sample:)...
ERROR: pull access denied for toto/test-sample, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
I tried docker login and I am able to connect.
So what could be causing this problem?
You have to provide a tag name when you build a Docker image from a Dockerfile, like the following:
docker build -t toto/test-sample -f Dockerfile .
-t is the tag name
-f tells Docker the name of the Dockerfile (optional in this case, as Dockerfile is the default name)
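As a quick check (a sketch reusing the same names), once the image is tagged, docker images lists it and docker-compose up can find it locally instead of trying to pull:
docker build -t toto/test-sample .
# should list the freshly built image under the toto/test-sample repository
docker images toto/test-sample
docker-compose up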
If you put the Dockerfile in the same directory as your docker-compose.yml file, you can do the following:
version: '3.5'
services:
  testsample:
    image: toto/test-sample
    build:
      context: .
      dockerfile: Dockerfile
    restart: unless-stopped
    env_file:
      - .env
    volumes:
      - .:/code
Then, do:
docker-compose up --build -d
Otherwise, if you are simply having problems building the image, you just need to do:
docker build -t toto/test-sample .

Can't connect to Rails docker container on localhost

I'm having trouble accessing my containerized Rails app from my local machine. I'm following this quickstart guide as a template and made some tweaks to the paths for my Gemfile and Gemfile.lock. The quickstart guide moves on to docker-compose, but I want to try accessing the app without it first, to get familiar with these processes before moving on.
This is my Dockerfile:
FROM ruby:2.5
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile ./Gemfile
COPY Gemfile.lock ./Gemfile.lock
RUN gem install bundler -v 2.0.1
RUN bundle install
COPY . /myapp
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000:3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
and this is the entrypoint file:
#!/bin/bash
set -e
# Remove a potentially pre-existing server.pid for Rails.
rm -f /myapp/tmp/pids/server.pid
# Then exec the container's main process (what's set as CMD in the Dockerfile).
exec "$@"
I am able to successfully build and run the image, but when I try to access 0.0.0.0:3000 I get a "can't connect" error.
I also attached a screenshot of my app directory structure; the Dockerfile and entrypoint are at the root.
One thing that seems strange: when I check the container's logs I don't get any output, but when I shut the container down I see the startup logs. Not sure why that is.
I am running Docker Desktop 2.1.0.3. Any thoughts/help are very appreciated.
Use just EXPOSE 3000 in the Dockerfile.
Then run a container named ror from your new image, mapping the port to localhost:
docker run -d --name ror -p 3000:3000 <image>
Now you should be able to access localhost:3000.
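As a quick sanity check (a sketch; <image> is the placeholder from the command above), confirm the port mapping is active and the server responds:
# should show 0.0.0.0:3000->3000/tcp under PORTS
docker ps --filter name=ror
# any HTTP response here means the mapping works
curl -I http://localhost:3000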
Here's an example of mine that works:
The usual Dockerfile, nothing special here.
Then, in docker-compose.yml, add an environment variable (or place it in an .env file) with the DATABASE_URL; the important bit is using host.docker.internal instead of localhost.
Then, in your database.yml, specify the url with the ENV key.
Then start the container by running docker-compose up.
#Dockerfile
FROM ruby:3.0.5-alpine
RUN apk add --update --no-cache \
    bash \
    build-base \
    tzdata \
    postgresql-dev \
    yarn \
    git \
    curl \
    wget \
    gcompat
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash
RUN gem install bundler:2.4.3
RUN bundle lock --add-platform x86_64-linux
RUN bundle install
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
EXPOSE 3000
CMD ["rails", "server", "-b", "0.0.0.0", "--pid=/tmp/server.pid"]
#docker-compose.yml
version: "3.9"
services:
  app:
    image: your_app_name
    volumes:
      - /app
    env_file:
      - .env
    environment:
      - DATABASE_URL=postgresql://postgres@host.docker.internal:5432/<your_db_name>
    ports:
      - "3000:3000"
  webpack_dev_server:
    build: .
    command: bin/webpack-dev-server
    ports:
      - "3035:3035"
    volumes:
      - /app
    env_file:
      - .env
    environment:
      - WEBPACKER_DEV_SERVER_HOST=0.0.0.0
  redis:
    image: redis
#database.yml
development:
  <<: *default
  database: giglifepro_development
  url: <%= ENV.fetch('DATABASE_URL') %>
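As a quick way to confirm the variable actually reaches the container (a sketch reusing this answer's app service name):
docker-compose run --rm app printenv DATABASE_URL
# expected: postgresql://postgres@host.docker.internal:5432/<your_db_name>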

Issue with exposing ports using docker compose

docker run -it -p 3000:3000 -v $(pwd):/src budotemplate_app node server.js works, but docker-compose run app node server.js doesn't show anything in the browser. Any ideas?
https://github.com/oren/budo-template/blob/af0681a3b8af4d6f4ca16d4a371f775261986476/docker-compose.yml
docker-compose.yml
app:
  build: .
  volumes:
    - .:/src
  ports:
    - "3000:3000"
  expose:
    - "3000"
Dockerfile
FROM alpine:edge
RUN echo "http://dl-4.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN apk update
RUN apk add --update iojs && rm -rf /var/cache/apk/*
WORKDIR /src
COPY . /src
EXPOSE 3000
CMD ["node"]
The run command in docker-compose behaves differently from docker run: by default it does not publish the ports defined in docker-compose.yml.
If you want the ports to be published, you have to use --service-ports.
This is the complete command: docker-compose run --service-ports app node server.js
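For comparison (a sketch using this question's service name), docker-compose up publishes the ports: mappings on its own, while run needs the flag:
# 'up' publishes 3000:3000 from docker-compose.yml automatically
docker-compose up app
# 'run' ignores the ports: mapping unless you opt in
docker-compose run --service-ports app node server.js
The same applies to the sudo docker-compose run service command in the question at the top of this page: adding --service-ports, or using docker-compose up, makes the 9000:8080 mapping take effect.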
