I am new to Docker and currently following a book to learn Django.
Is it necessary to be in a virtual environment when running the below command?
I have gone through basic Docker videos, which say that Docker saves each app as an image. But where are these images saved?
Does this line refer to a directory on the current PC or to one inside the Docker image: 'WORKDIR /usr/src/app'?
ADD is placed before RUN in the Dockerfile.
$ sudo docker-compose build
But I got this error:
ERROR: Service 'app' failed to build: ADD failed: stat /var/lib/docker/tmp/docker-builder912263941/config/requirements.txt: no such file or directory
Dockerfile
FROM python:3
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
mysql-client default-libmysqlclient-dev
WORKDIR /usr/src/app
ADD config/requirements.txt ./
RUN pip3 install --upgrade pip; \
pip3 install -r requirements.txt
RUN django-admin startproject myproject .;\
mv ./myproject ./origproject
docker-compose.yml
version: '2'
services:
  db:
    image: 'mysql:5.7'
  app:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - './project:/usr/src/app/myproject'
      - './media:/usr/src/app/media'
      - './static:/usr/src/app/static'
      - './templates:/usr/src/app/templates'
      - './apps/external:/usr/src/app/external'
      - './apps/myapp1:/usr/src/app/myapp1'
      - './apps/myapp2:/usr/src/app/myapp2'
    ports:
      - '8000:8000'
    links:
      - db
requirements.txt
Pillow~=5.2.0
mysqlclient~=1.3.0
Django~=2.1.0
Is it necessary to be in a virtual environment when running the below command?
No. The Docker build environment is isolated from the host; any virtualenv on the host is ignored by both the build context and the resulting image.
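For example, the build can be run with no virtualenv active on the host at all; the pip3 calls in the Dockerfile install packages into the image's own Python:
$ deactivate 2>/dev/null || true   # make sure no host virtualenv is active (optional)
$ sudo docker-compose build        # pip3 inside the Dockerfile installs into the image's Python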
I have gone through basic Docker videos, which say that Docker saves each app as an image. But where are these images saved?
The images are stored under /var/lib/docker, but that directory isn't meant to be browsed manually. You can push images to a registry with docker push <image:tag>, or export them to a file with docker save <image:tag> -o <image>.tar
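For example, you can list the locally stored images and move one around as a tarball (the image name web_app is just a placeholder):
$ docker images                                # list images Docker has stored locally
$ docker save web_app:latest -o web_app.tar    # export an image to a tar archive
$ docker load -i web_app.tar                   # re-import it, e.g. on another machine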
Does this line refer to a directory on the current PC or to one inside the Docker image: 'WORKDIR /usr/src/app'?
That line changes the current working directory inside the image.
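A minimal sketch of what WORKDIR does (the base image here is only an example):
FROM python:3
# WORKDIR creates /usr/src/app inside the image if it is missing; nothing changes on the host
WORKDIR /usr/src/app
# this runs inside the build container and prints /usr/src/app
RUN pwd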
ERROR: Service 'app' failed to build: ADD failed: stat /var/lib/docker/tmp/docker-builder912263941/config/requirements.txt: no such file or directory
This error means that config/requirements.txt does not exist in the build context, i.e. the directory from which the build is run. Adjust the path in your Dockerfile (or move the file) so that the two match.
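A quick way to check, assuming docker-compose.yml and the Dockerfile sit in the project root that is used as the build context:
$ ls config/requirements.txt   # must succeed from the directory where you run the build
# if the file actually lives next to the Dockerfile instead, change the ADD line to:
# ADD requirements.txt ./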
$ docker-compose up -d
This will pull any missing images, build the app image if needed, and start containers for the services in the background.
Related
The question is self-explanatory, but I just wanted to add some details. I am running an Ubuntu container containing some Python Flask code:
FROM ubuntu:latest
ADD app/ /app
WORKDIR /app
RUN apt-get update -y && \
apt-get install -y python3-pip python-dev build-essential
RUN pip3 install -r requirements.txt
RUN pip3 install flask
EXPOSE 50000
ENTRYPOINT ["python3"]
CMD ["app.py"]
The docker compose file looks something like this:
version: "2"
services:
app:
container_name: flask-app
restart: always
build:
context: ./
dockerfile: app/Dockerfile
volumes:
- "./app:/app"
ports:
- "5000:5000"
stdin_open: true
tty: true
How do I attach to the container and run an interactive bash shell? Currently the attach command just hangs without returning.
Docker attach:
Attach local standard input, output, and error streams to a running container
...
Note: The attach command will display the output of the ENTRYPOINT/CMD process. This can appear as if the attach command is hung when in fact the process may simply not be interacting with the terminal at that time.
Docker exec:
The docker exec command runs a new command in a running container.
TL;DR: You want docker exec -it [container-id] /bin/sh to get a terminal. docker attach will just show you the stdout of your Flask app from that point on (which might be nothing, which is why it appears to hang).
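For example, with the compose file above (flask-app is the container_name, app the service name; the Ubuntu base image also ships bash if you prefer it over sh):
$ docker ps                              # find the running container's name or ID
$ docker exec -it flask-app /bin/sh      # open an interactive shell inside it
$ docker-compose exec app /bin/sh        # same thing, addressed by compose service name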
I'm trying to copy my ./dist directory after building my Angular app.
Here is my Dockerfile:
# Create image based off of the official Node 10 image
FROM node:12-alpine
RUN apk update && apk add --no-cache make git
RUN mkdir -p /home/project/frontend
# Change directory so that our commands run inside this new directory
WORKDIR /home/project/frontend
# Copy dependency definitions
COPY package*.json ./
RUN npm cache verify
## installing packages
RUN npm install
COPY ./ ./
RUN npm run build --output-path=./dist
COPY /dist /var/www/front
But when I run docker-compose build dashboard I get this error:
Service 'dashboard' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builderxxx/dist: no such file or directory
I don't know why; is there something wrong?
In case you need it, here is the docker-compose file as well:
...
  dashboard:
    container_name: dashboard
    build: ./frontend
    image: dashboard
    container_name: dashboard
    restart: unless-stopped
    networks:
      - app-network
...
The Dockerfile COPY directive copies content from the build context (the host-system directory in the build: line) into the image. If you're just trying to move around content within the image, you can RUN cp or RUN mv to use the ordinary Linux shell commands instead.
RUN npm run build --output-path=./dist \
&& cp -a dist /var/www/front
I am trying to dockerize my React-Flask app by dockerizing each part and using docker-compose to put them together.
Here is what the Dockerfile for each app looks like:
React - Frontend
FROM node:latest
WORKDIR /frontend/
ENV PATH /frontend/node_modules/.bin:$PATH
COPY package.json /frontend/package.json
COPY . /frontend/
RUN npm install --silent
RUN npm install react-scripts@3.0.1 -g --silent
CMD ["npm", "run", "start"]
Flask - Backend
#Using ubuntu as our base
FROM ubuntu:latest
#Install commands in ubuntu, including pymongo for DB handling
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential
RUN python -m pip install pymongo[srv]
#Unsure of COPY command's purpose, but WORKDIR points to /backend
COPY . /backend
WORKDIR /backend/
RUN pip install -r requirements.txt
#Run order for starting up the backend
ENTRYPOINT ["python"]
CMD ["app.py"]
Each of them works fine when I just use docker build and docker run. I've checked that they work fine when they are built and run independently. However, when I run docker-compose up with the docker-compose.yml below,
# Docker Compose
version: '3.7'
services:
  frontend:
    container_name: frontend
    build:
      context: frontend
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    volumes:
      - '.:/frontend'
      - '/frontend/node_modules'
  backend:
    build: ./backend
    ports:
      - "5000:5000"
    volumes:
      - .:/code
it gives me the error below:
Starting frontend ... error
Starting dashboard_backend_1 ...
ERROR: for frontend Cannot start service sit-frontend: error while creating mount source path '/host_mnt/c/Users/myid/DeskStarting dashboard_backend_1 ... error
ERROR: for dashboard_backend_1 Cannot start service backend: error while creating mount source path '/host_mnt/c/Users/myid/Desktop/dashboard': mkdir /host_mnt/c: file exists
ERROR: for frontend Cannot start service frontend: error while creating mount source path '/host_mnt/c/Users/myid/Desktop/dashboard': mkdir /host_mnt/c: file exists
ERROR: for backend Cannot start service backend: error while creating mount source path '/host_mnt/c/Users/myid/Desktop/dashboard': mkdir /host_mnt/c: file exists
ERROR: Encountered errors while bringing up the project.
Did this happen because I am using Windows? What can be the issue? Thanks in advance.
For me, the only thing that worked was restarting the Docker daemon.
Check if this is related to docker/for-win issue 1560
I had the same issue. I was able to resolve it by running:
docker volume rm -f [name of docker volume with error]
Then restarting docker, and running:
docker-compose up -d --build
I tried these same steps without restarting Docker, only restarting my computer, and that didn't resolve the issue.
What resolved the issue for me was removing the volume with the error, restarting Docker, and then doing a build again.
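Put together, the sequence looks roughly like this (the volume name is a placeholder taken from the error message):
$ docker volume rm -f my_broken_volume   # remove the volume named in the error
# restart Docker here (Docker Desktop on Windows/macOS, or the docker daemon on Linux)
$ docker-compose up -d --build           # rebuild the images and start the services again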
Other cause:
On Windows this may be due to a user password change. Uncheck the box to stop sharing the drive and then allow Docker to detect that you are trying to mount the drive and share it.
Also mentioned:
I just ran docker-compose down and then docker-compose up. Worked for me.
I tried docker container prune and then pressed y to remove all stopped containers. The issue was gone.
I saw this after I deleted a folder I'd shared with Docker and recreated one with the same name. I think this removed the share's permissions. To resolve it I:
Unshared the folder in docker settings
Restarted docker
Ran docker container prune
Ran docker-compose build
Ran docker-compose up.
Restarting the Docker daemon will work.
I have a Dockerfile that contains steps that create a directory and run an Angular build script outputting to that directory. This all seems to run correctly. However, when the container runs, the built files and the directory are not there.
If I run a shell in the image:
docker run -it pnb_web sh
# cd /code/static
# ls
assets favicon.ico index.html main.js main.js.map polyfills.js polyfills.js.map runtime.js runtime.js.map styles.js styles.js.map vendor.js vendor.js.map
If I exec a shell in the container:
docker exec -it ea23c7d30333 sh
# cd /code/static
sh: 1: cd: can't cd to /code/static
# cd /code
# ls
Dockerfile api docker-compose.yml frontend manage.py mysite.log pnb profiles requirements.txt settings.ini web_variables.env
david#lightning:~/Projects/pnb$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ea23c7d30333 pnb_web "python3 manage.py r…" 13 seconds ago Up 13 seconds 0.0.0.0:8000->8000/tcp pnb_web_1_267d3a69ec52
This is my Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash -
RUN apt install nodejs
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
RUN mkdir /code/static
WORKDIR /code/frontend
RUN npm install -g @angular/cli
RUN npm install
RUN ng build --outputPath=/code/static
and associated docker-compose:
version: '3'
services:
  db:
    image: postgres
  web:
    build:
      context: .
      dockerfile: Dockerfile
    working_dir: /code
    env_file:
      - web_variables.env
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
In the second example, the static directory has never been created or built into. I thought that a container is an instance of an image. How can the container be missing files from the image?
You're confusing build time and run time, along with how volumes work.
Remember that a host mount takes priority over the filesystem provided by the image, so even though your built image contains the assets, they are hidden by .services.web.volumes: the bind mount of the host directory over /code shadows the build result at run time.
If you avoid mounting that volume, you'll notice that everything works as expected.
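If you still want the bind mount for live editing but need the image's built static files, one common sketch (an assumption about your layout, mirroring the /frontend/node_modules trick in the compose file earlier) is to add an anonymous volume for /code/static so the bind mount doesn't shadow it:
services:
  web:
    build: .
    volumes:
      - .:/code        # live source code from the host
      - /code/static   # anonymous volume: keeps the static files built into the image visible
Note that an anonymous volume is seeded from the image only when it is first created, so after rebuilding the image you may want docker-compose up --renew-anon-volumes (or -V) to refresh it.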
I tried to make a simple application with Yesod and PostgreSQL using Docker Compose, but RUN yesod init -n myApp -d postgresql didn't seem to work as expected.
I defined the Dockerfile and docker-compose.yml as below:
Dockerfile:
FROM shuny/ghc-7.8.4:latest
MAINTAINER shuny
# Create default config
RUN cabal update
# Add stackage remote repo
RUN sed -i 's/^remote-repo: [a-zA-Z0-9_\/:.]*$/remote-repo: stackage:http:\/\/www.stackage.org\/lts/g' /root/.cabal/config
# Update packages
RUN cabal update
# Generate locale otherwise happy (because of tf-random) will fail
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
RUN echo $LANG
# Install build tools for yesod
RUN cabal install alex happy yesod-bin
# Install library for yesod-postgres
RUN apt-get update && apt-get install -y libpq-dev
RUN mkdir /code
WORKDIR /code
RUN yesod init -n myApp -d postgresql
WORKDIR /code/myApp
RUN cabal sandbox init && cabal install --only-dependencies --max-backjumps=-1 --reorder-goals
RUN cabal configure && cabal build && cabal install
ADD . /code
WORKDIR /code
# ADD settings.yml /code/myApp/config/
docker-compose.yml:
database:
  image: postgres
  ports:
    - "5432"
web:
  build: .
  tty: true
  command: yesod devel
  volumes:
    - .:/code/
  ports:
    - "3000:3000"
  links:
    - database
and docker-compose build returned the output below:
Step 0 : FROM shuny/ghc-7.8.4:latest
...
Step 17 : WORKDIR /code
---> Running in bf99d0aca48c
---> 37c3c94338d7
Removing intermediate container bf99d0aca48c
Successfully built 37c3c94338d7
But when I check like this:
$ docker-compose run web /bin/bash
root#0fe5fb1a3b20:/code# ls
root#0fe5fb1a3b20:/code#
it showed nothing while this commands seem to work as expected:
docker run -ti 37c3c94338d7
root#31e94428de37:/code# ls
docker-compose.yml Dockerfile myApp settings.yml
root#31e94428de37:/code# ls myApp/
app config Handler Model.hs Settings.hs test
Application.hs dist Import myApp.cabal static
cabal.sandbox.config Foundation.hs Import.hs Settings templates
How can I fix it?
I really appreciate any feedback. Thank you.
You are doing strange things with volumes and the ADD instruction.
First you build your application inside the image:
RUN yesod init -n myApp -d postgresql
WORKDIR /code/myApp
RUN cabal sandbox init && cabal install --only-dependencies --max-backjumps=-1 --reorder-goals
RUN cabal configure && cabal build && cabal install
Then you add the content of the folder that contains the Dockerfile into the /code folder of the image. I guess this step is useless.
ADD . /code
Then, if you run a container without the -v/--volume option, everything works fine:
docker run -ti 37c3c94338d7
But in your docker-compose.yml file, you specified a volume option that overrides the /code folder in the container with the folder that contains the docker-compose.yml file on the host machine. Therefore, you no longer have the content generated during the build of your image.
There are two possibilities:
Don't use the volume instruction in the docker-compose.yml file
Put the content of the /code/myApp/ folder of the image inside the ./myApp folder of the host.
It depends on why you want to use the volume option in docker-compose.yml.
I don't really know what your goal is. But if what you are trying to do is access the files built inside the container from the host machine, maybe this will do what you are looking for:
Remove the build steps from your Dockerfile
Run a shell inside a "web" container: docker-compose run web bash
Launch the build commands (see the sketch after this list)
So you will have built your application while the volume was mounted and will see the files on the host machine.
Exit the shell
Launch Docker Compose normally
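A rough sketch of that workflow, reusing the commands from the Dockerfile above:
$ docker-compose run web bash                 # shell in a web container, with . mounted at /code
# yesod init -n myApp -d postgresql           # the generated files now land on the host via the mount
# cd myApp && cabal sandbox init && cabal install --only-dependencies
# exit
$ docker-compose up                           # start the stack normally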
If you just want to be able to back up the content of the /code/myApp/ folder, maybe you should omit the path on the host machine from the volume section of docker-compose.yml:
volumes:
  - /code/
And follow this section of the documentation.
I hope it helps.