I am able to run my application with the following command:
docker run --rm -p 4000:4000 myapp:latest python3.8 -m pipenv run flask run -h 0.0.0.0
I am trying to write a docker-compose file so that I can bring up the app using
docker-compose up, but this is not working. How do I "add" the docker run parameters to the docker-compose file?
version: '3'
services:
  web:
    build: .
    ports:
      - "4000:4000"
    volumes:
      - .:/code
You need to use command to specify this.
version: '3'
services:
  web:
    build: .
    ports:
      - '4000:4000'
    image: myapp:latest
    command: 'python3.8 -m pipenv run flask run -h 0.0.0.0'
    volumes:
      - .:/code
You should use CMD in your Dockerfile to specify this. Since you'll want to specify this every time you run a container based on the image, there's no reason to specify it manually each time you run the image.
CMD python3.8 -m pipenv run flask run -h 0.0.0.0
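If you prefer the exec form, which runs the command directly without wrapping it in a shell, the equivalent would be:

CMD ["python3.8", "-m", "pipenv", "run", "flask", "run", "-h", "0.0.0.0"]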
Within the context of a Docker container, it's typical to install packages into the "system" Python: it's already isolated from the host Python by virtue of being in a Docker container, and the setup to use a virtual environment is a little bit tricky. That gets rid of the need to run pipenv run.
FROM python:3.8
WORKDIR /code
# pipenv is not included in the base image, so install it first
RUN pip install pipenv
COPY Pipfile Pipfile.lock ./
RUN pipenv install --deploy --system
COPY . .
CMD flask run -h 0.0.0.0
Since the /code directory is already in your image, you can actually make your docker-compose.yml shorter by removing the now-unnecessary bind mount:
version: '3'
services:
  web:
    build: .
    ports:
      - "4000:4000"
    # no volumes:
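With that in place, bringing the app up is just the following (a small sketch, assuming the Flask app listens on port 4000 as in your original docker run command):

docker-compose up --build
# then, from the host:
curl http://localhost:4000/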
I was required to copy files from an ubuntu container to the host machine, and I found a way to do that using the docker cp <containerId>:/file/path/within/container /host/path/target command. I am able to use this command while I am not in the ubuntu container, i.e., when I am not in docker exec mode.
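For example, from the host I can run something like this (the container name and paths here are just for illustration):

# copy a file out of a running container named my-ubuntu
docker cp my-ubuntu:/dir/output.txt /home/user/output.txt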
My problem is that I want to know how to run Docker commands from a Dockerfile, so that on/before exiting the ubuntu container, a docker command runs that copies the content from the container to the host.
I might not be able to do it with CMD [ ], as that would mean running Docker inside a Docker container.
Any help would be much appreciated.
Here is my docker-compose file
version: '3'
services:
  ubuntu:
    build:
      context: .
      dockerfile: Dockerfile
    image: ubuntu
    ports:
      - 8091
    volumes:
      - ./dir:/dir
volumes:
  dir:
    external: false
Here is my Dockerfile
FROM ubuntu:latest
WORKDIR /dir
VOLUME /dir
RUN apt-get update
RUN apt-get install -y
EXPOSE 8000
COPY . .
CMD ["/bin/bash"]
Edit 1:
I tried implementing my original problem using a bind mount this way, but this way too it is not syncing the folders:
version: '3'
services:
  ubuntu:
    build:
      context: .
      dockerfile: Dockerfile
    image: ubuntu
    ports:
      - 8091
    volumes:
      - type: bind
        source: ./dir
        target: /dir
volumes:
  dir:
    external: false
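A quick way to check whether the bind mount is actually working (a sketch using the service name from the compose file above):

docker-compose up -d
docker-compose exec ubuntu touch /dir/from-container.txt
ls ./dir   # from-container.txt should show up here if the mount syncs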
I have a GraphQL application that runs inside a container. If I run docker compose build followed by docker compose up, I can connect to it via localhost:9999/graphql. In the docker-compose file the port forwarding is 9999:80. When I run the docker container ls command I can see the ports are forwarded as expected.
I'd like to run this in a VS Code remote container. Selecting Open folder in remote container gives me the option of selecting either the Dockerfile or the docker-compose file to build the container. I've tried both options, and neither allows me to access the GraphQL playground from localhost. Running from docker-compose, I can see that the ports appear to be forwarded in the same manner as if I had run docker compose up, but I can't access the site.
Where am I going wrong?
Update: If I run docker compose up on the container that is built by VS Code, I can connect to localhost and the GraphQL playground.
FROM docker.removed.local/node
MAINTAINER removed
WORKDIR /opt/app
COPY package.json /opt/app/package.json
COPY package-lock.json /opt/app/package-lock.json
COPY .npmrc /opt/app/.npmrc
RUN echo "nameserver 192.168.11.1" > /etc/resolv.conf && npm ci
RUN mkdir -p /opt/app/logs
# Setup a path for using local npm packages
RUN mkdir -p /opt/node_modules
ENV PATH /opt/node_modules/.bin:$PATH
COPY ./ /opt/app
EXPOSE 80
ENV NODE_PATH /opt:/opt/app:$NODE_PATH
ARG NODE_ENV
VOLUME ["/opt/app"]
CMD ["forever", "-o", "/opt/app/logs/logs.log", "-e", "/opt/app/logs/error.log", "-a", "server.js"]
version: '3.5'
services:
  server:
    build: .
    container_name: removed-data-graph
    command: nodemon --ignore 'public/*' --legacy-watch src/server.js
    image: docker.removed.local/removed-data-graph:local
    ports:
      - "9999:80"
    volumes:
      - .:/opt/app
      - /opt/app/node_modules/
      #- ${LOCAL_PACKAGE_DIR}:/opt/node_modules
    depends_on:
      - redis
    networks:
      - company-network
    environment:
      - NODE_ENV=dev
  redis:
    container_name: redis
    image: redis
    networks:
      - company-network
    ports:
      - "6379:6379"
networks:
  company-network:
    name: company-network
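For reference, a hypothetical .devcontainer/devcontainer.json that points VS Code at the compose file and explicitly forwards the port would look something like this (the values are illustrative, not from my actual setup):

{
  // reuse the compose file instead of building from the bare Dockerfile
  "dockerComposeFile": "../docker-compose.yml",
  "service": "server",
  "workspaceFolder": "/opt/app",
  // forward the host port that compose maps to the app's port 80
  "forwardPorts": [9999]
}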
docker-compose up -d works fine when I have only the postgres service in the docker-compose.yml code below. But once I add the python service, the postgres container never runs, even though its image is built. docker container ls -a shows that it does not exist.
version: '3'
services:
  postgres:
    build:
      context: .
      dockerfile: Dockerfile.postgres
    restart: always
    container_name: test_postgres
    ports:
      - "5431:5432"

  # Once this python service is added, the postgres service does not run.
  python:
    depends_on:
      - postgres
    build:
      context: .
      dockerfile: Dockerfile.python
    restart: on-failure:10
    container_name: test_python
    ports:
      - "8001:8000"
I haven't been able to find clear information on why this happens. Some solutions mention that version 3 no longer uses depends_on. I thought this might be the issue, so I removed it and added restart: on-failure:10, but it made no difference.
If I run docker-compose up -d with just the postgres service in it first, then add the python service into the same docker-compose.yml file and run it again, both images are built and containers run properly.
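For what it's worth, building and starting the services one at a time (a sketch using the service names above) can show whether one of the builds is failing and taking the whole docker-compose up down with it:

docker-compose build postgres   # build each image separately
docker-compose build python     # watch this one for build-time errors
docker-compose up -d postgres   # then start the services individually
docker-compose up -d python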
Not sure if it's necessary, but here are the Dockerfiles for the services:
Dockerfile.postgres:
FROM postgres
WORKDIR /docker-entrypoint-initdb.d
ENV POSTGRES_DB test_postgres
ENV POSTGRES_PASSWORD 1234
COPY init.sql /docker-entrypoint-initdb.d
EXPOSE 5432
Dockerfile.python:
FROM python:latest
RUN mkdir /code
WORKDIR /code
COPY ./backend/ /code
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
RUN python manage.py migrate
RUN python manage.py loaddata customers
EXPOSE 8000
CMD python manage.py runserver 0.0.0.0:8000
What am I doing wrong?
I have set up a docker-compose.yml file that runs a web service along with postgres.
It works nicely when I run it with docker-compose up.
docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Dockerfile:
FROM python:3
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
CMD ["python", "manage.py", "runserver"]
Is there any way to construct an image out of the services?
I tried it with docker-compose build, but running the created image simply freezes the terminal.
Thanks!
docker-compose is a container orchestration tool, albeit a simple one, and not a bundler of multiple images and preferences into one. In fact, such a thing does not even exist.
What happens when you run docker-compose up is that it effectively runs docker-compose build for the images that need to be built (web in your example), replaces the build: . with image: web, and executes the configuration as defined by the compose file.
So if you were to run docker-compose build manually and wanted to reproduce the same configuration you have in the compose file by hand, you would need to do something along the lines of (in order):
run docker-compose build or docker build -t web . to build the web image
run docker run --name db postgres
run docker run --name web -v "$(pwd)":/code -p 8000:8000 web python manage.py runserver 0.0.0.0:8000 (note that docker run needs an absolute host path for the bind mount, hence $(pwd) instead of .)
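Note that with plain docker run the two containers do not automatically share a network the way compose services do, so for web to reach db by hostname you would also need something along these lines (a sketch; the network name is arbitrary):

docker network create mynet
docker run --name db --network mynet postgres
docker run --name web --network mynet -v "$(pwd)":/code -p 8000:8000 web python manage.py runserver 0.0.0.0:8000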
I am trying to get webpack set up in my docker container. It is working and running, but when I save on my local computer it is not updating the files in my container. I have the following docker-compose file:
version: '2'
services:
  web:
    build:
      context: .
      dockerfile: docker/web/Dockerfile
    container_name: arc-bis-www-web
    restart: on-failure:3
    environment:
      FPM_HOST: 'php'
    ports:
      - 8080:8080
    volumes:
      - ./app:/usr/local/src/app
  php:
    build:
      context: .
      dockerfile: docker/php/Dockerfile
    environment:
      CRM_HOST: '192.168.1.79'
      CRM_NAME: 'ARC_test_8_8_17'
      CRM_PORT: '1433'
      CRM_USER: 'sa'
      CRM_PASSWORD: 'Multi*Gr4in'
    volumes:
      - ./app:/usr/local/src/app
  node:
    build:
      context: .
      dockerfile: docker/node/Dockerfile
    container_name: arc-bis-www-node
    volumes:
      - ./app:/usr/local/src/app
and my node container is run by the following Dockerfile:
FROM node:8
RUN useradd --create-home user
RUN mkdir /usr/local/src/app
RUN mkdir /usr/local/src/app/src
RUN mkdir /usr/local/src/app/test
WORKDIR /usr/local/src/app
# Copy application source files
COPY ./app/package.json /usr/local/src/app/package.json
COPY ./app/.babelrc /usr/local/src/app/.babelrc
COPY ./app/webpack.config.js /usr/local/src/app/webpack.config.js
COPY ./app/test /usr/local/src/app/test
RUN chown -R user:user /usr/local/src/app
USER user
RUN npm install
ENTRYPOINT ["npm"]
Now I have taken out the copy calls from above and it still runs fine, but neither option is allowing me to save files locally and have them show up on localhost for my container. Ideally, I thought having a volume would allow me to update my local files and have the changes picked up inside the container. Does that make sense? I am still feeling my way around Docker. Thanks in advance for any help.
If you start your container with the -v flag, you can map a directory on your local machine into the container. You can find more information in the Docker documentation on volumes.
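As a minimal sketch (the image name and npm script are placeholders, since your compose file builds the image itself), the docker run equivalent of the volumes entry above would be:

# bind-mount the local ./app directory over the image's copy,
# so edits on the host are visible inside the container
docker run -v "$(pwd)/app:/usr/local/src/app" myimage run build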