I needed to copy files from an Ubuntu container to the host machine, and found that the docker cp <containerId>:/file/path/within/container /host/path/target command does this. I am able to use this command while I am not inside the Ubuntu container, i.e., when I am not in docker exec mode.
My problem is that I want to know how to run Docker commands from a Dockerfile, so that on (or just before) exiting the Ubuntu container, a docker command runs that copies the content from the container to the host.
I probably can't do it with CMD [], as that would mean running Docker inside a Docker container.
Any help would be much appreciated.
Here is my docker-compose file
version: '3'
services:
  ubuntu:
    build:
      context: .
      dockerfile: Dockerfile
    image: ubuntu
    ports:
      - 8091
    volumes:
      - ./dir:/dir
volumes:
  dir:
    external: false
Here is my Dockerfile
FROM ubuntu:latest
WORKDIR /dir
VOLUME /dir
RUN apt-get update
RUN apt-get install -y
EXPOSE 8000
COPY . .
CMD ["/bin/bash"]
Edit 1:
I tried implementing my original problem using a bind mount this way, but this way too it is not syncing the folders:
version: '3'
services:
  ubuntu:
    build:
      context: .
      dockerfile: Dockerfile
    image: ubuntu
    ports:
      - 8091
    volumes:
      - type: bind
        source: ./dir
        target: /dir
volumes:
  dir:
    external: false
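One detail worth noting (an observation about the files above, with the fix being an assumption to verify): the long volume syntax with type: bind was only added in compose file format 3.2, so under version: '3' it may be rejected. After bumping the version, a quick sync check looks like this:

# In docker-compose.yml, use e.g. version: '3.2' for the long volume syntax
docker-compose up -d
docker-compose run --rm ubuntu touch /dir/hello.txt  # create a file in a one-off container
ls ./dir                                             # hello.txt should appear on the host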
Related
I have a Node project which uses Redis for queue purposes.
I declared Redis in the compose file and it's working fine. But when I try to build the Docker image from the Dockerfile and run that built image with docker run, it can't find/connect to Redis.
My question is: if Docker doesn't include the services from the compose file when building the image from the Dockerfile, how can the built image run?
The compose file and Dockerfile are given below.
version: '3'
services:
  oaq-web:
    image: node:16.10-alpine3.13
    container_name: oaq-web
    volumes:
      - ./:/usr/src/oaq
    networks:
      - oaq-network
    working_dir: /usr/src/oaq
    ports:
      - "5000:5000"
    command: npm run dev
  redis:
    image: redis:6.2
    ports:
      - "6379:6379"
    networks:
      - oaq-network
networks:
  oaq-network:
    driver: bridge
FROM node:16.10-alpine3.13
RUN mkdir -p app
COPY . /app
WORKDIR /app
RUN npm install
RUN npm run build
CMD ["npm", "start"]
I have a GraphQL application that runs inside a container. If I run docker compose build followed by docker compose up, I can connect to it via localhost:9999/graphql. Inside the docker-compose file the port forwarding is 9999:80. When I run the docker container ls command I can see the ports are forwarded as expected.
I'd like to run this in a VS Code remote container. Selecting "Open folder in remote container" gives me the option of selecting either the Dockerfile or the docker-compose file to build the container. I've tried both options, and neither allows me to access the GraphQL playground from localhost. Running from docker-compose, I can see that the ports appear to be forwarded in the same manner as if I ran docker compose up, but I can't access the site.
Where am I going wrong?
Update: If I run docker compose up on the container that is built by VS Code, I can connect to localhost and the GraphQL playground.
FROM docker.removed.local/node
MAINTAINER removed
WORKDIR /opt/app
COPY package.json /opt/app/package.json
COPY package-lock.json /opt/app/package-lock.json
COPY .npmrc /opt/app/.npmrc
RUN echo "nameserver 192.168.11.1" > /etc/resolv.conf && npm ci
RUN mkdir -p /opt/app/logs
# Setup a path for using local npm packages
RUN mkdir -p /opt/node_modules
ENV PATH /opt/node_modules/.bin:$PATH
COPY ./ /opt/app
EXPOSE 80
ENV NODE_PATH /opt:/opt/app:$NODE_PATH
ARG NODE_ENV
VOLUME ["/opt/app"]
CMD ["forever", "-o", "/opt/app/logs/logs.log", "-e", "/opt/app/logs/error.log", "-a", "server.js"]
version: '3.5'
services:
  server:
    build: .
    container_name: removed-data-graph
    command: nodemon --ignore 'public/*' --legacy-watch src/server.js
    image: docker.removed.local/removed-data-graph:local
    ports:
      - "9999:80"
    volumes:
      - .:/opt/app
      - /opt/app/node_modules/
      #- ${LOCAL_PACKAGE_DIR}:/opt/node_modules
    depends_on:
      - redis
    networks:
      - company-network
    environment:
      - NODE_ENV=dev
  redis:
    container_name: redis
    image: redis
    networks:
      - company-network
    ports:
      - "6379:6379"
networks:
  company-network:
    name: company-network
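One avenue worth checking (an educated guess, not confirmed by the post): when VS Code builds the dev container from the compose file, it does not necessarily forward the published ports to localhost on its own. A minimal devcontainer.json along these lines, where service points at the compose service above and the file name and workspace folder are assumptions, makes the forwarding explicit:

{
  // Reuse the existing compose file and attach to the "server" service
  "name": "removed-data-graph",
  "dockerComposeFile": "docker-compose.yml",
  "service": "server",
  "workspaceFolder": "/opt/app",
  // Explicitly forward the GraphQL port to the local machine
  "forwardPorts": [9999]
}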
I am able to run my application with the following command:
docker run --rm -p 4000:4000 myapp:latest python3.8 -m pipenv run flask run -h 0.0.0.0
I am trying to write a docker-compose file so that I can bring up the app using docker-compose up. This is not working. How do I "add" the docker run parameters to the docker-compose file?
version: '3'
services:
  web:
    build: .
    ports:
      - "4000:4000"
    volumes:
      - .:/code
You need to use command to specify this.
version: '3'
services:
  web:
    build: .
    ports:
      - '4000:4000'
    image: myapp:latest
    command: 'python3.8 -m pipenv run flask run -h 0.0.0.0'
    volumes:
      - .:/code
You should use CMD in your Dockerfile to specify this. Since you'll want this command every time you run a container based on the image, there's no reason to have to specify it manually each time you run the image.
CMD python3.8 -m pipenv run flask run -h 0.0.0.0
Within the context of a Docker container, it's typical to install packages into the "system" Python: it's already isolated from the host Python by virtue of being in a Docker container, and the setup to use a virtual environment is a little bit tricky. That gets rid of the need to run pipenv run.
FROM python:3.8
WORKDIR /code
# pipenv isn't included in the python base image, so install it first
RUN pip install pipenv
COPY Pipfile Pipfile.lock ./
RUN pipenv install --deploy --system
COPY . .
CMD flask run -h 0.0.0.0
Since the /code directory is already in your image, you can actually make your docker-compose.yml shorter by removing the unnecessary bind mount
version: '3'
services:
  web:
    build: .
    ports:
      - "4000:4000"
    # no volumes:
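With the CMD baked into the image, starting the app collapses to short commands either way (a usage sketch based on the files above):

# Via Compose: build the image and run its default CMD
docker-compose up --build

# Or with plain docker, no trailing command needed any more
docker build -t myapp:latest .
docker run --rm -p 4000:4000 myapp:latest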
I am trying to get webpack set up in my Docker container. It is working and running, but when I save files on my local computer, they are not updated in my container. I have the following docker-compose file:
version: '2'
services:
  web:
    build:
      context: .
      dockerfile: docker/web/Dockerfile
    container_name: arc-bis-www-web
    restart: on-failure:3
    environment:
      FPM_HOST: 'php'
    ports:
      - 8080:8080
    volumes:
      - ./app:/usr/local/src/app
  php:
    build:
      context: .
      dockerfile: docker/php/Dockerfile
    environment:
      CRM_HOST: '192.168.1.79'
      CRM_NAME: 'ARC_test_8_8_17'
      CRM_PORT: '1433'
      CRM_USER: 'sa'
      CRM_PASSWORD: 'Multi*Gr4in'
    volumes:
      - ./app:/usr/local/src/app
  node:
    build:
      context: .
      dockerfile: docker/node/Dockerfile
    container_name: arc-bis-www-node
    volumes:
      - ./app:/usr/local/src/app
and my node container is built from the following Dockerfile:
FROM node:8
RUN useradd --create-home user
RUN mkdir /usr/local/src/app
RUN mkdir /usr/local/src/app/src
RUN mkdir /usr/local/src/app/test
WORKDIR /usr/local/src/app
# Copy application source files
COPY ./app/package.json /usr/local/src/app/package.json
COPY ./app/.babelrc /usr/local/src/app/.babelrc
COPY ./app/webpack.config.js /usr/local/src/app/webpack.config.js
COPY ./app/test /usr/local/src/app/test
RUN chown -R user:user /usr/local/src/app
USER user
RUN npm install
ENTRYPOINT ["npm"]
Now I have taken out the COPY calls from above and it still runs fine, but neither option is allowing me to save files locally and have them show up on localhost for my container. Ideally, I thought having a volume would allow me to update my local files and have them read inside the container. Does that make sense? I am still feeling my way around Docker. Thanks in advance for any help.
If you start your container with the -v flag, you can map a directory on your local machine into the container. You can find more information here.
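For example (a sketch: the image name is a placeholder, the npm script name is illustrative, and the host path assumes the project layout from the question):

# Map the local ./app directory over /usr/local/src/app in the container,
# so edits made on the host are visible inside it immediately.
# Since the Dockerfile's ENTRYPOINT is npm, "run dev" becomes "npm run dev".
docker run -v "$(pwd)/app:/usr/local/src/app" my-node-image run dev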
I am learning Docker and trying to build a Dockerfile that will run Tomcat using docker-compose rather than docker.
The Dockerfile is as follows:
# Base the image on tomcat
FROM tomcat:7.0.82-jre7
WORKDIR /usr/local/tomcat
# Install updates & commands
RUN apt-get update && apt-get install -y vim
# Add some pre-set files
COPY tomcat-users.xml /usr/local/tomcat/conf
# Run the Tomcat on port 8080
EXPOSE 8080
# Start tomcat
# CMD ["bin/startup.sh", "run"]
The docker-compose.yml file is as follows:
version: '3'
services:
  tomcat:
    image: tomcat:7.0
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - 8888:8080
    container_name: tomcat7
    volumes:
      - ./tomcat7:/usr/local/tomcat:rw
    entrypoint: /bin/bash /usr/local/tomcat/bin/startup.sh
    tty: true
The tomcat7 Docker container starts but then immediately exits.
Any idea how to make it run?
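Two likely culprits (educated guesses, since the post doesn't include logs): startup.sh launches Tomcat in the background, so the container's main process exits immediately, and mounting ./tomcat7 over /usr/local/tomcat hides the Tomcat installation baked into the image. A sketch of a service definition that avoids both, keeping Tomcat in the foreground and mounting only the webapps directory:

version: '3'
services:
  tomcat:
    build:
      context: ./
      dockerfile: Dockerfile
    container_name: tomcat7
    ports:
      - 8888:8080
    # Mount only the deployment directory so the image's own
    # /usr/local/tomcat installation stays visible
    volumes:
      - ./webapps:/usr/local/tomcat/webapps
    # catalina.sh run keeps Tomcat in the foreground (it is the image's
    # default CMD), unlike startup.sh, which backgrounds it
    command: catalina.sh run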