I have a setup where I have a Dockerfile and a docker-compose.yml.
Dockerfile:
# syntax=docker/dockerfile:1
FROM php:7.4
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer
RUN docker-php-ext-install mysqli pdo pdo_mysql
RUN apt-get -y update
RUN apt-get -y install git
COPY . .
RUN composer install
YML file:
version: '3.8'
services:
  foo_db:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=foo
      - MYSQL_DATABASE=foo
  foo_app:
    image: foo_php
    platform: linux/x86_64
    restart: unless-stopped
    ports:
      - 8000:8000
    links:
      - foo_db
    environment:
      - DB_CONNECTION=mysql
      - DB_HOST=foo_db
      - DB_PORT=3306
      - DB_PASSWORD=foo
    command: sh -c "php artisan serve --host=0.0.0.0 --port=8000"
  foo_phpmyadmin:
    image: phpmyadmin
    links:
      - foo_db
    environment:
      PMA_HOST: foo_db
      PMA_PORT: 3306
      PMA_ARBITRARY: 1
      PMA_USER: root
      PMA_PASSWORD: foo
    restart: always
    ports:
      - 8081:80
To set this up on a new workstation, the first step I take is running:
docker build -t foo_php .
As I understand it, this runs the commands in the Dockerfile and creates a new image called foo_php.
Once that is done, I run docker compose up.
Question:
How can I tell Docker that I would like my foo_app image to be built automatically, so that I can skip the step of building the image first? Ideally I would have one command, similar to docker compose up, that I could call each time I want to launch my containers. The first time, it would build the images it needs, including the custom image described in my Dockerfile; subsequent calls would just run those images. Does a method to achieve this exist?
You can ask docker compose to build the image every time:
docker compose up --build
But you also need to tell Docker Compose what to build:
foo_app:
  image: foo_php
  build:
    context: .
where context points to the folder containing your Dockerfile.
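Putting the two together, the service definition would look like this (a sketch based on your existing compose file; everything else stays the same):

foo_app:
  image: foo_php
  build:
    context: .
  # ...rest of the service definition unchanged

With that in place, a plain docker compose up builds the foo_php image on the first run (when it does not exist yet) and reuses it on later runs; docker compose up --build forces a rebuild.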
Related
I have a node project which uses Redis for queue purposes.
I wired Redis up in the compose file and it works fine. But when I build a Docker image from the Dockerfile and run it with docker run, it can't find/connect to Redis.
My question is: if Docker doesn't include the images from the compose file when building an image from the Dockerfile, how can the built image run?
Compose & Dockerfile are given below.
version: '3'
services:
  oaq-web:
    image: node:16.10-alpine3.13
    container_name: oaq-web
    volumes:
      - ./:/usr/src/oaq
    networks:
      - oaq-network
    working_dir: /usr/src/oaq
    ports:
      - "5000:5000"
    command: npm run dev
  redis:
    image: redis:6.2
    ports:
      - "6379:6379"
    networks:
      - oaq-network
networks:
  oaq-network:
    driver: bridge
FROM node:16.10-alpine3.13
RUN mkdir -p app
COPY . /app
WORKDIR /app
RUN npm install
RUN npm run build
CMD ["npm", "start"]
I have two docker run commands - the second container needs to be run in a folder created by the first, as below:
docker run -v $(pwd):/projects \
-w /projects \
gcr.io/base-project/mainmyoh:v1 init myprojectname
cd myprojectname
The above myprojectname folder was created by the first container. I need to run the second container in this folder as below.
docker run -v $(pwd):/project \
-w /project \
-p 3000:3000 \
gcr.io/base-project/myoh:v1
Here is the docker-compose file I have so far:
version: '3.3'
services:
  firstim:
    volumes:
      - '$(pwd):/projects'
    restart: always
    working_dir: /project
    image: gcr.io/base-project/mainmyoh:v1
    command: 'init myprojectname'
  secondim:
    image: gcr.io/base-project/myoh:v1
    working_dir: /project
    volumes:
      - '$(pwd):/projects'
    ports:
      - 3000:3000
What needs to change to achieve this?
You can make the two services use a shared named volume:
version: '3.3'
services:
  firstim:
    volumes:
      - '.:/projects'
      - 'my-project-volume:/projects/myprojectname'
    restart: always
    working_dir: /project
    image: gcr.io/base-project/mainmyoh:v1
    command: 'init myprojectname'
  secondim:
    image: gcr.io/base-project/myoh:v1
    working_dir: /project
    volumes:
      - 'my-project-volume:/projects'
    ports:
      - 3000:3000
volumes:
  my-project-volume:
Also, just an observation: in your example the working_dir: references /project while the volumes point to /projects. I assume this is a typo and this might be something you want to fix.
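One thing the shared volume does not give you by itself is ordering: secondim should not start before firstim has populated the volume. A depends_on entry (a sketch, not in the original compose file) at least makes Compose start firstim first, though it does not wait for the init command to finish:

secondim:
  depends_on:
    - firstim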
You can build a custom image that does the required setup for you. When secondim runs, you want the current working directory to be /project, you want the current directory's code to be embedded there, and you want the init command to have run. That's easy to express in Dockerfile syntax:
FROM gcr.io/base-project/mainmyoh:v1
WORKDIR /project
COPY . .
RUN init myprojectname
CMD whatever should be run to start the real project
Then you can tell Compose to build it for you:
version: '3.5'
services:
  # no build-only first image
  secondim:
    build: .
    image: gcr.io/base-project/myoh:v1
    ports:
      - '3000:3000'
In another question you ask about running a similar setup in Kubernetes. This Dockerfile-based setup can translate directly into a Kubernetes Deployment/Service, without worrying about questions like "what kind of volume do I need to use" or "how do I copy the code into the cluster separately from the image".
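As a sketch of that translation (the names and image tag here are assumed from the compose file above, not taken from a real cluster):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: secondim
spec:
  replicas: 1
  selector:
    matchLabels:
      app: secondim
  template:
    metadata:
      labels:
        app: secondim
    spec:
      containers:
        - name: secondim
          image: gcr.io/base-project/myoh:v1
          ports:
            - containerPort: 3000

A Service selecting app: secondim would then expose port 3000 inside the cluster.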
Preliminary Info
I have a docker-compose file that describes two services, one built from a dockerhub mysql image and the other built from a dockerhub node alpine image. The docker-compose.yml is as follows:
version: "3.8"
services:
client:
image: node:alpine
command: sh -c "cd server && yarn install && cd .. && yarn install && yarn start"
ports:
- "3000:3000"
working_dir: /portal
volumes:
- ./:/portal
environment:
MYSQL_HOST: mysql
MYSQL_USER: root
MYSQL_PASSWORD: password
MYSQL_DB: files
mysql:
image: mysql:5.7
volumes:
- yaml-mysql-data:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: password
MYSQL_DATABASE: files
volumes:
yaml-mysql-data:
Current Understanding
I'm trying to deploy my app using Kubernetes, but a Kubernetes .yml file requires that I provide a path to my container images on Docker Hub. However, I don't have them on Docker Hub, and I'm not sure how to push my images, since they are built from the mysql and node images that I pull.
I know that docker-compose push can be used, but it's for locally built images, whereas I'm pulling images from Docker Hub and providing specific instructions in my docker-compose.yml when spinning them up.
Question
How can I push these images including the commands that should be run; e.g. command: sh -c "cd server && yarn install && cd .. && yarn install && yarn start"? (which is on line 5 of docker-compose.yml above)
Thanks
The logic that you put in the docker-compose.yml actually belongs in a Dockerfile. So create a Dockerfile for your Node.js application (there are plenty of examples of this).
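A minimal sketch of such a Dockerfile, assuming the layout implied by your compose file (a server/ subfolder with its own package.json, yarn as the package manager, and yarn start as the entry point; adjust the file names to match your project):

FROM node:16.10-alpine3.13
WORKDIR /portal
# Install the server subproject's dependencies first for better layer caching
COPY server/package.json server/yarn.lock ./server/
RUN cd server && yarn install
# Then the root project's dependencies
COPY package.json yarn.lock ./
RUN yarn install
# Finally copy the rest of the source
COPY . .
CMD ["yarn", "start"]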
Then in your docker-compose.yml you build your own image that you can then push to a registry.
version: "3.8"
services:
client:
image: your_registry/your_name:some_tag
build: .
ports: ...
environment: ....
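Then building and pushing is two commands (assuming you are already logged in to your registry with docker login):

docker compose build
docker compose push

and the resulting your_registry/your_name:some_tag reference is what you point your Kubernetes manifest at.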
I am able to run my application with the following command:
docker run --rm -p 4000:4000 myapp:latest python3.8 -m pipenv run flask run -h 0.0.0.0
I am trying to write a docker-compose file so that I can bring up the app using
docker-compose up. This is not working. How do I "add" the docker run parameters to the docker-compose file?
version: '3'
services:
  web:
    build: .
    ports:
      - "4000:4000"
    volumes:
      - .:/code
You need to use command to specify this.
version: '3'
services:
  web:
    build: .
    ports:
      - '4000:4000'
    image: myapp:latest
    command: 'python3.8 -m pipenv run flask run -h 0.0.0.0'
    volumes:
      - .:/code
You should use CMD in your Dockerfile to specify this. Since you'll want to specify this every time you run a container based on the image, there's no reason to want to specify it manually when you run the image.
CMD python3.8 -m pipenv run flask run -h 0.0.0.0
Within the context of a Docker container, it's typical to install packages into the "system" Python: it's already isolated from the host Python by virtue of being in a Docker container, and the setup to use a virtual environment is a little bit tricky. That gets rid of the need to run pipenv run.
FROM python:3.8
WORKDIR /code
# pipenv is not included in the base image
RUN pip install pipenv
COPY Pipfile Pipfile.lock ./
RUN pipenv install --deploy --system
COPY . .
CMD flask run -h 0.0.0.0
Since the /code directory is already in your image, you can actually make your docker-compose.yml shorter by removing the unnecessary bind mount:
version: '3'
services:
  web:
    build: .
    ports:
      - "4000:4000"
    # no volumes:
I am trying to get webpack set up on my docker container. It is working and running, but when I save a file on my local computer, the change does not show up in my container. I have the following docker-compose file:
version: '2'
services:
  web:
    build:
      context: .
      dockerfile: docker/web/Dockerfile
    container_name: arc-bis-www-web
    restart: on-failure:3
    environment:
      FPM_HOST: 'php'
    ports:
      - 8080:8080
    volumes:
      - ./app:/usr/local/src/app
  php:
    build:
      context: .
      dockerfile: docker/php/Dockerfile
    environment:
      CRM_HOST: '192.168.1.79'
      CRM_NAME: 'ARC_test_8_8_17'
      CRM_PORT: '1433'
      CRM_USER: 'sa'
      CRM_PASSWORD: 'Multi*Gr4in'
    volumes:
      - ./app:/usr/local/src/app
  node:
    build:
      context: .
      dockerfile: docker/node/Dockerfile
    container_name: arc-bis-www-node
    volumes:
      - ./app:/usr/local/src/app
and my node container is built from the following Dockerfile:
FROM node:8
RUN useradd --create-home user
RUN mkdir /usr/local/src/app
RUN mkdir /usr/local/src/app/src
RUN mkdir /usr/local/src/app/test
WORKDIR /usr/local/src/app
# Copy application source files
COPY ./app/package.json /usr/local/src/app/package.json
COPY ./app/.babelrc /usr/local/src/app/.babelrc
COPY ./app/webpack.config.js /usr/local/src/app/webpack.config.js
COPY ./app/test /usr/local/src/app/test
RUN chown -R user:user /usr/local/src/app
USER user
RUN npm install
ENTRYPOINT ["npm"]
Now I have taken the COPY calls out of the above and it still runs fine, but neither option lets me save files locally and have them show up on localhost in my container. Ideally, I thought having a volume would let me update my local files and have the container read the changes through that volume. Does that make sense? I am still feeling my way around Docker. Thanks in advance for any help.
If you start your container with the -v flag, you can map a directory on your local machine into the container. You can find more information in the Docker documentation on bind mounts.
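For example, a sketch of running the node container by hand (the arc-bis-www-node image tag and the dev npm script are both hypothetical here; adjust them to your actual image name and package.json):

# Bind-mount the local ./app folder over the container's /usr/local/src/app;
# since the image's ENTRYPOINT is npm, the arguments become "npm run dev"
docker run -v "$(pwd)/app:/usr/local/src/app" arc-bis-www-node run dev

Note that your compose file already declares the same bind mount (./app:/usr/local/src/app), so with docker compose up edits should already propagate into the container; if they do not, the usual culprit is webpack not watching the mounted files, which on some setups needs polling (webpack's watchOptions.poll).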