Preliminary Info
I have a docker-compose file that describes two services, one using the Docker Hub mysql image and the other the Docker Hub node:alpine image. The docker-compose.yml is as follows:
version: "3.8"
services:
client:
image: node:alpine
command: sh -c "cd server && yarn install && cd .. && yarn install && yarn start"
ports:
- "3000:3000"
working_dir: /portal
volumes:
- ./:/portal
environment:
MYSQL_HOST: mysql
MYSQL_USER: root
MYSQL_PASSWORD: password
MYSQL_DB: files
mysql:
image: mysql:5.7
volumes:
- yaml-mysql-data:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: password
MYSQL_DATABASE: files
volumes:
yaml-mysql-data:
Current Understanding
I'm trying to deploy my app using Kubernetes, but a Kubernetes .yml file requires that I provide a path to my container images on Docker Hub. However, I don't have them on Docker Hub. I'm not sure how to push my images, as they are created from the mysql and node images that I pull.
I know that docker-compose push can be used; however, it's for locally built images, whereas I'm pulling images from Docker Hub and providing specific instructions in my docker-compose.yml when spinning them up.
Question
How can I push these images, including the commands that should be run, e.g. command: sh -c "cd server && yarn install && cd .. && yarn install && yarn start"? (This is on line 5 of docker-compose.yml above.)
Thanks
The logic that you put in the docker-compose.yml actually belongs in a Dockerfile. So create a Dockerfile for your Node.js application (there are plenty of examples of this).
Then, in your docker-compose.yml, you build your own image, which you can then push to a registry.
version: "3.8"
services:
client:
image: your_registry/your_name:some_tag
build: .
ports: ...
environment: ....
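For example, a minimal Dockerfile for the client could look like the sketch below. It assumes your repository has a package.json (and yarn.lock) at the root and in server/, as your compose command implies; adjust it to your actual layout.

FROM node:alpine
WORKDIR /portal
# Copy the dependency manifests first so the install layers are cached
COPY package.json yarn.lock ./
COPY server/package.json server/yarn.lock ./server/
RUN cd server && yarn install && cd .. && yarn install
# Copy the rest of the source
COPY . .
EXPOSE 3000
CMD ["yarn", "start"]

With both image and build set, building and publishing the tagged image is then:

docker-compose build
docker-compose push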
Related
I have a setup where I have a Dockerfile and a docker-compose.yml.
Dockerfile:
# syntax=docker/dockerfile:1
FROM php:7.4
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer
RUN docker-php-ext-install mysqli pdo pdo_mysql
RUN apt-get -y update
RUN apt-get -y install git
COPY . .
RUN composer install
docker-compose.yml:
version: '3.8'
services:
  foo_db:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=foo
      - MYSQL_DATABASE=foo
  foo_app:
    image: foo_php
    platform: linux/x86_64
    restart: unless-stopped
    ports:
      - 8000:8000
    links:
      - foo_db
    environment:
      - DB_CONNECTION=mysql
      - DB_HOST=foo_db
      - DB_PORT=3306
      - DB_PASSWORD=foo
    command: sh -c "php artisan serve --host=0.0.0.0 --port=8000"
  foo_phpmyadmin:
    image: phpmyadmin
    links:
      - foo_db
    environment:
      PMA_HOST: foo_db
      PMA_PORT: 3306
      PMA_ARBITRARY: 1
      PMA_USER: root
      PMA_PASSWORD: foo
    restart: always
    ports:
      - 8081:80
To set this up on a new workstation, the first step I take is to run:
docker build -t foo_php .
As I understand it, this runs the commands in the Dockerfile and creates a new image called foo_php.
Once that is done, I run docker compose up.
Question:
How can I tell docker that I would like my foo_app image to be built automatically, so that I can skip the step of building the image first? Ideally there would be one command, similar to docker compose up, that I could run each time I want to launch my containers: the first time it would build the images it needs, including the custom image described in my Dockerfile, and subsequent runs would just start the containers from those images. Does a method to achieve this exist?
You can ask docker compose to build the image every time:
docker compose up --build
But you also need to tell docker compose what to build:
foo_app:
  image: foo_php
  build:
    context: .
where context points to the folder containing your Dockerfile.
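With that in place, a single command builds the image when needed and starts the stack:

# first run, and whenever the Dockerfile or sources change
docker compose up --build

# subsequent runs reuse the image that was already built
docker compose up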
I have a node project which uses Redis for queue purposes.
I added Redis as a service in the compose file and it works fine. But when I try to build the Docker image from the Dockerfile and run the built image with docker run, it can't find/connect to Redis.
My question is: if docker doesn't include the images from the compose file when building the image from the Dockerfile, how can the built image run?
Compose & Dockerfile are given below.
version: '3'
services:
  oaq-web:
    image: node:16.10-alpine3.13
    container_name: oaq-web
    volumes:
      - ./:/usr/src/oaq
    networks:
      - oaq-network
    working_dir: /usr/src/oaq
    ports:
      - "5000:5000"
    command: npm run dev
  redis:
    image: redis:6.2
    ports:
      - "6379:6379"
    networks:
      - oaq-network
networks:
  oaq-network:
    driver: bridge
Dockerfile:
FROM node:16.10-alpine3.13
RUN mkdir -p app
COPY . /app
WORKDIR /app
RUN npm install
RUN npm run build
CMD ["npm", "start"]
I have the following compose file, and I need to run multiple commands on the container. I need the container to do a git pull to grab the container's config. This is a Debian build, so I have tried to install git and then run the git command. When I do this, the container constantly restarts.
---
version: "3"
services:
  kamailio:
    image: kamailio/kamailio:5.2.8-stretch
    restart: unless-stopped
    container_name: kamailio
    #environment:
    command:
      - bash
      - -c
      - >
        apt-get install git -y;
        cd /tmp;
        git clone https://github.com/dOpensource/dsiprouter.git;
    volumes:
      - kamailio_Data:/etc/kamailio
I am setting up docker-compose for an existing Ruby on Rails project. I am using docker-compose version 1.23.1, build b02f1306 and Docker version 18.09.0, build 4d60db4.
When I try to start my containers for development using docker-compose up --build, my web and worker containers exit with code 10. When I /bin/bash into them, the /web_gen folder contains only a tmp/db directory, with the postgres files inside of that.
I can get the containers working by changing the volume to - /web_gen, but then the volume will not hot-reload.
My docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/web_gen
    ports:
      - "3000:3000"
    depends_on:
      - db
      - redis
  db:
    image: 'postgres:9.4.5'
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
  redis:
    image: 'bitnami/redis:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
  worker:
    build: .
    command: bundle exec sidekiq -c 1
    volumes:
      - .:/web_gen
    depends_on:
      - redis
Dockerfile
FROM ruby:2.3.3
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /web_gen
WORKDIR /web_gen
COPY Gemfile /web_gen/Gemfile
COPY Gemfile.lock /web_gen/Gemfile.lock
RUN bundle install
COPY . /web_gen
I am using Docker Compose and I have created a volume. I have multiple containers, and I am facing an issue running commands in a container.
I have a Node.js container which has separate frontend and backend folders. I need to run npm install in both folders.
version: '2'
services:
### Applications Code Container #############################
  applications:
    image: tianon/true
    volumes:
      - ${APPLICATION}:/var/www/html
  node:
    build:
      context: ./node
    volumes_from:
      - applications
    ports:
      - "4000:30001"
    networks:
      - frontend
      - backend
This is my Dockerfile for node:
FROM node:6.10
MAINTAINER JC Gil <sensukho@gmail.com>
ENV TERM=xterm
ADD script.sh /tmp/
RUN chmod 777 /tmp/script.sh
RUN apt-get update && apt-get install -y netcat-openbsd
WORKDIR /var/www/html/Backend
RUN npm install
EXPOSE 4000
CMD ["/bin/bash", "/tmp/script.sh"]
My workdir is empty, as the location /var/www/html/Backend is not available while building, only once the container is up. So my npm install command does not work.
What you probably want to do is to ADD or COPY the package.json file to the correct location, RUN npm install, then ADD or COPY the rest of the source into the image. That way, docker build will re-run npm install only when needed.
It would probably be better to run frontend and backend in separate containers, but if that's not an option, it's completely feasible to run the ADD package.json, RUN npm install, ADD . sequence once for each application, as sketched below.
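A sketch of that pattern for the Backend, assuming the Backend source (including its package.json) is available in the build context; the paths are hypothetical, so adjust them to your layout:

FROM node:6.10
WORKDIR /var/www/html/Backend
# Copy only the manifest first; this layer is redone only when package.json changes
COPY Backend/package.json ./
RUN npm install
# Now copy the rest of the application source
COPY Backend/ ./
EXPOSE 4000
CMD ["npm", "start"]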
RUN is an image build step; at build time the volume isn't attached yet.
I think you have to execute npm install inside CMD.
You can try to add npm install inside /tmp/script.sh, along the lines of the sketch below.
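For example, script.sh could defer the install to container start, when the volume is already mounted. This is only a guess at its contents, since the real script.sh isn't shown:

#!/bin/bash
# /var/www/html is mounted from the applications container by now
cd /var/www/html/Backend
npm install
npm start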
Let me know
As Tomas Lycken mentioned, copy the files and then run npm install. I have separate containers for the frontend and backend. Most important are the node_modules for the frontend and backend: create them as volumes in the services so that they are available when the container is up.
version: '2'
services:
### Applications Code Container #############################
  applications:
    image: tianon/true
    volumes:
      - ${APPLICATION}:/var/www/html
      - ${BACKEND}:/var/www/html/Backend
      - ${FRONTEND}:/var/www/html/Frontend
  apache:
    build:
      context: ./apache2
    volumes_from:
      - applications
    volumes:
      - ${APACHE_HOST_LOG_PATH}:/var/log/apache2
      - ./apache2/sites:/etc/apache2/sites-available
      - /var/www/html/Frontend/node_modules
      - /var/www/html/Frontend/bower_components
      - /var/www/html/Frontend/dist
    ports:
      - "${APACHE_HOST_HTTP_PORT}:80"
      - "${APACHE_HOST_HTTPS_PORT}:443"
    networks:
      - frontend
      - backend
  node:
    build:
      context: ./node
    ports:
      - "4000:4000"
    volumes_from:
      - applications
    volumes:
      - /var/www/html/Backend/node_modules
    networks:
      - frontend
      - backend
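One note on this setup: the bare container paths such as /var/www/html/Backend/node_modules are anonymous volumes. Docker seeds those from whatever the image already contains at that path, so the node image has to bake its modules in at build time. A possible ./node/Dockerfile under that assumption (the package.json location in the build context is hypothetical):

FROM node:6.10
WORKDIR /var/www/html/Backend
# Installing at build time puts node_modules into the image;
# the anonymous volume in the compose file is seeded from this content on first run
COPY package.json ./
RUN npm install
EXPOSE 4000
CMD ["npm", "start"]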