I am using this repo to set up a local WordPress development environment:
https://github.com/mjstealey/wordpress-nginx-docker#tldr
I hijacked the docker-compose file to change the nginx port, and also to try to install openssl and vim in the nginx container. But when I run docker-compose up, the nginx container never starts properly.
This is what I see:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
57cf3dff059f nginx:latest "bash" About a minute ago Restarting (0) 14 seconds ago nginx
I tried to reference a Dockerfile inside the docker-compose like this:
nginx:
  image: nginx:${NGINX_VERSION:-latest}
  container_name: nginx
  ports:
    - '8085:8085'
    - '443:443'
  build: .
  volumes:
    - ${NGINX_CONF_DIR:-./nginx}:/etc/nginx/conf.d
    - ${NGINX_LOG_DIR:-./logs/nginx}:/var/log/nginx
    - ${WORDPRESS_DATA_DIR:-./wordpress}:/var/www/html
    - ${SSL_CERTS_DIR:-./certs}:/etc/letsencrypt
    - ${SSL_CERTS_DATA_DIR:-./certs-data}:/data/letsencrypt
  depends_on:
    - wordpress
  restart: always
Notice the line that says "build: .".
Here's the contents of my Dockerfile:
FROM debian:buster-slim
RUN apt-get update
RUN apt-get -y install openssl
# installing vim isn't necessary, but it's handy.
RUN apt-get -y install vim
Clearly, I'm doing something wrong. Maybe I should be defining these steps directly in the docker-compose file for the nginx service?
I wanted to find a way to keep a clean separation between our customizations and the original code, but maybe that isn't possible.
Thanks
EDIT 1
This is what the Dockerfile looks like now:
FROM nginx:latest
RUN apt-get update \
    && apt-get -y install openssl \
    && apt-get -y install vim
And the nginx section of the docker-compose.yml:
nginx:
  #image: nginx:${NGINX_VERSION:-latest}
  container_name: nginx
  ports:
    - '8085:8085'
    - '443:443'
  build: .
  volumes:
    - ${NGINX_CONF_DIR:-./nginx}:/etc/nginx/conf.d
    - ${NGINX_LOG_DIR:-./logs/nginx}:/var/log/nginx
    - ${WORDPRESS_DATA_DIR:-./wordpress}:/var/www/html
    - ${SSL_CERTS_DIR:-./certs}:/etc/letsencrypt
    - ${SSL_CERTS_DATA_DIR:-./certs-data}:/data/letsencrypt
  depends_on:
    - wordpress
  restart: always
I think you need to change the base image you are using in the Dockerfile to:
FROM nginx:latest
Then, because you are building your own customised version of the nginx image, you should give it a custom name or tag.
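A minimal sketch of how the two files could fit together, assuming a hypothetical image name my-nginx (docker-compose builds the image from the Dockerfile and tags it with that name):
# Dockerfile
FROM nginx:latest
# openssl and vim are extras on top of the stock nginx image
RUN apt-get update \
    && apt-get -y install openssl vim \
    && rm -rf /var/lib/apt/lists/*

# nginx service in docker-compose.yml
nginx:
  build: .
  image: my-nginx:latest   # custom name/tag for the locally built image
  container_name: nginx
  ports:
    - '8085:8085'
    - '443:443'
Running docker-compose up --build (or docker-compose build followed by docker-compose up) rebuilds the image whenever the Dockerfile changes.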
Related
I have a setup where I have a Dockerfile and a docker-compose.yml.
Dockerfile:
# syntax=docker/dockerfile:1
FROM php:7.4
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer
RUN docker-php-ext-install mysqli pdo pdo_mysql
RUN apt-get -y update
RUN apt-get -y install git
COPY . .
RUN composer install
YML file:
version: '3.8'
services:
  foo_db:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=foo
      - MYSQL_DATABASE=foo
  foo_app:
    image: foo_php
    platform: linux/x86_64
    restart: unless-stopped
    ports:
      - 8000:8000
    links:
      - foo_db
    environment:
      - DB_CONNECTION=mysql
      - DB_HOST=foo_db
      - DB_PORT=3306
      - DB_PASSWORD=foo
    command: sh -c "php artisan serve --host=0.0.0.0 --port=8000"
  foo_phpmyadmin:
    image: phpmyadmin
    links:
      - foo_db
    environment:
      PMA_HOST: foo_db
      PMA_PORT: 3306
      PMA_ARBITRARY: 1
      PMA_USER: root
      PMA_PASSWORD: foo
    restart: always
    ports:
      - 8081:80
In order to set this up on a new workstation, the first step I take is to run:
docker build -t foo_php .
As I understand it this runs the commands in the Dockerfile and creates a new image called foo_php.
Once that is done I am running docker compose up.
Question:
How can I tell Docker to build my foo_app image automatically, so that I can skip the separate build step? Ideally I would have one command, similar to docker compose up, that I could call each time I want to launch my containers. The first time it would build the images it needs, including the custom image described in my Dockerfile, and on subsequent calls it would just run those images. Is there a way to achieve this?
You can ask docker compose to build the image every time:
docker compose up --build
But you also need to tell docker compose what to build:
foo_app:
  image: foo_php
  build:
    context: .
where context points to the folder containing your Dockerfile.
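With the build section in place, a plain docker compose up builds foo_php automatically on the first run (when the image does not exist yet), and --build forces a rebuild later; for example:
# first run: builds the foo_php image from the Dockerfile, then starts the stack
docker compose up
# after changing the Dockerfile or its inputs: rebuild and restart
docker compose up --build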
I am doing that in my docker-compose.yml:
app:
  image: golang:1.14.3
  ports:
    - "8080:8080" ## Share API port with host machine.
  depends_on:
    - broker
    - ffmpeg
  volumes:
    - .:/go/src/go-intelligent-monitoring-system
    - /home/:/home/
  working_dir: /go/src/go-intelligent-monitoring-system
  command: apt-get install ffmpeg ########-------<<<<<<---------#################
  command: go run main.go
But when I run my code, I get this error:
exec: "ffmpeg": executable file not found in $PATH
Only the last command in a compose file takes effect, so with your current compose file ffmpeg is never installed.
Instead, you should install ffmpeg in a customised Dockerfile and point the service at it:
app:
  build: ./dir
Put your customised Dockerfile in that dir, with contents like this:
Dockerfile:
FROM golang:1.14.3
RUN apt-get update && apt-get install ffmpeg -y
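Putting it together, the app service could then look something like this, keeping only the go run command since the Dockerfile now handles the ffmpeg install (paths and dependencies are the ones from the original compose file):
app:
  build: ./dir   # folder containing the Dockerfile above
  ports:
    - "8080:8080"
  depends_on:
    - broker
    - ffmpeg
  volumes:
    - .:/go/src/go-intelligent-monitoring-system
    - /home/:/home/
  working_dir: /go/src/go-intelligent-monitoring-system
  command: go run main.go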
I am setting up docker-compose for an existing Ruby on Rails project. I am using docker-compose version 1.23.1, build b02f1306 and Docker version 18.09.0, build 4d60db4
When I try to start my containers for development using docker-compose up --build, my web and worker containers exit with code 10. When I /bin/bash into them, the /web_gen folder only contains /tmp/db, with the postgres files inside that.
I can get the containers working by changing the volume to - /web_gen, but then the volume will not hot reload.
My docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/web_gen
    ports:
      - "3000:3000"
    depends_on:
      - db
      - redis
  db:
    image: 'postgres:9.4.5'
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
  redis:
    image: 'bitnami/redis:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
  worker:
    build: .
    command: bundle exec sidekiq -c 1
    volumes:
      - .:/web_gen
    depends_on:
      - redis
Dockerfile
FROM ruby:2.3.3
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /web_gen
WORKDIR /web_gen
COPY Gemfile /web_gen/Gemfile
COPY Gemfile.lock /web_gen/Gemfile.lock
RUN bundle install
COPY . /web_gen
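A quick way to inspect what the containers actually see in /web_gen, and why they exit, is to run the services ad hoc and check the logs, e.g.:
# show what the bind mount exposes inside the container
docker-compose run --rm web ls -la /web_gen
# show why the web container exited
docker-compose logs web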
Hi there, I am new to Docker. I have a docker-compose.yml which looks like this:
version: "3"
services:
lmm-website:
image: lmm/lamp:php${PHP_VERSION:-71}
container_name: ${CONTAINER_NAME:-lmm-website}
environment:
HOME: /home/user
command: supervisord -n
volumes:
- ..:/builds/lmm/website
- db_website:/var/lib/mysql
ports:
- 8765:80
- 12121:443
- 3309:3306
networks:
- ntw
volumes:
db_website:
networks:
ntw:
I want to install the Yarn package manager from within the docker-compose file:
sudo apt-get update && sudo apt-get install yarn
I could not figure out how to declare this. I have tried
command: supervisord -n && sudo apt-get update && sudo apt-get install yarn
which fails silently. How do I declare this correctly? Or is docker-compose.yml the wrong place for this?
Why not use a Dockerfile, which is specifically designed for this task?
Change your image property to a build property that links to a Dockerfile.
Your docker-compose.yml would look like this:
services:
  lmm-website:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: ${CONTAINER_NAME:-lmm-website}
    environment:
      HOME: /home/user
    command: supervisord -n
    volumes:
      - ..:/builds/lmm/website
      - db_website:/var/lib/mysql
    ports:
      - 8765:80
      - 12121:443
      - 3309:3306
    networks:
      - ntw
volumes:
  db_website:
networks:
  ntw:
Then create a text file named Dockerfile in the same path as docker-compose.yml with the following content:
FROM lmm/lamp:php${PHP_VERSION:-71}
RUN apt-get update && apt-get install -y bash
You can add as many OS commands as you want using Dockerfile's RUN (cp, mv, ls, bash...), on top of other Dockerfile capabilities like ADD, COPY, etc.
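For the yarn case specifically, a rough sketch of what the RUN steps could look like, assuming the lmm/lamp image is Debian-based (these mirror the classic Yarn apt instructions; adjust them to your actual base image):
FROM lmm/lamp:php71
# add the Yarn apt repository and install yarn
# (yarn needs Node.js; install it too if the base image does not already provide it)
RUN apt-get update && apt-get install -y curl gnupg \
    && curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
    && echo "deb https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list \
    && apt-get update && apt-get install -y yarn \
    && rm -rf /var/lib/apt/lists/*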
+info:
https://docs.docker.com/engine/reference/builder/
+live-example:
I made a GitHub project called hello-docker-react. As its name says, it is a docker-react box, and it can serve as an example: I install yarn plus other tools using the procedure I explained above.
In addition, I also start yarn using an entrypoint bash script, linked from the docker-compose.yml file via the entrypoint property.
https://github.com/lopezator/hello-docker-react
You can only do it with a Dockerfile, because the command option in docker-compose.yml only keeps the container alive for as long as the command runs, and then the container stops.
Try this:
command: supervisord -n && apt-get update && apt-get install yarn
because sudo doesn't work in Docker.
This is my first time trying to help out. I would like you to give this a try (I found it on the internet):
FROM lmm/lamp:php${PHP_VERSION:-71}
USER root
RUN apt-get update && apt-get install -y bash
In my Docker container, I'm trying to install several packages with pip along with installing Bower via npm. It seems, however, that whichever of pip or npm runs first, the other's contents in /usr/local/bin are overwritten (specifically, gunicorn is missing with the below Dockerfile, or Bower is missing if I swap the order of my FROM..RUN blocks).
Is this the expected behavior of Docker, and if so, how can I go about installing both my pip packages and Bower into the same directory, /usr/local/bin?
Here's my Dockerfile:
FROM python:3.4.3
RUN mkdir /code
WORKDIR /code
ADD ./requirements/ /code/requirements/
RUN pip install -r /code/requirements/docker.txt
ADD ./ /code/
FROM node:0.12.7
RUN npm install bower
Here's my docker-compose.yml file:
web:
  restart: always
  build: .
  expose:
    - "8000"
  links:
    - postgres:postgres
    #-redis:redis
  volumes:
    - .:/code
  env_file: .env
  command: /usr/local/bin/gunicorn myapp.wsgi:application -w 2 -b :8000 --reload
webstatic:
  restart: always
  build: .
  volumes:
    - /usr/src/app/static
  env_file: .env
  command: bash -c "/code/manage.py bower install && /code/manage.py collectstatic --noinput"
nginx:
  restart: always
  #build: ./config/nginx
  image: nginx
  ports:
    - "80:80"
  volumes:
    - /www/static
    - config/nginx/conf.d:/etc/nginx/conf.d
  volumes_from:
    - webstatic
  links:
    - web:web
postgres:
  restart: always
  image: postgres:latest
  volumes:
    - /var/lib/postgresql
  ports:
    - "5432:5432"
Update: I went ahead and cross-posted this as a docker-compose issue since it's unclear if there is an actual bug or if my configuration is a problem. I'll keep both posts updated, but do post in either if you have an idea of what is going on. Thanks!
A Dockerfile cannot combine two base images: with multiple FROM instructions, only the last one determines the final image. So if you need Node and Python in the same image, you could either add Node to the Python image or add Python to the Node image.
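A rough sketch of that approach, starting from the existing python image and adding Node and npm on top of it (package names and availability depend on the Debian release behind python:3.4.3, so treat this as a starting point rather than a drop-in Dockerfile):
FROM python:3.4.3
# node and npm from the distro repos, so pip and npm end up in the same image
RUN apt-get update && apt-get install -y nodejs npm \
    && rm -rf /var/lib/apt/lists/*
RUN mkdir /code
WORKDIR /code
ADD ./requirements/ /code/requirements/
RUN pip install -r /code/requirements/docker.txt
# install bower globally so its binary ends up on PATH
RUN npm install -g bower
ADD ./ /code/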