I'm working on building containers running a monitoring application (Centreon).
When I build my container manually (with docker run) and when I build it from my Dockerfile, I get different results: some scripts used by the application are missing.
Here's my Dockerfile:
FROM centos:centos7
LABEL Author="AurelienH."
LABEL Description="DOCKERFILE : Creates a Docker Container for a Centreon poller"
#Update and install requirements
RUN yum update -y
RUN yum install -y wget nano httpd git
#Install Centreon repo
RUN yum install -y --nogpgcheck http://yum.centreon.com/standard/3.4/el7/stable/noarch/RPMS/centreon-release-3.4-4.el7.centos.noarch.rpm
#Install Centreon
RUN yum install -y centreon-base-config-centreon-engine centreon centreon-pp-manager centreon-clapi
RUN yum install -y centreon-widget*
RUN yum clean all
#PHP Time Zone
RUN echo -n "date.timezone = Europe/Paris" > /etc/php.d/php-timezone.ini
#Supervisor
RUN yum install -y python-setuptools
RUN easy_install supervisor
COPY /cfg/supervisord.conf /etc/
RUN yum clean all
EXPOSE 22 80 5667 5669
CMD ["/usr/bin/supervisord", "--configuration=/etc/supervisord.conf"]
The difference I see is in the /usr/lib/nagios/plugins folder: some scripts are missing there. When I execute the exact same commands inside a running container, I can find the files.
Maybe it has something to do with write permissions for the user that executes the commands with docker-compose?
EDIT:
docker-compose file:
version: "3"
services:
centreon:
build: ./centreon
depends_on:
- mariadb
container_name: sinelis-central
volumes:
- ./central-broker-config:/etc/centreon-broker
- ./central-centreon-plugins:/usr/lib/centreon/plugins
- ./central-engine-config:/etc/centreon-engine
- ./central-logs-broker:/var/log/centreon-broker
- ./central-logs-engine:/var/log/centreon-engine
- ./central-nagios-plugins:/usr/lib/nagios/plugins
- ./central-ssh-key:/home/centreon/.ssh/
ports:
- "80:80"
- "5667:5667"
- "5669:5669"
- "22:22"
deploy:
restart_policy:
window: 300s
links:
- mariadb
mariadb:
image: mariadb
container_name: sinelis-mariadb
environment:
MYSQL_ROOT_PASSWORD: passwd2017
deploy:
restart_policy:
window: 300s
To run the container I use the docker run -it centos:centos7 command
It doesn't matter what you put in your image at that location, you will always see the contents of your volume mount:
- ./central-nagios-plugins:/usr/lib/nagios/plugins
Docker does not initialize host volumes to the contents of the image, and once there is data in the volume, Docker does not initialize it again, with any volume type.
Keep in mind that the build happens on an image without any of the other settings in the compose file applied: no volumes are mounted during the build for you to update. Then when you run your container, the directories of the image are overlaid with the volumes you select. Build time and run time are two separate phases.
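One quick way to see the two phases (a rough sketch; myproject_centreon is a hypothetical image name, the real one depends on your compose project directory):

# what was baked into the image at build time
docker-compose build centreon
docker run --rm myproject_centreon ls /usr/lib/nagios/plugins

# what the running service sees: the host directory mounted over that path
docker-compose run --rm centreon ls /usr/lib/nagios/plugins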
Edit: to have a named volume point to a host directory, you can define a bind-mounted named volume. This will not create the directory if it does not already exist (the attempt to mount the volume will fail and the container will not start). But if the directory exists and is empty, it will be initialized to the contents of your image:
version: "3"
volumes:
central-nagios-plugins:
driver: local
driver_opts:
type: none
o: bind
device: /usr/lib/nagios/plugins
services:
centreon:
....
volumes:
...
- central-nagios-plugins:/usr/lib/nagios/plugins
...
It will be up to you to empty the contents of this volume when you want it to be reinitialized with the contents of your image, and merging multiple versions of this directory would also be a process you'd need to create yourself.
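For example, a rough sketch of re-initializing it with the named bind-mount volume above (paths taken from that example; adjust to your setup):

# stop the stack so nothing writes to the directory
docker-compose down
# empty the host directory that backs the volume
sudo rm -rf /usr/lib/nagios/plugins/*
# recreate the containers; an empty volume should be seeded from the image again
docker-compose up -d --force-recreate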
I have just started using Docker as it has been recommended to me as something that makes development easy, but so far it has been nothing but pain. I have installed docker engine (v20.10.12) and docker compose (v2.2.3) as per the documentation given by Docker for Ubuntu. Both work as intended.
Whenever I spin up a new container with docker compose, no matter the source, I have write-permission issues with files generated by the container (for example a Laravel application where I have used php artisan to create a controller file). I have so far pinpointed the issue to be as follows:
By default docker runs as root within the container. It "bridges" the root user to the root user on the local machine and uses root:root to create files on the Ubuntu filesystem (my workspace is placed in ~/workspace/laravel). Then when opening the files in an IDE (vscode in this instance) I get the error:
Failed to save to '<file_name>': insufficient permissions. Select 'Retry as Sudo' to retry as superuser
If I try to pass my own local user into the container and tell it to use that specific user ID and group ID, it's all good when I'm using the first user created on the machine (1000:1000), since that matches the container's default user if we look at the bitnami/laravel Docker image, for example.
All of this can be fixed by running chown -R yadayada . on the workspace directory every time I use php artisan to create a file, but I do not think that is sustainable or smart in any way, shape, or form.
How can I tell my Docker container, on startup, to check whether a user with my UID and GID exists and, if not, to create a user with that ID and assign it as a system user?
My docker-compose.yml for this example
version: '3.8'
services:
  api_php-database:
    image: postgres
    container_name: api_php-database
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: laravel_docker
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    ports:
      - '5432:5432'
  api_php-apache:
    container_name: api_php-apache
    build:
      context: ./php
    ports:
      - '8080:80'
    volumes:
      - ./src:/var/www/laravel_docker
      - ./apache/default.conf:/etc/apache2/sites-enabled/000-default.conf
    depends_on:
      - api_php-database
My Dockerfile for this example
FROM php:8.0-apache
RUN apt update && apt install -y g++ libicu-dev libpq-dev libzip-dev zip zlib1g-dev && docker-php-ext-install intl opcache pdo pdo_pgsql pgsql
WORKDIR /var/www/laravel_docker
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
In general, this is not possible, but there are workarounds (I do not recommend them for production).
The superuser UID is always 0, this is written in the kernel code.
It is not possible to automatically change the ownership of non-root files.
In this case, when developing, you can use these methods:
If superuser rights are not required:
You can create users dynamically; docker-compose.yml would then look like this:
version: "3.0"
services:
something:
image: example-image
volumes:
- /user/path1:/container/path1
- /user/path2:/container/path2
# The double $ is needed to indicate that the variable is in the container
command: ["bash", "-c", "chown -R $$HOST_UID:$$HOST_GID /container/path1 /container/path2; useradd -g $$HOST_GID -u $$HOST_UID user; su -s /bin/bash user"]
environment:
HOST_GID: 100
HOST_UID: 1000
Otherwise, if you run commands in a container as root in Bash:
Bash will run the script from the PROMPT_COMMAND variable after each command is executed
This can be used in development by changing docker-compose.yaml:
version: "3.0"
services:
something:
image: example-image
volumes:
- /user/path1:/container/path1
- /user/path2:/container/path2
command: ["bash"]
environment:
HOST_UID: 1000
HOST_GID: 100
# The double $ is needed to indicate that the variable is in the container
PROMPT_COMMAND: "chown $$HOST_UID:$$HOST_GID -R /container/path1 /container/path2"
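An alternative sketch of the same idea uses an entrypoint script instead of command: (hypothetical file name; it assumes a Debian-based image with useradd available and gosu installed, and that the script is copied into the image and declared as the entrypoint):

#!/bin/sh
# entrypoint.sh (hypothetical): create a user matching HOST_UID/HOST_GID if it
# does not exist yet, fix ownership of the mounted paths, then drop privileges.
set -e
if ! getent passwd "$HOST_UID" > /dev/null; then
    groupadd -g "$HOST_GID" hostgroup 2>/dev/null || true
    useradd -m -u "$HOST_UID" -g "$HOST_GID" hostuser
fi
chown -R "$HOST_UID:$HOST_GID" /container/path1 /container/path2
# gosu is an assumption here; su-exec or plain su would also work
exec gosu "$HOST_UID:$HOST_GID" "$@"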
When I tried docker-compose build and docker-compose up -d,
the api-server container didn't start.
I tried
docker logs api-server
yarn run v1.22.5
$ nest start --watch
/bin/sh: nest: not found
error Command failed with exit code 127.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
It seems the nest packages weren't installed, because package.json was not copied into the container from the host.
But in my opinion, since the volume is bound by docker-compose.yml (- ./api:/src), the yarn install command should be able to see those files.
Why do we need to COPY files into the container?
Why doesn't the volume binding alone work?
If someone has opinion,please let me know.
Thanks
The following is the Dockerfile:
FROM node:alpine
WORKDIR /src
RUN rm -rf /src/node_modules
RUN rm -rf /src/package-lock.json
RUN apk --no-cache add curl
RUN yarn install
CMD yarn start:dev
The following is the docker-compose.yml:
version: '3'
services:
  api-server:
    build: ./api
    links:
      - 'db'
    ports:
      - '3000:3000'
    volumes:
      - ./api:/src
      - ./src/node_modules
    tty: true
    container_name: api-server
Volumes are mounted at runtime, not at build time; therefore, in your case, you should copy package.json before installing dependencies and before running any command that needs those dependencies.
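A minimal sketch of that order for this Dockerfile (assuming package.json and yarn.lock sit in ./api next to it):

FROM node:alpine
WORKDIR /src
# copy only the manifests first so the dependency layer is cached
COPY package.json yarn.lock ./
RUN yarn install
# the rest of the source is copied afterwards (or overlaid by the bind mount at runtime)
COPY . .
CMD ["yarn", "start:dev"]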
Some references:
Docker build using volumes at build time
Can You Mount a Volume While Building Your Docker Image to Cache Dependencies?
Hi there, I am new to Docker. I have a docker-compose.yml which looks like this:
version: "3"
services:
lmm-website:
image: lmm/lamp:php${PHP_VERSION:-71}
container_name: ${CONTAINER_NAME:-lmm-website}
environment:
HOME: /home/user
command: supervisord -n
volumes:
- ..:/builds/lmm/website
- db_website:/var/lib/mysql
ports:
- 8765:80
- 12121:443
- 3309:3306
networks:
- ntw
volumes:
db_website:
networks:
ntw:
I want to install the Yarn package manager from within the docker-compose file:
sudo apt-get update && sudo apt-get install yarn
I could not figure out how to declare this. I have tried
command: supervisord -n && sudo apt-get update && sudo apt-get install yarn
which fails silently. How do I declare this correctly? Or is docker-compose.yml the wrong place for this?
Why not use a Dockerfile, which is specifically designed for this task?
Change your "image" property to a "build" property to link a Dockerfile.
Your docker-compose.yml would look like this:
services:
  lmm-website:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: ${CONTAINER_NAME:-lmm-website}
    environment:
      HOME: /home/user
    command: supervisord -n
    volumes:
      - ..:/builds/lmm/website
      - db_website:/var/lib/mysql
    ports:
      - 8765:80
      - 12121:443
      - 3309:3306
    networks:
      - ntw
volumes:
  db_website:
networks:
  ntw:
Then create a text file named Dockerfile in the same path as docker-compose.yml with the following content:
FROM lmm/lamp:php${PHP_VERSION:-71}
RUN apt-get update && apt-get install -y bash
You can add as many OS commands as you want using Dockerfile's RUN (cp, mv, ls, bash...), besides other Dockerfile capabilities like ADD, COPY, etc.
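Note that the Dockerfile does not read PHP_VERSION from your shell or .env file by itself; if you want the same variable to control the base image, one sketch is to declare it as a build argument and pass it from docker-compose.yml:

    build:
      context: .
      dockerfile: Dockerfile
      args:
        PHP_VERSION: ${PHP_VERSION:-71}

and in the Dockerfile:

ARG PHP_VERSION=71
FROM lmm/lamp:php${PHP_VERSION}
RUN apt-get update && apt-get install -y bash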
+info:
https://docs.docker.com/engine/reference/builder/
+live-example:
I made a GitHub project called hello-docker-react. As its name says, it is a docker-react box, and it can serve as an example, since I install yarn plus other tools using the procedure I explained above.
In addition to that, I also start yarn using an entrypoint bash script linked to the docker-compose.yml file via the docker-compose entrypoint property.
https://github.com/lopezator/hello-docker-react
You can only do it with a Dockerfile, because the command option in docker-compose.yml only keeps the container alive while the command is executed; after that, the container stops.
Try this
command: supervisord -n && apt-get update && apt-get install yarn
Because sudo doesn't work in docker.
My first time trying to help out: I would like you to give this a try (I found it on the internet):
FROM lmm/lamp:php${PHP_VERSION:-71}
USER root
RUN apt-get update && apt-get install -y bash
I am using Docker Compose and I have created a volume. I have multiple containers. I am facing an issue running commands in the Docker container.
I have a Node.js container which has separate frontend and backend folders. I need to run npm install in both folders.
version: '2'
services:
  ### Applications Code Container #############################
  applications:
    image: tianon/true
    volumes:
      - ${APPLICATION}:/var/www/html
  node:
    build:
      context: ./node
    volumes_from:
      - applications
    ports:
      - "4000:30001"
    networks:
      - frontend
      - backend
This is my Dockerfile for node:
FROM node:6.10
MAINTAINER JC Gil <sensukho#gmail.com>
ENV TERM=xterm
ADD script.sh /tmp/
RUN chmod 777 /tmp/script.sh
RUN apt-get update && apt-get install -y netcat-openbsd
WORKDIR /var/www/html/Backend
RUN npm install
EXPOSE 4000
CMD ["/bin/bash", "/tmp/script.sh"]
My workdir is empty, because the location /var/www/html/Backend is not available while building, but it is available once the container is up. So my npm install command does not work.
What you probably want to do is to ADD or COPY the package.json file to the correct location, RUN npm install, then ADD or COPY the rest of the source into the image. That way, docker build will re-run npm install only when needed.
It would probably be better to run the frontend and backend in separate containers, but if that's not an option, it's completely feasible to run the ADD package.json / RUN npm install / ADD . sequence once for each application.
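A rough sketch of that pattern for the backend (hypothetical paths; it assumes the build context is changed to a directory that contains the Backend folder):

FROM node:6.10
WORKDIR /var/www/html/Backend
# copy only the manifest first so `npm install` is cached until it changes
COPY Backend/package.json ./
RUN npm install
# then copy the rest of the backend sources into the image
COPY Backend/ ./
# ...keep the rest of the original Dockerfile (script.sh, EXPOSE, CMD) as before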
RUN is an image build step; at build time the volume isn't attached yet.
I think you have to execute npm install inside CMD.
You can try to add npm install inside /tmp/script.sh
Let me know
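For the run-time route, a minimal sketch of what /tmp/script.sh could contain (hypothetical; the real script is not shown in the question):

#!/bin/bash
# any existing wait-for-database logic would stay here...
cd /var/www/html/Backend
# the bind mount is attached by now, so package.json is visible
npm install
npm start   # or whatever command actually launches the backend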
As Tomas Lycken mentioned, copy the files and then run npm install. I have separate containers for the frontend and backend. Most important are the node modules for the frontend and backend: they need to be declared as volumes in the services so that they are available when we bring the container up.
version: '2'
services:
  ### Applications Code Container #############################
  applications:
    image: tianon/true
    volumes:
      - ${APPLICATION}:/var/www/html
      - ${BACKEND}:/var/www/html/Backend
      - ${FRONTEND}:/var/www/html/Frontend
  apache:
    build:
      context: ./apache2
    volumes_from:
      - applications
    volumes:
      - ${APACHE_HOST_LOG_PATH}:/var/log/apache2
      - ./apache2/sites:/etc/apache2/sites-available
      - /var/www/html/Frontend/node_modules
      - /var/www/html/Frontend/bower_components
      - /var/www/html/Frontend/dist
    ports:
      - "${APACHE_HOST_HTTP_PORT}:80"
      - "${APACHE_HOST_HTTPS_PORT}:443"
    networks:
      - frontend
      - backend
  node:
    build:
      context: ./node
    ports:
      - "4000:4000"
    volumes_from:
      - applications
    volumes:
      - /var/www/html/Backend/node_modules
    networks:
      - frontend
      - backend
Here is what I want to do:
docker-compose build
docker-compose $COMPOSE_ARGS run --rm task1
docker-compose $COMPOSE_ARGS run --rm task2
docker-compose $COMPOSE_ARGS run --rm combine-both-task2
docker-compose $COMPOSE_ARGS run --rm selenium-test
And a docker-compose.yml that looks like this:
task1:
  build: ./task1
  volumes_from:
    - task1_output
  command: ./task1.sh

task1_output:
  image: alpine:3.3
  volumes:
    - /root/app/dist
  command: /bin/sh

# essentially I want to copy task1 output into task2 because they each use different images and use different tech stacks...
task2:
  build: ../task2
  volumes_from:
    - task2_output
    - task1_output:ro
  command: /bin/bash -cx "mkdir -p task1 && cp -R /root/app/dist/* ."
So now all the required files are in the task2 container... how would I start up a web server and expose a port serving the content in task2?
I am stuck here... how do I access the stuff from task2_output in my combine-tasks/Dockerfile:
combine-both-task2:
  build: ../combine-tasks
  volumes_from:
    - task2_output
In recent versions of docker, named volumes replace data containers as the easy way to share data between containers.
docker volume create --name myshare
docker run -v myshare:/shared task1
docker run -v myshare:/shared -p 8080:8080 task2
...
Those commands will set up one local volume, and the -v myshare:/shared argument will make that share available as the folder /shared inside each container.
To express that in a compose file:
version: '2'
services:
  task1:
    build: ./task1
    volumes:
      - 'myshare:/shared'
  task2:
    build: ./task2
    ports:
      - '8080:8080'
    volumes:
      - 'myshare:/shared'
volumes:
  myshare:
    driver: local
To test this out, I made a small project:
- docker-compose.yml (above)
- task1/Dockerfile
- task1/app.py
- task2/Dockerfile
I used node's http-server as task2/Dockerfile:
FROM node
RUN npm install -g http-server
WORKDIR /shared
CMD http-server
and task1/Dockerfile used python:alpine, to show two different stacks writing and reading.
FROM python:alpine
WORKDIR /app
COPY . .
CMD python app.py
here's task1/app.py
import time

count = 0
while True:
    fname = '/shared/{}.txt'.format(count)
    with open(fname, 'w') as f:
        f.write('content {}'.format(count))
    count = count + 1
    time.sleep(10)
Take those four files, and run them via docker compose up in the directory with docker-compose.yml - then visit $DOCKER_HOST:8080 to see a steadily updated list of files.
Also, I'm using docker version 1.12.0 and compose version 1.8.0 but this should work for a few versions back.
And be sure to check out the docker docs for details I've probably missed here:
https://docs.docker.com/engine/tutorials/dockervolumes/
For me the best way to copy a file from or to a container is to use docker cp. For example, if you want to copy schema.xml from the apacheNutch container to the solr container:
First copy it from the container to the host:
docker cp apacheNutch:/root/nutch/conf/schema.xml /tmp/schema.xml
Then copy it from the host into the solr container:
docker cp /tmp/schema.xml solr:/opt/solr-8.1.1/server/solr/configsets/nutch/conf