Docker-compose: linking services into a Dockerfile

I am pretty new to docker, and I am trying to make a container with multiple apps.
Let's say that my docker-compose file is like this:
version: '2'
services:
  myapp:
    build: ./dockerfiles/myapp
    volumes:
      - ./www:/var/www
      - ./logs:/var/log
      - ./mysql-data:/var/lib/mysql
      - ./php:/etc/php5
      - ./nginx:/etc/nginx
    ports:
      - "8082:8000"
      - "6606:3306"
    links:
      - mysql:mysql
      - php:php
      - nginx:nginx
  mysql:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: M#yW3Bw35t
      MYSQL_USER: replymwp
      MYSQL_PASSWORD: ZSzLPoOi9wlhFaiJ
  php:
    image: php:5.6-fpm
    links:
      - mysql:db
  nginx:
    image: nginx
    links:
      - php:php
Now, in myapp's Dockerfile, I want to install a package that needs MySQL.
FROM debian:jessie
RUN apt-get update
RUN apt-get install -y apt-show-versions
RUN apt-get install -y wget
RUN wget http://repo.ajenti.org/debian/key -O- | apt-key add -
RUN echo "deb http://repo.ajenti.org/ng/debian main main ubuntu" >> /etc/apt/sources.list
RUN apt-get update
RUN apt-get install -y ajenti
RUN apt-get install -y ajenti-v ajenti-v-ftp-vsftpd ajenti-v-php-fpm ajenti-v-mysql
EXPOSE 8000
ENTRYPOINT ["ajenti-panel"]
Now the problem is, when Docker builds my image, it installs PHP, MySQL, etc., even though I link those services in my docker-compose file. And secondly, when it tries to install MySQL, it prompts for a master password and stays blocked at that step, even if I fill something in...
Maybe I am totally wrong in my way of using it?
Any help would be appreciated.

I suppose your ajenti has a dependency on mysql, so if you do apt-get install ajenti, it tries to satisfy that dependency. Specifically, you are installing ajenti-v-mysql, which does seem to have a mysql dependency.
Because you want to run mysql separately, you might need --no-install-recommends. This is a flag for apt-get, so you'd get something like
apt-get install <packagename> --no-install-recommends
This makes apt skip recommended packages (hard dependencies are still installed), so you may need to figure out which other dependencies you actually need.
php-fpm has the same issue; I suppose that whole line which includes ajenti-v-php-fpm is a bit too much?
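For illustration, a minimal sketch of how that flag could look in the Dockerfile (package names are taken from the question; exactly which ones you can safely drop is an assumption you would have to verify):
# sketch: skip recommended packages so apt does not pull a full
# MySQL/PHP stack in alongside the panel (hard dependencies still install)
RUN apt-get install -y --no-install-recommends ajenti ajenti-v ajenti-v-ftp-vsftpd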

If you're planning on using separate mysql and php containers, then why are you still installing them in the myapp Dockerfile on this line:
RUN apt-get install -y ajenti-v ajenti-v-ftp-vsftpd ajenti-v-php-fpm ajenti-v-mysql
If you're going to use mysql and php containers, then you don't need them in your app image. This should also take care of your second problem about being prompted for a MySQL password.
Keep in mind that you will need to change the hostnames in your MySQL and PHP configuration as seen from myapp. I think you might be better off looking for a tutorial on setting up Docker Compose; you'll have to look yourself to find the most suitable one, but something like this would give you a good start.
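As a rough sketch of what that separation looks like (service names reused from the question; the password value is a placeholder), myapp reaches MySQL through the linked service hostname instead of a local install:
services:
  myapp:
    build: ./dockerfiles/myapp   # Dockerfile no longer installs mysql/php
    links:
      - mysql:mysql              # "mysql" resolves to the mysql container
  mysql:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder
Inside myapp, the application config would then point at host mysql, port 3306, rather than localhost.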

Related

Containers don't start after doing docker-compose up -d

I'm having some problems using Docker.
First of all, I wrote a docker-compose.yml:
version: "3.9"
services:
web:
build: .
ports:
- 8000:80
volumes:
- $HOME/sitios:/var/www/html
db:
build: .
ports:
- 3000:3306
volumes:
- $HOME/"mariadb copia":/var/lib
As you can see, I want to set up two services with volumes, one for HTTP and the other for a MariaDB server.
Here is my Dockerfile:
FROM ubuntu:latest
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install nano mariadb-server apache2 -y
Then, I use the command sudo docker-compose up -d; however, the containers don't start at all. I try sudo docker start <name> but it doesn't work.
I already googled and looked into the official Docker documentation, but I can't find anything.
Thanks for your help.
Because you're using ubuntu:latest, you're missing an entrypoint (a long-running process), so the container starts but immediately exits with code 0.
Another thing: do not do apt upgrade inside the Dockerfile; just take another image (with a higher version or whatever you are looking for).
One more thing: you run two different services from the same image, which is pretty weird.
And one last thing: use the official images, so you do not have to build them yourself.
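Putting that advice together, a minimal sketch using official images (no Dockerfile needed, since these images bring their own entrypoints; paths come from the question and the password is a placeholder):
version: "3.9"
services:
  web:
    image: httpd:2.4                    # official Apache image
    ports:
      - 8000:80
    volumes:
      - $HOME/sitios:/usr/local/apache2/htdocs   # httpd's document root
  db:
    image: mariadb:10.6                 # official MariaDB image
    environment:
      MARIADB_ROOT_PASSWORD: example    # placeholder, pick your own
    ports:
      - 3000:3306
    volumes:
      - $HOME/mariadb-copia:/var/lib/mysql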

Exposing Docker Volumes to Nginx

I'm trying to connect a JSON file, which resides in a Docker volume of the following container, to my main Docker container, which is running a Django project.
Since I am using CapRover, my Docker Compose options are very limited, so Docker Compose is not really an option. I want to instead just expose the JSON file over the web with a link,
something like domain.com/folder/jsonfile.json.
Can somebody tell me if this is possible inside this Dockerfile?
The image I am using is crucial to the container, so can I just add an nginx image, or do I need other changes to make this work?
Or is nginx not even necessary?
FROM ubuntu:devel
ENV TZ=Etc/UTC
ARG APP_HOME=/app
WORKDIR ${APP_HOME}
ENV DEBIAN_FRONTEND=noninteractive
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime
RUN echo $TZ > /etc/timezone
RUN apt-get update && apt-get upgrade -y
RUN apt-get install gnumeric -y
RUN mkdir -p /etc/importer/data
RUN mkdir /voldata
COPY config.toml /etc/importer/
COPY datasets/* /etc/importer/data/
VOLUME /voldata
COPY importer /usr/bin/
RUN chmod +x /usr/bin/importer
COPY . ${APP_HOME}
CMD sleep 999d
Using the same volume in 2 containers
docker-compose:
volumes:
  shared_vol:
services:
  service1:
    volumes:
      - 'shared_vol:/path/to/file'
  service2:
    volumes:
      - 'shared_vol:/path/to/file'
The mechanism above replaces volumes_from since v3; in v2 the following still works:
volumes:
  shared_vol:
services:
  service1:
    volumes:
      - 'shared_vol:/path/to/file'
  service2:
    volumes_from:
      - service1
If you want to avoid unintentional altering, add :ro (read-only) on the consuming service:
service1:
  volumes:
    - 'shared_vol:/path/to/file'
service2:
  volumes:
    - 'shared_vol:/path/to/file:ro'
http-server
Surely you can provide the file via HTTP (or another protocol). There are two options:
Include an HTTP service in your container (quite easy, depending on what is already in the container); e.g. with Node.js you can use https://www.npmjs.com/package/http-server very easily. Size doesn't matter? Then just install:
RUN apt-get install -y nodejs npm
RUN npm install -g http-server
EXPOSE 8080
CMD ["http-server", "--cors", "-p8080", "/path/to/your/json"]
docker-compose (http-server runs on 8080 per default, so open this port):
existing_service:
  ports:
    - '8080:8080'
Run a standalone HTTP server (nginx, Apache httpd, ...) in another container; but then you depend again on using the same volume for two services, so for local solutions it's quite an overkill.
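A hedged sketch of that second option (service, volume, and image names are illustrative):
volumes:
  shared_vol:
services:
  app:
    image: your/app-image          # the existing container that writes the JSON
    volumes:
      - shared_vol:/voldata
  fileserver:
    image: nginx:1.23
    ports:
      - '8080:80'
    volumes:
      - shared_vol:/usr/share/nginx/html:ro   # nginx's default web root, read-only
The JSON file would then be reachable at http://host:8080/jsonfile.json.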
Base image
Without good reason, I would never use something like :devel, :rolling or :latest as a base image. Stick to an LTS version instead, like ubuntu:22.04.
Testing for http-server
Dockerfile
FROM ubuntu:20.04
ENV TZ=Etc/UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update
RUN apt-get install -y nodejs npm
RUN npm install -g http-server@13.1.0 # issue with JSON files in v14: https://github.com/http-party/http-server/issues/634
COPY ./test.json ./usr/wwwhttp/test.json
EXPOSE 8080
CMD ["http-server", "--cors", "-p8080", "/usr/wwwhttp/"]
# docker build -t test/httpserver:latest .
# docker run -p 8080:8080 test/httpserver:latest
Disclaimer:
I am not that familiar with Node Docker images; this is just to give a quick working solution to go on from. I'm not using Node.js in production, but I'm sure it can be optimized from being fat to... well... being rather fat. For quick prototyping, size doesn't matter.
If you just want two containers to access the same file, use a volume with --mount.
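For instance (volume, container, and image names are made up):
docker volume create shared_data
# first container writes into the volume
docker run -d --name writer --mount source=shared_data,target=/voldata your/app-image
# second container mounts the same volume read-only
docker run -d --name reader --mount source=shared_data,target=/data,readonly nginx:1.23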

Docker: How to use a Dockerfile in addition to docker-compose.yml

I am using this repo to set up a local wordpress development environment:
https://github.com/mjstealey/wordpress-nginx-docker#tldr
I hijacked the docker-compose file to change the nginx port, but also to try to install openssl and vim on the nginx server. But when I do a docker-compose up, the nginx server never starts properly.
This is what I see:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
57cf3dff059f nginx:latest "bash" About a minute ago Restarting (0) 14 seconds ago nginx
I tried to reference a Dockerfile inside the docker-compose like this:
nginx:
  image: nginx:${NGINX_VERSION:-latest}
  container_name: nginx
  ports:
    - '8085:8085'
    - '443:443'
  build: .
  volumes:
    - ${NGINX_CONF_DIR:-./nginx}:/etc/nginx/conf.d
    - ${NGINX_LOG_DIR:-./logs/nginx}:/var/log/nginx
    - ${WORDPRESS_DATA_DIR:-./wordpress}:/var/www/html
    - ${SSL_CERTS_DIR:-./certs}:/etc/letsencrypt
    - ${SSL_CERTS_DATA_DIR:-./certs-data}:/data/letsencrypt
  depends_on:
    - wordpress
  restart: always
Notice the line that says "build: .".
Here's the contents of my Dockerfile:
FROM debian:buster-slim
RUN apt-get update
RUN apt-get -y install openssl
# installing vim isn't necessary, but it's handy
RUN apt-get -y install vim
Clearly, I'm doing something wrong. Maybe I should be defining tasks directly in the docker-compose file for the nginx server?
I wanted to find a way to make a clean separation between our customizations and the original code, but maybe this isn't possible.
Thanks
EDIT 1
This is what the Dockerfile looks like now:
FROM nginx:latest
RUN apt-get update \
    && apt-get -y install openssl \
    && apt-get -y install vim
And the nginx section of the docker-compose.yml:
nginx:
  #image: nginx:${NGINX_VERSION:-latest}
  container_name: nginx
  ports:
    - '8085:8085'
    - '443:443'
  build: .
  volumes:
    - ${NGINX_CONF_DIR:-./nginx}:/etc/nginx/conf.d
    - ${NGINX_LOG_DIR:-./logs/nginx}:/var/log/nginx
    - ${WORDPRESS_DATA_DIR:-./wordpress}:/var/www/html
    - ${SSL_CERTS_DIR:-./certs}:/etc/letsencrypt
    - ${SSL_CERTS_DATA_DIR:-./certs-data}:/data/letsencrypt
  depends_on:
    - wordpress
  restart: always
I think you need to change the base image that you are using in the Dockerfile:
FROM nginx:latest
Then, because you are creating your own customised version of the nginx image, you should give it a custom name or tag.
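A sketch of how that could look in the compose file (the tag my-custom-nginx:1.0 is just an example). When both build and image are set, compose builds from the Dockerfile and tags the result with the given name:
nginx:
  build: .                     # Dockerfile starting FROM nginx:latest
  image: my-custom-nginx:1.0   # name/tag given to the built image
  container_name: nginx
  ports:
    - '8085:8085'
    - '443:443'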

Gitlab CI with Docker Images

I am using a GitLab server and have 2 gitlab-runners (one on my local machine and one on a VServer). Both work perfectly with echo and simple stuff like building an Ubuntu server with MySQL and PHP:
stages:
  - dbserver
  - deploy

build:
  stage: dbserver
  image: ubuntu:16.04
  services:
    - mysql:5.7
    - php:7.0
  variables:
    MYSQL_DATABASE: test
    MYSQL_ROOT_PASSWORD: test2
  script:
    - apt-get update -q && apt-get install -qqy --no-install-recommends mysql-client
    - mysql --user=root --password="$MYSQL_ROOT_PASSWORD" --host=mysql < test.sql
I want to import a DB now, but I do not get the idea or the technique behind it. How do I import a .sql file lying on my local PC or server? Do I need to create a Dockerfile by myself, or can I do that just with the .gitlab-ci.yml file?
You can use scp to copy the .sql file to the runner.
You may need to add the commands to install openssh-client, e.g.:
script:
  - apt-get update -y && apt-get install openssh-client -y
and then just add the scp line before invoking mysql, e.g.:
  - scp user@server:/path/to/file.sql /tmp/temp.sql
  - mysql --user=root --password="$MYSQL_ROOT_PASSWORD" --host=mysql < /tmp/temp.sql
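Put together, the job's script section might look roughly like this (server address and paths are placeholders, and SSH key authentication for the runner is assumed to be set up already):
script:
  - apt-get update -y && apt-get install -qqy --no-install-recommends mysql-client openssh-client
  - scp user@server:/path/to/file.sql /tmp/temp.sql
  - mysql --user=root --password="$MYSQL_ROOT_PASSWORD" --host=mysql < /tmp/temp.sql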
I found a solution which binds a directory from the gitlab-runner machine into the actual container I am using:
sudo nano /etc/gitlab-runner/config.toml
There you change the volumes to something like this:
volumes = ["/home/ubuntu/test:/cache"]
/home/ubuntu/test is the directory on the machine and /cache is the one in the container.
Before you do so, I recommend stopping the runner and then starting it again.
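In context, the relevant part of config.toml looks roughly like this (runner name and image are examples):
[[runners]]
  name = "my-runner"
  executor = "docker"
  [runners.docker]
    image = "ubuntu:16.04"
    volumes = ["/home/ubuntu/test:/cache"]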

How to install packages from Docker compose?

Hi there, I am new to Docker. I have a docker-compose.yml which looks like this:
version: "3"
services:
lmm-website:
image: lmm/lamp:php${PHP_VERSION:-71}
container_name: ${CONTAINER_NAME:-lmm-website}
environment:
HOME: /home/user
command: supervisord -n
volumes:
- ..:/builds/lmm/website
- db_website:/var/lib/mysql
ports:
- 8765:80
- 12121:443
- 3309:3306
networks:
- ntw
volumes:
db_website:
networks:
ntw:
I want to install the Yarn package manager from within the docker-compose file:
sudo apt-get update && sudo apt-get install yarn
I could not figure out how to declare this. I have tried
command: supervisord -n && sudo apt-get update && sudo apt-get install yarn
which fails silently. How do I declare this correctly? Or is docker-compose.yml the wrong place for this?
Why not use a Dockerfile, which is specifically designed for this task?
Change your image property to a build property to link to a Dockerfile.
Your docker-compose.yml would look like this:
services:
  lmm-website:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: ${CONTAINER_NAME:-lmm-website}
    environment:
      HOME: /home/user
    command: supervisord -n
    volumes:
      - ..:/builds/lmm/website
      - db_website:/var/lib/mysql
    ports:
      - 8765:80
      - 12121:443
      - 3309:3306
    networks:
      - ntw
volumes:
  db_website:
networks:
  ntw:
Then create a text file named Dockerfile in the same path as docker-compose.yml with the following content:
ARG PHP_VERSION
# a build arg must be declared before FROM to be usable in it
FROM lmm/lamp:php${PHP_VERSION:-71}
RUN apt-get update && apt-get install -y bash
You can add as many OS commands as you want using Dockerfile's RUN (cp, mv, ls, bash, ...), apart from other Dockerfile capabilities like ADD, COPY, etc.
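Since the question is specifically about Yarn, here is a sketch of what the Dockerfile could run to install it (this assumes the lmm/lamp image is Debian/Ubuntu-based and uses Yarn's classic apt repository):
RUN apt-get update && apt-get install -y curl gnupg
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list
RUN apt-get update && apt-get install -y yarn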
+info:
https://docs.docker.com/engine/reference/builder/
+live-example:
I made a GitHub project called hello-docker-react. As its name says, it is a Docker-React box, and it can serve you as an example of installing yarn plus other tools using the procedure I explained above.
In addition to that, I also start yarn using an entrypoint bash script linked to the docker-compose.yml file via the entrypoint property.
https://github.com/lopezator/hello-docker-react
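The entrypoint approach mentioned above could look something like this (file name and contents are illustrative, not copied from the repo; the script must be copied into the image and made executable):
entrypoint.sh:
#!/bin/bash
yarn install          # install JS dependencies at container start
exec supervisord -n   # then hand over to the main process
docker-compose.yml (excerpt):
entrypoint: ["/entrypoint.sh"]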
You can only do it with a Dockerfile, because the command option in docker-compose.yml only keeps the container alive while the command is executing; then it stops.
Try this:
command: supervisord -n && apt-get update && apt-get install yarn
because sudo doesn't work in Docker.
My first time trying to help out: I would like you to give this a try (I found it on the internet):
ARG PHP_VERSION
FROM lmm/lamp:php${PHP_VERSION:-71}
USER root
RUN apt-get update && apt-get install -y bash
