Install local portainer with docker compose - docker

I'm reading the docker-ce docs for installation. It's just two lines:
docker volume create portainer_data &&
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
Is there a way to do all this with a docker-compose file, so that the Portainer container gets grouped with all the others when executing docker-compose down?

This French tutorial got something working:
portainer:
  container_name: portainer
  image: portainer/portainer-ce:latest # latest might not be the best option
  restart: unless-stopped
  command: -H unix:///var/run/docker.sock
  ports:
    - 9000:9000
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro # not sure about those 3 volumes
    - dataportainer:/data
volumes:
  dataportainer:
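For reference, a full compose file that mirrors the two commands from the docs (same 8000/9443 ports and the portainer_data volume) would look roughly like this - an untested sketch:

version: '3.8'
services:
  portainer:
    container_name: portainer
    image: portainer/portainer-ce:latest
    restart: always
    ports:
      - 8000:8000
      - 9443:9443
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
volumes:
  portainer_data:

With this merged into the same file as your other services, docker-compose up -d replaces both commands, and docker-compose down stops Portainer together with the rest (add -v if you also want the volume removed).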

Related

phpmyadmin docker container cannot access mariadb database?

I am trying to make a quick connection setup using the following:
Copy & Paste to recreate the issue
docker rm -f mariadb && docker run --detach --name mariadb --env MARIADB_USER=user --env MARIADB_PASSWORD=secret --env MARIADB_ROOT_PASSWORD=secret -p 3306:3306 mariadb:latest
docker rm -f phpmyadd && docker run --name phpmyadd -d -e PMA_HOST=host -e PMA_PORT=3306 -p 8080:80 phpmyadmin
docker exec -it mariadb bash
I can log in to the mariadb container and access MariaDB with
mysql -uroot -psecret
I can also access phpmyadmin container at http://localhost:8080
However, when I try to log in to mariadb through phpMyAdmin I get the following:
It shows that the port is exposed, but I cannot access it with telnet.
Any idea what is missing here?
For two containers to be able to talk to each other, you would have to set up a docker-compose file instead. Something like this should work:
version: '3.8'
volumes:
  mariadb:
    driver: local
services:
  mariadb:
    image: mariadb:10.6
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: YOUR_ROOT_PASSWORD_HERE
      MYSQL_USER: YOUR_MYSQL_USER_HERE
      MYSQL_PASSWORD: YOUR_USER_PW_HERE
    ports:
      - "40000:3306"
    volumes:
      - mariadb:/var/lib/mysql
  phpmyadmin:
    image: phpmyadmin
    restart: always
    ports:
      - "40001:80"
    environment:
      - PMA_HOST=mariadb
      - PMA_PORT=3306
And you would start everything using docker-compose up.
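If you would rather stay with plain docker run, the same name-based connectivity can also be had with a user-defined network instead of Compose. A sketch along the lines of the original commands (the network name dbnet is just an example):

docker network create dbnet
docker run -d --name mariadb --network dbnet \
  --env MARIADB_USER=user --env MARIADB_PASSWORD=secret --env MARIADB_ROOT_PASSWORD=secret \
  -p 3306:3306 mariadb:latest
docker run -d --name phpmyadd --network dbnet \
  -e PMA_HOST=mariadb -e PMA_PORT=3306 -p 8080:80 phpmyadmin

Either way, the key point is that both containers share a network and phpMyAdmin reaches the database by its container/service name (PMA_HOST=mariadb), not by host or localhost.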

How do I convert a docker-compose configuration to a Dockerfile

I am a bit confused. I was trying to convert the docker-compose file for Elasticsearch and Kibana into a Dockerfile, but the networking and connectivity part is a bit confusing for me. Can anyone help me with the conversion and a bit of explanation?
Thanks a lot!
version: "3.0"
services:
elasticsearch:
container_name: es-container
image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
environment:
- xpack.security.enabled=true
- "discovery.type=single-node"
networks:
- es-net
ports:
- 9200:9200
kibana:
container_name: kb-container
image: docker.elastic.co/kibana/kibana:6.5.4
environment:
- ELASTICSEARCH_HOSTS=http://es-container:9200
networks:
- es-net
depends_on:
- elasticsearch
ports:
- 5601:5601
networks:
es-net:
driver: bridge
Docker Compose and Dockerfiles are completely different things. The Dockerfile is a configuration file used to create Docker images. The docker-compose.yml file is a configuration file used by Docker Compose to launch Docker containers using Docker images.
To launch the above containers without using Docker Compose you could run:
docker network create es-net
docker run -d -e xpack.security.enabled=true -e "discovery.type=single-node" -p 9200:9200 --network es-net --name es-container docker.elastic.co/elasticsearch/elasticsearch:6.5.4
docker run -d -e ELASTICSEARCH_HOSTS=http://es-container:9200 -p 5601:5601 --network es-net --name kb-container docker.elastic.co/kibana/kibana:6.5.4
Alternatively, you could run the containers on the host's network stack (rather than the es-net network). Kibana would then be able to talk to Elasticsearch on localhost:
docker run -d -e xpack.security.enabled=true -e "discovery.type=single-node" --network host --name es-container docker.elastic.co/elasticsearch/elasticsearch:6.5.4
docker run -d -e ELASTICSEARCH_HOSTS=http://localhost:9200 --network host --name kb-container docker.elastic.co/kibana/kibana:6.5.4
(I haven't actually run these so the commands might need some tweaking).
In that docker-compose.yml file, the only things that could be built into an image at all are the environment variables, and there's not much benefit to hard-coding your deployment configuration like this. In particular, you cannot force the eventual container name or manually specify the eventual networking configuration in an image.
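For example, about the most you could bake in is a hypothetical image like this, which hard-codes one environment variable and still leaves the container name, ports and network to be decided at run time:

# Dockerfile (sketch): the Elasticsearch host name is now frozen into the image
FROM docker.elastic.co/kibana/kibana:6.5.4
ENV ELASTICSEARCH_HOSTS=http://es-container:9200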
If you're looking for a compact self-contained description of what to run that you can redistribute, the docker-compose.yml is it. Don't try to send around images, or focus on trying to have a single container; instead, distribute the docker-compose.yml file as the way to run your application. I'd consider Compose a standard enough tool that anyone who has Docker already has it and knows how to run docker-compose up -d.
# How to run this application on a different system
# (with Docker and Compose preinstalled):
here$ scp docker-compose.yml there:
here$ ssh there
there$ sudo docker-compose up -d

Docker compose does not work the same as docker run

When I run my ELK stack with docker run it works fine, but when I run it with docker-compose, logs are sent to Logstash very slowly and unreliably.
docker-compose.yml
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - discovery.type=single-node
  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.0
    links:
      - elasticsearch
    ports:
      - 5601:5601
  logstash:
    image: docker.elastic.co/logstash/logstash:7.6.0
    links:
      - elasticsearch
    ports:
      - 5044:5044
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash/config:/usr/share/logstash/pipeline/
Run with docker run:
docker run -d -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.6.0
docker run -d --link edcf2406addf:elasticsearch -p 5601:5601 docker.elastic.co/kibana/kibana:7.6.0
docker run --link edcf2406addf:elasticsearch -v /var/www/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml -v /var/www/logstash/config:/usr/share/logstash/pipeline/ -p 5044:5044 docker.elastic.co/logstash/logstash:7.6.0
I would like to use docker-compose, but something is not working right. Could someone help me configure docker-compose properly?
docker-compose version 1.21.0-rc1
Docker version 19.03.5
Thanks

docker run vs docker-compose: one of these things is not like the other

I have an nginx proxy set up with a shell script that looks something like this:
docker run --detach --name nginx-proxy --publish 80:80 --publish 443:443 --volume /etc/nginx/certs \
--volume /etc/nginx/vhost.d --volume /usr/share/nginx/html --volume /var/run/docker.sock:/tmp/docker.sock:ro --restart unless-stopped jwilder/nginx-proxy:alpine
echo proxy up
docker run --detach --name nginx-proxy-letsencrypt --volumes-from nginx-proxy --volume /var/run/docker.sock:/var/run/docker.sock:ro \
--restart unless-stopped jrcs/letsencrypt-nginx-proxy-companion
echo ssl companion up
docker run -d \
-e VIRTUAL_HOST=[domain] \
-e "LETSENCRYPT_HOST=[domain]" \
-e "LETSENCRYPT_EMAIL=[emailaddress]" \
--name [domain] \
--expose 80 \
--restart always \
-v /code/[domain]:/var/www/html \
fauria/lamp
echo test site up at [domain]
and this site works properly and functions as expected.
I then stop the web server container, use the following docker-compose.yaml instead, and it fails with a 502.
version: '3.3'
services:
  lamp:
    restart: always
    image: fauria/lamp
    container_name: [domain]
    expose:
      - "80"
    volumes:
      - /code/[domain]:/var/www/html
    environment:
      - VIRTUAL_HOST=[domain]
      - LETSENCRYPT_HOST=[domain]
      - LETSENCRYPT_EMAIL=[emailaddress]
Why? Aren't they the same? What am I missing?
When you use docker-compose, it creates a Docker network for you in which all of the services can communicate with each other. Since you simply stopped the container and started it with docker-compose, it no longer shares a network with the containers running outside of Compose on your host. This is why you get the 502 error. What you need to do is add the other containers to your docker-compose file, and make sure you are connecting to them using the proper service name (instead of localhost use http://service_name:443). Alternatively you can somehow give the containers in your Docker network access to your localhost, but I'm not sure how to do that. Maybe you need to use 0.0.0.0 instead of 127.0.0.1?
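To make the first suggestion concrete, a compose file that folds the proxy and the companion in alongside the site could look roughly like this - an untested sketch that translates the shell script above more or less one-to-one (compose file format 2 is used so that volumes_from still works):

version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy:alpine
    container_name: nginx-proxy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /etc/nginx/certs
      - /etc/nginx/vhost.d
      - /usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
  nginx-proxy-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: unless-stopped
    volumes_from:
      - nginx-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  lamp:
    image: fauria/lamp
    restart: always
    expose:
      - "80"
    volumes:
      - /code/[domain]:/var/www/html
    environment:
      - VIRTUAL_HOST=[domain]
      - LETSENCRYPT_HOST=[domain]
      - LETSENCRYPT_EMAIL=[emailaddress]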
The problem is that I was not connecting my docker-compose services to the default bridge network used by the proxy image.
version: '3.3'
services:
  lamp:
    restart: always
    image: fauria/lamp
    network_mode: bridge
    container_name: [domain]
    expose:
      - "80"
    volumes:
      - /code/[domain]:/var/www/html
    environment:
      - VIRTUAL_HOST=[domain]
      - LETSENCRYPT_HOST=[domain]
      - LETSENCRYPT_EMAIL=[emailaddress]

Docker gitlab container healthy but not accessible

Hello,
I have the following problem on docker 18.06.1-ce.
I have an owncloud container that works with the following configurations:
Image : owncloud/server:10.0
Status healthy
Ports : 0.0.0.0:4090->80/tcp, 0.0.0.0:4093->443/tcp
So far, so good, this container is functional.
Now, I want to add a gitlab container with the following configurations:
Image : gitlab/gitlab-ce:latest
Status : healthy
Ports : 0.0.0.0:2222->22/tcp, 0.0.0.0:8080->80/tcp, 0.0.0.0:4443->443/tcp
The problem is that I can't access the containers with the ports listed above (connection failed).
I tried to install the container in a different way:
By docker run command :
docker run --detach --hostname nsXXXXX.ip-XX-XXX-XX.eu --env GITLAB_OMNIBUS_CONFIG="external_url 'https://nsXXXXX.ip-XX-XXX-XX.eu:4443'; gitlab_rails['lfs_enabled'] = true;" --publish 4443:443 --publish 8080:80 --publish 2222:22 --name gitlab --restart always --volume /srv/gitlab/config:/etc/gitlab --volume /srv/gitlab/logs:/var/log/gitlab --volume /srv/gitlab/data:/var/opt/gitlab gitlab/gitlab-ce:latest
And by docker-compose:
web:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  hostname: 'nsXXXXXXX.ip-XX-XXX-XX.eu'
  privileged: true
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'https://nsXXXXXXX.ip-XX-XXX-XX.eu:4443/'
      gitlab_rails['gitlab_shell_ssh_port'] = 4182
  ports:
    - '4180:80'
    - '4443:443'
    - '4182:22'
  volumes:
    - '/srv/gitlab/config:/etc/gitlab'
    - '/srv/gitlab/logs:/var/log/gitlab'
    - '/srv/gitlab/data:/var/opt/gitlab'
My Docker is on a dedicated Debian Stretch server hosted by Kimsufi.
Do you have any ideas to help me? Thank you very much.
Solved : https://forum.gitlab.com/t/docker-gitlab-container-healthy-but-not-accessible/20042/5
It was necessary to map the port of the external URL to the internal port... Beginner's error:)
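In compose terms, that presumably means publishing the port named in external_url to the same port inside the container, since Omnibus makes its bundled nginx listen on the port given in external_url - roughly:

web:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  hostname: 'nsXXXXXXX.ip-XX-XXX-XX.eu'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'https://nsXXXXXXX.ip-XX-XXX-XX.eu:4443/'
      gitlab_rails['gitlab_shell_ssh_port'] = 4182
  ports:
    - '4180:80'
    - '4443:4443'   # external_url port mapped to the same internal port
    - '4182:22'
  volumes:
    - '/srv/gitlab/config:/etc/gitlab'
    - '/srv/gitlab/logs:/var/log/gitlab'
    - '/srv/gitlab/data:/var/opt/gitlab'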
