I have a Docker-based system that comprises three containers:
1. The official PHP container, modified with some additional PEAR libs
2. mysql:5.7
3. alterrebe/postfix-relay (a Postfix container)
The official PHP container has a volume that is linked to the host system's code repository, which should in theory allow me to work on this application the same as I would if it were hosted locally.
However, every time the system is brought up, I have to run
docker-compose stop && docker-compose up -d
in order to see the changes that I just made to the system. It's possible that I don't understand Docker correctly and this is by design, but stopping and starting the container after every code change slows down development substantially. Can anyone tell me what I am doing wrong (if anything)? Thanks in advance.
My docker-compose.yml is below (with variables and whatnot hidden, of course):
web:
  build: .
  links:
    - mysql
    - mailrelay
  environment:
    - HIDDEN_VAR=placeholder
    - ABC_ENV=development
  volumes:
    - ./html/:/var/www/html/
  ports:
    - "0.0.0.0:80:80"
mysql:
  image: mysql:5.7
  environment:
    - MYSQL_ROOT_PASSWORD=abcdefg
    - MYSQL_DATABASE=thedatabase
  volumes:
    - .:/db/:ro
mailrelay:
  hostname: mailrelay
  image: alterrebe/postfix-relay
  ports:
    - "25:25"
  environment:
    - EXT_RELAY_HOST=relay.relay.com
    - EXT_RELAY_PORT=25
    - SMTP_LOGIN=CLASSIFIED
    - SMTP_PASSWORD=ABCDEFGHIK
    - ACCEPTED_NETWORKS=172.0.0.0/8
Eventually I just started running
docker stop {{ container name }} && docker start {{ container name }}
every time instead of docker-compose. Using Docker directly instead of docker-compose is super fast (< 1 second as opposed to over a minute), so it stopped being a big deal.
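For reference, a minimal sketch of that per-container bounce, assuming a Compose project named myapp (hypothetical; the real container names come from your project directory):
docker stop myapp_web_1 && docker start myapp_web_1
# docker-compose can also restart a single service's containers in place:
docker-compose restart web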
Related
I've got a very simple single-host docker compose setup:
version: "3"
services:
bukofka:
image: picoglavar
restart: always
environment:
- PORT=8000
- MODEL=/models/large
volumes:
- glavar:/models
chlenix:
image: picoglavar
restart: always
environment:
- PORT=8000
- MODEL=/models/small
volumes:
- glavar:/models
# ... other containers ...
As you can see, it's only two services based off a single image, so nothing special really. When I open up docker ps I can see these two services churning. Then I open htop and see that each Python application is running at least four times; this is very surprising because I haven't set up any in-container replication, and I'm not running this in any kind of swarm mode.
Why does this happen?
I'm a complete idiot. And colour blind too, apparently.
Note that the lines in green are threads, not processes: https://superuser.com/a/1496571/173193
per @nick-odell
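To see the distinction from the shell, a quick sketch (the python process name comes from the question above):
ps -e | grep python     # one row per process
ps -eLf | grep python   # one row per thread (the LWP column is the thread id)
In most builds of htop, F2 > Display options > "Hide userland process threads" collapses the green rows.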
I followed this guide (https://www.jeffgeerling.com/blog/2021/monitor-your-internet-raspberry-pi) by Jeff Geerling to install an internet monitoring dashboard using Prometheus and Grafana running in Docker containers.
Everything works great, but I noticed that the data gets deleted after 15 days. After a quick search I found out that this is the default setting for storage retention in Prometheus.
I have tried a lot by myself, but I cannot find a way to change this setting.
I did find this tutorial (https://mkezz.wordpress.com/2017/11/13/prometheus-command-line-flags-in-docker-service/), which as far as I can tell should tackle exactly the problem I have, but it doesn't work: the first command it mentions fails with Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
I also found this question (Increasing Prometheus storage retention), but I cannot use the accepted answer because my Prometheus is running in a Docker container.
Is there an easy way to set a command-line flag for Prometheus, something like --storage.tsdb.retention.time=30d?
This is the README file I downloaded when I first installed it:
# Internet Monitoring Docker Stack with Prometheus + Grafana
> This repository is a fork from [maxandersen/internet-monitoring](https://github.com/maxandersen/internet-monitoring), tailored for use on a Raspberry Pi. It has only been tested on a Raspberry Pi 4 running Pi OS 64-bit beta.
Stand-up a Docker [Prometheus](http://prometheus.io/) stack containing Prometheus, Grafana with [blackbox-exporter](https://github.com/prometheus/blackbox_exporter), and [speedtest-exporter](https://github.com/MiguelNdeCarvalho/speedtest-exporter) to collect and graph home Internet reliability and throughput.
## Pre-requisites
Make sure Docker and [Docker Compose](https://docs.docker.com/compose/install/) are installed on your Docker host machine.
## Quick Start
```
git clone https://github.com/geerlingguy/internet-monitoring
cd internet-monitoring
docker-compose up -d
```
Go to [http://localhost:3030/d/o9mIe_Aik/internet-connection](http://localhost:3030/d/o9mIe_Aik/internet-connection) (change `localhost` to your docker host ip/name).
## Configuration
To change which hosts you ping, edit the `targets` section in the [/prometheus/pinghosts.yaml](./prometheus/pinghosts.yaml) file.
For the speedtest, the only relevant configuration is how often you want the check to happen. It runs every 30 minutes by default, which might be too often if you have a download limit. This is changed by editing `scrape_interval` under `speedtest` in [/prometheus/prometheus.yml](./prometheus/prometheus.yml), as in the excerpt below.
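For instance, a hypothetical excerpt of `prometheus.yml` with the interval raised to an hour (the job name and target must match what is already in your file):

```yaml
scrape_configs:
  - job_name: 'speedtest'
    scrape_interval: 60m  # default is 30m; raise it to use less bandwidth
    static_configs:
      - targets: ['speedtest:9798']
```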
Once configurations are done, run the following command:
$ docker-compose up -d
That's it. docker-compose builds the entire Grafana and Prometheus stack automagically.
The Grafana dashboard is now accessible via `http://<Host IP Address>:3030`, for example http://localhost:3030
username - admin
password - wonka (Password is stored in the `config.monitoring` env file)
The DataSource and Dashboard for Grafana are automatically provisioned.
If all works, it should be available at http://localhost:3030/d/o9mIe_Aik/internet-connection. If no data shows up, try changing the time duration to something smaller.
<center><img src="images/dashboard.png" width="4600" height="500"></center>
## Interesting URLs
http://localhost:9090/targets shows the status of monitored targets as seen from Prometheus; in this case, which hosts are being pinged, plus the speedtest. Note: speedtest will take a while before it shows as UP, as it takes about 30s to respond.
http://localhost:9090/graph?g0.expr=probe_http_status_code&g0.tab=1 shows the Prometheus value of `probe_http_status_code` for each host. You can edit/play with additional values. Useful to check everything is okay in Prometheus (in case Grafana is not showing the data you expect).
http://localhost:9115 is the blackbox exporter endpoint. Lets you see what has failed/succeeded.
http://localhost:9798/metrics is the speedtest exporter endpoint. It takes about 30 seconds to show a result, as it runs an actual speedtest when requested.
## Thanks and a disclaimer
Thanks to @maxandersen for making the original project this fork is based on.
Thanks to @vegasbrianc for his work on making a [super easy docker](https://github.com/vegasbrianc/github-monitoring) stack for running Prometheus and Grafana.
This setup is not secured in any way, so please only use on non-public networks, or find a way to secure it on your own.
After further tinkering, I found the docker-compose.yml file and simply added --storage.tsdb.retention.time=30d under the command section of the prometheus service, as shown here:
version: "3.1"
volumes:
prometheus_data: {}
grafana_data: {}
networks:
front-tier:
back-tier:
services:
prometheus:
image: prom/prometheus:v2.25.2
restart: always
volumes:
- ./prometheus/:/etc/prometheus/
- prometheus_data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--web.console.libraries=/usr/share/prometheus/console_libraries'
- '--web.console.templates=/usr/share/prometheus/consoles'
- '--storage.tsdb.retention.time=30d'
ports:
- 9090:9090
links:
- ping:ping
- speedtest:speedtest
networks:
- back-tier
grafana:
image: grafana/grafana
restart: always
volumes:
- grafana_data:/var/lib/grafana
- ./grafana/provisioning/:/etc/grafana/provisioning/
depends_on:
- prometheus
ports:
- 3030:3000
env_file:
- ./grafana/config.monitoring
networks:
- back-tier
- front-tier
ping:
tty: true
stdin_open: true
expose:
- 9115
ports:
- 9115:9115
image: prom/blackbox-exporter
restart: always
volumes:
- ./blackbox/config:/config
command:
- '--config.file=/config/blackbox.yml'
networks:
- back-tier
speedtest:
tty: true
stdin_open: true
expose:
- 9798
ports:
- 9798:9798
image: miguelndecarvalho/speedtest-exporter
restart: always
networks:
- back-tier
nodeexp:
privileged: true
image: prom/node-exporter
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
ports:
- 9100:9100
restart: always
command:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- --collector.filesystem.ignored-mount-points
- "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)"
networks:
- back-tier
Then, by running docker-compose create followed by docker start internet-monitoring_prometheus_1, I can see under [Hostname of Server]:9090/status that Storage Retention is now 30 days.
Is that the way it should be done? I think I found my solution.
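For what it's worth, a plain docker-compose up -d should achieve the same result, since Compose re-creates any container whose configuration has changed; the flag can then be verified from the container's arguments (container name taken from the default Compose naming above):
cd internet-monitoring
docker-compose up -d prometheus
# should list --storage.tsdb.retention.time=30d among the arguments
docker inspect internet-monitoring_prometheus_1 --format '{{.Args}}'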
I'm using Docker Compose for a web application that I'm creating with ASP.NET Core, Postgres, and Redis. I have everything set up in Compose to connect to Postgres using the service name I've specified in the docker-compose.yml file. When trying to do the same with Redis, I get an exception. After doing research it turns out this exception is a known issue, and the workaround is using the IP address of the machine instead of a host name. However, I cannot figure out how to get the IP address of the Redis service from the compose file. Is there a way to do that?
Edit
Here is the compose file
version: "3"
services:
postgres:
image: 'postgres:9.5'
env_file:
- '.env'
volumes:
- 'postgres:/var/lib/postgresql/data'
ports:
- '5433:5432'
redis:
image: 'redis:3.0-alpine'
command: redis-server --requirepass devpassword
volumes:
- 'redis:/var/lib/redis/data'
ports:
- '6378:6379'
web:
build: .
env_file:
- '.env'
ports:
- "8000:80"
volumes:
- './src/edb/Controllers:/app/Controllers'
- './src/edb/Views:/app/Views'
- './src/edb/wwwroot:/app/wwwroot'
- './src/edb/Lib:/app/Lib'
volumes:
postgres:
redis:
OK, I found the answer. It was something I had been trying, but I didn't realize the address may change every time you restart the containers.
Run docker ps to get a list of running containers, then copy the id of your container and run docker inspect {container_id}; the output includes the IP address that you can use to reach it from within the other running containers.
The reason I was confused was that this address may change when the containers are started, so I had to guess what the IP address was going to be before I started the containers. Luckily, after 5 tries I guessed correctly.
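A sketch of that lookup, with a format string so you don't have to scan the full inspect output (the network name varies per project, hence the range):
docker ps   # copy the id of the redis container
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_id>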
I have many projects based on docker-compose files with different settings.
If I want to start another project, I docker-compose stop the current project and docker-compose up the other one.
But my question is: how do I start two or more docker-compose projects at the same time?
My OS is Ubuntu Linux.
My docker-compose file looks like this:
application:
  build: code
  volumes:
    - ./mp:/var/www/mp
    - ./logs/mp:/var/www/mp/app/logs
  tty: true
db:
  image: mysql
  ports:
    - 3306:3306
  environment:
    MYSQL_ROOT_PASSWORD: root
    MYSQL_DATABASE: mp-DB
    MYSQL_USER: root
    MYSQL_PASSWORD: root
php:
  build: php-fpm
  ports:
    - 9000:9000
  volumes_from:
    - application
  links:
    - db
nginx:
  build: nginx
  ports:
    - 80:80
  links:
    - php
  volumes_from:
    - application
  volumes:
    - ./logs/nginx/:/var/log/nginx
elk:
  image: willdurand/elk
  ports:
    - 81:80
  volumes:
    - ./elk/logstash:/etc/logstash
    - ./elk/logstash/patterns:/opt/logstash/patterns
  volumes_from:
    - application
    - php
    - nginx
If I try to run another project I get the error:
'driver failed programming external connectivity on endpoint
mpdockerenv_db_1: Bind for 0.0.0.0:3306 failed: port is already
allocated'
I think I need to map the containers' ports to different host ports, but I don't know how to do it.
docker-compose is a tool built to handle exactly the situation in your question.
Imagine you have a complex project and you need a clean way to organize and manage its environments.
In a single docker-compose.yml file you can declare as many Docker images as you will use.
For example, here is a partial file I use:
mongo:
  image: mongo:latest
  ports:
    - "3002:27017"
  environment:
    MONGODB_DATABASE: "meteor-console-dev"
php-fpm-dev:
  image: jokediaz/php-fpm.5.6-laravel
  volumes:
    - ./repos/datamigration:/usr/share/nginx/html/datamigration
    - ./unixsock:/sock
    - ./config/php-fpm-5.6/:/usr/local/etc/php
  links:
    - mongo
Looking at the keys used above:
ports: maps an external host port to a port inside the Docker container.
environment: sets an environment variable in the container.
volumes: maps a directory from your filesystem into the Docker container (so even if you destroy the container, that data is persisted).
links: Docker has a little internal DNS. If you run the docker network inspect bridge command (see the sketch below) you will see the subnet range and a gateway (usually 172.17.0.1); your applications running inside Docker can see each other internally through these IPs, and if you add a link with the name of another service, Docker's little DNS maps that name to the other container's IP.
One more point: run docker-compose up whenever docker-compose.yml has been modified so that all your changes are re-created. It can be a good idea to run docker-compose down first (be careful: this will delete any unmapped volume) to clean up and free space.
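A quick sketch of that inspection, pulling just the subnet and gateway out of the default bridge network:
docker network inspect bridge -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'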
Take a look at the docker-compose file reference: https://docs.docker.com/compose/compose-file/
In your case:
ports:
  - 3306:3306
port 3306 is already in use on the host (you probably have a running MySQL instance on your system, so the port is taken). Simply change it to another free host port:
ports:
  - 3308:3306
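With the ports de-conflicted, both stacks can then run side by side; a sketch, assuming each project lives in its own directory (paths are hypothetical), using a distinct project name for each so their container names don't collide either:
cd ~/project-a && docker-compose -p project_a up -d
cd ~/project-b && docker-compose -p project_b up -d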
I want a container to restart automatically if it crashes, and I am not sure how to go about doing this. I have a file, docker-compose-deps.yml, that has Elasticsearch, Redis, NATS, and Mongo. I run this in the terminal to set it up: docker-compose -f docker-compose-deps.yml up -d. After this I set up my containers by running docker-compose up -d. Is there a way to make these containers restart if they crash? I noticed that Docker has a built-in restart policy, but I don't know how to implement it.
After some feedback I added restart: always to my docker-compose file and my docker-compose-deps.yml file. Does this look correct, or how would you implement restart: always?
docker-compose sample
myproject-server:
  build: "../myproject-server"
  dockerfile: Dockerfile-dev
  restart: always
  ports:
    - 5880:5880
    - 6971:6971
  volumes:
    - "../myproject-server/src:/src"
  working_dir: "/src"
  external_links:
    - nats
    - mongo
    - elasticsearch
    - redis
myproject-associate:
  build: "../myproject-associate"
  dockerfile: Dockerfile-dev
  restart: always
  ports:
    - 5870:5870
  volumes:
    - "../myproject-associate/src:/src"
  working_dir: "/src"
  external_links:
    - nats
    - mongo
    - elasticsearch
    - redis
docker-compose-deps.yml sample
nats:
  image: nats
  container_name: nats
  restart: always
  ports:
    - 4222:4222
mongo:
  image: mongo
  container_name: mongo
  restart: always
  volumes:
    - "./data:/data"
  ports:
    - 27017:27017
If you're using Compose, it has a restart option which is analogous to the one on the docker run command, so you can use that. Here is a link to the documentation on this part:
https://docs.docker.com/compose/compose-file/
When you deploy, it depends where you deploy to. Most container clusters, like Kubernetes, Mesos, or ECS, have some configuration you can use to auto-restart your containers. If you don't use any of these tools, you are probably starting your containers manually and can then just use the restart flag as you would locally.
Looks good to me. What you want to understand when working with Docker restart policies is what each one means. The always policy means that if the container stops for any reason, it is automatically restarted.
So if it stops for any reason, go ahead and restart it.
So why would you ever want to use always as opposed to, say, on-failure?
In some cases, you might have a container that you always want to ensure is running, such as a web server. If you are running a public web application, chances are you want that server to be available 100% of the time.
So for a web application, I expect you want to use always. On the other hand, if you are running a worker process that operates on a file and then exits naturally, that would be a good use case for on-failure: the worker container has finished processing the file, and you probably want to let it close out rather than have it restart.
That's where I would expect on-failure to be used. So it's not just knowing the syntax, but knowing when to apply which policy and what each one means.
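A minimal sketch contrasting the two policies (service and image names are hypothetical):
web:
  image: nginx
  restart: always      # public-facing: keep it running no matter how it stopped
worker:
  image: my-worker
  restart: on-failure  # a clean exit (code 0) counts as done; only restart on errors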