This question already has answers here:
Will a docker container auto sync time with its host machine?
(7 answers)
Closed 5 years ago.
I've followed the installation docs at http://docs.drone.io/installation/
Below is my docker-compose.yml file
version: '2'
services:
  drone-server:
    image: drone/drone:0.8
    ports:
      - 80:8000
      - 9000
    volumes:
      - /var/lib/drone:/var/lib/drone/
    restart: always
    environment:
      - DRONE_OPEN=true
      - DRONE_HOST=localhost
      - DRONE_GITLAB=true
      - DRONE_GITLAB_CLIENT=dfsdfsdf
      - DRONE_GITLAB_SECRET=dsfdsf
      - DRONE_GITLAB_URL=https://tecgit01.com
      - DRONE_SECRET=${DRONE_SECRET}
  drone-agent:
    image: drone/agent:0.8
    restart: always
    depends_on:
      - drone-server
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DRONE_SERVER=drone-server:9000
      - DRONE_SECRET=${DRONE_SECRET}
I'm running this on OS X (10.13.1) with Docker version 17.09.0-ce, build afdb6d4.
The local time in drone-agent is very different from the host time. This causes AWS API calls to fail when building my app, with the error described in this AWS forums thread: https://forums.aws.amazon.com/thread.jspa?threadID=103764#. I logged the current time inside the app to verify the time difference.
Is there a config option to sync the host time with the docker agent?
As you've pointed out, this isn't a Drone.io issue; it's an issue with the clock of the underlying VM that runs Docker drifting out of sync with the host.
This can be fixed by following the steps outlined in the question you linked to.
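For reference, a commonly suggested workaround (a sketch, assuming Docker for Mac, where the VM's clock tends to drift after the host sleeps): either restart Docker for Mac, or reset the VM clock from the host with a privileged container:
docker run --rm --privileged alpine date -s "$(date -u +'%Y-%m-%d %H:%M:%S')"
Because a privileged container shares the VM's kernel clock, setting the date inside it corrects the time for every container on that VM.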
Related
I've been using a docker-compose.yml file to set up a basic/simple instance of Nifi. Last week my nifi instance was working perfectly fine, and I haven't changed anything in my nifi docker-compose file.
I updated both my browser and Docker Desktop on Monday, and the problem has been there ever since. However, my coworker has tried running the docker-compose file and has had the same issue.
When I run docker compose up on the docker-compose.yml file, there are no issues in the container logs, and it seems the docker container is running perfectly fine. When I try to access 'https://localhost:8443/nifi', firefox returns the following message:
An error occurred during a connection to 127.0.0.1:8443. PR_END_OF_FILE_ERROR
I've tried different browsers; both chrome and edge return the following message:
This site can’t be reached localhost unexpectedly closed the connection.
I've also tried restarting my computer, docker desktop, and even the containers, but nothing solved this issue. Here are my docker-compose.yml file contents:
version: '3'
services:
  nifi:
    cap_add:
      - NET_ADMIN # low port bindings
    image: apache/nifi
    container_name: nifi
    ports:
      - "8080:8080/tcp" # HTTP interface
      - "8443:8443/tcp" # HTTPS interface
      - "514:514/tcp"   # Syslog
      - "514:514/udp"   # Syslog
      - "2055:2055/udp" # NetFlow
    environment:
      - SINGLE_USER_CREDENTIALS_USERNAME=admin
      - SINGLE_USER_CREDENTIALS_PASSWORD=password1234
    volumes:
      - ../../nifi/drivers:/opt/nifi/nifi-current/drivers
      - ../../nifi/certs:/opt/certs
      - ./output:/opt/nifi/nifi-current/ls-target
      - nifi-conf:/opt/nifi/nifi-current/conf
    restart: unless-stopped
  nifi-registry:
    image: apache/nifi-registry
    container_name: nifi-registry
    ports:
      - "18080:18080/tcp" # HTTP interface
    restart: unless-stopped
volumes:
  nifi-conf: # named volume declaration required by the nifi-conf mount above
Not sure what my next steps should be. I have followed the instructions on this site (https://kinsta.com/knowledgebase/pr-end-of-file-error/) but had no luck. I feel as if it must be something with Docker Desktop or the container that's causing an issue with the certs in the browser, since both my coworker and I are having this issue.
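As a generic triage sketch (not specific to this setup; it assumes the container name nifi from the compose file above and curl on the host), it helps to confirm whether NiFi has actually finished starting before suspecting the browser:
docker compose ps                       # is the nifi container up?
docker logs nifi | tail -n 50           # NiFi can take minutes to initialize; look for "NiFi has started"
curl -vk https://127.0.0.1:8443/nifi    # -k skips certificate verification
While NiFi is still initializing, the port can be bound but the TLS handshake gets cut short, which browsers surface as PR_END_OF_FILE_ERROR or "unexpectedly closed the connection".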
This question already has answers here:
From inside of a Docker container, how do I connect to the localhost of the machine?
(40 answers)
Communication between multiple docker-compose projects
(20 answers)
Closed 1 year ago.
I have two apps (microservices) in separate docker-compose projects.
app1.yml
version: "3.4"
services:
app1:
image: flask-app1
environment:
- APP2_URL=http://localhost:8000
ports:
- 5000:8000
volumes:
- "../:/app/"
depends_on:
- db_backend1
restart: on-failure
db_backend1:
...
app2.yml
version: "3.4"
services:
app2:
image: flask-app2
ports:
- 8000:8000
volumes:
- "..:/app"
restart: on-failure
Of course they have other dependencies (database server, etc.).
I need to run both of them locally. Each runs fine on its own, but app1 needs to fetch data from app2 by sending an HTTP GET request, so I set the app2 URL (http://localhost:8000) as an environment variable (just for dev purposes). However, the requests always fail with an exception saying the connection was closed.
So, it would be great if anyone knows how to sort this out.
A container is its own “device”, so it has its own “localhost”; when you set the URL as is, app1 ends up calling itself, which is not what you want.
The solution is to create a network shared between the composes so you can refer to the specific container as “containerName:port”.
You can refer to :
Communication between multiple docker-compose projects
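For example, a minimal sketch using the service names above (shared-net is an arbitrary name): create the network once with docker network create shared-net, then declare it as external in both compose files and point APP2_URL at the service name instead of localhost.
app2.yml:
version: "3.4"
services:
  app2:
    image: flask-app2
    ports:
      - 8000:8000
    networks:
      - shared-net
networks:
  shared-net:
    external: true
In app1.yml, attach app1 to the same external network and set APP2_URL=http://app2:8000; Docker's embedded DNS then resolves app2 to the right container no matter which compose project started it.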
I followed this guide (https://www.jeffgeerling.com/blog/2021/monitor-your-internet-raspberry-pi) by Jeff Geerling to install an internet monitoring dashboard using prometheus and grafana running in docker containers.
Everything works great, but I noticed that the data is getting deleted after 15 days. After a quick search I found out that this is the default setting for the storage retention in prometheus.
I tried a lot by myself but I cannot find a way to change this setting.
I also found this tutorial (https://mkezz.wordpress.com/2017/11/13/prometheus-command-line-flags-in-docker-service/), which as far as I can tell should tackle exactly the problem I have, but it doesn't work: I get the error Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again. when running the first command mentioned, since that tutorial assumes Docker Swarm.
I also found this question (Increasing Prometheus storage retention), but I cannot use the accepted answer because my Prometheus runs in a docker container.
Is there an easy way to set a command-line flag for Prometheus, something like --storage.tsdb.retention.time=30d?
This is the ReadMe-File I downloaded when I first installed it:
# Internet Monitoring Docker Stack with Prometheus + Grafana
> This repository is a fork from [maxandersen/internet-monitoring](https://github.com/maxandersen/internet-monitoring), tailored for use on a Raspberry Pi. It has only been tested on a Raspberry Pi 4 running Pi OS 64-bit beta.
Stand-up a Docker [Prometheus](http://prometheus.io/) stack containing Prometheus, Grafana with [blackbox-exporter](https://github.com/prometheus/blackbox_exporter), and [speedtest-exporter](https://github.com/MiguelNdeCarvalho/speedtest-exporter) to collect and graph home Internet reliability and throughput.
## Pre-requisites
Make sure Docker and [Docker Compose](https://docs.docker.com/compose/install/) are installed on your Docker host machine.
## Quick Start
```
git clone https://github.com/geerlingguy/internet-monitoring
cd internet-monitoring
docker-compose up -d
```
Go to [http://localhost:3030/d/o9mIe_Aik/internet-connection](http://localhost:3030/d/o9mIe_Aik/internet-connection) (change `localhost` to your docker host ip/name).
## Configuration
To change what hosts you ping you change the `targets` section in [/prometheus/pinghosts.yaml](./prometheus/pinghosts.yaml) file.
For speedtest, the only relevant configuration is how often you want the check to happen. It is every 30 minutes by default, which might be too much if you have a download limit. This is changed by editing `scrape_interval` under `speedtest` in [/prometheus/prometheus.yml](./prometheus/prometheus.yml).
Once configurations are done, run the following command:
$ docker-compose up -d
That's it. docker-compose builds the entire Grafana and Prometheus stack automagically.
The Grafana Dashboard is now accessible via: `http://<Host IP Address>:3030` for example http://localhost:3030
username - admin
password - wonka (Password is stored in the `config.monitoring` env file)
The DataSource and Dashboard for Grafana are automatically provisioned.
If all works, it should be available at http://localhost:3030/d/o9mIe_Aik/internet-connection - if no data shows up, try changing the time duration to something smaller.
<center><img src="images/dashboard.png" width="4600" height="500"></center>
## Interesting urls
http://localhost:9090/targets shows the status of monitored targets as seen from prometheus - in this case, which hosts are being pinged, plus the speedtest. Note: speedtest will take a while before it shows as UP, as it takes about 30s to respond.
http://localhost:9090/graph?g0.expr=probe_http_status_code&g0.tab=1 shows the prometheus value for `probe_http_status_code` for each host. You can edit/play with additional values. Useful to check everything is okay in prometheus (in case Grafana is not showing the data you expect).
http://localhost:9115 is the blackbox exporter endpoint. Lets you see what has failed/succeeded.
http://localhost:9798/metrics is the speedtest exporter endpoint. It takes about 30 seconds to show its result, as it runs an actual speedtest when requested.
## Thanks and a disclaimer
Thanks to @maxandersen for making the original project this fork is based on.
Thanks to @vegasbrianc for his work on making a [super easy docker](https://github.com/vegasbrianc/github-monitoring) stack for running prometheus and grafana.
This setup is not secured in any way, so please only use on non-public networks, or find a way to secure it on your own.
After further tinkering with it, I found the docker-compose.yml file and simply added --storage.tsdb.retention.time=30d under the command section of the prometheus service, as shown here:
version: "3.1"
volumes:
prometheus_data: {}
grafana_data: {}
networks:
front-tier:
back-tier:
services:
prometheus:
image: prom/prometheus:v2.25.2
restart: always
volumes:
- ./prometheus/:/etc/prometheus/
- prometheus_data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--web.console.libraries=/usr/share/prometheus/console_libraries'
- '--web.console.templates=/usr/share/prometheus/consoles'
- '--storage.tsdb.retention.time=30d'
ports:
- 9090:9090
links:
- ping:ping
- speedtest:speedtest
networks:
- back-tier
grafana:
image: grafana/grafana
restart: always
volumes:
- grafana_data:/var/lib/grafana
- ./grafana/provisioning/:/etc/grafana/provisioning/
depends_on:
- prometheus
ports:
- 3030:3000
env_file:
- ./grafana/config.monitoring
networks:
- back-tier
- front-tier
ping:
tty: true
stdin_open: true
expose:
- 9115
ports:
- 9115:9115
image: prom/blackbox-exporter
restart: always
volumes:
- ./blackbox/config:/config
command:
- '--config.file=/config/blackbox.yml'
networks:
- back-tier
speedtest:
tty: true
stdin_open: true
expose:
- 9798
ports:
- 9798:9798
image: miguelndecarvalho/speedtest-exporter
restart: always
networks:
- back-tier
nodeexp:
privileged: true
image: prom/node-exporter
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
ports:
- 9100:9100
restart: always
command:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- --collector.filesystem.ignored-mount-points
- "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)"
networks:
- back-tier
Then, after running docker-compose create followed by docker start internet-monitoring_prometheus_1, I can see 30 days under Storage Retention at [Hostname of Server]:9090/status.
Is that the way it should be done? I think I found my solution.
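For what it's worth, docker-compose up -d should achieve the same in one step, since it only recreates containers whose configuration changed:
docker-compose up -d   # recreates just the prometheus container after the command change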
This question already has an answer here:
Build a single image based on docker compose containers
(1 answer)
Closed 9 months ago.
I have an application composed of a front end, a back end, and a MongoDB database, each dockerized in its own container. When I build them with docker compose, I have as many images as parts in my application (3).
Is there any way to build a single container from these 3 images, and therefore a single image?
Thanks
You can write a Dockerfile if you want to run your application as a single container; it will give you a single image as well.
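A minimal sketch of that approach (illustrative only: the base image, package names, and paths here are assumptions, and running several processes in one container is generally discouraged) installs all three pieces and starts them under supervisord:
# hypothetical all-in-one Dockerfile; adjust packages and paths to your stack
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y supervisor nodejs npm mongodb
COPY frontend/build /srv/frontend        # pre-built static front end
COPY backend /srv/backend                # back-end source
RUN cd /srv/backend && npm install --production
# supervisord.conf declares [program:mongod] and [program:backend] entries
COPY supervisord.conf /etc/supervisor/conf.d/app.conf
EXPOSE 80 8000
CMD ["supervisord", "-n"]
You lose per-service restarts, logs, and scaling by doing this, which is why the compose-based approach in the next answer is usually preferred.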
I guess you could do this if you really wanted to, but the preferred way is to use docker-compose for this. I would suggest that you create a docker-compose.yml file that sets up something like this:
nginx->frontend (possibly with server side rendering) -> backend -> mongodb
The idea behind docker-compose is to easily get a multi-container application up and running using a docker-compose.yml file; then you can just bring up the application with:
$ docker-compose up
You could set it up with something like this:
(This is a hypothetical docker-compose.yml file, but with your correct values it should work. Let me know if you have any questions.)
version: '2'
services:
  frontend-container:
    image: frontend:latest
    links:
      - backend-container
    environment:
      - DEBUG=True
      - BASE_HOST=http://backend-container:8000/
    restart: always
  backend-container:
    image: nodejs-backend:latest
    links:
      - mongodb
    environment:
      - NODE_ENV=production
      - BASE_HOST=http://django-container:8000/
    restart: always
  mongodb:
    image: mongo:latest
    container_name: "mongodb"
    environment:
      - MONGO_DATA_DIR=/data/db
      - MONGO_LOG_DIR=/dev/null
    volumes:
      - ./data/db:/data/db
    command: mongod --smallfiles --logpath=/dev/null
  nginx-container:
    image: nginx-container-custom-config:latest
    links:
      - frontend-container
    ports:
      - "80:80"
I have a docker-based system that comprises three containers:
1. The official PHP container, modified with some additional pear libs
2. mysql:5.7
3. alterrebe/postfix-relay (a postfix container)
The official PHP container has a volume that is linked to the host system's code repository, which should in theory allow me to work on this application the same as I would if it were hosted "locally".
However, every time I change the code, I have to run
docker-compose stop && docker-compose up -d
in order to see the changes I just made to the system. It's possible that I don't understand Docker correctly and this is by design, but stopping and starting the containers after every code change slows down development substantially. Can anyone tell me what I am doing wrong (if anything)? Thanks in advance.
My docker-compose.yml is below (with variables and whatnot hidden, of course).
web:
  build: .
  links:
    - mysql
    - mailrelay
  environment:
    - HIDDEN_VAR=placeholder
    - ABC_ENV=development
  volumes:
    - ./html/:/var/www/html/
  ports:
    - "0.0.0.0:80:80"
mysql:
  image: mysql:5.7
  environment:
    - MYSQL_ROOT_PASSWORD=abcdefg
    - MYSQL_DATABASE=thedatabase
  volumes:
    - .:/db/:ro
mailrelay:
  hostname: mailrelay
  image: alterrebe/postfix-relay
  ports:
    - "25:25"
  environment:
    - EXT_RELAY_HOST=relay.relay.com
    - EXT_RELAY_PORT=25
    - SMTP_LOGIN=CLASSIFIED
    - SMTP_PASSWORD=ABCDEFGHIK
    - ACCEPTED_NETWORKS=172.0.0.0/8
Eventually I just started running
docker stop {{ container name }} && docker start {{ container name }}
every time instead of docker-compose. Using Docker directly instead of docker-compose is super fast (< 1 second as opposed to over a minute), so it stopped being a big deal.
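As a side note, docker-compose can also restart a single service in place, which should be about as fast (this assumes the service name web from the compose file above):
docker-compose restart web
restart stops and starts the existing container without recreating it, so it skips the slower teardown that docker-compose stop && docker-compose up -d performs.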