I have a weird problem: everything seems to have been working fine until today, and I can't tell what's changed since then. I run docker-compose up --build --force-recreate and the build fails, saying that it can't resolve the host name.
The issue is specifically caused by curl commands inside one of the Dockerfiles:
USER logstash
WORKDIR /usr/share/logstash
RUN ./bin/logstash-plugin install logstash-input-beats
WORKDIR /tmp
COPY templates/winlogbeat.template.json winlogbeat.template.json
COPY templates/metricbeat.template.json metricbeat.template.json
RUN curl -XPUT -H 'Content-Type: application/json' http://elasticsearch:9200/_template/metricbeat-6.3.2 -d@metricbeat.template.json
RUN curl -XPUT -H 'Content-Type: application/json' http://elasticsearch:9200/_template/winlogbeat-6.3.2 -d@winlogbeat.template.json
Originally, I had those commands running inside the Elasticsearch container, but it stopped working, reporting Could not resolve host: elasticsearch; Unknown error.
I thought maybe it was trying to do the RUN commands too soon, so I moved the process to the Logstash container, but the issue remains. Logstash depends on Elasticsearch, so Elasticsearch should be up and running by the time the Logstash container runs these commands.
I've tried deleting images, containers, the network, etc., but nothing lets me run these curl commands during the build process.
I'm thinking that perhaps the Docker daemon is caching DNS names, but I can't figure out how to reset it, as I've already deleted and recreated the network several times.
Can anyone offer any ideas?
Host: Ubuntu Server 18.04
SW: Docker-CE (current version)
ELK stack: All are the official 6.3.2 images provided by Elastic.
docker-compose.yml:
version: '2'
services:
  elasticsearch:
    build:
      context: elasticsearch/
    volumes:
      - esdata:/usr/share/elasticsearch/data
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    # ports:
    #   - "9200:9200"
    #   - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms512m"
      HOSTNAME: "elasticsearch"
    networks:
      - elk
  logstash:
    build:
      context: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5000:5000"
      - "5044:5044"
      - "5045:5045"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    build:
      context: kibana/
    volumes:
      - ./kibana/config/:/usr/share/kibana/config:ro
    # Port 5601 is not exposed outside of the container
    # Can be accessed through Nginx Reverse Proxy only
    # ports:
    #   - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch
  nginx:
    build:
      context: nginx/
    environment:
      - APPLICATION_URL=http://docker.local
    volumes:
      - ./nginx/conf.d/:/etc/nginx/conf.d:ro
    ports:
      - "80:80"
    networks:
      - elk
    depends_on:
      - elasticsearch
  fouroneone:
    build:
      context: fouroneone/
    # No direct access, only through Nginx Reverse Proxy
    # ports:
    #   - "8181:80"
    networks:
      - elk
    depends_on:
      - elasticsearch
networks:
  elk:
    driver: bridge
volumes:
  esdata:
Running curl against elasticsearch during the build is the wrong shortcut: RUN commands execute at image build time, before the Compose network exists, so the name elasticsearch cannot resolve and the service may not even be up. The Dockerfile may be the wrong place for this altogether.
Also, I would not put this script in the Dockerfile; at most I would use it to alter the ENTRYPOINT for the image if I really wanted to stay in the Dockerfile (again, I would not advise it).
Best to do here is to have a logstash service in the compose file whose image only adds the updated input plugin, and to remove all the rest of the lines from the Dockerfile. You could then have a logstash_setup service that does the setup bits, using the logstash image, or, even cleaner, a basic centos image, which has bash and curl installed, since all you do is run a couple of curl commands passing some files. (A compose sketch of this setup service follows the script below.)
The script I am talking about might look something like this:
#!/bin/bash
set -euo pipefail

es_url=http://elasticsearch:9200

# Wait for Elasticsearch to start up before doing anything.
until curl -s "$es_url" -k -o /dev/null; do
    sleep 1
done

# then put your code here
curl -XPUT -H 'Content-Type: application/json' http://elasticsearch:9200/_ ...
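A minimal compose sketch of that logstash_setup service, assuming the wait-and-curl script above is saved as setup.sh next to the compose file and the template files live in ./templates (these names are illustrative):

  logstash_setup:
    image: centos:7
    volumes:
      - ./setup.sh:/setup.sh:ro
      - ./templates:/templates:ro
    entrypoint: ["/bin/bash", "/setup.sh"]
    networks:
      - elk
    depends_on:
      - elasticsearch

Because this runs as a normal service rather than at build time, it is attached to the elk network and can actually resolve elasticsearch.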
I have two docker containers. I'm trying to curl between one and the other container.
I've set up a common network.
Here is my docker-compose:
version: '3.8'
services:
  web:
    build: ./webapp/.
    container_name: storyweb
    command: python3 /app/manage.py runserver 0.0.0.0:8000
    volumes:
      - ./webapp/:/app/
    ports:
      - 8007:8000
    networks:
      - backend
    env_file:
      - ./.env.dev
  novnc:
    container_name: dramatica
    build: ./dramatica/.
    environment:
      # Adjust to your screen size
      - DISPLAY_WIDTH=1200
      - DISPLAY_HEIGHT=800
      - PUID=1000
      - PGID=1000
    ports:
      - "8008:8080"
    volumes:
      - ./dramatica:/app
    networks:
      - backend
    restart: unless-stopped
    depends_on:
      - web
    entrypoint: /app/entrypoint.sh
networks:
  backend:
    driver: bridge
I'm trying to curl from inside the novnc container:
curl -i web:8007
I've also tried storyweb.
Any idea why this is not working?
When your server container runs a server process
python3 /app/manage.py runserver 0.0.0.0:8000
connections between containers need to go to that port, 8000. ports: aren't used for container-to-container connections and can be removed if you don't need the service to be directly accessible from outside Docker.
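So from inside the novnc container, something like this should work (assuming the dev server is listening on 8000 as above):
curl -i http://web:8000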
If you are trying the command curl -i web:8007 from your host machine, it will not work; the service name only resolves between containers. From the host, use the published port:
curl -i localhost:8007
From inside a container, target the container port instead:
docker-compose exec novnc curl -i web:8000
I have a Docker image on a GitLab registry.
When I (after logging in on a target machine) run
docker run -d -p 8081:8080/tcp gitlab.somedomain.com:5050/root/app
the Laravel app is available, running, and reachable. Things like php artisan config:clear are working, and when I enter the container everything looks fine.
But I don't have any services running alongside it. So I had the idea to create a yml file, docker-compose-gitlab.yml, for docker-compose to set things up:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    volumes:
      - .:/application
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
Calling docker-compose --verbose -f docker-compose-gitlab.yml up shows me that the mysql service is created and working; the app also seems to be created, but then fails, exiting with code 0 and no further message.
If I add commands to my yml, like php artisan config:clear, the error gets even less clear to me: it says it cannot find artisan, and it seems as if the command is executed outside the container, exiting with code 1. (artisan is a helper executed via php.)
When I call docker-compose with -d and then run docker ps, I can only see mysql running, but not the app.
When I use both strategies, the problem is that the two containers do not share a common network and so cannot work together.
What did I miss? Is this the wrong strategy?
The problem was that I had a leftover volumes directive, which overwrote my entire application with an empty directory.
You can just leave that out:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    ## volumes:
    ##   - .:/application  ## this would overwrite the app
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
You can debug the containers' networking by listing the networks with docker network ls,
then, when the list is shown, inspecting the Compose network with docker network inspect <ComposeNetworkID>.
Once you are sure that your services are not in the same network, remove your containers and recreate them with docker-compose -f docker-compose-gitlab.yml up.
If you notice they are in the same network, try using the container name instead of localhost to reach each other.
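For example (the network name here is illustrative; Compose derives it from the project directory name):

docker network ls
docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' myproject_default

The inspect command prints the names of the containers attached to that network, so you can see at a glance whether both services landed on it.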
I have an application running on a server. The server rebooted, and some Docker services managed to restart while others did not.
docker-compose ps:
Name Command State Ports
------------------------------------------------------------------------------------------------------------
elasticsearch /usr/local/bin/docker-entr ... Up 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp
kibana sh -c ./bin/kibana-plugin ... Restarting
logstash /usr/local/bin/docker-entr ... Up 5044/tcp, 9600/tcp
If I try to see the logs of kibana with docker logs kibana:
Plugin kbn_radar already exists, please remove before installing a new version
Found previous install attempt. Deleting...
Attempting to transfer from file:///usr/share/kibana/config/kbn_radar.zip
Transferring 3686700 bytes....................
Transfer complete
Retrieving metadata from plugin archive
Extracting plugin archive
The problem is: kbn_radar takes a long time to install on every restart, so I want to restart the kibana service without needing to restart the other applications. I've tried to change my .yml file where I run the commands to install the plugins:
kibana:
  image: docker.elastic.co/kibana/kibana:6.8.0
  command:
    - sh
    - -c
    - './bin/kibana-plugin install file:///usr/share/kibana/config/kbn_radar.zip && ./bin/kibana-plugin install file:///usr/share/kibana/config/ob-kb-funnel-6.8.zip && exec /usr/local/bin/kibana-docker'
So in the end, my docker compose was:
docker-compose.yml:
version: "3"
networks:
  elasticsearch-net-624:
services:
  elasticsearch-products-624-service:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
    container_name: elasticsearch
    restart: always
    networks:
      - elasticsearch-net-624
    ports:
      - "9200:9200"
      - "9300:9300"
    expose:
      - "9200"
    volumes:
      - /home/docker/elastic.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /home/docker/elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
      - /docker/elastic/data:/usr/share/elasticsearch/data
      - /docker/elastic/data/snapshots:/usr/share/elasticsearch/data/snapshots
  kibana:
    image: docker.elastic.co/kibana/kibana:6.8.0
    command:
      - sh
      - -c
      - 'exec /usr/local/bin/kibana-docker'
    container_name: kibana
    restart: always
    hostname: kibana
    networks:
      - elasticsearch-net-624
    environment:
      - SERVER_NAME=kibana.localhost
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - ELASTICSEARCH_HOST=elasticsearch
      - ELASTICSEARCH_PORT=9200
      - XPACK_GRAPH_ENABLED=true
      - XPACK_WATCHER_ENABLED=true
      - XPACK_ML_ENABLED=true
      - XPACK_MONITORING_ENABLED=true
      - XPACK_MONITORING_UI_CONTAINER_ELASTICSEARCH_ENABLED
    ports:
      - "5601:5601"
    expose:
      - "5601"
    links:
      - elasticsearch-products-624-service
    depends_on:
      - elasticsearch-products-624-service
    volumes:
      - /home/docker/kibana.yml:/usr/share/kibana/config/kibana.yml
      - /home/docker/ob-kb-funnel-6.8.zip:/usr/share/kibana/config/ob-kb-funnel-6.8.zip
      - /home/docker/kbn_radar.zip:/usr/share/kibana/config/kbn_radar.zip
      - /home/morpheus/docker/dashboard_app.js:/usr/share/kibana/src/legacy/core_plugins/kibana/public/dashboard/dashboard_app.js
  logstash:
    image: docker.elastic.co/logstash/logstash:6.8.0
    container_name: logstash
    restart: always
    volumes:
      - /home/docker/logstash.yml:/usr/share/logstash/config/logstash.yml
Finally I've tried to restart the service:
docker-compose -f docker-kibana.yml restart kibana
But the service keeps trying to reinstall the plugins, and if I run docker-compose ps, the command still shows "sh -c ./bin/kibana-plugin ..."
How could I restart docker service with another command? Or restart my service without restarting the plugin that already exists?
I recommend that you install the plugins in an image build rather than doing everything at container start.
A simple Dockerfile to fix your issue would look something like this:
FROM docker.elastic.co/kibana/kibana:6.8.0
COPY ob-kb-funnel-6.8.zip kbn_radar.zip /usr/share/kibana/config/
RUN ./bin/kibana-plugin install file:///usr/share/kibana/config/kbn_radar.zip && \
    ./bin/kibana-plugin install file:///usr/share/kibana/config/ob-kb-funnel-6.8.zip
ENTRYPOINT /usr/local/bin/kibana-docker
Next you need docker-compose to build your image. We can do that by updating your service definition:
kibana:
  build:
    context: ./kibana
  container_name: kibana
  restart: always
  hostname: kibana
  networks:
    - elasticsearch-net-624
  environment:
    - SERVER_NAME=kibana.localhost
    - ELASTICSEARCH_URL=http://elasticsearch:9200
    - ELASTICSEARCH_HOST=elasticsearch
    - ELASTICSEARCH_PORT=9200
    - XPACK_GRAPH_ENABLED=true
    - XPACK_WATCHER_ENABLED=true
    - XPACK_ML_ENABLED=true
    - XPACK_MONITORING_ENABLED=true
    - XPACK_MONITORING_UI_CONTAINER_ELASTICSEARCH_ENABLED
  ports:
    - "5601:5601"
  expose:
    - "5601"
  links:
    - elasticsearch-products-624-service
  depends_on:
    - elasticsearch-products-624-service
  volumes:
    - /home/docker/kibana.yml:/usr/share/kibana/config/kibana.yml
    - /home/morpheus/docker/dashboard_app.js:/usr/share/kibana/src/legacy/core_plugins/kibana/public/dashboard/dashboard_app.js
As you can see in the service definition, we replaced image with build. We assume that your Dockerfile for kibana resides in a folder called kibana, which also contains your plugin zip files.
Next you can run docker-compose build and it will build the required images for your compose stack.
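For example, using the compose file name from the question:

docker-compose -f docker-kibana.yml build kibana
docker-compose -f docker-kibana.yml up -d kibana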
The problem is that when you run a docker-compose or a docker stack, a context is created with all the initial data. If you later change this data (the command, in your case), it will not take effect unless you recreate the whole context, that is, unless you bring the docker-compose or stack down and up again.
However, you might try your luck with the following:
Edit the compose file with the command you want to run now.
Remove the kibana container entirely. I mean, don't try to restart kibana with docker-compose, but remove the container: docker rm -f dir_kibana
Run docker-compose up again. It should detect that kibana is missing and run it again.
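Put together, that might look like this (kibana is the container_name set in the compose file above, so the default dir_kibana name does not apply here):

docker rm -f kibana
docker-compose -f docker-kibana.yml up -d kibana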
I've been developing a service locally against localstack. I've just been running their Docker image via docker run --rm -p 4567-4583:4567-4583 -p 8080:8080 localstack/localstack
And then I manually run a small script to set up my S3 buckets, SQS queues, etc.
Now, I'd like to make this easier for others so I thought I'd just add a Dockerfile and docker-compose.yml file. Unfortunately, when I try to get this up and running, using docker-compose up I get an error that the command from my setup script can't connect to the localstack services.
make_bucket failed: s3://localbucket Could not connect to the endpoint URL: "http://localhost:4572/localbucket"
Dockerfile:
FROM localstack/localstack
# since this is just local dev setup, localstack doesn't require
# anything specific here.
ENV AWS_DEFAULT_REGION='[useast1]'
ENV AWS_ACCESS_KEY_ID='[lloyd]'
ENV AWS_SECRET_ACCESS_KEY='[christmas]'
COPY bin/localSetup.sh /localSetup.sh
COPY fixtures/notifications.json /notifications.json
RUN ["chmod", "+x", "/localSetup.sh"]
RUN pip install awscli
# expose service & web dashboard ports
EXPOSE 4567-4582 8080
ENTRYPOINT ["/localSetup.sh"]
docker-compose.yml
version: '3'
services:
  localstack:
    build: .
    ports:
      - "8080:8080"
      - "4567-4582:4567-4582"
localSetup.sh
#!/bin/bash
aws --endpoint-url=http://localhost:4572 s3 mb s3://localbucket
#additional similar calls but left off for brevity
I've tried switching localhost to 127.0.0.1 in my script commands, but I wind up with the same error. I'm probably missing something silly here.
There is another way to create your custom AWS resources when localstack freshly starts up. Since you already have a bash script for your resources, you can simply volume-mount your script into the init directory: /docker-entrypoint-initaws.d/ on older localstack images, /etc/localstack/init/ready.d/ on newer ones.
So my docker-compose file would be:
localstack:
  image: localstack/localstack:latest
  container_name: localstack_aws
  ports:
    - '4566:4566'
  volumes:
    - './localSetup.sh:/etc/localstack/init/ready.d/init-aws.sh'
Also, I would prefer awslocal over aws --endpoint in the bash script, as it takes care of the credentials and the endpoint for you.
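With awslocal, the setup script could shrink to something like this (awslocal ships in the localstack image):

#!/bin/bash
# create the bucket against whatever endpoint localstack is serving
awslocal s3 mb s3://localbucket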
Try adding hostname to the docker-compose file and editing your entrypoint file to reflect that hostname.
docker-compose.yml
version: '3'
services:
  localstack:
    build: .
    hostname: localstack
    ports:
      - "8080:8080"
      - "4567-4582:4567-4582"
localSetup.sh
#!/bin/bash
aws --endpoint-url=http://localstack:4572 s3 mb s3://localbucket
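From the host you can then verify the bucket through the published S3 port, e.g. (port taken from the mappings above):

aws --endpoint-url=http://localhost:4572 s3 ls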
This was the docker-compose-dev.yaml I used for testing an app that was using localstack. I brought it up with docker-compose -f docker-compose-dev.yaml up, and I used the same localSetup.sh you did.
version: '3'
services:
  localstack:
    image: localstack/localstack
    hostname: localstack
    ports:
      - "4567-4584:4567-4584"
      - "${PORT_WEB_UI-8082}:${PORT_WEB_UI-8082}"
    environment:
      - SERVICES=s3
      - DEBUG=1
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
    networks:
      - backend
  sample-app:
    image: "sample-app/sample-app:latest"
    networks:
      - backend
    links:
      - localstack
    depends_on:
      - "localstack"
networks:
  backend:
    driver: 'bridge'
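Once the stack is up, you can sanity-check name resolution from the app container, assuming curl is available in the sample-app image:

docker-compose -f docker-compose-dev.yaml exec sample-app curl -s http://localstack:4572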
Currently, I have set up nginx and php-fpm using docker-compose, as shown below. Nginx connects to php:9000; however, the web server gives me a 502 Bad Gateway error. I assume the docker-compose command didn't link nginx and php-fpm correctly.
Any advice or suggestions? Thanks in advance.
version: '2'
services:
  nginx:
    container_name: nginx
    image: wb/truckup-nginx:latest # private repo
    #network_mode: 'bridge'
    depends_on:
      - php
    volumes_from:
      - php
    ports:
      - 443:443
      - 80:80
    links:
      - php
  php:
    container_name: php
    ports:
      - 9000:9000
    image: wb/truckup-app:0.1 # private repo
    #environment:
      #MYSQL_HOST: mysql
      #MYSQL_USER: root
      #MYSQL_PASSWORD: passme
      #MYSQL_DATABASE: database
    # can access the data volume directory inside the data volume container
    volumes:
      - /home/*:/home/*
I think your compose file is correct. To check whether the link actually works, you can exec into your nginx container
docker exec -it nginx bash
and ping the php container using
ping php
If everything is OK, recheck your image.
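If the name resolves but the 502 persists, the usual suspect is the FastCGI block inside the nginx image. A minimal sketch of what it should contain (paths are illustrative, not taken from your config):

location ~ \.php$ {
    fastcgi_pass php:9000;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}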