I have a docker-compose.yaml as shown below.
My problem is that stdout prints, and logs written to stdout, do appear in Graylog, but only those produced by the main command, /usr/bin/tini -- foo1.start.
When I enter the service's container using:
docker exec -it [container_hash] bash
and run a command such as echo "Hello", or a Python script that does import sys; sys.stdout.write("Hello again"), none of these messages appear among the received messages in the Graylog UI.
Any idea why stdout is not collected when a shell command or script is executed inside the container, and why only the output of what runs as a result of the make command is collected?
I don't understand this behavior, since I routed all stdout to the gelf log driver in docker-compose.
Edit: the instructions for using Graylog in Compose are from here.
version: '3.4'
services:
foo1:
ports:
- target: 8081
published: 8084
mode: host
networks:
- dev-net
command: make foo1.start
logging:
driver: gelf
options:
gelf-address: udp://localhost:12201
some-mongo:
image: "mongo:3"
networks:
- dev-net
some-elasticsearch:
image: "elasticsearch:2"
command: "elasticsearch -Des.cluster.name='graylog'"
networks:
- dev-net
graylog:
image: graylog2/server:2.1.1-1
environment:
GRAYLOG_PASSWORD_SECRET: somepasswordpepper
GRAYLOG_ROOT_PASSWORD_SHA2: 8c6976e5b5410415bde908bd4dee15dfb1
GRAYLOG_WEB_ENDPOINT_URI: http://127.0.0.1:9000/api
links:
- some-mongo:mongo
- some-elasticsearch:elasticsearch
ports:
- "9000:9000"
- "12201:12201/udp"
networks:
- dev-net
networks:
dev-net:
ipam:
config:
- subnet: 192.168.12.0/24
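For context on the behavior described above: the gelf logging driver only receives the stdout/stderr of the container's main process (PID 1, here started by /usr/bin/tini); output from a docker exec session is attached to that session's own terminal and never reaches the log driver. A commonly used workaround, sketched below under the assumption of a Linux container with procfs available, is to write to the main process's file descriptors explicitly:
# Inside the shell opened by "docker exec -it [container_hash] bash":
# write to PID 1's stdout so the gelf driver (and Graylog) picks it up.
echo "Hello" > /proc/1/fd/1

# Same idea from Python:
python -c 'open("/proc/1/fd/1", "w").write("Hello again\n")'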
Suppose I have two YAML files which I run via docker-compose up.
efk.yml:
version: '3.3'
services:
fluentd:
image: fluentd
...
volumes:
- ./fluentd/etc:/fluentd/etc
depends_on:
- elasticsearch
ports:
- "24224:24224"
- "24224:24224/udp"
elasticsearch:
image: amazon/opendistro-for-elasticsearch:1.13.3
expose:
- 9200
ports:
- "9200:9200"
environment:
- "discovery.type=single-node"
kibana:
...
logging:
driver: "fluentd"
options:
fluentd-address: localhost:24224
tag: micro.kibana
(I've omitted some irrelevant parts; only the logging configuration matters here.)
app.yml:
version: '3.3'
services:
mysql:
image: mysql
..
logging:
driver: "fluentd"
options:
fluentd-address: fluentd:24224
# fluentd-async-connect: "true"
tag: micro.db
networks:
- default
app:
...
logging:
driver: "fluentd"
options:
fluentd-address: fluentd:24224
# fluentd-async-connect: "true"
tag: micro.php-fpm
networks:
- default
networks:
default:
external:
name: efk_default
I plan to launch the EFK stack first:
docker-compose -p efk -f efk.yml up -d
and then:
docker-compose -p app -f app.yml up -d
I assume that the bridge network efk_default will be created and that I can access it from the app stack (see app.yml for details). But the app stack cannot resolve fluentd:24224 on the bridge network; I get the following error from the command above for app:
ERROR: for app Cannot start service app: failed to initialize logging driver: dial tcp: lookup fluentd: Temporary failure in name resolution
ERROR: Encountered errors while bringing up the project.
If I use something naive like localhost:24224 just to get it to launch, docker network inspect shows all the containers on one network. I have also tried using the container's IP address on the bridge network, but that didn't work either.
Is it possible to have a common logging service with this configuration?
If yes, what am I doing wrong?
Thanks in advance.
Here's what I did to test it:
compose1.yml
version: '3'
services:
app1:
image: nginx
compose2.yml
version: '3'
services:
app2:
image: curlimages/curl
command: http://app1/
networks:
default:
external:
name: efk_default
Commands run:
docker-compose -p efk -f compose1.yml up -d
docker-compose -p efk -f compose2.yml up
and it outputs the Nginx welcome page.
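One caveat worth noting: the curl test above only shows that container-to-container DNS works on the shared network. The fluentd logging driver, however, connects from the Docker daemon on the host rather than from inside the container's network namespace, so a service name such as fluentd:24224 is generally not resolvable for it. A sketch of the logging options using the port that efk.yml already publishes on the host (the async option is optional and only defers the connection):
logging:
  driver: "fluentd"
  options:
    # The daemon connects from the host, so use the published port.
    fluentd-address: localhost:24224
    # Optional: don't fail container startup if fluentd isn't reachable yet.
    fluentd-async-connect: "true"
    tag: micro.db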
I am launching containers via docker-compose, but 2 out of 3 containers fail with: "exec user process caused "exec format error"".
The error occurs while executing a file placed at /opt/whatsapp/bin/wait_on_postgres.sh; I need to add #!/bin/bash at the top of this file.
The problem is that the container exits almost immediately, so how can I access this file to make the necessary changes?
Below is the docker-compose.yml I am using:
version: '3'
volumes:
whatsappMedia:
driver: local
postgresData:
driver: local
services:
db:
image: postgres:10.6
command: "-p 3306 -N 500"
restart: always
environment:
POSTGRES_PASSWORD: testpass
POSTGRES_USER: root
expose:
- "33060"
ports:
- "33060:3306"
volumes:
- postgresData:/var/lib/postgresql/data
network_mode: bridge
wacore:
image: docker.whatsapp.biz/coreapp:v${WA_API_VERSION:?Run docker-compose with env var WA_API_VERSION (ex. WA_API_VERSION=2.31.4 docker-compose <command> <options>)}
command: ["/opt/whatsapp/bin/wait_on_postgres.sh", "/opt/whatsapp/bin/launch_within_docker.sh"]
volumes:
- whatsappMedia:/usr/local/wamedia
env_file:
- db.env
environment:
# This is the version of the docker templates being used to run WhatsApp Business API
WA_RUNNING_ENV_VERSION: v2.2.3
ORCHESTRATION: DOCKER-COMPOSE
depends_on:
- "db"
network_mode: bridge
links:
- db
waweb:
image: docker.whatsapp.biz/web:v${WA_API_VERSION:?Run docker-compose with env var WA_API_VERSION (ex. WA_API_VERSION=2.31.4 docker-compose <command> <options>)}
command: ["/opt/whatsapp/bin/wait_on_postgres.sh", "/opt/whatsapp/bin/launch_within_docker.sh"]
ports:
- "9090:443"
volumes:
- whatsappMedia:/usr/local/wamedia
env_file:
- db.env
environment:
WACORE_HOSTNAME: wacore
# This is the version of the docker templates being used to run WhatsApp Business API
WA_RUNNING_ENV_VERSION: v2.2.3
ORCHESTRATION: DOCKER-COMPOSE
depends_on:
- "db"
- "wacore"
links:
- db
- wacore
network_mode: bridge
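As for inspecting /opt/whatsapp/bin/wait_on_postgres.sh even though the container exits immediately: one possible approach, sketched under the assumption that the image is already pulled locally (the container name wa_tmp is arbitrary), is to copy the file out of a created-but-not-started container, or to print it by overriding the entrypoint:
# Create the container without starting it, copy the script out, then clean up.
docker create --name wa_tmp docker.whatsapp.biz/coreapp:v${WA_API_VERSION}
docker cp wa_tmp:/opt/whatsapp/bin/wait_on_postgres.sh .
docker rm wa_tmp

# Or print the file directly by overriding the entrypoint:
docker run --rm --entrypoint cat docker.whatsapp.biz/coreapp:v${WA_API_VERSION} /opt/whatsapp/bin/wait_on_postgres.sh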
The problem was resolved by using a 64-bit guest OS image.
I was running this container on 32-bit CentOS, which was causing the error.
I am trying to run RabbitMQ along with the InfluxDB TICK stack using docker-compose. When I run RabbitMQ with this command: docker run -d --rm -p 5672:5672 -p 15672:15672 rabbitmq:3-management, both ports are open and I am able to access it from a remote machine. However, when I run RabbitMQ as part of a docker-compose file, it is not accessible from a remote machine. Here is my docker-compose.yml file:
version: "3.7"
services:
influxdb:
image: influxdb
volumes:
- ./influxdb/influxdb/data/:/var/lib/influxdb/
- ./influxdb/influxdb/config/:/etc/influxdb/
ports:
- "8086:8086"
rabbitmq:
image: rabbitmq:3-management
volumes:
- ./rabbitmq/data:/var/lib/rabbitmq
ports:
- "15672:15672"
- "5672:5627"
telegraf:
image: telegraf
volumes:
- ./influxdb/telegraf/config/:/etc/telegraf/
- /proc:/host/proc:ro
depends_on:
- "influxdb"
- "rabbitmq"
chronograf:
image: chronograf
volumes:
- ./influxdb/chronograf/data/:/var/lib/chronograf/
ports:
- "8888:8888"
depends_on:
- "telegraf"
More information: when I run this with docker-compose up -d, ports 8086 and 8888 are accessible from a remote machine (confirmed with nmap). Also, either way I am able to access the RabbitMQ management console at http://localhost:15672.
How can I set this up so that I can access RabbitMQ from a remote machine using docker-compose?
Thank you.
Looks like just a typo in the port mapping in docker-compose.yml: 5672:5627 should actually be 5672:5672.
Otherwise the docker-compose configuration looks just fine.
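For completeness, the corrected rabbitmq service would map the container's real AMQP port (host port on the left, container port on the right):
rabbitmq:
  image: rabbitmq:3-management
  volumes:
    - ./rabbitmq/data:/var/lib/rabbitmq
  ports:
    - "15672:15672"  # management UI
    - "5672:5672"    # AMQP; container port corrected from 5627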
I'm trying to run a script after Cassandra starts that will create the keyspace.
Here's my docker compose:
version: '3.6'
services:
cassandra:
container_name: cassandra
image: bitnami/cassandra:3.11.2
volumes:
- ./cassandra_data:/bitnami
- ./scripts/cassandra_init.sh:/cassandra_init.sh
environment:
- CASSANDRA_USER=${CASSANDRA_USERNAME}
- CASSANDRA_PASSWORD=${CASSANDRA_PASSWORD}
- CASSANDRA_CLUSTER_NAME=Testing
- CASSANDRA_PASSWORD_SEEDER=yes
entrypoint: ["/app-entrypoint.sh"]
command: ["nami","start","--foreground","cassandra","/cassandra_init.sh"]
volumes:
cassandra_data:
["nami","start","--foreground","cassandra"] starts Cassandra. If I start the container without adding my script, it works just fine.
However if I start the container including my script, I get this error after the container starts:
nami ERROR Unknown command '/cassandra_init.sh'
How can I achieve this?
I figured it out.
In docker-compose I had to mount the script as /init.sh and call it as the command:
version: '3.6'
services:
cassandra:
container_name: cassandra
image: bitnami/cassandra:3.11.2
volumes:
- ./cassandra_data:/bitnami
- ./scripts/cassandra_init.sh:/init.sh
environment:
- CASSANDRA_USER=${CASSANDRA_USERNAME}
- CASSANDRA_PASSWORD=${CASSANDRA_PASSWORD}
- CASSANDRA_CLUSTER_NAME=Testing
- CASSANDRA_PASSWORD_SEEDER=yes
entrypoint: ["/app-entrypoint.sh"]
command: ["/init.sh"]
volumes:
cassandra_data:
and the script should look like this:
#!/bin/bash
nami start cassandra
echo "script stuff here to run after cassandra starts"
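To make the placeholder concrete, a sketch of the keyspace-creation step could look like the following. The keyspace name testing and the replication settings are invented for illustration, and it assumes cqlsh is on the PATH in the bitnami/cassandra image and that CASSANDRA_USER / CASSANDRA_PASSWORD are the credentials passed in through the compose environment:
#!/bin/bash
nami start cassandra

# Wait until Cassandra accepts CQL connections before creating the keyspace.
until cqlsh -u "$CASSANDRA_USER" -p "$CASSANDRA_PASSWORD" -e "DESCRIBE KEYSPACES" > /dev/null 2>&1; do
  echo "Waiting for Cassandra to come up..."
  sleep 5
done

# Example keyspace; adjust the name and replication settings to your needs.
cqlsh -u "$CASSANDRA_USER" -p "$CASSANDRA_PASSWORD" -e "CREATE KEYSPACE IF NOT EXISTS testing WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};"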
I'm trying to run services (mongo) in swarm mode, with logs collected into Elasticsearch via fluentd. It worked(!) with:
docker-compose up
But when I deploy via a stack, the services start but logs are not collected, and I don't know how to find out the reason.
docker stack deploy -c docker-compose.yml env_staging
docker-compose.yml:
version: "3"
services:
mongo:
image: mongo:3.6.3
depends_on:
- fluentd
command: mongod
networks:
- webnet
logging:
driver: "fluentd"
options:
fluentd-address: localhost:24224
tag: mongo
fluentd:
image: zella/fluentd-es
depends_on:
- elasticsearch
ports:
- 24224:24224
- 24224:24224/udp
networks:
- webnet
elasticsearch:
image: elasticsearch
ports:
- 9200:9200
networks:
- webnet
kibana:
image: kibana
depends_on:
- elasticsearch
ports:
- 5601:5601
networks:
- webnet
networks:
webnet:
Update:
I removed fluentd-address: localhost:24224 and the problem was solved. But I don't understand what "localhost" is in this context, and why we can't set the host to "fluentd". If someone explains what fluentd-address is, I will accept their answer.
fluentd-address is the address where the fluentd daemon resides (the default is localhost, so you don't need to specify it in that case).
In your case (using a stack), your fluentd daemon will run on a node; you should reach that service using the service name (in your case fluentd; have you tried that?).
Remember to add fluentd-async-connect: "true" to your options.
Reference is at:
https://docs.docker.com/config/containers/logging/fluentd/#usage
You don't need to specify fluentd-address. When you set the logging driver to fluentd, Swarm automatically discovers the nearest fluentd instance and sends all of the desired container's stdout there.
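Putting the update and the answers together, a sketch of the mongo service's logging block for the stack deployment could look like this, relying on the driver's default of localhost:24224 (reachable because the stack publishes fluentd's port on every node) instead of the service name:
mongo:
  image: mongo:3.6.3
  command: mongod
  networks:
    - webnet
  logging:
    driver: "fluentd"
    options:
      # No fluentd-address: the driver defaults to localhost:24224 on the node.
      # Don't fail container startup if fluentd isn't up yet.
      fluentd-async-connect: "true"
      tag: mongo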