Suppose I have two YAML files which I run via docker-compose up.
efk.yml:
version: '3.3'
services:
  fluentd:
    image: fluentd
    ...
    volumes:
      - ./fluentd/etc:/fluentd/etc
    depends_on:
      - elasticsearch
    ports:
      - "24224:24224"
      - "24224:24224/udp"
  elasticsearch:
    image: amazon/opendistro-for-elasticsearch:1.13.3
    expose:
      - 9200
    ports:
      - "9200:9200"
    environment:
      - "discovery.type=single-node"
  kibana:
    ...
    logging:
      driver: "fluentd"
      options:
        fluentd-address: localhost:24224
        tag: micro.kibana
(I've omitted some irrelevant parts; only the logging configuration matters here.)
app.yml:
version: '3.3'
services:
  mysql:
    image: mysql
    ...
    logging:
      driver: "fluentd"
      options:
        fluentd-address: fluentd:24224
        # fluentd-async-connect: "true"
        tag: micro.db
    networks:
      - default
  app:
    ...
    logging:
      driver: "fluentd"
      options:
        fluentd-address: fluentd:24224
        # fluentd-async-connect: "true"
        tag: micro.php-fpm
    networks:
      - default
networks:
  default:
    external:
      name: efk_default
I plan to launch the EFK stack first:
docker-compose -p efk -f efk.yml up -d
and then:
docker-compose -p app -f app.yml up -d
I assume that the bridge network efk_default will be created and that I can access it from the app stack (see app.yml for details). But the app stack can't resolve fluentd:24224 on that bridge network; I get the following error from the command above for the app stack:
ERROR: for app Cannot start service app: failed to initialize logging driver: dial tcp: lookup fluentd: Temporary failure in name resolution
ERROR: Encountered errors while bringing up the project.
If I use something dumb like localhost:24224 just to make it launch, docker network inspect shows all containers in one network. I've also tried using the container's IP address on the bridge network, but that didn't work either.
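For reference, the inspection is roughly the following (the network name efk_default is derived from the -p efk project name):
docker network inspect efk_default
The Containers section of its output lists each attached container together with its bridge IP.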
Is it possible to have a common logging service with this configuration?
If yes, what am I doing wrong?
Thanks in advance.
Here's what I did to test it:
compose1.yml
version: '3'
services:
  app1:
    image: nginx
compose2.yml
version: '3'
services:
  app2:
    image: curlimages/curl
    command: http://app1/
networks:
  default:
    external:
      name: efk_default
Commands run:
docker-compose -p efk -f compose1.yml up -d
docker-compose -p efk -f compose2.yml up
and it outputs the Nginx welcome page.
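As an extra check (a sketch, assuming the stack from compose1.yml is still running), a one-off container attached to the same external network resolves the service name as well:
docker run --rm --network efk_default curlimages/curl http://app1/
It should print the same Nginx welcome page.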
Related
I'm trying to run two Docker containers attached to a single Docker network using Docker Compose.
I'm running into the following error when I run the containers:
Error response from daemon: failed to add interface veth5b3bcc5 to sandbox:
error setting interface "veth5b3bcc5" IP to 172.19.0.2/16:
cannot program address 172.19.0.2/16 in sandbox
interface because it conflicts with existing
route {Ifindex: 10 Dst: 172.19.0.0/16 Src: 172.19.0.1 Gw: <nil> Flags: [] Table: 254}
My docker-compose.yml looks like this:
version: '3'
volumes:
  dsn-redis-data:
    driver: local
  dsn-redis-conf:
    driver: local
networks:
  dsn-net:
    driver: bridge
services:
  duty-students-notifier:
    image: duty-students-notifier:latest
    network_mode: host
    container_name: duty-students-notifier
    build:
      context: ../
      dockerfile: ./docker/Dockerfile
    env_file: ../.env
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    networks:
      - dsn-net
    restart: always
  dsn-redis:
    image: redis:latest
    expose:
      - 5432
    volumes:
      - dsn-redis-data:/var/lib/redis
      - dsn-redis-conf:/usr/local/etc/redis/redis.conf
    networks:
      - dsn-net
    restart: always
Thanks!
The network_mode: host setting generally disables Docker networking, and can interfere with other options. In your case it looks like it might be trying to apply the networks: configuration to the host system network layer.
network_mode: host is almost never necessary, and deleting it may resolve this issue.
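A minimal sketch of the corrected service, keeping everything else from the posted file as-is:
services:
  duty-students-notifier:
    image: duty-students-notifier:latest
    # network_mode: host   # removed: it conflicts with the networks: block
    container_name: duty-students-notifier
    build:
      context: ../
      dockerfile: ./docker/Dockerfile
    env_file: ../.env
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    networks:
      - dsn-net
    restart: always
With network_mode gone, the service gets a normal endpoint on dsn-net and can reach dsn-redis by its service name.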
I'm trying to connect my custom web server to fluentd on Docker.
My docker-compose.yml is like below.
version: "2"
services:
  web:
    build:
      context: ..
      dockerfile: ./DockerTest/Dockerfile
    container_name: web
    depends_on: [ fluentd ]
    networks:
      test_net:
        ipv4_address: 172.20.10.1
    ports:
      - "8080:80"
    links:
      - fluentd
    logging:
      driver: fluentd
      options:
        fluentd-address: localhost:24224
        tag: "docker.{{.ID}}"
  fluentd:
    build:
      context: ./fluentd
      dockerfile: Dockerfile
    container_name: fluentd
    volumes:
      - ./fluentd/conf:/fluentd/etc
    networks:
      test_net:
        ipv4_address: 172.20.10.2
    ports:
      - "24224:24224"
      - "24224:24224/udp"
networks:
  test_net:
    ipam:
      config:
        - subnet: 172.20.0.0/16
When I run this for the first time, i.e. when the fluentd container is newly created, I get an error: Error response from daemon: failed to initialize logging driver: dial tcp [::1]:24224: connect: connection refused. At this point it works if I set fluentd-address: 172.20.10.2:24224 instead.
But when I run it again, i.e. when the existing fluentd container just moves back into RUNNING status, it works fine, and this time it does not work with fluentd-address: 172.20.10.2:24224.
I wonder why the fluentd address has to change depending on whether the container is newly created, and how I can solve this problem.
I don't know how to write the docker-compose equivalent of my command:
docker run -d --name=server --restart=always --net network --ip 172.18.0.5 -p 5003:80 -v $APP_PHOTO_DIR:/app/mysql-data -v $APP_CONFIG_DIR:/app/config webserver
I've done this:
version: '3'
services:
  server:
    image: app-dependencies
    ports:
      - "5003:80"
    volumes:
      - ./app:/app
    command: python /app/app.py
    restart: always
    networks:
      app_net:
        ipv4_address: 172.18.0.5
Are you sure you need an IP address for the container? It is not recommended practice; why do you want to set it explicitly?
docker-compose.yml
version: '3'
services:
  server:  # correct, this will be the service (container) name
    image: webserver  # this should be the image name from your command line
    ports:
      - "5003:80"  # correct, but only if you need to reach the service from outside
    volumes:  # the volumes just repeat your command line; you can use env vars
      - $APP_PHOTO_DIR:/app/mysql-data
      - $APP_CONFIG_DIR:/app/config
    command: ["python", "/app/app.py"]  # JSON notation strongly recommended
    restart: always
Then docker-compose up -d and that's it. You can access your service from the host at localhost:5003; there's no need for an internal IP.
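For example, once it's up you could verify from the host (assuming the app answers HTTP on container port 80, which the 5003:80 mapping implies):
docker-compose up -d
curl http://localhost:5003/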
For networks, I always include the network specification in the docker-compose file. If the network already exists, Docker will not create a new one.
version: '3'
services:
  server:
    image: app-dependencies
    ports:
      - "5003:80"
    volumes:
      - ./app:/app
    command: python /app/app.py
    restart: always
    networks:
      app_net:
        ipv4_address: 172.18.0.5
networks:
  app_net:
    name: NETWORK_NAME
    driver: bridge
    ipam:
      config:
        - subnet: NETWORK_SUBNET
volumes:
  VOLUME_NAME:
    driver: local
And you will need to add the volumes separately to match the docker run command.
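A sketch of that addition, reusing the environment variables from the original docker run command:
services:
  server:
    ...
    volumes:
      - $APP_PHOTO_DIR:/app/mysql-data
      - $APP_CONFIG_DIR:/app/config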
I have a docker-compose.yaml as below.
My problem is that stdout prints, and logs written to stdout, do appear in Graylog, but only those produced under the command /usr/bin/tini -- foo1.start.
When I enter the service's container using:
docker exec -it [container_hash] bash
and launch a command such as echo "Hello", or run a Python script that does import sys; sys.stdout.write("Hello again"), none of these messages appear among the received messages in the Graylog UI.
Any idea why stdout is not collected when a shell command or script is executed inside the container, and why only prints from what runs as a result of the make command are collected?
I don't understand this behavior, as I piped all stdout to the gelf log driver in docker-compose.
Edit: the instructions for how to use Graylog in Compose are from here.
version: '3.4'
services:
  foo1:
    ports:
      - target: 8081
        published: 8084
        mode: host
    networks:
      - dev-net
    command: make foo1.start
    logging:
      driver: gelf
      options:
        gelf-address: udp://localhost:12201
  some-mongo:
    image: "mongo:3"
    networks:
      - dev-net
  some-elasticsearch:
    image: "elasticsearch:2"
    command: "elasticsearch -Des.cluster.name='graylog'"
    networks:
      - dev-net
  graylog:
    image: graylog2/server:2.1.1-1
    environment:
      GRAYLOG_PASSWORD_SECRET: somepasswordpepper
      GRAYLOG_ROOT_PASSWORD_SHA2: 8c6976e5b5410415bde908bd4dee15dfb1
      GRAYLOG_WEB_ENDPOINT_URI: http://127.0.0.1:9000/api
    links:
      - some-mongo:mongo
      - some-elasticsearch:elasticsearch
    ports:
      - "9000:9000"
      - "12201:12201/udp"
    networks:
      - dev-net
networks:
  dev-net:
    ipam:
      config:
        - subnet: 192.168.12.0/24
I'm trying to run services (mongo) in swarm mode with logs collected to Elasticsearch via fluentd. It worked(!) with:
docker-compose up
But when I deploy via stack, the services start, yet no logs are collected, and I don't know how to find out why.
docker stack deploy -c docker-compose.yml env_staging
docker-compose.yml:
version: "3"
services:
  mongo:
    image: mongo:3.6.3
    depends_on:
      - fluentd
    command: mongod
    networks:
      - webnet
    logging:
      driver: "fluentd"
      options:
        fluentd-address: localhost:24224
        tag: mongo
  fluentd:
    image: zella/fluentd-es
    depends_on:
      - elasticsearch
    ports:
      - 24224:24224
      - 24224:24224/udp
    networks:
      - webnet
  elasticsearch:
    image: elasticsearch
    ports:
      - 9200:9200
    networks:
      - webnet
  kibana:
    image: kibana
    depends_on:
      - elasticsearch
    ports:
      - 5601:5601
    networks:
      - webnet
networks:
  webnet:
upd: I removed fluentd-address: localhost:24224 and that solved the problem. But I don't understand what "localhost" refers to here. Why can't we set the host to "fluentd"? If someone explains what fluentd-address is, I will accept their answer.
fluentd-address is the address where the fluentd daemon resides (the default is localhost, and you don't need to specify it in that case).
In your case (using stack), your fluentd daemon will run on a node; you should reach that service using the service name (in your case fluentd; have you tried that?).
Remember to add fluentd-async-connect: "true" to your options.
Reference is at:
https://docs.docker.com/config/containers/logging/fluentd/#usage
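Putting those suggestions together, the mongo service's logging section would look something like this (an untested sketch; whether the daemon can resolve the service name depends on your setup):
logging:
  driver: "fluentd"
  options:
    fluentd-address: fluentd:24224
    fluentd-async-connect: "true"
    tag: mongo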
You don't need to specify fluentd-address. When you set the logging driver to fluentd, Swarm automatically discovers the nearest fluentd instance and sends all stdout of the desired container to it.
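In other words, under this answer the logging block shrinks to the driver and a tag (sketch):
logging:
  driver: "fluentd"
  options:
    tag: mongo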