I'm trying to run services (mongo) in swarm mode, with logs collected into Elasticsearch via fluentd. It worked(!) with:
docker-compose up
But when I deploy via stack, the services start, but logs are not collected, and I don't know how to find out why.
docker stack deploy -c docker-compose.yml env_staging
docker-compose.yml:
version: "3"
services:
mongo:
image: mongo:3.6.3
depends_on:
- fluentd
command: mongod
networks:
- webnet
logging:
driver: "fluentd"
options:
fluentd-address: localhost:24224
tag: mongo
fluentd:
image: zella/fluentd-es
depends_on:
- elasticsearch
ports:
- 24224:24224
- 24224:24224/udp
networks:
- webnet
elasticsearch:
image: elasticsearch
ports:
- 9200:9200
networks:
- webnet
kibana:
image: kibana
depends_on:
- elasticsearch
ports:
- 5601:5601
networks:
- webnet
networks:
webnet:
Update:
I removed fluentd-address: localhost:24224 and the problem went away. But I don't understand what "localhost" is here. Why can't we set the host to "fluentd"? If someone explains what fluentd-address is, I will accept their answer.
fluentd-address is the address where the fluentd daemon resides (the default is localhost, in which case you don't need to specify it).
In your case (using a stack), your fluentd daemon will run on a node; you should reach that service using the service name (in your case fluentd; have you tried?).
Remember to add fluentd-async-connect: "true" to your options.
Reference is at:
https://docs.docker.com/config/containers/logging/fluentd/#usage
You don't need to specify fluentd-address at all. The fluentd logging driver defaults to localhost:24224, and because your fluentd service publishes port 24224 on every node through Swarm's routing mesh, each node's daemon can reach a fluentd instance there and send it all stdout of the desired container.
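A minimal sketch of the logging section in that solved state, assuming the fluentd service publishes 24224 as in the compose file above (fluentd-async-connect is optional, but it lets the container start even when fluentd is not reachable yet):
  mongo:
    image: mongo:3.6.3
    networks:
      - webnet
    logging:
      driver: "fluentd"
      options:
        # no fluentd-address: the driver defaults to localhost:24224,
        # which the routing mesh maps to the fluentd service on every node
        fluentd-async-connect: "true"
        tag: mongo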
Related
Suppose I have 2 yml files which I run via docker-compose up.
efk.yml:
version: '3.3'
services:
  fluentd:
    image: fluentd
    ...
    volumes:
      - ./fluentd/etc:/fluentd/etc
    depends_on:
      - elasticsearch
    ports:
      - "24224:24224"
      - "24224:24224/udp"
  elasticsearch:
    image: amazon/opendistro-for-elasticsearch:1.13.3
    expose:
      - 9200
    ports:
      - "9200:9200"
    environment:
      - "discovery.type=single-node"
  kibana:
    ...
    logging:
      driver: "fluentd"
      options:
        fluentd-address: localhost:24224
        tag: micro.kibana
(I've omitted some irrelevant parts; I only need the logging bits.)
app.yml:
version: '3.3'
services:
  mysql:
    image: mysql
    ..
    logging:
      driver: "fluentd"
      options:
        fluentd-address: fluentd:24224
        # fluentd-async-connect: "true"
        tag: micro.db
    networks:
      - default
  app:
    ...
    logging:
      driver: "fluentd"
      options:
        fluentd-address: fluentd:24224
        # fluentd-async-connect: "true"
        tag: micro.php-fpm
    networks:
      - default
networks:
  default:
    external:
      name: efk_default
I plan to launch the EFK stack first:
docker-compose -p efk -f efk.yml up -d
and then:
docker-compose -p app -f app.yml up -d
I assume that the bridge network efk_default will be created and that I can access it from the app stack (see app.yml for details). But the app stack couldn't resolve fluentd:24224 on that bridge network; I get the following error for app when running the command above:
ERROR: for app Cannot start service app: failed to initialize logging driver: dial tcp: lookup fluentd: Temporary failure in name resolution
ERROR: Encountered errors while bringing up the project.
If I use something dumb like localhost:24224 just to make it launch, docker network inspect shows all the containers on one network. I've also tried using an IP address on the bridge network, but that didn't work either.
Is it possible to have a common logging service with this configuration?
If yes, what am I doing wrong?
Thanks in advance.
Here's what I did to test it:
compose1.yml
version: '3'
services:
  app1:
    image: nginx
compose2.yml
version: '3'
services:
  app2:
    image: curlimages/curl
    command: http://app1/
networks:
  default:
    external:
      name: efk_default
Commands run:
docker-compose -p efk -f compose1.yml up -d
docker-compose -p efk -f compose2.yml up
and it outputs the Nginx welcome page.
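So cross-project name resolution works for containers themselves. One detail worth noting as an assumption here: the fluentd-address option is dialed by the Docker daemon on the host, not from inside the container's network, which would explain why fluentd is not resolvable for the logging driver while localhost:24224 (the port published by the efk stack) works. Under that assumption, a sketch of the app.yml logging section:
  mysql:
    image: mysql
    logging:
      driver: "fluentd"
      options:
        # dialed by the daemon from the host, so use the published port
        # rather than a service name on the efk_default network
        fluentd-address: localhost:24224
        fluentd-async-connect: "true"
        tag: micro.db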
I'm trying to connect my custom web server to fluentd on Docker.
My docker-compose.yml looks like this:
version: "2"
services:
web:
build:
context: ..
dockerfile: ./DockerTest/Dockerfile
container_name: web
depends_on: [ fluentd ]
networks:
test_net:
ipv4_address: 172.20.10.1
ports:
- "8080:80"
links:
- fluentd
logging:
driver: fluentd
options:
fluentd-address: localhost:24224
tag: "docker.{{.ID}}"
fluentd:
build:
context: ./fluentd
dockerfile: Dockerfile
container_name: fluentd
volumes:
- ./fluentd/conf:/fluentd/etc
networks:
test_net:
ipv4_address: 172.20.10.2
ports:
- "24224:24224"
- "24224:24224/udp"
networks:
test_net:
ipam:
config:
- subnet: 172.20.0.0/16
When I run this the first time, so that the fluentd container is newly created, I get an error: Error response from daemon: failed to initialize logging driver: dial tcp [::1]:24224: connect: connection refused. At this point it works if I set fluentd-address: 172.20.10.2:24224 instead.
But when I run this again, so that the existing fluentd container just returns to the RUNNING state, it works; and at that point it does not work with fluentd-address: 172.20.10.2:24224.
I wonder why the fluentd address should change depending on container creation, and how I can solve this problem.
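A sketch of one way to remove this startup-ordering dependency, using the fluentd-async-connect option of the fluentd logging driver (the daemon then connects to fluentd lazily instead of at container start, so web can come up before fluentd's port mapping is ready; whether this fits your setup is an assumption):
  web:
    # ... as above ...
    logging:
      driver: fluentd
      options:
        fluentd-address: localhost:24224
        # defer the connection so container start does not fail
        # while fluentd's published port is not yet up
        fluentd-async-connect: "true"
        tag: "docker.{{.ID}}"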
I am trying to set up a 2-node private IPFS cluster using Docker. For that purpose I am using the ipfs/ipfs-cluster:latest image.
My docker-compose file looks like:
version: '3'
services:
  peer-1:
    image: ipfs/ipfs-cluster:latest
    ports:
      - 8080:8080
      - 4001:4001
      - 5001:5001
    volumes:
      - ./cluster/peer1/config:/data/ipfs-cluster
  peer-2:
    image: ipfs/ipfs-cluster:latest
    ports:
      - 8081:8080
      - 4002:4001
      - 5002:5001
    volumes:
      - ./cluster/peer2/config:/data/ipfs-cluster
While starting the containers I get the following error:
ERROR ipfshttp: error posting to IPFS: Post http://127.0.0.1:5001/api/v0/repo/stat?size-only=true: dial tcp 127.0.0.1:5001: connect: connection refused ipfshttp.go:745
Please help with this problem.
Is there any proper documentation on how to set up an IPFS cluster on Docker? This document misses a lot of details.
Thank you.
I figured out how to run a multi-node IPFS cluster in a Docker environment.
The current ipfs/ipfs-cluster image, which is version 0.4.17, doesn't run an IPFS peer (i.e. ipfs/go-ipfs) inside it; we need to run that separately.
So in order to run a multi-node (2 nodes in this case) IPFS cluster in a Docker environment, we need to run 2 IPFS peer containers and 2 IPFS cluster containers, one corresponding to each peer.
So your docker-compose file will look as follows:
version: '3'
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
services:
  ipfs0:
    container_name: ipfs0
    image: ipfs/go-ipfs
    ports:
      - "4001:4001"
      - "5001:5001"
      - "8081:8080"
    volumes:
      - ./var/ipfs0-docker-data:/data/ipfs/
      - ./var/ipfs0-docker-staging:/export
    networks:
      vpcbr:
        ipv4_address: 10.5.0.5
  ipfs1:
    container_name: ipfs1
    image: ipfs/go-ipfs
    ports:
      - "4101:4001"
      - "5101:5001"
      - "8181:8080"
    volumes:
      - ./var/ipfs1-docker-data:/data/ipfs/
      - ./var/ipfs1-docker-staging:/export
    networks:
      vpcbr:
        ipv4_address: 10.5.0.7
  ipfs-cluster0:
    container_name: ipfs-cluster0
    image: ipfs/ipfs-cluster
    depends_on:
      - ipfs0
    environment:
      CLUSTER_SECRET: 1aebe6d1ff52d96241e00d1abbd1be0743e3ccd0e3f8a05e3c8dd2bbbddb7b93
      IPFS_API: /ip4/10.5.0.5/tcp/5001
    ports:
      - "9094:9094"
      - "9095:9095"
      - "9096:9096"
    volumes:
      - ./var/ipfs-cluster0:/data/ipfs-cluster/
    networks:
      vpcbr:
        ipv4_address: 10.5.0.6
  ipfs-cluster1:
    container_name: ipfs-cluster1
    image: ipfs/ipfs-cluster
    depends_on:
      - ipfs1
      - ipfs-cluster0
    environment:
      CLUSTER_SECRET: 1aebe6d1ff52d96241e00d1abbd1be0743e3ccd0e3f8a05e3c8dd2bbbddb7b93
      IPFS_API: /ip4/10.5.0.7/tcp/5001
    ports:
      - "9194:9094"
      - "9195:9095"
      - "9196:9096"
    volumes:
      - ./var/ipfs-cluster1:/data/ipfs-cluster/
    networks:
      vpcbr:
        ipv4_address: 10.5.0.8
This will spin up a 2-peer IPFS cluster, and we can store and retrieve files using either peer.
The catch here is that we need to provide the IPFS_API to each ipfs-cluster container as an environment variable so that the ipfs-cluster knows its corresponding peer, and both ipfs-cluster containers need the same CLUSTER_SECRET.
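As a usage sketch (assuming the compose file above, with CLUSTER_SECRET parameterized as ${CLUSTER_SECRET} instead of hardcoded): the secret is just 32 random bytes in hex, and the peering can be checked with ipfs-cluster-ctl inside either cluster container:
$ export CLUSTER_SECRET=$(od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n')
$ docker-compose up -d
$ docker exec ipfs-cluster0 ipfs-cluster-ctl peers ls   # should list both cluster peers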
According to the article you posted:
The container does not run go-ipfs. You should run the IPFS daemon separately, for example, using the ipfs/go-ipfs Docker container. We recommend mounting the /data/ipfs-cluster folder to provide a custom, working configuration, as well as persistency for the cluster data. This is usually achieved by passing -v <folder>:/data/ipfs-cluster to docker run.
If in fact you need to connect to another service within the docker-compose setup, you can simply refer to it by its service name: hostname entries are created in all the containers of a docker-compose project, so services can talk to each other by name instead of by IP.
Additionally:
Unless you run docker with --net=host, you will need to set $IPFS_API or make sure the configuration has the correct node_multiaddress.
The equivalent of --net=host in docker-compose is network_mode: "host" (incompatible with port mapping): https://docs.docker.com/compose/compose-file/#network_mode
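A sketch of what referring to the peer by service name could look like (an assumption here: ipfs-cluster accepts a DNS multiaddr for IPFS_API, with /dns4 resolving the name to an IPv4 address):
  ipfs-cluster0:
    image: ipfs/ipfs-cluster
    depends_on:
      - ipfs0
    environment:
      CLUSTER_SECRET: ${CLUSTER_SECRET}
      # service name instead of a fixed IP, resolved by compose DNS
      IPFS_API: /dns4/ipfs0/tcp/5001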
I really don't get how to use traefik with docker networks.
I'm trying to run the "wekan" kanban board. If I bind ports to the host, it works perfectly, so it really is about addressing it through traefik. Here is my docker config:
version: '2'
services:
  wekandb:
    image: mongo:3.2.14
    container_name: wekan-db
    command: mongod --smallfiles --oplogSize 128
    networks:
      - wekan-tier
    expose:
      - 27017
    volumes:
      - wekan-db:/data/db
      - wekan-db-dump:/dump
  wekan:
    image: wekanteam/wekan:latest
    container_name: wekan-app
    networks:
      - wekan-tier
    # ports:
    #   - 8081:80
    environment:
      - MONGO_URL=mongodb://wekandb:27017/wekan
      - ROOT_URL=https://wekan.domain.com
    depends_on:
      - wekandb
    labels:
      - "traefik.port=80"
      - "traefik.backend=wekan"
      - "traefik.frontend.rule=Host:wekan.domain.com"
      - "traefik.docker.network=wekan_wekan-tier"
volumes:
  wekan-db:
    driver: local
  wekan-db-dump:
    driver: local
networks:
  wekan-tier:
    driver: bridge
I can't seem to find a way to access the damn thing... Your answer will be greatly appreciated: not only will it allow me to run Wekan, it will also let me update my older services where I used linking instead of Docker networks, linking now being deprecated.
I believe you have more than one issue here.
First, your compose file doesn't have a Traefik service. That is OK in itself; Traefik will be able to see the containers of the services here, but it will not be able to send requests to them, because the Traefik service and the wekan service do not share a network.
So to fix that, you need to create a specific network for Traefik and set it in your compose file as well.
Example:
$ docker network create traefik-net
$ docker service create --name traefik --network traefik-net .... traefik ....
Second, you need to define the network Traefik will use to connect to your service; this must be a network shared with the Traefik service.
So your wekan service needs to look like this:
wekan:
  image: wekanteam/wekan:latest
  container_name: wekan-app
  networks:
    - wekan-tier
    - traefik-net
  environment:
    - MONGO_URL=mongodb://wekandb:27017/wekan
    - ROOT_URL=https://wekan.domain.com
  depends_on:
    - wekandb
  labels:
    - "traefik.port=80"
    - "traefik.backend=wekan"
    - "traefik.frontend.rule=Host:wekan.domain.com"
    - "traefik.docker.network=traefik-net"
I have modified your docker-compose file to make it work:
version: '3'
services:
  web:
    image: wekanteam/wekan:latest
    networks:
      - wekan-tier
    environment:
      - MONGO_URL=mongodb://wekandb:27017/wekan
      - ROOT_URL=https://wekan.domain.com
    labels:
      - "traefik.port=80"
      - "traefik.docker.network=wekan_wekan-tier"
  wekandb:
    image: mongo:3.2
    command: mongod --smallfiles --oplogSize 128
    networks:
      - wekan-tier
    expose:
      - 27017
    volumes:
      - wekan-db:/data/db
      - wekan-db-dump:/dump
  traefik:
    image: 'traefik:1.6'
    command: --web --docker --docker.watch --docker.domain=local --logLevel=DEBUG
    labels:
      - traefik.docker.network=wekan-tier
      - traefik.port=8080
    ports:
      - '80:80'
      - '8080:8080'
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock'
    networks:
      - wekan-tier
volumes:
  wekan-db:
    driver: local
  wekan-db-dump:
    driver: local
networks:
  wekan-tier:
    driver: bridge
Now start the containers with the following command:
$ docker-compose -p wekan up -d
To check that traefik is working, go to http://localhost:8080/; if you have problems, stop your Apache server with $ service apache2 stop. Once you can see the traefik interface, add the following line to your /etc/hosts file:
127.0.0.1 web.wekan.local
Now go to http://web.wekan.local and you should see the Wekan login page :)
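If you'd rather not edit /etc/hosts, a quick check is possible with curl (a sketch; the Host header is what Traefik matches, given the web.wekan.local name produced by the service name, the -p wekan project name, and --docker.domain=local):
$ curl -H 'Host: web.wekan.local' http://localhost/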
Using docker-compose v3 and deploying to a swarm:
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.1
    deploy:
      replicas: 1
    ports:
      - "9200:9200"
    tty: true
  kibana:
    image: docker.elastic.co/kibana/kibana:5.4.1
    deploy:
      mode: global
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    tty: true
I see this in the kibana service log:
Unable to revive connection: http://elasticsearch:9200/
The Elasticsearch service is running and can be reached.
The swarm consists of 3 nodes.
What am I missing?
Update:
It turns out that if I access kibana on the same swarm node where elasticsearch is running, it works. All the other nodes either have a network problem or cannot resolve the elasticsearch name.
I found the reason, and the solution.
My swarm is running on AWS. All nodes are placed in the same security group, and I assumed all ports were open internally within that security group. That's not the case.
I explicitly configured the security group to allow the inbound traffic required by Docker's routing mesh, as described here: https://docs.docker.com/engine/swarm/ingress/
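For reference, that page requires TCP/UDP port 7946 (container network discovery) and UDP port 4789 (overlay network traffic) to be open between the nodes, in addition to TCP 2377 for cluster management. A sketch of opening them with the AWS CLI, where sg-0123456789abcdef0 is a placeholder for your security group ID:
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 7946 --source-group sg-0123456789abcdef0
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol udp --port 7946 --source-group sg-0123456789abcdef0
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol udp --port 4789 --source-group sg-0123456789abcdef0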
Docker Compose by default generates a network and puts all services on it, but I do not know whether that changes in docker swarm. To define the network explicitly you can do this:
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.1
    deploy:
      replicas: 1
    ports:
      - "9200:9200"
    tty: true
    networks:
      - some-name
  kibana:
    image: docker.elastic.co/kibana/kibana:5.4.1
    deploy:
      mode: global
    ports:
      - "5601:5601"
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
    tty: true
    networks:
      - some-name
networks:
  some-name:
    driver: overlay
I hope this helps; I'll wait for news.
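One usage note on this sketch: networks with driver: overlay are only available in swarm mode, so a file like this is meant to be deployed as a stack rather than with plain docker-compose up (the stack name es_kibana below is just an example):
$ docker stack deploy -c docker-compose.yml es_kibana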