Telegraf and InfluxDB with docker-compose: connection refused - docker

I am trying to set up Telegraf and InfluxDB on macOS 11.3.1 using docker-compose.
But unfortunately, I am getting the following error:
telegraf | 2021-07-12T19:18:14Z E! [outputs.influxdb_v2] When writing to [http://localhost:8086]: Post "http://localhost:8086/api/v2/write?bucket=telegraf&org=telegraf": dial tcp 127.0.0.1:8086: connect: connection refused
telegraf | 2021-07-12T19:18:14Z E! [agent] Error writing to outputs.influxdb_v2: Post "http://localhost:8086/api/v2/write?bucket=telegraf&org=telegraf": dial tcp 127.0.0.1:8086: connect: connection refused
Here are my config files:
docker-compose.yml
version: "3.5"
services:
influxdb2:
image: influxdb:latest
network_mode: "bridge"
container_name: influxdb2
ports:
- "8086:8086"
volumes:
- type: bind
source: /Users/endryha/Docker/influxdb2/data
target: /var/lib/influxdb2
- type: bind
source: /Users/endryha/Docker/influxdb2/config
target: /etc/influxdb2
environment:
- DOCKER_INFLUXDB_INIT_MODE=setup
- DOCKER_INFLUXDB_INIT_USERNAME=telegraf
- DOCKER_INFLUXDB_INIT_PASSWORD=P#ssw0rd
- DOCKER_INFLUXDB_INIT_ORG=telegraf
- DOCKER_INFLUXDB_INIT_BUCKET=telegraf
- DOCKER_INFLUXDB_INIT_RETENTION=1w
- DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=9859DAA6-3B3F-48FE-A981-AE9D31FBB334
restart: always
telegraf:
image: telegraf:latest
network_mode: "bridge"
pid: "host"
container_name: telegraf
ports:
- "8092:8092"
- "8094:8094"
- "8125:8125"
volumes:
- ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
- /sys:/host/sys:ro
- /proc:/host/proc:ro
- /etc:/host/etc:ro
environment:
- HOST_PROC=/host/proc
- HOST_SYS=/host/sys
- HOST_ETC=/host/etc
restart: always
telegraf.conf
# # Configuration for sending metrics to InfluxDB
[[outputs.influxdb_v2]]
# ## The URLs of the InfluxDB cluster nodes.
# ##
# ## Multiple URLs can be specified for a single cluster, only ONE of the
# ## urls will be written to each interval.
# ## ex: urls = ["https://us-west-2-1.aws.cloud2.influxdata.com"]
urls = ["http://localhost:8086"]
#
# ## Token for authentication.
token = "9859DAA6-3B3F-48FE-A981-AE9D31FBB334"
# token = "sYbrP0VZ--MgCd0EpOpz9OctJN2e4kUbsfbPPUGcY9GYW8Ts27W2cI_d7YwlSf9nvHq4l-K6mJLqJiTT0Hh7Xw=="
#
# ## Organization is the name of the organization you wish to write to; must exist.
organization = "telegraf"
#
# ## Destination bucket to write into.
bucket = "telegraf"
#
# ## The value of this tag will be used to determine the bucket. If this
# ## tag is not set the 'bucket' option is used as the default.
# # bucket_tag = ""
#
# ## If true, the bucket tag will not be added to the metric.
# # exclude_bucket_tag = false
#
# ## Timeout for HTTP messages.
# # timeout = "5s"
#
# ## Additional HTTP headers
# # http_headers = {"X-Special-Header" = "Special-Value"}
#
# ## HTTP Proxy override, if unset values the standard proxy environment
# ## variables are consulted to determine which proxy, if any, should be used.
# # http_proxy = "http://corporate.proxy:3128"
#
# ## HTTP User-Agent
user_agent = "telegraf"
#
# ## Content-Encoding for write request body, can be set to "gzip" to
# ## compress body or "identity" to apply no encoding.
# # content_encoding = "gzip"
#
# ## Enable or disable uint support for writing uints influxdb 2.0.
# # influx_uint_support = false
#
# ## Optional TLS Config for use on HTTP connections.
# # tls_ca = "/etc/telegraf/ca.pem"
# # tls_cert = "/etc/telegraf/cert.pem"
# # tls_key = "/etc/telegraf/key.pem"
# ## Use TLS but skip chain & host verification
# # insecure_skip_verify = false
Full log output
docker-compose up
Starting telegraf ... done
Starting influxdb2 ... done
Attaching to influxdb2, telegraf
telegraf | 2021-07-12T19:18:04Z I! Starting Telegraf 1.19.1
telegraf | 2021-07-12T19:18:04Z I! Using config file: /etc/telegraf/telegraf.conf
telegraf | 2021-07-12T19:18:04Z I! Loaded inputs: cpu disk diskio docker kernel mem processes swap system
telegraf | 2021-07-12T19:18:04Z I! Loaded aggregators:
telegraf | 2021-07-12T19:18:04Z I! Loaded processors:
telegraf | 2021-07-12T19:18:04Z I! Loaded outputs: influxdb_v2
telegraf | 2021-07-12T19:18:04Z I! Tags enabled: host=d52a95d7ec9d
telegraf | 2021-07-12T19:18:04Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"d52a95d7ec9d", Flush Interval:10s
telegraf | 2021-07-12T19:18:04Z W! [inputs.docker] 'perdevice' setting is set to 'true' so 'blkio' and 'network' metrics will be collected. Please set it to 'false' and use 'perdevice_include' instead to control this behaviour as 'perdevice' will be deprecated
telegraf | 2021-07-12T19:18:04Z W! [inputs.docker] 'total' setting is set to 'false' so 'blkio' and 'network' metrics will not be collected. Please set it to 'true' and use 'total_include' instead to control this behaviour as 'total' will be deprecated
influxdb2 | 2021-07-12T19:18:04.687801589Z info found existing boltdb file, skipping setup wrapper {"system": "docker", "bolt_path": "/var/lib/influxdb2/influxd.bolt"}
influxdb2 | ts=2021-07-12T19:18:11.025589Z lvl=info msg="Welcome to InfluxDB" log_id=0VJ2KiKG000 version=2.0.7 commit=2a45f0c037 build_date=2021-06-04T19:17:40Z
influxdb2 | ts=2021-07-12T19:18:11.029935Z lvl=info msg="Resources opened" log_id=0VJ2KiKG000 service=bolt path=/var/lib/influxdb2/influxd.bolt
influxdb2 | ts=2021-07-12T19:18:11.036551Z lvl=info msg="Checking InfluxDB metadata for prior version." log_id=0VJ2KiKG000 bolt_path=/var/lib/influxdb2/influxd.bolt
influxdb2 | ts=2021-07-12T19:18:11.037662Z lvl=info msg="Using data dir" log_id=0VJ2KiKG000 service=storage-engine service=store path=/var/lib/influxdb2/engine/data
influxdb2 | ts=2021-07-12T19:18:11.037699Z lvl=info msg="Compaction settings" log_id=0VJ2KiKG000 service=storage-engine service=store max_concurrent_compactions=4 throughput_bytes_per_second=50331648 throughput_bytes_per_second_burst=50331648
influxdb2 | ts=2021-07-12T19:18:11.037705Z lvl=info msg="Open store (start)" log_id=0VJ2KiKG000 service=storage-engine service=store op_name=tsdb_open op_event=start
influxdb2 | ts=2021-07-12T19:18:11.037820Z lvl=info msg="Open store (end)" log_id=0VJ2KiKG000 service=storage-engine service=store op_name=tsdb_open op_event=end op_elapsed=0.116ms
influxdb2 | ts=2021-07-12T19:18:11.037863Z lvl=info msg="Starting retention policy enforcement service" log_id=0VJ2KiKG000 service=retention check_interval=30m
influxdb2 | ts=2021-07-12T19:18:11.037873Z lvl=info msg="Starting precreation service" log_id=0VJ2KiKG000 service=shard-precreation check_interval=10m advance_period=30m
influxdb2 | ts=2021-07-12T19:18:11.037900Z lvl=info msg="Starting query controller" log_id=0VJ2KiKG000 service=storage-reads concurrency_quota=1024 initial_memory_bytes_quota_per_query=9223372036854775807 memory_bytes_quota_per_query=9223372036854775807 max_memory_bytes=0 queue_size=1024
influxdb2 | ts=2021-07-12T19:18:11.038758Z lvl=info msg="Configuring InfluxQL statement executor (zeros indicate unlimited)." log_id=0VJ2KiKG000 max_select_point=0 max_select_series=0 max_select_buckets=0
influxdb2 | ts=2021-07-12T19:18:11.331457Z lvl=info msg=Listening log_id=0VJ2KiKG000 service=tcp-listener transport=http addr=:8086 port=8086
influxdb2 | ts=2021-07-12T19:18:11.331525Z lvl=info msg=Starting log_id=0VJ2KiKG000 service=telemetry interval=8h
telegraf | 2021-07-12T19:18:14Z E! [outputs.influxdb_v2] When writing to [http://localhost:8086]: Post "http://localhost:8086/api/v2/write?bucket=telegraf&org=telegraf": dial tcp 127.0.0.1:8086: connect: connection refused
telegraf | 2021-07-12T19:18:14Z E! [agent] Error writing to outputs.influxdb_v2: Post "http://localhost:8086/api/v2/write?bucket=telegraf&org=telegraf": dial tcp 127.0.0.1:8086: connect: connection refused
telegraf | 2021-07-12T19:18:24Z E! [outputs.influxdb_v2] When writing to [http://localhost:8086]: Post "http://localhost:8086/api/v2/write?bucket=telegraf&org=telegraf": dial tcp 127.0.0.1:8086: connect: connection refused
telegraf | 2021-07-12T19:18:24Z E! [agent] Error writing to outputs.influxdb_v2: Post "http://localhost:8086/api/v2/write?bucket=telegraf&org=telegraf": dial tcp 127.0.0.1:8086: connect: connection refused
telegraf | 2021-07-12T19:18:34Z E! [outputs.influxdb_v2] When writing to [http://localhost:8086]: Post "http://localhost:8086/api/v2/write?bucket=telegraf&org=telegraf": dial tcp 127.0.0.1:8086: connect: connection refused
As far as I can see there is an issue with the authorization; however, I think I provided the token correctly and it is supposed to work just fine. I've also tried creating a token manually using the InfluxDB admin UI, but that doesn't work either.
Please advise.
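For reference, a minimal sketch of the networking change that would let Telegraf reach InfluxDB by service name rather than by localhost (inside the telegraf container, localhost refers to the container itself). This assumes the network_mode: "bridge" lines are dropped so both services join a user-defined compose network; the network name monitoring is illustrative, not from the original file:

# Sketch only: service-name DNS works on a user-defined compose network,
# not on the default bridge that network_mode: "bridge" selects.
services:
  influxdb2:
    image: influxdb:latest
    ports:
      - "8086:8086"
    networks:
      - monitoring
  telegraf:
    image: telegraf:latest
    networks:
      - monitoring
networks:
  monitoring: {}

With a layout like this, the output URL in telegraf.conf would point at the service name, e.g. urls = ["http://influxdb2:8086"].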

Related

ElasticSearch Logstash not connecting "Connection refused" - Docker

I need help! (who would have thought, right? lol)
I have a job interview in a few days and it would mean the world to me to be well prepared for it and have some working examples.
I am trying to set up an ELK pipeline to stream data from Kafka through Logstash into Elasticsearch, and finally read it from Kibana. The usual.
I am making use of containers, but the Logstash - Elasticsearch duo is giving me an aneurysm.
Everything else works perfectly fine. I've checked the Kafka logs and that is working just fine. Kibana is connected to Elasticsearch just fine as well. But Logstash and ES really don't want to talk to each other.
Here is the setup
docker-compose.yml
version: '3.6'
services:
  elasticsearch:
    image: elasticsearch:8.6.0
    container_name: elasticsearch
    #restart: always
    volumes:
      - elastic_data:/usr/share/elasticsearch/data/
    environment:
      cluster.name: elf-kafka-cluster
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      discovery.type: single-node
      xpack.security.enabled: false
    ports:
      - '9200:9200'
      - '9300:9300'
    networks:
      - elk
  kibana:
    image: kibana:8.6.0
    container_name: kibana
    #restart: always
    ports:
      - '5601:5601'
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    depends_on:
      - elasticsearch
    networks:
      - elk
  logstash:
    image: logstash:8.6.0
    container_name: logstash
    #restart: always
    volumes:
      - type: bind
        source: ./logstash_pipeline/
        target: /usr/share/logstash/pipeline
        read_only: true
    command: logstash -f /home/ettore/Documenti/Portfolio/ELK/logstash/logstash.conf
    depends_on:
      - elasticsearch
    ports:
      - '9600:9600'
    environment:
      xpack.monitoring.enabled: true
      # LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    links:
      - elasticsearch
    networks:
      - elk
volumes:
  elastic_data: {}
networks:
  elk:
    driver: bridge
logstash.conf
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["topic"]
  }
}
output {
  elasitcsearch {
    hosts => ["http://localhost:9200"]
    index => "topic"
    workers => 1
  }
}
These are the Logstash error logs when I run docker-compose up:
logstash | [2023-01-17T13:59:02,680][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
logstash | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
logstash | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
logstash | [2023-01-17T13:59:04,711][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash | [2023-01-17T13:59:05,373][INFO ][logstash.licensechecker.licensereader] Failed to perform request {:message=>"Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused>}
logstash | [2023-01-17T13:59:05,379][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused"}
logstash | [2023-01-17T13:59:05,436][INFO ][logstash.licensechecker.licensereader] Failed to perform request {:message=>"Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused>}
logstash | [2023-01-17T13:59:05,444][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/_xpack][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused {:url=>http://elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/_xpack][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
logstash | [2023-01-17T13:59:05,449][WARN ][logstash.licensechecker.licensereader] Attempt to validate Elasticsearch license failed. Sleeping for 0.02 {:fail_count=>1, :exception=>"Elasticsearch Unreachable: [http://elasticsearch:9200/_xpack][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused"}
logstash | [2023-01-17T13:59:05,477][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
logstash | [2023-01-17T13:59:05,567][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
logstash | [2023-01-17T13:59:05,661][INFO ][logstash.config.source.local.configpathloader] No config files found in path {:path=>"/home/ettore/Documenti/Portfolio/ELK/logstash/logstash.conf"}
logstash | [2023-01-17T13:59:05,664][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
logstash | [2023-01-17T13:59:06,333][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
logstash | [2023-01-17T13:59:06,411][INFO ][logstash.runner ] Logstash shut down.
logstash | [2023-01-17T13:59:06,419][FATAL][org.logstash.Logstash ] Logstash stopped processing because of an error: (SystemExit) exit
logstash | org.jruby.exceptions.SystemExit: (SystemExit) exit
logstash | at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:790) ~[jruby.jar:?]
logstash | at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:753) ~[jruby.jar:?]
logstash | at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:91) ~[?:?]
And this is to show that everything is working as intended with ES (or so it seems):
netstat -an | grep 9200
tcp 0 0 0.0.0.0:9200 0.0.0.0:* LISTEN
tcp6 0 0 :::9200 :::* LISTEN
unix 3 [ ] STREAM CONNECTED 49200
I've looked through everything and this is 100% not a duplicate, because I have tried it all. I really can't figure it out. I hope someone can help.
Thank you for your time.
You should set up logstash.yml.
Create a logstash.yml with the values below:
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://localhost:9200" ]
In your docker-compose.yml, add another volume to the Logstash container as shown below:
./logstash.yml:/usr/share/logstash/config/logstash.yml
Additionally, it's good to run with a restart condition.
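Putting those pieces together, a sketch of the relevant part of the Logstash service might look like the following. The restart policy shown is only an example, and the pipeline bind mount is carried over from the original compose file:

logstash:
  image: logstash:8.6.0
  container_name: logstash
  restart: on-failure            # example restart condition
  volumes:
    - ./logstash.yml:/usr/share/logstash/config/logstash.yml
    - ./logstash_pipeline/:/usr/share/logstash/pipeline
  depends_on:
    - elasticsearch
  networks:
    - elk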

Traefik is ignoring docker with traefik.enable=true set

What did you expect to see?
I expected to see a new front-end and back-end when I docker-compose up my WordPress stack.
What did you see instead?
No new front-ends or back-ends. Just two log lines: level=info msg="Skipping same configuration for provider docker"
Also seeing "Filtering disabled container /testEXAMPLEcom_wordpress_1" even though traefik.enable=true is set in the labels.
Output of traefik version:
Traefik version v1.7.11 built on 2019-04-26_08:42:33AM
What is your environment & configuration?
cat traefik.toml
#debug = true
logLevel = "INFO" #DEBUG, INFO, WARN, ERROR, FATAL, PANIC
InsecureSkipVerify = true
defaultEntryPoints = ["https", "http"]
# WEB interface of Traefik - it will show web page with overview of frontend and backend configurations
[api]
entryPoint = "traefik"
dashboard = true
address = ":8080"
# Force HTTPS
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[entryPoints.https.redirect]
permanent=true
regex = "^https://www.(.*)"
replacement = "https://$1"
[retry]
[docker]
endpoint = "unix:///var/run/docker.sock"
watch = true
exposedByDefault = false
# Let's encrypt configuration
[acme]
email = "japayton42#gmail.com" #any email id will work
storage="/etc/traefik/acme/acme.json"
entryPoint = "https"
acmeLogging=true
onDemand = false #create certificate when container is created
onHostRule = true
caServer = "https://acme-v02.api.letsencrypt.org/directory"
[acme.dnsChallenge]
provider = "cloudflare"
delayBeforeCheck = 3
cat docker-compose.yml
version: "3.6"
services:
traefik:
hostname: traefik
image: traefik:latest
container_name: traefik
restart: always
domainname: ${DOMAINNAME}
networks:
- default
- traefik_proxy
ports:
- "80:80"
- "443:443"
- "8080:8080"
environment:
- CF_API_EMAIL=${CLOUDFLARE_EMAIL}
- CF_API_KEY=${CLOUDFLARE_API_KEY}
labels:
- "traefik.enable=true"
- "traefik.backend=traefik"
- "traefik.frontend.rule=Host:${DOMAINNAME}; PathPrefixStrip: /traefik"
- "traefik.port=8080"
- "traefik.docker.network=traefik_proxy"
- "traefik.frontend.headers.SSLRedirect=true"
- "traefik.frontend.headers.STSSeconds=0"
- "traefik.frontend.headers.browserXSSFilter=true"
- "traefik.frontend.headers.contentTypeNosniff=true"
- "traefik.frontend.headers.forceSTSHeader=false"
- "traefik.frontend.headers.SSLHost=host.EXAMPLE.com"
- "traefik.frontend.headers.STSIncludeSubdomains=false"
- "traefik.frontend.headers.STSPreload=false"
- "traefik.frontend.headers.frameDeny=true"
- "traefik.frontend.auth.basic.users=${HTTP_USERNAME}:${HTTP_PASSWORD}"
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ${USERDIR}/traefik:/etc/traefik
- ${USERDIR}/shared:/shared
networks:
traefik_proxy:
external:
name: traefik_proxy
default:
driver: bridge
cat docker-compose.yml
version: '3.3'
services:
  db:
    image: mysql:latest
    volumes:
      - ./db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    labels:
      - traefik.enable=false
    networks:
      - db
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    restart: always
    volumes:
      - ./app:/var/www
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
    labels:
      - "traefik.enabled=true"
      - "traefik.domain=test.EXAMPLE.com"
      - "traefik.backend=test.EXAMPLE.com"
      - "traefik.frontend.rule=Host:www.test.EXAMPLE.com,test.EXAMPLE.com"
      - "traefik.docker.network=traefik_proxy"
      - "traefik.port=80"
      - "traefik.frontend.headers.SSLRedirect=true"
      - "traefik.frontend.headers.STSSeconds=0"
      - "traefik.frontend.headers.browserXSSFilter=true"
      - "traefik.frontend.headers.contentTypeNosniff=true"
      - "traefik.frontend.headers.forceSTSHeader=false"
      - "traefik.frontend.headers.SSLHost=EXAMPLE.com"
      - "traefik.frontend.headers.STSIncludeSubdomains=false"
      - "traefik.frontend.headers.STSPreload=false"
      - "traefik.frontend.headers.frameDeny=true"
    networks:
      - traefik_proxy
      - db
networks:
  traefik_proxy:
    external: true
  db:
    external: false
If applicable, please paste the log output in DEBUG level
Attaching to traefik
traefik | time="2019-05-04T00:29:34Z" level=info msg="Using TOML configuration file /etc/traefik/traefik.toml"
traefik | time="2019-05-04T00:29:34Z" level=info msg="Traefik version v1.7.11 built on 2019-04-26_08:42:33AM"
traefik | time="2019-05-04T00:29:34Z" level=debug msg="Global configuration loaded {\"LifeCycle\":{\"RequestAcceptGraceTimeout\":0,\"GraceTimeOut\":10000000000},\"GraceTimeOut\":0,\"Debug\":false,\"CheckNewVersion\":true,\"SendAnonymousUsage\":false,\"AccessLogsFile\":\"\",\"AccessLog\":null,\"TraefikLogsFile\":\"\",\"TraefikLog\":null,\"Tracing\":null,\"LogLevel\":\"DEBUG\",\"EntryPoints\":{\"http\":{\"Address\":\":80\",\"TLS\":null,\"Redirect\":{\"entryPoint\":\"https\"},\"Auth\":null,\"WhitelistSourceRange\":null,\"WhiteList\":null,\"Compress\":false,\"ProxyProtocol\":null,\"ForwardedHeaders\":{\"Insecure\":true,\"TrustedIPs\":null}},\"https\":{\"Address\":\":443\",\"TLS\":{\"MinVersion\":\"\",\"CipherSuites\":null,\"Certificates\":null,\"ClientCAFiles\":null,\"ClientCA\":{\"Files\":null,\"Optional\":false},\"DefaultCertificate\":null,\"SniStrict\":false},\"Redirect\":{\"regex\":\"^https://www.(.*)\",\"replacement\":\"https://$1\",\"permanent\":true},\"Auth\":null,\"WhitelistSourceRange\":null,\"WhiteList\":null,\"Compress\":false,\"ProxyProtocol\":null,\"ForwardedHeaders\":{\"Insecure\":true,\"TrustedIPs\":null}},\"traefik\":{\"Address\":\":8080\",\"TLS\":null,\"Redirect\":null,\"Auth\":null,\"WhitelistSourceRange\":null,\"WhiteList\":null,\"Compress\":false,\"ProxyProtocol\":null,\"ForwardedHeaders\":{\"Insecure\":true,\"TrustedIPs\":null}}},\"Cluster\":null,\"Constraints\":[],\"ACME\":{\"Email\":\"EXAMPLE#gmail.com\",\"Domains\":null,\"Storage\":\"/etc/traefik/acme/acme.json\",\"StorageFile\":\"\",\"OnDemand\":false,\"OnHostRule\":true,\"CAServer\":\"https://acme-v02.api.letsencrypt.org/directory\",\"EntryPoint\":\"https\",\"KeyType\":\"\",\"DNSChallenge\":{\"Provider\":\"cloudflare\",\"DelayBeforeCheck\":3000000000,\"Resolvers\":null,\"DisablePropagationCheck\":false},\"HTTPChallenge\":null,\"TLSChallenge\":null,\"DNSProvider\":\"\",\"DelayDontCheckDNS\":0,\"ACMELogging\":true,\"OverrideCertificates\":false,\"TLSConfig\":null},\"DefaultEntryPoints\":[\"https\",\"http\"],\"ProvidersThrottleDuration\":2000000000,\"MaxIdleConnsPerHost\":200,\"IdleTimeout\":0,\"InsecureSkipVerify\":true,\"RootCAs\":null,\"Retry\":{\"Attempts\":0},\"HealthCheck\":{\"Interval\":30000000000},\"RespondingTimeouts\":null,\"ForwardingTimeouts\":null,\"AllowMinWeightZero\":false,\"KeepTrailingSlash\":false,\"Web\":null,\"Docker\":{\"Watch\":true,\"Filename\":\"\",\"Constraints\":null,\"Trace\":false,\"TemplateVersion\":2,\"DebugLogGeneratedTemplate\":false,\"Endpoint\":\"unix:///var/run/docker.sock\",\"Domain\":\"\",\"TLS\":null,\"ExposedByDefault\":false,\"UseBindPortIP\":false,\"SwarmMode\":false,\"Network\":\"\",\"SwarmModeRefreshSeconds\":15},\"File\":null,\"Marathon\":null,\"Consul\":null,\"ConsulCatalog\":null,\"Etcd\":null,\"Zookeeper\":null,\"Boltdb\":null,\"Kubernetes\":null,\"Mesos\":null,\"Eureka\":null,\"ECS\":null,\"Rancher\":null,\"DynamoDB\":null,\"ServiceFabric\":null,\"Rest\":null,\"API\":{\"EntryPoint\":\"traefik\",\"Dashboard\":true,\"Debug\":false,\"CurrentConfigurations\":null,\"Statistics\":null},\"Metrics\":null,\"Ping\":null,\"HostResolver\":null}"
traefik | time="2019-05-04T00:29:34Z" level=info msg="\nStats collection is disabled.\nHelp us improve Traefik by turning this feature on :)\nMore details on: https://docs.traefik.io/basics/#collected-data\n"
traefik | time="2019-05-04T00:29:34Z" level=debug msg="Setting Acme Certificate store from Entrypoint: https"
traefik | time="2019-05-04T00:29:35Z" level=info msg="Preparing server traefik &{Address::8080 TLS:<nil> Redirect:<nil> Auth:<nil> WhitelistSourceRange:[] WhiteList:<nil> Compress:false ProxyProtocol:<nil> ForwardedHeaders:0xc000376be0} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s"
traefik | time="2019-05-04T00:29:35Z" level=debug msg="Creating entry point redirect http -> https"
traefik | time="2019-05-04T00:29:35Z" level=debug msg="Creating regex redirect https -> ^https://www.(.*) -> https://$1"
traefik | time="2019-05-04T00:29:35Z" level=info msg="Preparing server http &{Address::80 TLS:<nil> Redirect:0xc00019b500 Auth:<nil> WhitelistSourceRange:[] WhiteList:<nil> Compress:false ProxyProtocol:<nil> ForwardedHeaders:0xc000376c00} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s"
traefik | time="2019-05-04T00:29:35Z" level=debug msg="Creating entry point redirect http -> https"
traefik | time="2019-05-04T00:29:35Z" level=debug msg="Creating regex redirect https -> ^https://www.(.*) -> https://$1"
traefik | time="2019-05-04T00:29:35Z" level=info msg="Preparing server https &{Address::443 TLS:0xc00042aab0 Redirect:0xc00019b680 Auth:<nil> WhitelistSourceRange:[] WhiteList:<nil> Compress:false ProxyProtocol:<nil> ForwardedHeaders:0xc000376ba0} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s"
traefik | time="2019-05-04T00:29:35Z" level=info msg="Starting server on :8080"
traefik | time="2019-05-04T00:29:35Z" level=info msg="Starting server on :80"
traefik | time="2019-05-04T00:29:35Z" level=info msg="Starting provider configuration.ProviderAggregator {}"
traefik | time="2019-05-04T00:29:35Z" level=info msg="Starting server on :443"
traefik | time="2019-05-04T00:29:35Z" level=info msg="Starting provider *docker.Provider {\"Watch\":true,\"Filename\":\"\",\"Constraints\":null,\"Trace\":false,\"TemplateVersion\":2,\"DebugLogGeneratedTemplate\":false,\"Endpoint\":\"unix:///var/run/docker.sock\",\"Domain\":\"\",\"TLS\":null,\"ExposedByDefault\":false,\"UseBindPortIP\":false,\"SwarmMode\":false,\"Network\":\"\",\"SwarmModeRefreshSeconds\":15}"
traefik | time="2019-05-04T00:29:35Z" level=info msg="Starting provider *acme.Provider {\"Email\":\"EXAMPLE#gmail.com\",\"ACMELogging\":true,\"CAServer\":\"https://acme-v02.api.letsencrypt.org/directory\",\"Storage\":\"/etc/traefik/acme/acme.json\",\"EntryPoint\":\"https\",\"KeyType\":\"\",\"OnHostRule\":true,\"OnDemand\":false,\"DNSChallenge\":{\"Provider\":\"cloudflare\",\"DelayBeforeCheck\":3000000000,\"Resolvers\":null,\"DisablePropagationCheck\":false},\"HTTPChallenge\":null,\"TLSChallenge\":null,\"Domains\":null,\"Store\":{}}"
traefik | time="2019-05-04T00:29:35Z" level=info msg="Testing certificate renew..."
traefik | time="2019-05-04T00:29:35Z" level=debug msg="Configuration received from provider ACME: {}"
traefik | time="2019-05-04T00:29:35Z" level=debug msg="Provider connection established with docker 18.09.5 (API 1.39)"
traefik | time="2019-05-04T00:29:35Z" level=debug msg="Filtering disabled container /traefik"
traefik | time="2019-05-04T00:29:35Z" level=debug msg="Configuration received from provider docker: {}"
traefik | time="2019-05-04T00:29:35Z" level=debug msg="Adding certificate for domain(s) host.EXAMPLE.com"
traefik | time="2019-05-04T00:29:35Z" level=info msg="Server configuration reloaded on :80"
traefik | time="2019-05-04T00:29:35Z" level=info msg="Server configuration reloaded on :443"
traefik | time="2019-05-04T00:29:35Z" level=info msg="Server configuration reloaded on :8080"
traefik | time="2019-05-04T00:29:35Z" level=debug msg="Adding certificate for domain(s) host.EXAMPLE.com"
traefik | time="2019-05-04T00:29:35Z" level=info msg="Server configuration reloaded on :8080"
traefik | time="2019-05-04T00:29:35Z" level=info msg="Server configuration reloaded on :80"
traefik | time="2019-05-04T00:29:35Z" level=info msg="Server configuration reloaded on :443"
traefik | time="2019-05-04T00:29:49Z" level=debug msg="Provider event received {Status:start ID:58b2b12336a7488012b68fed21f90476a1e791f4aa17fff7bd266ee9dfbe7a68 From:mysql:latest Type:container Action:start Actor:{ID:58b2b12336a7488012b68fed21f90476a1e791f4aa17fff7bd266ee9dfbe7a68 Attributes:map[image:mysql:latest com.docker.compose.config-hash:f4e61702252003fcfa77f925b989f7ea4933faa6e0735d189d265eabcb5fa799 com.docker.compose.container-number:1 com.docker.compose.service:db com.docker.compose.version:1.24.0 name:testEXAMPLEcom_db_1 traefik.enable:false com.docker.compose.oneoff:False com.docker.compose.project:testEXAMPLEcom]} Scope:local Time:1556929789 TimeNano:1556929789841010516}"
traefik | time="2019-05-04T00:29:49Z" level=debug msg="Filtering disabled container /testEXAMPLEcom_db_1"
traefik | time="2019-05-04T00:29:49Z" level=debug msg="Filtering disabled container /traefik"
traefik | time="2019-05-04T00:29:49Z" level=debug msg="Configuration received from provider docker: {}"
traefik | time="2019-05-04T00:29:49Z" level=info msg="Skipping same configuration for provider docker"
traefik | time="2019-05-04T00:29:50Z" level=debug msg="Provider event received {Status:start ID:6d816029188e7a2ff7b7d37fec254e21424cb3e2f69b42c81638d40d56d818f3 From:wordpress:latest Type:container Action:start Actor:{ID:6d816029188e7a2ff7b7d37fec254e21424cb3e2f69b42c81638d40d56d818f3 Attributes:map[com.docker.compose.version:1.24.0 traefik.docker.network:traefik_proxy traefik.domain:test.EXAMPLE.com traefik.enabled:true traefik.frontend.headers.SSLHost:EXAMPLE.com traefik.frontend.headers.browserXSSFilter:true com.docker.compose.container-number:1 traefik.frontend.headers.STSPreload:false traefik.frontend.headers.STSSeconds:0 traefik.frontend.headers.contentTypeNosniff:true traefik.frontend.headers.forceSTSHeader:false com.docker.compose.oneoff:False com.docker.compose.service:wordpress traefik.frontend.headers.SSLRedirect:true traefik.frontend.rule:Host:www.test.EXAMPLE.com,test.EXAMPLE.com traefik.port:80 com.docker.compose.config-hash:3e163655e60a2111f256d3abc3289244f21676737888f7f4340cd543d82300d0 com.docker.compose.project:testEXAMPLEcom image:wordpress:latest name:testEXAMPLEcom_wordpress_1 traefik.frontend.headers.STSIncludeSubdomains:false traefik.frontend.headers.frameDeny:true]} Scope:local Time:1556929790 TimeNano:1556929790976253470}"
traefik | time="2019-05-04T00:29:50Z" level=debug msg="Filtering disabled container /testEXAMPLEcom_wordpress_1"
traefik | time="2019-05-04T00:29:50Z" level=debug msg="Filtering disabled container /testEXAMPLEcom_db_1"
traefik | time="2019-05-04T00:29:50Z" level=debug msg="Filtering disabled container /traefik"
traefik | time="2019-05-04T00:29:50Z" level=debug msg="Configuration received from provider docker: {}"
traefik | time="2019-05-04T00:29:50Z" level=info msg="Skipping same configuration for provider docker"
I was using traefik.enabled instead of traefik.enable.
I tried for a week before posting.
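In other words, the label block on the wordpress service needs the exact key traefik.enable. A corrected fragment might look like this (the other labels are unchanged from the question):

labels:
  - "traefik.enable=true"    # was "traefik.enabled=true", which Traefik ignores
  - "traefik.frontend.rule=Host:www.test.EXAMPLE.com,test.EXAMPLE.com"
  - "traefik.docker.network=traefik_proxy"
  - "traefik.port=80"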

docker traefik letsencrypt DDNS duckdns behind dd-wrt router

I want to expose self-hosted services to the internet (tinytinyrss, owncloud and other stuff). So I decided to use Traefik as a reverse proxy with Let's Encrypt for HTTPS certificates. Before jumping into a whole stack for each service, I tried to test a simple stack with Traefik, Let's Encrypt and a simple whoami container that responds with plain text. Docker is running on an Odroid XU-4 board.
Here is my docker-compose :
version: '3.6'
services:
  traefik:
    container_name: traefik
    image: traefik:1.6.1-alpine
    ports:
      - 80:80
      - 443:443
      - 8080:8080
    networks:
      - proxy
    environment:
      - DUCKDNS_TOKEN=my_duck_dns_token
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik/traefik.toml:/traefik.toml
      - ./traefik/acme/acme.json:/etc/traefik/acme.json
      - ./log:/var/log/traefik
    labels:
      - traefik.enable=true
      - traefik.port=8080
      - traefik.frontend.rule=Host:my_duck_dns.duckdns.org
    restart: always
  whoami:
    container_name: whoami
    image: hypriot/rpi-whoami
    ports:
      - 8000
    networks:
      - proxy
    labels:
      - traefik.frontend.rule=Host:my_duck_dns.duckdns.org;PathPrefixStrip:/whoami/
      - traefik.frontend.entryPoints=https
      - traefik.docker.network=proxy
      - traefik.protocol=http
      - traefik.enable=true
      - traefik.port=8000
    restart: always
networks:
  proxy:
    name: proxy
And my traefik.toml:
debug = true
logLevel = "DEBUG"
checkNewVersion = true
defaultEntryPoints = ["http", "https"]
[proxy]
address = ":8080"
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
#[traefikLog]
# filePath = "/var/log/traefik/traefik.log"
# format = "json"
# logLevel = "DEBUG"
#[accessLog]
# filePath = "/var/log/traefik/access.log"
# format = "json"
# logLevel = "DEBUG"
[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "my_duck_dns.duckdns.org"
exposedbydefault = false
watch = true
[acme]
caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
email = "my_email_address#gmail.com"
storage = "/etc/traefik/acme.json"
entryPoint = "https"
acmeLogging = false
[acme.httpChallenge]
entryPoint = "http"
[acme.dnsChallenge]
provider = "duckdns"
delayBeforeCheck = 0
[[acme.domains]]
main = "my_duck_dns.duckdns.org"
sans = ["my_duck_dns.duckdns.org"]
My router runs DD-WRT; I forward ports 80, 8080 and 443 to a Debian computer with this docker-compose on it. The router handles the dynamic DuckDNS update.
I ran my containers with the following command:
docker-compose build --no-cache && docker-compose up --build
And I get these logs when I try to hit http://my_duck_dns.duckdns.org/whoami/ from outside my LAN. Port 80 is redirected correctly to 443, but with this log:
traefik | time="2018-05-18T16:15:05Z" level=debug msg="http: TLS handshake error from 151.58.32.33:65175: read tcp 172.27.0.3:443->154.47.32.66:64175: read: connection reset by peer"
The whole DEBUG log is below:
whoami | Listening on :8000
traefik | time="2018-05-18T18:30:55Z" level=info msg="Using TOML configuration file /traefik.toml"
traefik | time="2018-05-18T18:30:55Z" level=info msg="Traefik version v1.6.1 built on 2018-05-14_07:16:56PM"
traefik | time="2018-05-18T18:30:55Z" level=info msg="\nStats collection is disabled.\nHelp us improve Traefik by turning this feature on :)\nMore details on: https://docs.traefik.io/basics/#collected-data\n"
traefik | time="2018-05-18T18:30:55Z" level=debug msg="Global configuration loaded {\"LifeCycle\":{\"RequestAcceptGraceTimeout\":0,\"GraceTimeOut\":10000000000},\"GraceTimeOut\":0,\"Debug\":true,\"CheckNewVersion\":true,\"SendAnonymousUsage\":false,\"AccessLogsFile\":\"\",\"AccessLog\":null,\"TraefikLogsFile\":\"\",\"TraefikLog\":null,\"Tracing\":null,\"LogLevel\":\"DEBUG\",\"EntryPoints\":{\"http\":{\"Address\":\":80\",\"TLS\":null,\"Redirect\":{\"entryPoint\":\"https\"},\"Auth\":null,\"WhitelistSourceRange\":null,\"WhiteList\":null,\"Compress\":false,\"ProxyProtocol\":null,\"ForwardedHeaders\":{\"Insecure\":true,\"TrustedIPs\":null}},\"https\":{\"Address\":\":443\",\"TLS\":{\"MinVersion\":\"\",\"CipherSuites\":null,\"Certificates\":null,\"ClientCAFiles\":null,\"ClientCA\":{\"Files\":null,\"Optional\":false}},\"Redirect\":null,\"Auth\":null,\"WhitelistSourceRange\":null,\"WhiteList\":null,\"Compress\":false,\"ProxyProtocol\":null,\"ForwardedHeaders\":{\"Insecure\":true,\"TrustedIPs\":null}}},\"Cluster\":null,\"Constraints\":[],\"ACME\":null,\"DefaultEntryPoints\":[\"http\",\"https\"],\"ProvidersThrottleDuration\":2000000000,\"MaxIdleConnsPerHost\":200,\"IdleTimeout\":0,\"InsecureSkipVerify\":false,\"RootCAs\":null,\"Retry\":null,\"HealthCheck\":{\"Interval\":30000000000},\"RespondingTimeouts\":null,\"ForwardingTimeouts\":null,\"AllowMinWeightZero\":false,\"Web\":null,\"Docker\":{\"Watch\":true,\"Filename\":\"\",\"Constraints\":null,\"Trace\":false,\"TemplateVersion\":2,\"DebugLogGeneratedTemplate\":false,\"Endpoint\":\"unix:///var/run/docker.sock\",\"Domain\":\"#.duckdns.org\",\"TLS\":null,\"ExposedByDefault\":false,\"UseBindPortIP\":false,\"SwarmMode\":false},\"File\":null,\"Marathon\":null,\"Consul\":null,\"ConsulCatalog\":null,\"Etcd\":null,\"Zookeeper\":null,\"Boltdb\":null,\"Kubernetes\":null,\"Mesos\":null,\"Eureka\":null,\"ECS\":null,\"Rancher\":null,\"DynamoDB\":null,\"ServiceFabric\":null,\"Rest\":null,\"API\":null,\"Metrics\":null,\"Ping\":null}"
traefik | time="2018-05-18T18:30:55Z" level=error msg="Failed to read new account, ACME data conversion is not available : permissions 664 for /etc/traefik/acme.json are too open, please use 600"
traefik | time="2018-05-18T18:30:55Z" level=info msg="Preparing server http &{Address::80 TLS:<nil> Redirect:0x13e3d980 Auth:<nil> WhitelistSourceRange:[] WhiteList:<nil> Compress:false ProxyProtocol:<nil> ForwardedHeaders:0x13ea6c00} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s"
traefik | time="2018-05-18T18:30:55Z" level=info msg="Preparing server https &{Address::443 TLS:0x13b82080 Redirect:<nil> Auth:<nil> WhitelistSourceRange:[] WhiteList:<nil> Compress:false ProxyProtocol:<nil> ForwardedHeaders:0x13ea6c10} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s"
traefik | time="2018-05-18T18:30:55Z" level=info msg="Starting server on :80"
traefik | time="2018-05-18T18:30:56Z" level=info msg="Starting server on :443"
traefik | time="2018-05-18T18:30:56Z" level=info msg="Starting provider configuration.providerAggregator {}"
traefik | time="2018-05-18T18:30:56Z" level=info msg="Starting provider *docker.Provider {\"Watch\":true,\"Filename\":\"\",\"Constraints\":null,\"Trace\":false,\"TemplateVersion\":2,\"DebugLogGeneratedTemplate\":false,\"Endpoint\":\"unix:///var/run/docker.sock\",\"Domain\":\"my_duck_dns.duckdns.org\",\"TLS\":null,\"ExposedByDefault\":false,\"UseBindPortIP\":false,\"SwarmMode\":false}"
traefik | time="2018-05-18T18:30:56Z" level=info msg="Starting provider *acme.Provider {\"Email\":\"my_email_address#gmail.com\",\"ACMELogging\":false,\"CAServer\":\"https://acme-staging-v02.api.letsencrypt.org/directory\",\"Storage\":\"/etc/traefik/acme.json\",\"EntryPoint\":\"https\",\"OnHostRule\":false,\"OnDemand\":false,\"DNSChallenge\":{\"Provider\":\"duckdns\",\"DelayBeforeCheck\":0},\"HTTPChallenge\":{\"EntryPoint\":\"http\"},\"Domains\":[{\"Main\":\"my_duck_dns.duckdns.org\",\"SANs\":[\"my_duck_dns.duckdns.org\"]}],\"Store\":{}}"
traefik | time="2018-05-18T18:30:56Z" level=error msg="Error starting provider *acme.Provider: unable to get ACME account : permissions 664 for /etc/traefik/acme.json are too open, please use 600"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="Provider connection established with docker 18.05.0-ce (API 1.37)"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="originLabelsmap[com.docker.compose.service:traefik org.label-schema.docker.schema-version:1.0 org.label-schema.version:v1.6.1 traefik.port:8080 com.docker.compose.oneoff:False org.label-schema.description:A modern reverse-proxy traefik.frontend.rule:Host:my_duck_dns.duckdns.org com.docker.compose.config-hash:d0eee974d8ebe83a1e048b7e554fad562e4c3631785fe5dc2485f947910ffb90 com.docker.compose.container-number:1 org.label-schema.url:https://traefik.io org.label-schema.vendor:Containous traefik.enable:true com.docker.compose.project:odroidtests com.docker.compose.version:1.20.0 org.label-schema.name:Traefik]"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="allLabelsmap[:map[traefik.frontend.rule:Host:my_duck_dns.duckdns.org traefik.enable:true traefik.port:8080]]"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="originLabelsmap[com.docker.compose.config-hash:3586d1268056130cedb21e01704782c7d311fbcb286fd56b64e92ec8bb690e22 traefik.docker.network:proxy traefik.frontend.entryPoints:https traefik.port:8000 traefik.protocol:http traefik.frontend.rule:Host:my_duck_dns.duckdns.org;PathPrefixStrip:/whoami/ com.docker.compose.container-number:1 com.docker.compose.oneoff:False com.docker.compose.project:odroidtests com.docker.compose.service:whoami com.docker.compose.version:1.20.0 traefik.enable:true]"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="allLabelsmap[:map[traefik.port:8000 traefik.protocol:http traefik.docker.network:proxy traefik.enable:true traefik.frontend.rule:Host:my_duck_dns.duckdns.org;PathPrefixStrip:/whoami/ traefik.frontend.entryPoints:https]]"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="originLabelsmap[com.docker.compose.service:traefik org.label-schema.docker.schema-version:1.0 org.label-schema.version:v1.6.1 traefik.port:8080 com.docker.compose.oneoff:False org.label-schema.description:A modern reverse-proxy traefik.frontend.rule:Host:my_duck_dns.duckdns.org com.docker.compose.config-hash:d0eee974d8ebe83a1e048b7e554fad562e4c3631785fe5dc2485f947910ffb90 com.docker.compose.container-number:1 org.label-schema.url:https://traefik.io org.label-schema.vendor:Containous traefik.enable:true com.docker.compose.project:odroidtests com.docker.compose.version:1.20.0 org.label-schema.name:Traefik]"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="allLabelsmap[:map[traefik.port:8080 traefik.frontend.rule:Host:my_duck_dns.duckdns.org traefik.enable:true]]"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="originLabelsmap[com.docker.compose.project:odroidtests com.docker.compose.service:whoami com.docker.compose.version:1.20.0 traefik.enable:true traefik.frontend.rule:Host:my_duck_dns.duckdns.org;PathPrefixStrip:/whoami/ com.docker.compose.container-number:1 com.docker.compose.oneoff:False traefik.frontend.entryPoints:https traefik.port:8000 traefik.protocol:http com.docker.compose.config-hash:3586d1268056130cedb21e01704782c7d311fbcb286fd56b64e92ec8bb690e22 traefik.docker.network:proxy]"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="allLabelsmap[:map[traefik.enable:true traefik.docker.network:proxy traefik.frontend.entryPoints:https traefik.port:8000 traefik.protocol:http traefik.frontend.rule:Host:my_duck_dns.duckdns.org;PathPrefixStrip:/whoami/]]"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="Validation of load balancer method for backend backend-traefik-odroidtests failed: invalid load-balancing method ''. Using default method wrr."
traefik | time="2018-05-18T18:30:56Z" level=debug msg="Validation of load balancer method for backend backend-whoami-odroidtests failed: invalid load-balancing method ''. Using default method wrr."
traefik | time="2018-05-18T18:30:56Z" level=debug msg="Configuration received from provider docker: {\"backends\":{\"backend-traefik-odroidtests\":{\"servers\":{\"server-traefik\":{\"url\":\"http://172.27.0.2:8080\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}},\"backend-whoami-odroidtests\":{\"servers\":{\"server-whoami\":{\"url\":\"http://172.27.0.3:8000\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}}},\"frontends\":{\"frontend-Host-my_duck_dns-duckdns-org-0\":{\"entryPoints\":[\"http\",\"https\"],\"backend\":\"backend-traefik-odroidtests\",\"routes\":{\"route-frontend-Host-my_duck_dns-duckdns-org-0\":{\"rule\":\"Host:my_duck_dns.duckdns.org\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":[]},\"frontend-Host-my_duck_dns-duckdns-org-PathPrefixStrip-whoami-1\":{\"entryPoints\":[\"https\"],\"backend\":\"backend-whoami-odroidtests\",\"routes\":{\"route-frontend-Host-my_duck_dns-duckdns-org-PathPrefixStrip-whoami-1\":{\"rule\":\"Host:my_duck_dns.duckdns.org;PathPrefixStrip:/whoami/\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":[]}}}"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="Creating frontend frontend-Host-my_duck_dns-duckdns-org-0"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="Wiring frontend frontend-Host-my_duck_dns-duckdns-org-0 to entryPoint http"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="Creating route route-frontend-Host-my_duck_dns-duckdns-org-0 Host:my_duck_dns.duckdns.org"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="Creating entry point redirect http -> https"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="Creating backend backend-traefik-odroidtests"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="Creating load-balancer wrr"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="Creating server server-traefik at http://172.27.0.2:8080 with weight 1"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="Wiring frontend frontend-Host-my_duck_dns-duckdns-org-0 to entryPoint https"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="Creating route route-frontend-Host-my_duck_dns-duckdns-org-0 Host:my_duck_dns.duckdns.org"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="Creating backend backend-traefik-odroidtests"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="Creating load-balancer wrr"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="Creating server server-traefik at http://172.27.0.2:8080 with weight 1"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="Creating frontend frontend-Host-my_duck_dns-duckdns-org-PathPrefixStrip-whoami-1"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="Wiring frontend frontend-Host-my_duck_dns-duckdns-org-PathPrefixStrip-whoami-1 to entryPoint https"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="Creating route route-frontend-Host-my_duck_dns-duckdns-org-PathPrefixStrip-whoami-1 Host:my_duck_dns.duckdns.org;PathPrefixStrip:/whoami/"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="Creating backend backend-whoami-odroidtests"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="Creating load-balancer wrr"
traefik | time="2018-05-18T18:30:56Z" level=debug msg="Creating server server-whoami at http://172.27.0.3:8000 with weight 1"
traefik | time="2018-05-18T18:30:56Z" level=info msg="Server configuration reloaded on :80"
traefik | time="2018-05-18T18:30:56Z" level=info msg="Server configuration reloaded on :443"
traefik | time="2018-05-18T18:30:05Z" level=debug msg="http: TLS handshake error from 151.58.32.33:65175: read tcp 172.27.0.3:443->154.47.32.66:64175: read: connection reset by peer"
The acme.json on the container is filled with this:
{
"Account": {
"Email": "my_email_address#gmail.com",
"Registration": {
"body": {
"status": "valid",
"contact": [
"mailto:my_email_address#gmail.com"
]
},
"uri": "https://acme-staging-v02.api.letsencrypt.org/acme/acct/6100073"
},
"PrivateKey": "MIIJKQIBAAKCAgEAyg8Zb7v4cegwKpM1RxbCdrqsqfqsfdsqdfsdfgsdfgsdfg/EG0w4UP0MGD2mwZFQXAVsFjK8M3IRFP0xqNbNTQgN6nw/+pPqlxPl5kmWHIhnRp7iXGWMw/UCexT8P2eWGA9UUzXcLxwfsB74nc23nKewLBGsjRtbBXybrjtfNhLOhpPv5mF5T3s0QpT4JNlU4D3+AqnI4Vk09gVH2B+c4VK3a0Z6XsdXRPUP2oSesLyc3EeA0ayQRQyr/FAC89EDfM9BiOkONcTsCwzwQ7bstkfVXl7hZ09E0mYmdvQwMGLqTIns2EMPP3+i+/oF4SVuB/g0Sv8S06bQe54vQsHalU5EUv98ge2hfZdHLSsvgnpylTH6+TXxYIxdvqhavraxgRgirOrVV06rq5vXcHNkTW+lL/b2D23LDs/MDlJ7N5iwlycGABlNNospiYc20YLldDlY1YyZ7ToVu437+Y86gQ+msOYWlbezgE9tNbe8/kuK1zlRS8U2t4yj74YYQq96pmZTGmdnrDKnPU45JhnL6KQD1W7joLEl7UuSVvD7iVo5TpUo8VLgasKrV3WCs/FDFq1GD5S1Rbp070ZNPeA6cJaRE8cJV2fvGxQJXZUCAwEAAQKCAgB2ZsCuA8TC4p8O47INlR2guYgGVO+gAY7ZXdE1dcqD/5r6z9eRe/X4O575sNDUKu3lWDt3twkpXwKPqFSjf4DpmuJYF5Y2tSVbRlPudMESARH9UjvxJIlywYgilRV4QvArbz5anebHXzV/I/E0jmWCHVSyNe0JoYNQD0sg+kz2U9p9n8R80m+sWulQjorEmvhxIv8F3D26gM4kYOOErbFbrwdP6cdHG5g0G3qLzYfgYfQZ2I1iuN+T7TxLwdNOzMJXvW6J9GmbH+iRfFeXbmK6F1QziOfoGaVtUnwl8UA3+QyQXgXA4rlupCLqA1HoiO4h9zGG+l/ZnU9xXJJAq3VB/E38tBCBCMUB8Ms88ydsR2twq2THxRexiojCNbwxMjPV9qKi8hLU9ryl/ja7Pyu3ZgxsDNWKGla+UEdw/cw0CkmRKPIOPFmFbRiL58nSG+OLpAHpBjDkhWGH+wS/A3+snUHVPYhNzQPGwiEpunw/Yp5sbiqB4QkJAE3ThBByg3JdBsEaz0kZHyBNC3DcrqG8V4kBG0I1uMjddXfyJZd+J11RV6wPGY0hHKIdgaSvDCNIQ3A0WotnteQjG1MFWALgMaf1fxp+SQXoYVX051c6fgnCb3t+RT+CNBTnhx6tKf94DwY+TcGWdXL19Jlrp9ADPoKHOww8GbfAAkSj53F95QKCAQEA//0zCOVcwT6qoTKzD5RGcxY8XA5qy7ElMYp0cnxZt3FhMEgqzfVXmUF7JECvnRg0BhpAvW8mRbBFTvHnFuST89GAIrxH8FY4WdF+u2a82fd6fDYYGuJrWhWeMG4tl3jaj/CtdQLcfZmeP9QEPw5BGMRFHgckNrIsUGxu1CLbqDMg1kRxFdpRVrHaLQoLRoM5ClS+uskzrvXrU7yiRbH380CmIdB7wyXaS08coRmVNEFIdjNMbUw+fy5ZxotO31N2a7ACb1djKu/AhSDQbj9EwKuXD87xLO6pfHObaFe93N8zkS3UCF3CR4tnjVsAu3DZ7NFCd7tBN4y4T7CT0LRn8wKCAQEAyhFPWzGlrl4Qd9PBcxYaWo9BveiFDRqcdkaId3QI76PB7eeLgAAs6NEYH2QrqgKs46fx5f7Wqqj2PRGIImXaSAB66pzPKd+rlYYIuyKT4iYFZysRy59B3RNw2nsMF0v3WRYOOl2QA42Ziy9We1fe2E9ohPK8fUpT6ZOe5hv8rCA5iUOhDppl8dzEepPlqkEoE25DoFZR30HZGRL7JT64KK5xQtfjyyXwdFthwLj27btnUSx5ldYN8PUkALrf38YKaoiCfj3EB2jPfdul0YtmrirdvneGhpEXQDPloznmbP2/U73znfSSqZSHudI1g+x0mXB87H1J1pGkQsbPVjpOVwKCAQA0DzohxQNoCWaKAdWIhY8OOKdt0UDGy+/Uc2PbJI7aT6SEPSj3Wb3G3Ro99SnBuPpbg1tHKyONaJuvwmJMtY+hNino5oF6zw4GtiQf2HTvnvS57gZY8VMDrwHMt5tuApXwT/H2qe5NXMBiGqwCZtO2RbQIt0sWFIYOlP61BaHGQx+ac7DL0OpZxzGnlzNT07v17eYb9m8cVcbV8LbPlbHnNm6S0eNZfIk4Z45a9OjzB5PE9gnE8IyFMNfxGMOhh0e9/r2ABzWTtc5hRJse0J8az8qY3G0PxjmRpbElNzLViE7kZ32Hdgncou0cQjWT6Q9oqeXqk5pfwa56Bl8JQqchAoIBAQChl/o4WZm/ueW9jhB0MsbciRfwAVT1x8Q8KefUb2z+B5183eCHepxvi1eZMwhgK0eLv7EJVyTg0cIp0C1oJL/NOOUTXleliwOyzb+Jt/s/rVxAxwayKigH3hYwApsGvm+ORL8YGd6jmMejsTWd6gWCQu64802dfKVic/Vs3BDSreqVRQo1nW/NXdmalU/jObwM3e8i+CT9P7GYBb/mZyPrFKXq6K94tFx5EOM5tjFyqJ3VIpYRJ196xPAHzWpfkAagb4672jU8H6tfYRpYWvzAZ/Nw8DEayEkpxNbuE82cd8hb9dovBXmMOAXaqqq1V5Ffa7/bd85m043jAQ6qTHJ9AoIBAQCaz4ooInQ8eRF0umZ6NmW3j8cKNT0Avkqt2zXwgvjGTrKaud6smH8qmhS2JNAUcLSHqAGJULROWTKaJiT3UOggPEAdAC6+3pxfn3k9rIEeNU1OMTgwxNcX8JdtDw+WA+WMDk6LM/ZPZkoT6YSGcvk1V9ocp0gzFpL5prlkMbXxR6d8dcI1elTDXbHQ/zTAQiPxhjSMvwR/8HRHzEnfxU5bSQD9sKt6FZhTdi322Crjx9PmVOE6AR+kqqy9GPZ9S8oYthwLCgfuZNIuy3O1u9V5epg4vNoNEB7WDtHIxYQqfhOPHaqsfhY5t0JQBYcUoSWhFskRDvVrD1f6eZxVspcZ"
},
"Certificates": [
{
"Domain": {
"Main": "my_duck_dns.duckdns.org",
"SANs": null
},
"Certificate": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUcvVENDQmVXZ0F3SUsgdfgsdfm16eDNycmNsVHltTnNDK0ZDTkRBTkJna3Foa2lHOXcwQkFRc0YKQURBaU1TQXdIZ1lEVlFRRERCZEdZV3RsSUV4RklFbHVkR1Z5YldWa2FXRjBaU0JZTVRBZUZ3MHhPREExTVRZeApPREV6TXpsYUZ3MHhPREE0TVRReE9ERXpNemxhTUNNeElUQWZCZ05WQkFNVEdHRjNaWE52YldWemFHOWxjeTVrCmRXTnJaRzV6TG05eVp6Q0NBaUl3RFFZSktvWklodmNOQVFFQkJRQURnZ0lQQURDQ0Fnb0NnZ0lCQU1kaVE3VDEKNjZuOXR2MWphRHJ0ZnBwRGc1SzZBbkw3OGNzS2c3SEFCTFRUSlBzRlFtNkFTckpITXBhMW5iUk5IQTMydjQvTwpDZzB5RlZvUnFZUjRtT0djK01DSlpicVhERmhUMmRwcUJIdG9WWFBuTWV1OWRBTTdDZENNSjVYaW4wbk02NHhNCmwvZm96UWJsRWVxOUpXU2JxVjAwMzhtb2tuMzlQWmFBRUc1N2pQbDUwaUpnU0VvU0xEeXQwTXZjeGZSdDE1WXIKeUhMZGhpMkM5emxOUnlYbXdod1d4cXpGTDBDcDFlb01hcG5GNldYT1NQcDBEZTJIR3RoNjMxQ2hRcm02L1dIRgpUVkZwa3JVUGlpbzhwZ2dEd1pUQk5qbE91TFRNVWVnVHhsVkt4Q0pRRUdhNVBXVzFvWXM1L05veGNvdEpiVVJXCkdHTlh6YTF5YlZ3RFJoUUV6TEhpWGZlTi9KZ3dEY2JvTFdXQUp1dUlGZUdCVENBUVFINDljSVV5NjhqZzh0WnQKUXBUMm1hbmRFRFgvVDZXSFNWVUdnTGtXbGtmN1hwMW82dUozc1h0ZHpBT2E3MlVLRHVxcFIraXNPVnluQ1h6MgpCNUVRbTc1QjNYTmdEUGdQYnZnK1hHQkJVSDdyK0FCSVJwSDgwaVVDdnd5WVdKSTZGNUhQajZOTmQrZTExd0dVCkhTU2FQTUxBaHJGWmUyenNUN3VXTHp3bmVEeExrYWI2MzZpekpuS0tNWUlmZ2I0WVEwUXpqQU1yVUZML05ITTIKeXBrMmhCL2dYKzFibFVqNk1TTU1CMG9wRmk0ZlJlQ1JZc0l4Y0dEU1hxcGtVSXdmVUdlRUFXb3BjYWp3TTJaRwpoR1lKUW5NUDVKVjd1VVBBamdXVHd2Nk4rSlRJRHJweXgwVTFBZ01CQUFHamdnTXBNSUlESlRBT0JnTlZIUThCCkFmOEVCQU1DQmFBd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUdDQ3NHQVFVRkJ3TUNNQXdHQTFVZEV3RUIKL3dRQ01BQXdIUVlEVlIwT0JCWUVGS3F4dUMyV1kyZnZCR3hGOGdPTXR4YmNzV2l0TUI4R0ExVWRJd1FZTUJhQQpGTURNQTBhNVdDRE1YSEp3OCtFdXl5Q205V2c2TUhjR0NDc0dBUVVGQndFQkJHc3dhVEF5QmdnckJnRUZCUWN3CkFZWW1hSFIwY0RvdkwyOWpjM0F1YzNSbkxXbHVkQzE0TVM1c1pYUnpaVzVqY25sd2RDNXZjbWN3TXdZSUt3WUIKQlFVSE1BS0dKMmgwZEhBNkx5OWpaWEowTG5OMFp5MXBiblF0ZURFdWJHVjBjMlZ1WTNKNWNIUXViM0puTHpBagpCZ05WSFJFRUhEQWFnaGhoZDJWemIyMWxjMmh2WlhNdVpIVmphMlJ1Y3k1dmNtY3dnZjRHQTFVZElBU0I5akNCCjh6QUlCZ1puZ1F3QkFnRXdnZVlHQ3lzR0FRUUJndDhUQVFFQk1JSFdNQ1lHQ0NzR0FRVUZCd0lCRmhwb2RIUncKT2k4dlkzQnpMbXhsZEhObGJtTnllWEIwTG05eVp6Q0Jxd1lJS3dZQkJRVUhBZ0l3Z1o0TWdadFVhR2x6SUVObApjblJwWm1sallYUmxJRzFoZVNCdmJteDVJR0psSUhKbGJHbGxaQ0IxY0c5dUlHSjVJRkpsYkhscGJtY2dVR0Z5CmRHbGxjeUJoYm1RZ2IyNXNlU0JwYmlCaFkyTnZjbVJoYm1ObElIZHBkR2dnZEdobElFTmxjblJwWm1sallYUmwKSUZCdmJHbGplU0JtYjNWdVpDQmhkQ0JvZEhSd2N6b3ZMMnhsZEhObGJtTnllWEIwTG05eVp5OXlaWEJ2YzJsMApiM0o1THpDQ0FRVUdDaXNHQVFRQjFua0NCQUlFZ2ZZRWdmTUE4UUIzQUNoMkdoaVFKL3Z2UE5EV0dnR05kckJRClZ5bkhwMEViekwzMkJQUmRRbUZUQUFBQlkycGZTRWdBQUFRREFFZ3dSZ0loQUxWTVlBNTVCdFNXZUNOSFg4cXgKMFV0dmlNU0kzdVVueCtQNlJUN05yRWlRQWlFQXBDbFAvOXhnQXIzU0xZL1hWdWMwb2ZVVnBkSkFNWDBjMVFjWgpiTEtmbkRvQWRnQ3d6SVBscGZsOWE2OThDY3dvU1FTSEtzZm9peE1zWTFDM3h2MG00V3hzZHdBQUFXTnFYMGtwCkFBQUVBd0JITUVVQ0lDMDRmYnpaM3BFQlRVUGhLa0JNQk9kTUJIakh1dmRVdm81MUNiajFraEhUQWlFQW5ZTG0KZTE1RHI2SFl0RStGRnpmdmtQSGxWZ3RLZnVEcUF2Mzl0VVhnanJ3d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQgpBSlBmVmc5SHJkQ05WdDQ4STg5cC9CTmlMbktMa3c5T1NkSmpBd095emxNZ21oMjJZdWJFay9XQ0dhRUJ6a0g5CjBSOWF5WlQ3RkhjU1NrYVdqT2MrQmpuWUswV2dGanVGaVZwdkdIMkJ1WWtZM3NXM2tsR2xPQlkxYmltZW1mdUYKU0U5Vlo5Nk5WQ21hbEI1NnVSS3dDRjJwQzFKeE16VnQ3MEtrbEk5V2hBQUd1RDZqWWJyZWhiYU5TMy9ObmV1SQp5U3lXeFBPOHVSTE44Y2VlTUJ1cnh5eHZMRWpXdXpGVkFld0RmMkFvSlpyQ3ppNERCNnZDbFdMOHhEYU52eE5hCjBlKzhSZEs1cmpzMWh6T1dRb05NWExBb0FSM1VtYWVmZkdINVVqbGlvRk11dW84TjhQdnNmVGRlU3ROaG40d1IKdi9idWJlTGRseHJZbk02OVdFTExxTzQ9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KCi0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlFcXpDQ0FwT2dBd0lCQWdJUkFJdmhLZzVaUk8wOFZHUXg4SmRoVCtVd0RRWUpLb1pJaHZjTkFRRUxCUUF3CkdqRVlNQllHQTFVRUF3d1BSbUZyWlNCTVJTQlNiMjkwSUZneE1CNFhEVEUyTURVeU16SXlNRGMxT1ZvWERUTTIKTURVeU16SXlNRGMxT1Zvd0
lqRWdNQjRHQTFVRUF3d1hSbUZyWlNCTVJTQkpiblJsY20xbFpHbGhkR1VnV0RFdwpnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFEdFdLeVNEbjdyV1pjNWdnanozWkIwCjhqTzR4dGkzdXpJTmZENXNRN0xqN2h6ZXRVVCt3UW9iK2lYU1praG52eCtJdmRiWEY1L3l0OGFXUHBVS25QeW0Kb0x4c1lpSTVnUUJMeE5EekllYzBPSWFmbFdxQXIyOW03SjgrTk50QXBFTjhuWkZuZjNiaGVoWlc3QXhtUzFtMApablNzZEh3MEZ3K2JnaXhQZzJNUTlrOW9lZkZlcWErN0txZGx6NWJiclVZVjJ2b2x4aERGdG5JNE1oOEJpV0NOCnhESDFIaXpxK0dLQ2NIc2luRFpXdXJDcWRlci9hZkpCblFzK1NCU0w2TVZBcEh0K2QzNXpqQkQ5MmZPMkplNTYKZGhNZnpDZ09LWGVKMzQwV2hXM1RqRDF6cUxaWGVhQ3lVTlJuZk9tV1pWOG5FaHRIT0ZiVUNVN3IvS2tqTVpPOQpBZ01CQUFHamdlTXdnZUF3RGdZRFZSMFBBUUgvQkFRREFnR0dNQklHQTFVZEV3RUIvd1FJTUFZQkFmOENBUUF3CkhRWURWUjBPQkJZRUZNRE1BMGE1V0NETVhISnc4K0V1eXlDbTlXZzZNSG9HQ0NzR0FRVUZCd0VCQkc0d2JEQTAKQmdnckJnRUZCUWN3QVlZb2FIUjBjRG92TDI5amMzQXVjM1JuTFhKdmIzUXRlREV1YkdWMGMyVnVZM0o1Y0hRdQpiM0puTHpBMEJnZ3JCZ0VGQlFjd0FvWW9hSFIwY0RvdkwyTmxjblF1YzNSbkxYSnZiM1F0ZURFdWJHVjBjMlZ1ClkzSjVjSFF1YjNKbkx6QWZCZ05WSFNNRUdEQVdnQlRCSm5Ta2lrU2c1dm9nS05oY0k1cEZpQmg1NERBTkJna3EKaGtpRzl3MEJBUXNGQUFPQ0FnRUFCWVN1NElsK2ZJME1ZVTQyT1RtRWorMUhxUTVEdnlBZXlDQTZzR3VaZHdqRgpVR2VWT3YzTm5MeWZvZnVVT2pFYlk1aXJGQ0R0bnYrMGNrdWtVWk45bHo0UTJZaldHVXBXNFRUdTNpZVRzYUM5CkFGdkNTZ05ISnlXU1Z0V3ZCNVhEeHNxYXdsMUt6SHp6d3IxMzJiRjJydEd0YXpTcVZxSzlFMDdzR0hNQ2YrenAKRFFWRFZWR3RxWlBId1gzS3FVdGVmRTYyMWI4Ukk2VkNsNG9EMzBPbGY4cGp1ekc0SktCRlJGY2x6TFJqby9oNwpJa2tmalo4d0RhN2ZhT2pWWHg2bitlVVEyOWNJTUN6cjgvck5XSFM5cFlHR1FLSmlZMnhtVkM5aDEySDk5WHlmCnpXRTl2YjV6S1AzTVZHNm5lWDFoU2RvN1BFQWI5ZnFSaEhrcVZzcVV2SmxJUm12WHZWS1R3TkNQM2VDalJDQ0kKUFRBdmpWKzRuaTc4NmlYd3dGWU56OGwzUG1QTEN5UVhXR29obko4aUJtKzVuazdPMnluYVBWVzBVMlcrcHQydwpTVnV2ZERNNXpHdjJmOWx0TldVaVlaSEoxbW1POTdqU1kvNllmZE9VSDY2aVJ0UXREa0hCUmRrTkJzTWJEK0VtCjJUZ0JsZHRITlNKQmZCM3BtOUZibGdPY0owRlNXY1VEV0o3dk8wK05UWGxnclJvZlJUNnBWeXd6eFZvNmRORDAKV3pZbFRXZVVWc080MHhKcWhnVVFSRVI5WUxPTHhKME82QzhpMHhGeEFNS090U2RvZE1CM1JJd3Q3UkZRMHV5dApuNVo1TXFrWWhsTUkzSjF0UFJUcDFuRXQ5ZnlHc3BCT08wNWdpMTQ4UWFzcCszTitzdnFLb21vUWdsTm9BeFU9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K",
"Key": "LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLqsqsdfqsdfqsdfFJQkFBS0NBZ0VBeDJKRHRQWHJxZjIyL1dOb091MStta09Ea3JvQ2N2dnh5d3FEc2NBRXROTWsrd1ZDCmJvQktza2N5bHJXZHRFMGNEZmEvajg0S0RUSVZXaEdwaEhpWTRaejR3SWxsdXBjTVdGUFoybW9FZTJoVmMrY3gKNjcxMEF6c0owSXdubGVLZlNjenJqRXlYOStqTkJ1VVI2cjBsWkp1cFhUVGZ5YWlTZmYwOWxvQVFibnVNK1huUwpJbUJJU2hJc1BLM1F5OXpGOUczWGxpdkljdDJHTFlMM09VMUhKZWJDSEJiR3JNVXZRS25WNmd4cW1jWHBaYzVJCituUU43WWNhMkhyZlVLRkN1YnI5WWNWTlVXbVN0UStLS2p5bUNBUEJsTUUyT1U2NHRNeFI2QlBHVlVyRUlsQVEKWnJrOVpiV2hpem44MmpGeWkwbHRSRllZWTFmTnJYSnRYQU5HRkFUTXNlSmQ5NDM4bURBTnh1Z3RaWUFtNjRnVgo0WUZNSUJCQWZqMXdoVExyeU9EeTFtMUNsUGFacWQwUU5mOVBwWWRKVlFhQXVSYVdSL3RlbldqcTRuZXhlMTNNCkE1cnZaUW9PNnFsSDZLdzVYS2NKZlBZSGtSQ2J2a0hkYzJBTStBOXUrRDVjWUVGUWZ1djRBRWhHa2Z6U0pRSy8KREpoWWtqb1hrYytQbzAxMzU3WFhBWlFkSkpvOHdzQ0dzVmw3Yk94UHU1WXZQQ2Q0UEV1UnB2cmZxTE1tY29veApnaCtCdmhoRFJET01BeXRRVXY4MGN6YkttVGFFSCtCZjdWdVZTUG94SXd3SFNpa1dMaDlGNEpGaXdqRndZTkplCnFtUlFqQjlRWjRRQmFpbHhxUEF6WmthRVpnbENjdy9rbFh1NVE4Q09CWlBDL28zNGxNZ091bkxIUlRVQ0F3RUEKQVFLQ0FnQldVbzdwekFjS0JCU3p3OVFlbnpCTzdhZ0xZSWtxNnpXV0tLazN6ZUM3d1Nham4zVlJqaTNJM2RaagpOYUpmcTNyWCtOcWJFaU43N3hFYmU4WWUybSttVG1YTVJqQkxCcGFMcjFJRXBCM29xQlZISnZPUUV1Z2xkZXdiCjVISkhER1RXZU9nS1NDY0xhRGxNSU9VTzhuRThDOERaMzhoNzhJWHNFallWOE1Bc2RVVmx4WDVhNzhDY2dSMngKNzdjVWJETXdUbFltYURKU3VPSWMxalRmRkR3WGhyN0hsbnpSMUZWTzg3anZxZ3lGSXhDWHlTWURlVGVHZlJYOApYOFpMakdYdEw2NEFKSUlERzJndkI5bFR6QW8rTWhJZnF6OGt0SlozZ0haOXVnSUdiMlpYVEw2dEdzb2dQUEVCCjdFc3kxSEc1S0VNc2NQSUNJTU9sc29MeWNXQm5Dc0pZNDF4RjJreWdMbnJtMGwyakFaeTNDNGIrS25zbDQvSTUKRXcrek1BU3ZFSXBycnBiaUFoajNETWIvek1CYmlnWjJCY2V6Y0FjQW15QlRwRkZ3N1c5NDZZMnFKeXVTQkxXbQo5Mkd3ZHdxZ09DeTVjbnlIdWRBbnN6eEtpUys0T0J6TEtEMStEZW5PalJtZmNncVBPcDJUOC8wdWNnRmZaWVYzCmtyMzk3QnhzT3VGclNtdHlnY2VrNkpnTkNraTdQYmx4T0VOaWpVN0RDNm9ta2E4eWZuWjhMWVNoSVVOWTlaZ1YKYXR4ZWpsZlJKb0tkQTAxQ3g4RjBiQ1dmdzZnd3dWbitJRno4VmJWMEo2SjVPS1lacUJqQkF6RUdMYWhqNklWeQpQY3hldHNiRVRwZVgxT2pRTUZNYUthOVNqVXUwTU1wNU5CQnJjSG1lQU4rRTd4MzZ5UUtDQVFFQTQ0d2R3ZUMrCkpGaFRLNTU2V0pVa29qNG1KaUJEaERFNzJVWThkdktoSXI1NW16WnEzYzdpcU0ya04xM28vNDRyRFc4MkNuVWoKV1BtaWJrbE00WkdwaUN5ZjA3bmlIeDVlK3BFdnpwOVFuMENQRGVCRnBRWDNxdG1PaXNsTEVyRW8yWVQrYzBldQpZczFUcloyNjBWcnNSV3R3QnpmQ3h2bzV3R24vQ2RXQW11U1NqeE84T2dOTDdVTjZPU1dQbHBDSFNsTGQwNFBOCjFHaEV3bkVUMURzWTNQTTRISVB0ZeFRvRWJIZnFWck41aFpZUTV1cHVTcy91MmdmQnJJaG4zbWV1S2hUbWwxZwp3RGxadWZtYmEzSVhTaEw3a09PWnp6SWV1UDZDL0xZM2syRFY0bDRTeWt1cTUxNE1sUC9kQ01VYk5sU3hqclc1MUpraGNtYm5uUTVTN2dVY24vSkRkcEE3ST0KLS0tLS1FTkQgUlNBIFBSSVZBVEUgS0VZLS0tLS0K"
}
],
"HTTPChallenges": null
}
I searched everywhere on the web, but no one seems to have this kind of issue. The Traefik documentation is pretty short and well explained. Can someone help me? I feel there is a little glitch somewhere, but I can't put my finger on it.
Thanks!
It's not possible to use both challenges at the same time:
[acme.httpChallenge]
entryPoint = "http"
[acme.dnsChallenge]
provider = "duckdns"
delayBeforeCheck = 0
When you use this configuration, in fact, only the DNS challenge is used.
You need to change the permissions of the acme.json to 600.
traefik | time="2018-05-18T18:30:55Z" level=error msg="Failed to read new account, ACME data conversion is not available : permissions 664 for /etc/traefik/acme.json are too open, please use 600"
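For example, assuming acme.json sits on the host next to your docker-compose.yml and is bind-mounted into the container at /etc/traefik/acme.json (the host path and service name below are assumptions), tightening the file mode before restarting Traefik looks like this:
chmod 600 ./acme.json
docker-compose restart traefik   # "traefik" assumed to be the service name in your compose file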

Docker/zookeeper Will not attempt to authenticate using SASL

Good day,
I wanted to test the config store, which is built using Spring Boot. The instructions given to me were to run the project using the docker-compose.yml files. I'm new to this; I've tried to execute it, but while running those commands in the iMac terminal I'm facing the following exception.
platform-config-store | 2018-03-05 11:55:12.167 INFO 1 --- [ main] org.apache.zookeeper.ZooKeeper : Initiating client connection, connectString=localhost:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState#22bbbe6
platform-config-store | 2018-03-05 11:55:12.286 INFO 1 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
platform-config-store | 2018-03-05 11:55:12.314 WARN 1 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
platform-config-store | java.net.ConnectException: Connection refused
platform-config-store | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_144]
platform-config-store | at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_144]
platform-config-store | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) ~[zookeeper-3.4.6.jar!/:3.4.6-1569965]
platform-config-store | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) ~[zookeeper-3.4.6.jar!/:3.4.6-1569965]
platform-config-store |
platform-config-store | 2018-03-05 11:55:13.422 INFO 1 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
platform-config-store | 2018-03-05 11:55:13.424 WARN 1 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
I've googled this problem, and some posts mentioned that it occurs when the ZooKeeper server is not reachable. So I configured a local ZooKeeper instance on my machine and changed the docker-compose.yml file to use it instead of pulling the image from Docker. It didn't work, and I faced the same issue.
Some posts also said this is related to the firewall; I've verified that the firewall is turned off.
Following is the docker-compose file I'm executing.
docker-compose.yml
version: "3.0"
services:
zookeeper:
container_name: zookeeper
image: docker.*****.net/zookeeper
#image: zookeeper // tired to connect with local zookeeper instance
ports:
- 2181:2181
postgres:
container_name: postgres
image: postgres
ports:
- 5432:5432
environment:
- POSTGRES_PASSWORD=p3rmission
redis:
container_name: redis
image: redis
ports:
- 6379:6379
Could anyone please guide me on what I'm missing here? Help will be appreciated. Thanks.
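Note that the connect string in the logs is localhost:2181; inside the platform-config-store container, localhost refers to that container itself, not to the zookeeper container. A minimal sketch of how the config-store service could sit in the same compose file and reach ZooKeeper by service name; the image name, port, and the Spring Cloud Zookeeper environment property are assumptions and need adapting to the actual project:
  platform-config-store:
    container_name: platform-config-store
    image: docker.*****.net/platform-config-store   # hypothetical image name
    depends_on:
      - zookeeper
    environment:
      # assumption: the app reads its ZooKeeper address from this Spring Cloud
      # Zookeeper property; adjust to however the project is actually configured
      - SPRING_CLOUD_ZOOKEEPER_CONNECT_STRING=zookeeper:2181
    ports:
      - 8888:8888   # hypothetical port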

Error starting hyperledger fabcar sample application

I am trying to install the hyperledger-fabric sample application from http://hyperledger-fabric.readthedocs.io/en/latest/write_first_app.html
I am getting an error similar to the one in the post mentioned here: hyperledger fabric fabcar error
2017-08-24 07:47:16.826 UTC [grpc] Printf -> DEBU 005 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 172.18.0.5:7051: getsockopt: connection refused"; Reconnecting to {peer0.org1.example.com:7051 <nil>}
Error: Error getting endorser client channel: PER:404 - Error trying to connect to local peer
Below are the logs from docker logs peer0.org1.example.com; apparently the peer is not able to connect to couchdb.
2017-08-24 07:47:03.728 UTC [couchdb] handleRequest -> DEBU 011 HTTP Request: GET / HTTP/1.1 | Host: couchdb:5984 | User-Agent: Go-http-client/1.1 | Accept: multipart/related | Accept-Encoding: gzip | |
2017-08-24 07:47:04.073 UTC [couchdb] handleRequest -> WARN 012 Retrying couchdb request in 125ms. Attempt:1 Error:Get http://couchdb:5984/: dial tcp 109.234.109.83:5984: getsockopt: connection refused
2017-08-24 07:47:04.199 UTC [couchdb] handleRequest -> DEBU 013 HTTP Request: GET / HTTP/1.1 | Host: couchdb:5984 | User-Agent: Go-http-client/1.1 | Accept: multipart/related | Accept-Encoding: gzip | |
2017-08-24 07:47:04.385 UTC [couchdb] handleRequest -> WARN 014 Retrying couchdb request in 250ms. Attempt:2 Error:Get http://couchdb:5984/: dial tcp 109.234.109.77:5984: getsockopt: connection refused
I can see a listening socket on port 5984.
From docker exec -it couchdb bash
docker exec -it couchdb bash
couchdb#57c8996a4ba6:~$ netstat -ntulpa
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:5984 0.0.0.0:* LISTEN 6/beam.smp
tcp 0 0 127.0.0.1:5986 0.0.0.0:* LISTEN 6/beam.smp
tcp 0 0 127.0.0.11:43471 0.0.0.0:* LISTEN -
udp 0 0 127.0.0.11:52081 0.0.0.0:* -
From a command shell on the host (outside docker)
# netstat -ntulpa | grep 5984
tcp6 0 0 :::5984 :::* LISTEN 12877/docker-proxy
Why is the peer not able to connect to couchdb?
Based on the comments, I think that your host system is configured to use a DNS search domain which automatically resolves unknown hostnames. You may need to modify the basic-network/docker-compose.yml and add dns_search: . as a config value for the peer:
  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    image: hyperledger/fabric-peer:x86_64-1.0.0
    dns_search: .
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_LOGGING_PEER=debug
      - CORE_CHAINCODE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer/
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      # # the following setting starts chaincode containers on the same
      # # bridge network as the peers
      # # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_basic
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: peer node start
    # command: peer node start --peer-chaincodedev=true
    ports:
      - 7051:7051
      - 7053:7053
    volumes:
      - /var/run/:/host/var/run/
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/msp/peer
      - ./crypto-config/peerOrganizations/org1.example.com/users:/etc/hyperledger/msp/users
      - ./config:/etc/hyperledger/configtx
    depends_on:
      - orderer.example.com
    networks:
      - basic
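To confirm the search-domain theory, you could compare what name resolution looks like from inside the peer container before and after adding dns_search: . (these are generic checks, not Fabric-specific commands; getent may or may not be present in the peer image):
docker exec peer0.org1.example.com cat /etc/resolv.conf   # a "search <domain>" line here explains couchdb resolving to 109.234.109.x
docker exec peer0.org1.example.com getent hosts couchdb   # shows which address "couchdb" actually resolves to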
