I am trying to build a simple application that sends traces to an OpenTelemetry Collector, which in turn exports them to a Jaeger backend.
But when I spin up the collector and the Jaeger backend, I get the following message:
info jaegerexporter/exporter.go:186 State of the connection with the Jaeger Collector backend {"kind": "exporter", "name": "jaeger", "state": "TRANSIENT_FAILURE"}
When I run the Go application, I see no traces in the Jaeger UI, and no logs from the collector appear in the shell.
main.go
package main

import (
	"context"
	"fmt"
	"time"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func initialize() {
	traceExp, err := otlptracehttp.New(
		context.TODO(),
		otlptracehttp.WithEndpoint("0.0.0.0:55680"),
		otlptracehttp.WithInsecure(),
	)
	if err != nil {
		fmt.Println(err)
	}
	bsp := sdktrace.NewBatchSpanProcessor(traceExp)
	tracerProvider := sdktrace.NewTracerProvider(
		sdktrace.WithSpanProcessor(bsp),
	)
	otel.SetTracerProvider(tracerProvider)
}

func main() {
	initialize()
	tracer := otel.Tracer("demo-client-tracer")
	ctx, span := tracer.Start(context.TODO(), "span-name")
	defer span.End()
	time.Sleep(time.Second)
	fmt.Println(ctx)
}
Following are the collector config and docker-compose file.
otel-collector-config
receivers:
  otlp:
    protocols:
      http:

processors:
  batch:

exporters:
  jaeger:
    endpoint: "http://jaeger-all-in-one:14250"
    insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [jaeger]
docker-compose.yaml
version: "2"
services:
# Jaeger
jaeger-all-in-one:
image: jaegertracing/all-in-one:latest
ports:
- "16686:16686"
- "14268"
- "14250:14250"
# Collector
otel-collector:
image: otel/opentelemetry-collector:latest
command: ["--config=/etc/otel-collector-config.yaml"]
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
ports:
- "4317"
- "55680:55680"
depends_on:
- jaeger-all-in-one
Additional logs from running docker-compose up:
Starting open-telemetry-collector-2_jaeger-all-in-one_1 ... done
Starting open-telemetry-collector-2_otel-collector_1 ... done
Attaching to open-telemetry-collector-2_jaeger-all-in-one_1, open-telemetry-collector-2_otel-collector_1
jaeger-all-in-one_1 | 2021/09/02 09:26:58 maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0155272,"caller":"flags/service.go:117","msg":"Mounting metrics handler on admin server","route":"/metrics"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.015579,"caller":"flags/service.go:123","msg":"Mounting expvar handler on admin server","route":"/debug/vars"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.016236,"caller":"flags/admin.go:106","msg":"Mounting health check on admin server","route":"/"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0163133,"caller":"flags/admin.go:117","msg":"Starting admin HTTP server","http-addr":":14269"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0163486,"caller":"flags/admin.go:98","msg":"Admin server started","http.host-port":"[::]:14269","health-status":"unavailable"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.017912,"caller":"memory/factory.go:61","msg":"Memory storage initialized","configuration":{"MaxTraces":0}}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.018202,"caller":"static/strategy_store.go:138","msg":"Loading sampling strategies","filename":"/etc/jaeger/sampling_strategies.json"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0273001,"caller":"server/grpc.go:82","msg":"Starting jaeger-collector gRPC server","grpc.host-port":":14250"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0273921,"caller":"server/http.go:48","msg":"Starting jaeger-collector HTTP server","http host-port":":14268"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0276191,"caller":"server/zipkin.go:49","msg":"Not listening for Zipkin HTTP traffic, port not configured"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0276558,"caller":"grpc/builder.go:70","msg":"Agent requested insecure grpc connection to collector(s)"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0276873,"caller":"channelz/logging.go:50","msg":"[core]parsed scheme: \"\"","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0277174,"caller":"channelz/logging.go:50","msg":"[core]scheme \"\" not registered, fallback to default scheme","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0277457,"caller":"channelz/logging.go:50","msg":"[core]ccResolverWrapper: sending update to cc: {[{:14250 <nil> 0 <nil>}] <nil> <nil>}","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0277772,"caller":"channelz/logging.go:50","msg":"[core]ClientConn switching balancer to \"round_robin\"","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0277963,"caller":"channelz/logging.go:50","msg":"[core]Channel switches to new LB policy \"round_robin\"","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0278597,"caller":"grpclog/component.go:55","msg":"[balancer]base.baseBalancer: got new ClientConn state: {{[{:14250 <nil> 0 <nil>}] <nil> <nil>} <nil>}","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0279217,"caller":"channelz/logging.go:50","msg":"[core]Subchannel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.028044,"caller":"channelz/logging.go:50","msg":"[core]Subchannel picks a new address \":14250\" to connect","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0284538,"caller":"grpclog/component.go:71","msg":"[balancer]base.baseBalancer: handle SubConn state change: 0xc000688840, CONNECTING","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.028513,"caller":"channelz/logging.go:50","msg":"[core]Channel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0280442,"caller":"grpc/builder.go:109","msg":"Checking connection to collector"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.028587,"caller":"grpc/builder.go:120","msg":"Agent collector connection state change","dialTarget":":14250","status":"CONNECTING"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0294988,"caller":"channelz/logging.go:50","msg":"[core]Subchannel Connectivity change to READY","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.029561,"caller":"grpclog/component.go:71","msg":"[balancer]base.baseBalancer: handle SubConn state change: 0xc000688840, READY","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0296533,"caller":"grpclog/component.go:71","msg":"[roundrobin]roundrobinPicker: newPicker called with info: {map[0xc000688840:{{:14250 <nil> 0 <nil>}}]}","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0297205,"caller":"channelz/logging.go:50","msg":"[core]Channel Connectivity change to READY","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0297425,"caller":"grpc/builder.go:120","msg":"Agent collector connection state change","dialTarget":":14250","status":"READY"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0298278,"caller":"./main.go:233","msg":"Starting agent"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0298927,"caller":"querysvc/query_service.go:137","msg":"Archive storage not created","reason":"archive storage not supported"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0299237,"caller":"app/flags.go:124","msg":"Archive storage not initialized"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0300004,"caller":"app/agent.go:69","msg":"Starting jaeger-agent HTTP server","http-port":5778}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0303733,"caller":"channelz/logging.go:50","msg":"[core]parsed scheme: \"\"","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0304158,"caller":"channelz/logging.go:50","msg":"[core]scheme \"\" not registered, fallback to default scheme","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0304341,"caller":"channelz/logging.go:50","msg":"[core]ccResolverWrapper: sending update to cc: {[{:16685 <nil> 0 <nil>}] <nil> <nil>}","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0304427,"caller":"channelz/logging.go:50","msg":"[core]ClientConn switching balancer to \"pick_first\"","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0304537,"caller":"channelz/logging.go:50","msg":"[core]Channel switches to new LB policy \"pick_first\"","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0305033,"caller":"channelz/logging.go:50","msg":"[core]Subchannel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0305545,"caller":"channelz/logging.go:50","msg":"[core]Subchannel picks a new address \":16685\" to connect","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"warn","ts":1630574818.0307937,"caller":"channelz/logging.go:75","msg":"[core]grpc: addrConn.createTransport failed to connect to {:16685 localhost:16685 <nil> 0 <nil>}. Err: connection error: desc = \"transport: Error while dialing dial tcp :16685: connect: connection refused\". Reconnecting...","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.030827,"caller":"channelz/logging.go:50","msg":"[core]Subchannel Connectivity change to TRANSIENT_FAILURE","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0308597,"caller":"grpclog/component.go:71","msg":"[core]pickfirstBalancer: UpdateSubConnState: 0xc00061fd40, {CONNECTING <nil>}","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0308924,"caller":"channelz/logging.go:50","msg":"[core]Channel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0309658,"caller":"grpclog/component.go:71","msg":"[core]pickfirstBalancer: UpdateSubConnState: 0xc00061fd40, {TRANSIENT_FAILURE connection error: desc = \"transport: Error while dialing dial tcp :16685: connect: connection refused\"}","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0309868,"caller":"channelz/logging.go:50","msg":"[core]Channel Connectivity change to TRANSIENT_FAILURE","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0314078,"caller":"app/static_handler.go:181","msg":"UI config path not provided, config file will not be watched"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0315406,"caller":"app/server.go:197","msg":"Query server started"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0315752,"caller":"healthcheck/handler.go:129","msg":"Health Check state change","status":"ready"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0315914,"caller":"app/server.go:276","msg":"Starting GRPC server","port":16685,"addr":":16685"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0316222,"caller":"app/server.go:257","msg":"Starting HTTP server","port":16686,"addr":":16686"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574819.031331,"caller":"channelz/logging.go:50","msg":"[core]Subchannel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574819.0314019,"caller":"channelz/logging.go:50","msg":"[core]Subchannel picks a new address \":16685\" to connect","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574819.0315094,"caller":"grpclog/component.go:71","msg":"[core]pickfirstBalancer: UpdateSubConnState: 0xc00061fd40, {CONNECTING <nil>}","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574819.0315537,"caller":"channelz/logging.go:50","msg":"[core]Channel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574819.0323153,"caller":"channelz/logging.go:50","msg":"[core]Subchannel Connectivity change to READY","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574819.0325227,"caller":"grpclog/component.go:71","msg":"[core]pickfirstBalancer: UpdateSubConnState: 0xc00061fd40, {READY <nil>}","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574819.0325499,"caller":"channelz/logging.go:50","msg":"[core]Channel Connectivity change to READY","system":"grpc","grpc_log":true}
otel-collector_1 | 2021-09-02T09:26:59.628Z info service/collector.go:303 Starting otelcol... {"Version": "v0.33.0", "NumCPU": 8}
otel-collector_1 | 2021-09-02T09:26:59.628Z info service/collector.go:242 Loading configuration...
otel-collector_1 | 2021-09-02T09:26:59.630Z info service/collector.go:258 Applying configuration...
otel-collector_1 | 2021-09-02T09:26:59.630Z info builder/exporters_builder.go:264 Exporter was built. {"kind": "exporter", "name": "jaeger"}
otel-collector_1 | 2021-09-02T09:26:59.630Z info builder/pipelines_builder.go:214 Pipeline was built. {"pipeline_name": "traces", "pipeline_datatype": "traces"}
otel-collector_1 | 2021-09-02T09:26:59.630Z info builder/receivers_builder.go:227 Receiver was built. {"kind": "receiver", "name": "otlp", "datatype": "traces"}
otel-collector_1 | 2021-09-02T09:26:59.630Z info service/service.go:143 Starting extensions...
otel-collector_1 | 2021-09-02T09:26:59.630Z info service/service.go:188 Starting exporters...
otel-collector_1 | 2021-09-02T09:26:59.630Z info builder/exporters_builder.go:93 Exporter is starting... {"kind": "exporter", "name": "jaeger"}
otel-collector_1 | 2021-09-02T09:26:59.630Z info jaegerexporter/exporter.go:186 State of the connection with the Jaeger Collector backend {"kind": "exporter", "name": "jaeger", "state": "CONNECTING"}
otel-collector_1 | 2021-09-02T09:26:59.630Z info builder/exporters_builder.go:98 Exporter started. {"kind": "exporter", "name": "jaeger"}
otel-collector_1 | 2021-09-02T09:26:59.630Z info service/service.go:193 Starting processors...
otel-collector_1 | 2021-09-02T09:26:59.630Z info builder/pipelines_builder.go:52 Pipeline is starting... {"pipeline_name": "traces", "pipeline_datatype": "traces"}
otel-collector_1 | 2021-09-02T09:26:59.630Z info builder/pipelines_builder.go:63 Pipeline is started. {"pipeline_name": "traces", "pipeline_datatype": "traces"}
otel-collector_1 | 2021-09-02T09:26:59.630Z info service/service.go:198 Starting receivers...
otel-collector_1 | 2021-09-02T09:26:59.630Z info builder/receivers_builder.go:71 Receiver is starting... {"kind": "receiver", "name": "otlp"}
otel-collector_1 | 2021-09-02T09:26:59.630Z info otlpreceiver/otlp.go:93 Starting HTTP server on endpoint 0.0.0.0:4318 {"kind": "receiver", "name": "otlp"}
otel-collector_1 | 2021-09-02T09:26:59.630Z info otlpreceiver/otlp.go:159 Setting up a second HTTP listener on legacy endpoint 0.0.0.0:55681 {"kind": "receiver", "name": "otlp"}
otel-collector_1 | 2021-09-02T09:26:59.631Z info otlpreceiver/otlp.go:93 Starting HTTP server on endpoint 0.0.0.0:55681 {"kind": "receiver", "name": "otlp"}
otel-collector_1 | 2021-09-02T09:26:59.631Z info builder/receivers_builder.go:76 Receiver started. {"kind": "receiver", "name": "otlp"}
otel-collector_1 | 2021-09-02T09:26:59.631Z info service/collector.go:206 Setting up own telemetry...
otel-collector_1 | 2021-09-02T09:26:59.631Z info service/telemetry.go:99 Serving Prometheus metrics {"address": ":8888", "level": 0, "service.instance.id": "0fe56a33-e40e-4251-9a82-100fa600c4a0"}
otel-collector_1 | 2021-09-02T09:26:59.631Z info service/collector.go:218 Everything is ready. Begin running and processing data.
otel-collector_1 | 2021-09-02T09:27:00.631Z info jaegerexporter/exporter.go:186 State of the connection with the Jaeger Collector backend {"kind": "exporter", "name": "jaeger", "state": "TRANSIENT_FAILURE"}
Thanks!
Updating otel-collector-config.yaml to the following endpoint should work. The jaeger exporter dials the Jaeger collector over gRPC, so the endpoint must be a plain host:port with no http:// scheme:
receivers:
  otlp:
    protocols:
      http:

processors:
  batch:

exporters:
  jaeger:
    endpoint: jaeger-all-in-one:14250
    insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [jaeger]
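As a side note (this part is my assumption, based on the collector logs above, which show the OTLP HTTP receiver listening on 0.0.0.0:4318 and legacy 0.0.0.0:55681 rather than 55680): the Go client also has to point at an OTLP/HTTP port that is actually published, and the batch span processor should be flushed before the program exits, or the single span may never leave the process. A minimal sketch of initialize along those lines, assuming a "4318:4318" mapping is added under the otel-collector service:

func initialize() func() {
	traceExp, err := otlptracehttp.New(
		context.TODO(),
		// The collector's OTLP/HTTP receiver listens on 4318 (see the
		// "Starting HTTP server on endpoint 0.0.0.0:4318" log line);
		// this assumes "4318:4318" has been added to the compose file.
		otlptracehttp.WithEndpoint("localhost:4318"),
		otlptracehttp.WithInsecure(),
	)
	if err != nil {
		fmt.Println(err)
	}
	tracerProvider := sdktrace.NewTracerProvider(
		sdktrace.WithSpanProcessor(sdktrace.NewBatchSpanProcessor(traceExp)),
	)
	otel.SetTracerProvider(tracerProvider)
	// Return a shutdown function; deferring it in main flushes any
	// buffered spans before the process exits.
	return func() { _ = tracerProvider.Shutdown(context.TODO()) }
}

In main, shutdown := initialize() followed by defer shutdown() would then replace the bare initialize() call.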
I'm trying to get a Consul server cluster up and running. I have 3 dockerized Consul servers running, but I can't access the web UI, the HTTP API, or the DNS.
$ docker logs net-sci_discovery-service_consul_1
==> WARNING: Expect Mode enabled, expecting 3 servers
==> Starting Consul agent...
==> Consul agent running!
Version: 'v0.8.5'
Node ID: 'ccd38897-6047-f8b6-be1c-2aa0022a1483'
Node name: 'consul1'
Datacenter: 'dc1'
Server: true (bootstrap: false)
Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600)
Cluster Addr: 172.20.0.2 (LAN: 8301, WAN: 8302)
Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
==> Log data will now stream in as it occurs:
2017/07/07 23:24:07 [INFO] raft: Initial configuration (index=0): []
2017/07/07 23:24:07 [INFO] raft: Node at 172.20.0.2:8300 [Follower] entering Follower state (Leader: "")
2017/07/07 23:24:07 [INFO] serf: EventMemberJoin: consul1 172.20.0.2
2017/07/07 23:24:07 [INFO] consul: Adding LAN server consul1 (Addr: tcp/172.20.0.2:8300) (DC: dc1)
2017/07/07 23:24:07 [INFO] serf: EventMemberJoin: consul1.dc1 172.20.0.2
2017/07/07 23:24:07 [INFO] consul: Handled member-join event for server "consul1.dc1" in area "wan"
2017/07/07 23:24:07 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
2017/07/07 23:24:07 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
2017/07/07 23:24:07 [INFO] agent: Started HTTP server on 127.0.0.1:8500
2017/07/07 23:24:09 [INFO] serf: EventMemberJoin: consul2 172.20.0.3
2017/07/07 23:24:09 [INFO] consul: Adding LAN server consul2 (Addr: tcp/172.20.0.3:8300) (DC: dc1)
2017/07/07 23:24:09 [INFO] serf: EventMemberJoin: consul2.dc1 172.20.0.3
2017/07/07 23:24:09 [INFO] consul: Handled member-join event for server "consul2.dc1" in area "wan"
2017/07/07 23:24:10 [INFO] serf: EventMemberJoin: consul3 172.20.0.4
2017/07/07 23:24:10 [INFO] consul: Adding LAN server consul3 (Addr: tcp/172.20.0.4:8300) (DC: dc1)
2017/07/07 23:24:10 [INFO] consul: Found expected number of peers, attempting bootstrap: 172.20.0.2:8300,172.20.0.3:8300,172.20.0.4:8300
2017/07/07 23:24:10 [INFO] serf: EventMemberJoin: consul3.dc1 172.20.0.4
2017/07/07 23:24:10 [INFO] consul: Handled member-join event for server "consul3.dc1" in area "wan"
2017/07/07 23:24:14 [ERR] agent: failed to sync remote state: No cluster leader
2017/07/07 23:24:17 [WARN] raft: Heartbeat timeout from "" reached, starting election
2017/07/07 23:24:17 [INFO] raft: Node at 172.20.0.2:8300 [Candidate] entering Candidate state in term 2
2017/07/07 23:24:17 [INFO] raft: Election won. Tally: 2
2017/07/07 23:24:17 [INFO] raft: Node at 172.20.0.2:8300 [Leader] entering Leader state
2017/07/07 23:24:17 [INFO] raft: Added peer 172.20.0.3:8300, starting replication
2017/07/07 23:24:17 [INFO] raft: Added peer 172.20.0.4:8300, starting replication
2017/07/07 23:24:17 [INFO] consul: cluster leadership acquired
2017/07/07 23:24:17 [INFO] consul: New leader elected: consul1
2017/07/07 23:24:17 [WARN] raft: AppendEntries to {Voter 172.20.0.3:8300 172.20.0.3:8300} rejected, sending older logs (next: 1)
2017/07/07 23:24:17 [WARN] raft: AppendEntries to {Voter 172.20.0.4:8300 172.20.0.4:8300} rejected, sending older logs (next: 1)
2017/07/07 23:24:17 [INFO] raft: pipelining replication to peer {Voter 172.20.0.3:8300 172.20.0.3:8300}
2017/07/07 23:24:17 [INFO] raft: pipelining replication to peer {Voter 172.20.0.4:8300 172.20.0.4:8300}
2017/07/07 23:24:18 [INFO] consul: member 'consul1' joined, marking health alive
2017/07/07 23:24:18 [INFO] consul: member 'consul2' joined, marking health alive
2017/07/07 23:24:18 [INFO] consul: member 'consul3' joined, marking health alive
2017/07/07 23:24:20 [INFO] agent: Synced service 'consul'
2017/07/07 23:24:20 [INFO] agent: Synced service 'messaging-service-kafka'
2017/07/07 23:24:20 [INFO] agent: Synced service 'messaging-service-zookeeper'
$ curl http://127.0.0.1:8500/v1/catalog/service/consul
curl: (52) Empty reply from server
$ dig @127.0.0.1 -p 8600 consul.service.consul
; <<>> DiG 9.8.3-P1 <<>> @127.0.0.1 -p 8600 consul.service.consul
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached
$ dig @127.0.0.1 -p 8600 messaging-service-kafka.service.consul
; <<>> DiG 9.8.3-P1 <<>> @127.0.0.1 -p 8600 messaging-service-kafka.service.consul
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached
I can't get my services to register via the HTTP API either; those shown above are registered using a config script when the container launches.
Here's my docker-compose.yml:
version: '2'
services:
  consul1:
    image: "consul:latest"
    container_name: "net-sci_discovery-service_consul_1"
    hostname: "consul1"
    ports:
      - "8400:8400"
      - "8500:8500"
      - "8600:8600"
    volumes:
      - ./etc/consul.d:/etc/consul.d
    command: "agent -server -ui -bootstrap-expect 3 -config-dir=/etc/consul.d -bind=0.0.0.0"

  consul2:
    image: "consul:latest"
    container_name: "net-sci_discovery-service_consul_2"
    hostname: "consul2"
    command: "agent -server -join=consul1"
    links:
      - "consul1"

  consul3:
    image: "consul:latest"
    container_name: "net-sci_discovery-service_consul_3"
    hostname: "consul3"
    command: "agent -server -join=consul1"
    links:
      - "consul1"
I'm relatively new to both Docker and Consul. I've had a look around the web, and the options above reflect my understanding of what's required. Any suggestions on the way forward would be very welcome.
Edit:
Result of docker container ps --all:
$ docker container ps --all
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e0a1c3bba165 consul:latest "docker-entrypoint..." 38 seconds ago Up 36 seconds 8300-8302/tcp, 8500/tcp, 8301-8302/udp, 8600/tcp, 8600/udp net-sci_discovery-service_consul_3
7f05555e81e0 consul:latest "docker-entrypoint..." 38 seconds ago Up 36 seconds 8300-8302/tcp, 8500/tcp, 8301-8302/udp, 8600/tcp, 8600/udp net-sci_discovery-service_consul_2
9e2dedaa224b consul:latest "docker-entrypoint..." 39 seconds ago Up 38 seconds 0.0.0.0:8400->8400/tcp, 8301-8302/udp, 0.0.0.0:8500->8500/tcp, 8300-8302/tcp, 8600/udp, 0.0.0.0:8600->8600/tcp net-sci_discovery-service_consul_1
27b34c5dacb7 messagingservice_kafka "start-kafka.sh" 3 hours ago Up 3 hours 0.0.0.0:9092->9092/tcp net-sci_messaging-service_kafka
0389797b0b8f wurstmeister/zookeeper "/bin/sh -c '/usr/..." 3 hours ago Up 3 hours 22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp net-sci_messaging-service_zookeeper
Edit:
Updated docker-compose.yml to include long format for ports:
version: '3.2'
services:
  consul1:
    image: "consul:latest"
    container_name: "net-sci_discovery-service_consul_1"
    hostname: "consul1"
    ports:
      - target: 8400
        published: 8400
        mode: host
      - target: 8500
        published: 8500
        mode: host
      - target: 8600
        published: 8600
        mode: host
    volumes:
      - ./etc/consul.d:/etc/consul.d
    command: "agent -server -ui -bootstrap-expect 3 -config-dir=/etc/consul.d -bind=0.0.0.0 -client=127.0.0.1"

  consul2:
    image: "consul:latest"
    container_name: "net-sci_discovery-service_consul_2"
    hostname: "consul2"
    command: "agent -server -join=consul1"
    links:
      - "consul1"

  consul3:
    image: "consul:latest"
    container_name: "net-sci_discovery-service_consul_3"
    hostname: "consul3"
    command: "agent -server -join=consul1"
    links:
      - "consul1"
Per the Consul web UI documentation, make sure you have launched the agent with the -ui parameter.
The UI is available at the /ui path on the same port as the HTTP API.
By default this is http://localhost:8500/ui
I do see 8500 mapped to your host on all interfaces (0.0.0.0).
Check also (as in this answer) whether setting client_addr helps, at least for testing; see the sketch below.
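For example (a sketch, and an assumption on my part rather than something confirmed in your setup): your agent log shows Client Addr: 127.0.0.1, which means the HTTP, DNS, and UI listeners are bound only to the container's loopback, so Docker's published ports can't reach them and curl from the host gets an empty reply. Replacing -client=127.0.0.1 with -client=0.0.0.0 on consul1 binds those listeners to all interfaces inside the container; fine for local testing, though you'd want something stricter in production:

command: "agent -server -ui -bootstrap-expect 3 -config-dir=/etc/consul.d -bind=0.0.0.0 -client=0.0.0.0"

After restarting, the earlier checks should get a response:

$ curl http://127.0.0.1:8500/v1/catalog/service/consul
$ dig @127.0.0.1 -p 8600 consul.service.consul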
I've spent hours trying to solve this issue, but I'm unable to find any topics related to it; all I can find is about custom registries.
When running any of the docker commands that connect to Docker Hub, either through https://registry-1.docker.io/v2/ or https://index.docker.io/v1, all requests end in "x509: certificate signed by unknown authority". However, querying the same endpoints with curl works just fine.
I've reinstalled docker completely, purging all configuration files, however it does not seem to make a difference.
Anything I'm missing?
docker info:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 17.05.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9048e5e50717ea4497b757314bad98ea3763c145
runc version: 9c2d8d184e5da67c95d601382adf14862e4f2228
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.35-1-lts
Operating System: Arch Linux
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.34GiB
ID: 5Q4D:TLJF:3I3U:O522:VQMK:24BU:H5ND:UPOU:MWYS:WGTB:XFXR:BQES
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Ena
Using docker:
[user@hostname]$ docker search ubunut
Error response from daemon: Get https://index.docker.io/v1/search?q=ubunut&n=25: x509: certificate signed by unknown authority
Using curl:
[user@hostname]$ curl -v https://index.docker.io/v1/search?q=ubunut&n=25
[1] 2152
[user@hostname]$ * Trying 34.200.194.233...
* TCP_NODELAY set
* Connected to index.docker.io (34.200.194.233) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: OU=GT98568428; OU=See www.rapidssl.com/resources/cps (c)15; OU=Domain Control Validated - RapidSSL(R); CN=*.docker.io
* start date: Mar 19 17:34:32 2015 GMT
* expire date: Apr 21 01:51:52 2018 GMT
* subjectAltName: host "index.docker.io" matched cert's "*.docker.io"
* issuer: C=US; O=GeoTrust Inc.; CN=RapidSSL SHA256 CA - G3
* SSL certificate verify ok.
> GET /v1/search?q=ubunut HTTP/1.1
> Host: index.docker.io
> User-Agent: curl/7.54.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.6.2
< Date: Wed, 05 Jul 2017 12:10:22 GMT
< Content-Type: application/json
< Transfer-Encoding: chunked
< Vary: Cookie
< X-Frame-Options: SAMEORIGIN
< Strict-Transport-Security: max-age=31536000
<
{"num_pages": 1, "num_results": 21, "results": [{"is_automated": true, "name": "han4wluc/try-docker-ubunut-node", "is_trusted": true, ... *truncated*
I solved the problem as follows:
I removed the file /etc/ssl/certs/ca-certificates.crt.
I ran sudo pacman -S ca-certificates-utils.
I restarted Docker with systemctl restart docker.
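In shell form (a sketch; the path matches the CAfile shown in your curl output, and Arch Linux is assumed):

# Remove the stale CA bundle that Docker could not validate against
sudo rm /etc/ssl/certs/ca-certificates.crt
# Reinstalling the certificate utilities should regenerate the bundle
sudo pacman -S ca-certificates-utils
# Restart the Docker daemon so it picks up the regenerated bundle
sudo systemctl restart docker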
I got this hint from this link:
https://unix.stackexchange.com/questions/339613/arch-linux-ca-certificates-crt-not-found/396169#396169