Getting a connection failed error when trying to connect with geckodriver 0.16 and Firefox 53 - geckodriver

1495111288785 geckodriver INFO Listening on 127.0.0.1:28965
Exception in thread "main" org.openqa.selenium.WebDriverException: org.apache.http.conn.HttpHostConnectException: Connect to localhost:28965 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused: connect
Build info: version: 'unknown', revision: 'unknown', time: 'unknown'
System info: host: 'MANAS', ip: '192.168.2.7', os.name: 'Windows 10', os.arch: 'amd64', os.version: '10.0', java.version: '1.8.0_71'
Driver info: driver.version: FirefoxDriver

Related

RabbitMQ fails to boot from docker-compose

I'm trying to set up a RabbitMQ instance with docker-compose.
My docker-compose YAML:
version: '3.8'
services:
  rabbitmq:
    image: rabbitmq:3-management
    hostname: rabbit
    container_name: 'rabbitmq'
    volumes:
      - ./etc/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
      - ./data:/var/lib/rabbitmq/mnesia/rabbit#rabbit
      - ./logs:/var/log/rabbitmq/log
      - ./etc/ssl/CERT_LAB_CA.pem:/etc/rabbitmq/ssl/cacert.pem
      - ./etc/ssl/CERT_LAB_RABBITMQ.pem:/etc/rabbitmq/ssl/cert.pem
      - ./etc/ssl/KEY_LAB_RABBITMQ.pem:/etc/rabbitmq/ssl/key.pem
    ports:
      - 5672:5672
      - 15672:15672
      - 15671:15671
      - 5671:5671
    environment:
      - RABBITMQ_DEFAULT_USER=secret
      - RABBITMQ_DEFAULT_PASS=secret
When I run docker compose up for the first time, everything works fine. But when I add queues and exchanges (loaded from definitions.json), shut down and remove the container, and try to docker compose up again, I get this error:
2022-09-29 13:32:09.522956+00:00 [notice] <0.44.0> Application mnesia exited with reason: stopped
2022-09-29 13:32:09.523096+00:00 [error] <0.229.0>
2022-09-29 13:32:09.523096+00:00 [error] <0.229.0> BOOT FAILED
2022-09-29 13:32:09.523096+00:00 [error] <0.229.0> ===========
2022-09-29 13:32:09.523096+00:00 [error] <0.229.0> Error during startup: {error,
2022-09-29 13:32:09.523096+00:00 [error] <0.229.0> {schema_integrity_check_failed,
2022-09-29 13:32:09.523096+00:00 [error] <0.229.0> [{table_missing,rabbit_listener}]}}
2022-09-29 13:32:09.523096+00:00 [error] <0.229.0>
BOOT FAILED
===========
Error during startup: {error,
{schema_integrity_check_failed,
[{table_missing,rabbit_listener}]}}
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> crasher:
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> initial call: application_master:init/4
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> pid: <0.228.0>
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> registered_name: []
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> exception exit: {{schema_integrity_check_failed,
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> [{table_missing,rabbit_listener}]},
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> {rabbit,start,[normal,[]]}}
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> in function application_master:init/4 (application_master.erl, line 142)
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> ancestors: [<0.227.0>]
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> message_queue_len: 1
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> messages: [{'EXIT',<0.229.0>,normal}]
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> links: [<0.227.0>,<0.44.0>]
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> dictionary: []
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> trap_exit: true
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> status: running
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> heap_size: 2586
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> stack_size: 28
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> reductions: 180
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0> neighbours:
2022-09-29 13:32:10.524073+00:00 [error] <0.228.0>
And here is my rabbitmq.conf file
listeners.tcp.default = 5672
listeners.ssl.default = 5671
ssl_options.cacertfile = /etc/rabbitmq/ssl/cacert.pem
ssl_options.certfile = /etc/rabbitmq/ssl/cert.pem
ssl_options.keyfile = /etc/rabbitmq/ssl/key.pem
#Generate client cert and uncomment this if client has to provide cert.
#ssl_options.verify = verify_peer
#ssl_options.fail_if_no_peer_cert = true
collect_statistics_interval = 10000
#load_definitions = /path/to/exported/definitions.json
#definitions.skip_if_unchanged = true
management.tcp.port = 15672
management.ssl.port = 15671
management.ssl.cacertfile = /etc/rabbitmq/ssl/cacert.pem
management.ssl.certfile = /etc/rabbitmq/ssl/cert.pem
management.ssl.keyfile = /etc/rabbitmq/ssl/key.pem
management.http_log_dir = /var/log/rabbitmq/http
What am I missing?
Try to substitute ./data:/var/lib/rabbitmq/mnesia/rabbit#rabbit in your config with ./data:/var/lib/rabbitmq.
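For reference, a minimal sketch of how the volumes section could look after that change (everything else unchanged; only the data mount differs):
    volumes:
      - ./etc/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
      # mount the whole RabbitMQ data directory instead of a single Mnesia subfolder
      - ./data:/var/lib/rabbitmq
      - ./logs:/var/log/rabbitmq/log
      - ./etc/ssl/CERT_LAB_CA.pem:/etc/rabbitmq/ssl/cacert.pem
      - ./etc/ssl/CERT_LAB_RABBITMQ.pem:/etc/rabbitmq/ssl/cert.pem
      - ./etc/ssl/KEY_LAB_RABBITMQ.pem:/etc/rabbitmq/ssl/key.pem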
I had the same error and spent quite some time trying to figure out the problem. My configuration was slightly different from yours and looked like this:
rabbitmq:
  image: rabbitmq:3.11.2-management-alpine
  hostname: rabbitmq
  environment:
    RABBITMQ_DEFAULT_USER: tester
    RABBITMQ_DEFAULT_PASS: qwerty
    RABBITMQ_MNESIA_DIR: /my-custom-data-folder-path-inside-container
    RABBITMQ_NODENAME: rabbitmq
  volumes:
    - type: bind
      source: /my-custom-data-folder-path-on-host
      target: /my-custom-data-folder-path-inside-container
I'm not an expert in RabbitMQ, and my idea was simply to make RabbitMQ persist its database in the /my-custom-data-folder-path-on-host folder on the host. Just like in your case, on the first run it started successfully, but after a container restart I was getting the following error:
BOOT FAILED
Error during startup: {error, {schema_integrity_check_failed, [{table_missing,rabbit_listener}]}}
What I learned from the documentation is that rabbit_listener is a table inside the Mnesia database used by RabbitMQ, and that "listeners" are the TCP listeners configured in RabbitMQ to accept client connections.
For RabbitMQ to accept client connections, it needs to bind to one or more interfaces and listen on (protocol-specific) ports. One such interface/port pair is called a listener in RabbitMQ parlance. Listeners are configured using the listeners.tcp.* configuration option(s).
I wanted to dig into the Mnesia database to troubleshoot, but did not manage to do so without Erlang knowledge. It seems that for some reason RabbitMQ does not create the "rabbit_listener" table on the first run, but requires it on subsequent runs.
Finally, I managed to workaround the problem by changing my initial configuration as follows:
service-bus:
  image: rabbitmq:3.11.2-management-alpine
  hostname: rabbitmq
  environment:
    RABBITMQ_DEFAULT_USER: tester
    RABBITMQ_DEFAULT_PASS: qwerty
    RABBITMQ_NODENAME: rabbitmq
  volumes:
    - type: bind
      source: /my-custom-data-folder-path-on-host
      target: /var/lib/rabbitmq
Instead of overriding just the RABBITMQ_MNESIA_DIR folder, I've overridden the entire /var/lib/rabbitmq. This did the trick, and now my RabbitMQ successfully survives restarts.
I hit this problem and changed my docker-compose.yml file to use rabbitmq:3.9-management rather than rabbitmq:3-management.
The problem happened for me when I restarted the stack and the rabbitmq image moved to 3.11.
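As a minimal sketch, pinning the tag in docker-compose.yml keeps a restart from silently pulling a newer major version (the exact tag you pin to is your choice):
    rabbitmq:
      # pinned minor version instead of the floating 3-management tag
      image: rabbitmq:3.9-management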

rails puma nginx 111: Connection refused

/shared/tmp/sockets/autostudy-puma.sock failed (111: Connection refused) while connecting to upstream, client: 37.188.181.170, server: www.autostudy.cz
How can I fix this error?
nginx.conf
https://gist.github.com/iLucker93/23491671f7ef7b3eed063b87b7312992
puma.error.log
Listening on unix:///home/deploy/autostudy.cz/shared/tmp/sockets/autostudy-puma.sock
Gracefully stopping, waiting for requests to finish
=== puma shutdown: 2018-03-26 10:22:56 -0400 ===
Goodbye!

Connect consul agent to consul

I've been trying to set up the Consul server and connect an agent to it for 2 or 3 days already. I'm using docker-compose.
But after performing a join operation, the agent gets the message "Agent not live or unreachable".
Here are the logs:
root@e33a6127103f:/app# consul agent -join 10.1.30.91 -data-dir=/tmp/consul
==> Starting Consul agent...
==> Joining cluster...
Join completed. Synced with 1 initial agents
==> Consul agent running!
Version: 'v1.0.1'
Node ID: '0e1adf74-462d-45a4-1927-95ed123f1526'
Node name: 'e33a6127103f'
Datacenter: 'dc1' (Segment: '')
Server: false (Bootstrap: false)
Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, DNS: 8600)
Cluster Addr: 172.17.0.2 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
==> Log data will now stream in as it occurs:
2017/12/06 10:44:43 [INFO] serf: EventMemberJoin: e33a6127103f 172.17.0.2
2017/12/06 10:44:43 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
2017/12/06 10:44:43 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
2017/12/06 10:44:43 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
2017/12/06 10:44:43 [INFO] agent: (LAN) joining: [10.1.30.91]
2017/12/06 10:44:43 [INFO] serf: EventMemberJoin: consul1 172.19.0.2
2017/12/06 10:44:43 [INFO] consul: adding server consul1 (Addr: tcp/172.19.0.2:8300) (DC: dc1)
2017/12/06 10:44:43 [INFO] agent: (LAN) joined: 1 Err: <nil>
2017/12/06 10:44:43 [INFO] agent: started state syncer
2017/12/06 10:44:43 [WARN] manager: No servers available
2017/12/06 10:44:43 [ERR] agent: failed to sync remote state: No known Consul servers
2017/12/06 10:44:54 [INFO] memberlist: Suspect consul1 has failed, no acks received
2017/12/06 10:44:55 [ERR] consul: "Catalog.NodeServices" RPC failed to server 172.19.0.2:8300: rpc error getting client: failed to get conn: dial tcp <nil>->172.19.0.2:8300: i/o timeout
2017/12/06 10:44:55 [ERR] agent: failed to sync remote state: rpc error getting client: failed to get conn: dial tcp <nil>->172.19.0.2:8300: i/o timeout
2017/12/06 10:44:58 [INFO] memberlist: Marking consul1 as failed, suspect timeout reached (0 peer confirmations)
2017/12/06 10:44:58 [INFO] serf: EventMemberFailed: consul1 172.19.0.2
2017/12/06 10:44:58 [INFO] consul: removing server consul1 (Addr: tcp/172.19.0.2:8300) (DC: dc1)
2017/12/06 10:45:05 [INFO] memberlist: Suspect consul1 has failed, no acks received
2017/12/06 10:45:06 [WARN] manager: No servers available
2017/12/06 10:45:06 [ERR] agent: Coordinate update error: No known Consul servers
2017/12/06 10:45:12 [WARN] manager: No servers available
2017/12/06 10:45:12 [ERR] agent: failed to sync remote state: No known Consul servers
2017/12/06 10:45:13 [INFO] serf: attempting reconnect to consul1 172.19.0.2:8301
2017/12/06 10:45:28 [WARN] manager: No servers available
2017/12/06 10:45:28 [ERR] agent: failed to sync remote state: No known Consul servers
2017/12/06 10:45:32 [WARN] manager: No servers available
My settings are:
docker-compose SERVER:
consul1:
  image: "consul.1.0.1"
  container_name: "consul1"
  hostname: "consul1"
  volumes:
    - ./consul/config:/config/
  ports:
    - "8400:8400"
    - "8500:8500"
    - "8600:53"
    - "8300:8300"
    - "8301:8301"
  command: "agent -config-dir=/config -ui -server -bootstrap-expect 1"
Please help me solve the problem.
I think you are using the wrong IP address for the consul server:
"consul agent -join 10.1.30.91 -data-dir=/tmp/consul"
10.1.30.91 is not the Docker container's IP; it is probably your host/VirtualBox address.
Get the consul container's IP and use that to join in the consul agent command.
For more info about how Consul and its agents work, follow the link:
https://dzone.com/articles/service-discovery-with-docker-and-consul-part-1
Try to get the right IP address by executing this command:
docker inspect <container id> | grep "IPAddress"
where <container id> is the container ID of the consul server.
Then use the obtained address instead of "10.1.30.91" in the command:
consul agent -join <IP ADDRESS CONSUL SERVER> -data-dir=/tmp/consul
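As a one-line sketch, assuming the server container is named consul1 and sits on the default bridge network, the two steps can be combined (docker inspect's -f template prints just the address):
    # hypothetical one-liner: resolve the server container's IP, then join it
    consul agent -join "$(docker inspect -f '{{.NetworkSettings.IPAddress}}' consul1)" -data-dir=/tmp/consul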

Bidirectional UDP Port for docker container

I have a consul running in a docker container.
When I start another consul agent (not on docker), it says:
[WARN] memberlist: Was able to reach container_name via TCP but not UDP, network may be misconfigured and not allowing bidirectional UDP
I am trying to form a cluster here, but leader election keeps failing.
How can I fix this?
My port specification in docker-compose.yml (docker-compose version: 1)
ports:
  - "8300:8300"
  - "8301:8301"
  - "8301:8301/udp"
  - "8302:8302"
  - "8302:8302/udp"
  - "8400:8400"
  - "8500:8500"
  - "8600:8600"
  - "8600:8600/udp"
Log of Consul1 running in Docker Container:
Node name: '<host>'
Datacenter: 'dc1'
Server: true (bootstrap: true)
Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
Cluster Addr: <host_ip> (LAN: 8301, WAN: 8302)
Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
Atlas: <disabled>
==> Log data will now stream in as it occurs:
2017/06/08 03:39:44 [INFO] raft: Restored from snapshot 13-35418-1496826625488
2017/06/08 03:39:44 [INFO] serf: EventMemberJoin: <host> <host_ip>
2017/06/08 03:39:44 [INFO] raft: Node at <host_ip>:8300 [Follower] entering Follower state
2017/06/08 03:39:44 [INFO] consul: adding LAN server <host> (Addr: <host_ip>:8300) (DC: dc1)
2017/06/08 03:39:44 [INFO] serf: EventMemberJoin: <host>.dc1 <host_ip>
2017/06/08 03:39:44 [INFO] consul: adding WAN server <host>.dc1 (Addr: <host_ip>:8300) (DC: dc1)
2017/06/08 03:39:44 [ERR] agent: failed to sync remote state: No cluster leader
2017/06/08 03:39:45 [WARN] raft: Heartbeat timeout reached, starting election
2017/06/08 03:39:45 [INFO] raft: Node at <host_ip>:8300 [Candidate] entering Candidate state
2017/06/08 03:39:45 [INFO] raft: Election won. Tally: 1
2017/06/08 03:39:45 [INFO] raft: Node at <host_ip>:8300 [Leader] entering Leader state
2017/06/08 03:39:45 [INFO] consul: cluster leadership acquired
2017/06/08 03:39:45 [INFO] consul: New leader elected: <host>
2017/06/08 03:39:45 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2017/06/08 03:39:45 [INFO] raft: Added peer <host_ip>:9300, starting replication
2017/06/08 03:39:45 [INFO] raft: Removed peer <host_ip>:9300, stopping replication (Index: 36201)
2017/06/08 03:39:45 [INFO] raft: Added peer <host_ip>:9300, starting replication
2017/06/08 03:39:45 [INFO] raft: Added peer <host_ip>:10300, starting replication
2017/06/08 03:39:45 [INFO] raft: Removed peer <host_ip>:10300, stopping replication (Index: 36228)
2017/06/08 03:39:45 [INFO] raft: Removed peer <host_ip>:9300, stopping replication (Index: 36230)
2017/06/08 03:39:45 [ERR] raft: Failed to AppendEntries to <host_ip>:10300: dial tcp <host_ip>:10300: getsockopt: connection refused
2017/06/08 03:39:45 [ERR] raft: Failed to AppendEntries to <host_ip>:9300: dial tcp <host_ip>:9300: getsockopt: connection refused
2017/06/08 03:39:45 [ERR] raft: Failed to AppendEntries to <host_ip>:9300: dial tcp <host_ip>:9300: getsockopt: connection refused
2017/06/08 03:39:45 [ERR] raft: Failed to AppendEntries to <host_ip>:10300: dial tcp <host_ip>:10300: getsockopt: connection refused
2017/06/08 03:39:45 [ERR] raft: Failed to AppendEntries to <host_ip>:9300: dial tcp <host_ip>:9300: getsockopt: connection refused
2017/06/08 03:39:49 [WARN] agent: Check 'vault::8200:vault-sealed-check' missed TTL, is now critical
2017/06/08 03:39:50 [INFO] serf: EventMemberJoin: server2 <host_ip>
2017/06/08 03:39:50 [INFO] consul: adding LAN server server2 (Addr: <host_ip>:9300) (DC: dc1)
2017/06/08 03:39:50 [INFO] raft: Added peer <host_ip>:9300, starting replication
2017/06/08 03:39:50 [WARN] raft: AppendEntries to <host_ip>:9300 rejected, sending older logs (next: 36231)
2017/06/08 03:39:50 [INFO] raft: pipelining replication to peer <host_ip>:9300
2017/06/08 03:39:50 [INFO] consul: member 'server2' joined, marking health alive
2017/06/08 03:39:52 [INFO] agent: Synced service 'vault::8200'
2017/06/08 03:39:52 [INFO] agent: Synced check 'vault::8200:vault-sealed-check'
2017/06/08 03:40:06 [INFO] agent: Synced check 'vault::8200:vault-sealed-check'
2017/06/08 03:40:18 [ERR] raft: Failed to heartbeat to <host_ip>:9300: EOF
2017/06/08 03:40:18 [INFO] raft: aborting pipeline replication to peer <host_ip>:9300
2017/06/08 03:40:19 [ERR] raft: Failed to AppendEntries to <host_ip>:9300: dial tcp <host_ip>:9300: getsockopt: connection refused
2017/06/08 03:40:19 [ERR] raft: Failed to heartbeat to <host_ip>:9300: dial tcp <host_ip>:9300: getsockopt: connection refused
2017/06/08 03:40:19 [ERR] raft: Failed to AppendEntries to <host_ip>:9300: dial tcp <host_ip>:9300: getsockopt: connection refused
2017/06/08 03:40:19 [ERR] raft: Failed to heartbeat to <host_ip>:9300: dial tcp <host_ip>:9300: getsockopt: connection refused
2017/06/08 03:40:19 [ERR] raft: Failed to AppendEntries to <host_ip>:9300: dial tcp <host_ip>:9300: getsockopt: connection refused
2017/06/08 03:40:19 [ERR] raft: Failed to AppendEntries to <host_ip>:9300: dial tcp <host_ip>:9300: getsockopt: connection refused
2017/06/08 03:40:19 [ERR] raft: Failed to heartbeat to <host_ip>:9300: dial tcp <host_ip>:9300: getsockopt: connection refused
2017/06/08 03:40:19 [WARN] raft: Failed to contact <host_ip>:9300 in 501.593114ms
2017/06/08 03:40:19 [WARN] raft: Failed to contact quorum of nodes, stepping down
2017/06/08 03:40:19 [INFO] raft: Node at <host_ip>:8300 [Follower] entering Follower state
2017/06/08 03:40:19 [INFO] consul: cluster leadership lost
2017/06/08 03:40:19 [ERR] raft: Failed to AppendEntries to <host_ip>:9300: dial tcp <host_ip>:9300: getsockopt: connection refused
2017/06/08 03:40:20 [WARN] raft: Heartbeat timeout reached, starting election
2017/06/08 03:40:20 [INFO] raft: Node at <host_ip>:8300 [Candidate] entering Candidate state
2017/06/08 03:40:20 [ERR] raft: Failed to make RequestVote RPC to <host_ip>:9300: dial tcp <host_ip>:9300: getsockopt: connection refused
2017/06/08 03:40:21 [INFO] memberlist: Suspect server2 has failed, no acks received
2017/06/08 03:40:22 [WARN] raft: Election timeout reached, restarting election
2017/06/08 03:40:22 [INFO] raft: Node at <host_ip>:8300 [Candidate] entering Candidate state
2017/06/08 03:40:22 [ERR] raft: Failed to make RequestVote RPC to <host_ip>:9300: dial tcp <host_ip>:9300: getsockopt: connection refused
2017/06/08 03:40:23 [INFO] memberlist: Suspect server2 has failed, no acks received
2017/06/08 03:40:23 [WARN] dns: Query results too stale, re-requesting
2017/06/08 03:40:23 [ERR] dns: rpc error: No cluster leader
2017/06/08 03:40:23 [WARN] raft: Election timeout reached, restarting election
2017/06/08 03:40:23 [INFO] raft: Node at <host_ip>:8300 [Candidate] entering Candidate state
2017/06/08 03:40:23 [ERR] raft: Failed to make RequestVote RPC to <host_ip>:9300: dial tcp <host_ip>:9300: getsockopt: connection refused
2017/06/08 03:40:24 [WARN] raft: Election timeout reached, restarting election
2017/06/08 03:40:24 [INFO] raft: Node at <host_ip>:8300 [Candidate] entering Candidate state
2017/06/08 03:40:24 [ERR] raft: Failed to make RequestVote RPC to <host_ip>:9300: dial tcp <host_ip>:9300: getsockopt: connection refused
2017/06/08 03:40:24 [ERR] http: Request PUT /v1/session/renew/8c4efe65-07c3-f93e-6679-f2bc95f8e92c, error: No cluster leader from=172.17.0.4:57031
2017/06/08 03:40:25 [INFO] memberlist: Suspect server2 has failed, no acks received
2017/06/08 03:40:25 [ERR] http: Request PUT /v1/session/renew/8c4efe65-07c3-f93e-6679-f2bc95f8e92c, error: No cluster leader from=172.17.0.4:57061
2017/06/08 03:40:26 [INFO] memberlist: Suspect server2 has failed, no acks received
2017/06/08 03:40:26 [INFO] memberlist: Marking server2 as failed, suspect timeout reached
2017/06/08 03:40:26 [INFO] serf: EventMemberFailed: server2 <host_ip>
2017/06/08 03:40:26 [INFO] consul: removing LAN server server2 (Addr: <host_ip>:9300) (DC: dc1)
2017/06/08 03:40:26 [WARN] raft: Election timeout reached, restarting election
2017/06/08 03:40:26 [INFO] raft: Node at <host_ip>:8300 [Candidate] entering Candidate state
2017/06/08 03:40:26 [ERR] raft: Failed to make RequestVote RPC to <host_ip>:9300: dial tcp <host_ip>:9300: getsockopt: connection refused
2017/06/08 03:40:26 [ERR] agent: coordinate update error: No cluster leader
2017/06/08 03:40:26 [ERR] http: Request PUT /v1/session/renew/8c4efe65-07c3-f93e-6679-f2bc95f8e92c, error: No cluster leader from=172.17.0.4:57064
2017/06/08 03:40:27 [WARN] dns: Query results too stale, re-requesting
2017/06/08 03:40:27 [ERR] dns: rpc error: No cluster leader
2017/06/08 03:40:27 [WARN] raft: Election timeout reached, restarting election
Log of consul2:
==> WARNING: Expect Mode enabled, expecting 2 servers
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Consul agent running!
Node name: 'server2'
Datacenter: 'dc1'
Server: true (bootstrap: false)
Client Addr: 0.0.0.0 (HTTP: 9500, HTTPS: -1, DNS: 9600, RPC: 9400)
Cluster Addr: <host_ip> (LAN: 9301, WAN: 9302)
Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
Atlas: <disabled>
==> Log data will now stream in as it occurs:
2017/06/08 09:09:50 [INFO] raft: Restored from snapshot 13-35418-1496892834061
2017/06/08 09:09:50 [INFO] serf: EventMemberJoin: server2 <host_ip>
2017/06/08 09:09:50 [INFO] serf: EventMemberJoin: server2.dc1 <host_ip>
2017/06/08 09:09:50 [INFO] raft: Node at <host_ip>:9300 [Follower] entering Follower state
2017/06/08 09:09:50 [INFO] consul: adding LAN server server2 (Addr: <host_ip>:9300) (DC: dc1)
2017/06/08 09:09:50 [INFO] consul: adding WAN server server2.dc1 (Addr: <host_ip>:9300) (DC: dc1)
2017/06/08 09:09:50 [ERR] agent: failed to sync remote state: No cluster leader
2017/06/08 09:09:50 [INFO] agent: Joining cluster...
2017/06/08 09:09:50 [INFO] agent: (LAN) joining: [<host_ip>:8301 <host_ip>:10301]
2017/06/08 09:09:50 [INFO] serf: EventMemberJoin: <host> <host_ip>
2017/06/08 09:09:50 [INFO] consul: adding LAN server <host> (Addr: <host_ip>:8300) (DC: dc1)
2017/06/08 09:09:50 [INFO] agent: (LAN) joined: 1 Err: <nil>
2017/06/08 09:09:50 [INFO] agent: Join completed. Synced with 1 initial agents
2017/06/08 09:09:50 [WARN] raft: Failed to get previous log: 36233 log not found (last: 36230)
2017/06/08 09:09:50 [INFO] raft: Removed ourself, transitioning to follower
2017/06/08 09:09:50 [INFO] raft: Removed ourself, transitioning to follower
2017/06/08 09:09:52 [WARN] memberlist: Was able to reach <host> via TCP but not UDP, network may be misconfigured and not allowing bidirectional UDP
==> Newer Consul version available: 0.8.3
2017/06/08 09:09:54 [WARN] memberlist: Was able to reach <host> via TCP but not UDP, network may be misconfigured and not allowing bidirectional UDP
2017/06/08 09:09:56 [WARN] memberlist: Was able to reach <host> via TCP but not UDP, network may be misconfigured and not allowing bidirectional UDP
2017/06/08 09:09:57 [WARN] memberlist: Was able to reach <host> via TCP but not UDP, network may be misconfigured and not allowing bidirectional UDP
What Consul means regarding bidirectional UDP is that the consul agent needs to see its consul server and, vice versa, the consul server needs to see its agent.
Consul agent -- [UDP] --> Consul Server
Consul agent <--[UDP] -- Consul Server
These are two separate communications, unlike TCP, which reuses the connection the agent already initiated.
So, if your consul agent and server are not on the same network (e.g. the same Docker network), you need to expose the ports on both ends, and take into account the advertise address, which is the address the agent announces so that other members can contact it.
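For example, a minimal sketch of starting the non-docker agent so that the containerized server can reach it back over UDP; 192.168.1.10 is only a placeholder for your host's LAN address:
    # bind and advertise an address that is reachable from inside the container
    consul agent -data-dir=/tmp/consul \
      -bind 192.168.1.10 \
      -advertise 192.168.1.10 \
      -join 192.168.1.10
On the container side, the serf LAN port (8301) has to be published for UDP as well as TCP, as in the port specification shown in the question.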

Rails - Elastic Beanstalk nginx/error.log

Trying to upload my Rails app to Elastic Beanstalk. I have successfully deployed the app and created a Postgres database. My app works with sqlite3 on the development server.
My eb status is ready and health is green.
My eb logs file:
/var/log/nginx/error.log
-------------------------------------
2016/05/27 11:15:44 [warn] 2797#0: conflicting server name "localhost" on 0.0.0.0:80, ignored
2016/05/27 11:27:26 [crit] 2805#0: *140 connect() to unix:///var/run/puma/my_app.sock failed (2: No such file or directory) while connecting to upstream, client: 172.31.26.77, server: _, request: "GET / HTTP/1.1", upstream: "http://unix:///var/run/puma/my_app.sock:/", host: "viravira-env.bu2eqpbwny.us-west-2.elasticbeanstalk.com"
2016/05/27 11:27:26 [crit] 2805#0: *140 connect() to unix:///var/run/puma/my_app.sock failed (2: No such file or directory) while connecting to upstream, client: 172.31.26.77, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://unix:///var/run/puma/my_app.sock:/favicon.ico", host: "viravira-env.bu2eqpbwny.us-west-2.elasticbeanstalk.com", referrer: "http://viravira-env.bu2eqpbwny.us-west-2.elasticbeanstalk.com/"
2016/05/27 11:34:45 [crit] 2805#0: *262 connect() to unix:///var/run/puma/my_app.sock failed (2: No such file or directory) while connecting to upstream, client: 172.31.46.145, server: _, request: "GET / HTTP/1.1", upstream: "http://unix:///var/run/puma/my_app.sock:/", host: "viravira-env.bu2eqpbwny.us-west-2.elasticbeanstalk.com"
2016/05/27 11:34:45 [crit] 2805#0: *262 connect() to unix:///var/run/puma/my_app.sock failed (2: No such file or directory) while connecting to upstream, client: 172.31.46.145, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://unix:///var/run/puma/my_app.sock:/favicon.ico", host: "viravira-env.bu2eqpbwny.us-west-2.elasticbeanstalk.com", referrer: "http://viravira-env.bu2eqpbwny.us-west-2.elasticbeanstalk.com/"
2016/05/27 11:40:48 [crit] 2805#0: *353 connect() to unix:///var/run/puma/my_app.sock failed (2: No such file or directory) while connecting to upstream, client: 172.31.46.145, server: _, request: "GET / HTTP/1.1", upstream: "http://unix:///var/run/puma/my_app.sock:/", host: "viravira-env.bu2eqpbwny.us-west-2.elasticbeanstalk.com"
2016/05/27 11:40:49 [crit] 2805#0: *353 connect() to unix:///var/run/puma/my_app.sock failed (2: No such file or directory) while connecting to upstream, client: 172.31.46.145, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://unix:///var/run/puma/my_app.sock:/favicon.ico", host: "viravira-env.bu2eqpbwny.us-west-2.elasticbeanstalk.com", referrer: "http://viravira-env.bu2eqpbwny.us-west-2.elasticbeanstalk.com/"
-------------------------------------
/var/log/puma/puma.log
-------------------------------------
=== puma startup: 2016-05-27 11:52:07 +0000 ===
=== puma startup: 2016-05-27 11:52:07 +0000 ===
[23871] - Worker 0 (pid: 23875) booted, phase: 0
[23871] - Gracefully shutting down workers...
[23871] === puma shutdown: 2016-05-27 12:36:32 +0000 ===
[23871] - Goodbye!
=== puma startup: 2016-05-27 12:36:35 +0000 ===
=== puma startup: 2016-05-27 12:36:35 +0000 ===
[24886] - Worker 0 (pid: 24890) booted, phase: 0
I am fairly new to eb, so I wonder if the problem occurs because of one of the following:
I have not installed Node, which is why it cannot connect.
Or I have problems with the security groups. I have 4 in total, as seen in the picture.
My network interfaces:
When I try to detach the RDS security group, it gives a "no authorization" error, even though I am signed in as root.
I have been trying to solve the problem for hours now and really appreciate any help!
EDIT
I think I'm having the same issue as here, but I could not understand how to solve it.
Your problem is very clear from the Nginx log:
connect() to unix:///var/run/puma/my_app.sock failed (2: No such file or directory) while connecting to upstream
It reads:
No such file or directory
This means that your socket does not exist at this path:
/var/run/puma/my_app.sock
You need to set up the path in your Nginx upstream to be the same as the one in your Rails/Puma configuration.
When you do that, don't forget to make sure that the Nginx user can access that socket; it will need read/write (RW) access.
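A minimal sketch of the two sides agreeing on the socket path; both paths below are placeholders and must match whatever your deployment actually uses:
    # config/puma.rb -- bind Puma to the socket Nginx expects
    bind "unix:///var/run/puma/my_app.sock"

    # nginx upstream -- must point at exactly the same socket file
    upstream puma {
      server unix:/var/run/puma/my_app.sock;
    }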
