I have an issue with docker-compose and RabbitMQ. I run docker-compose up and everything spins up. My docker-compose.yml:
services:
  rabbitmq3:
    image: "rabbitmq:3-management"
    hostname: "localhost"
    command: rabbitmq-server
    ports:
      - 5672:5672
      - 15672:15672
Then I run sudo rabbitmqctl status on the host to check the connection with the node. I get this error:
Error: unable to perform an operation on node 'rabbit@localhost'. Please see diagnostics information and suggestions below.
Most common reasons for this are:
* Target node is unreachable (e.g. due to hostname resolution, TCP connection or firewall issues)
* CLI tool fails to authenticate with the server (e.g. due to CLI tool's Erlang cookie not matching that of the server)
* Target node is not running
In addition to the diagnostics info below:
* See the CLI, clustering and networking guides on https://rabbitmq.com/documentation.html to learn more
* Consult server logs on node rabbit@localhost
* If target node is configured to use long node names, don't forget to use --longnames with CLI tools
DIAGNOSTICS
===========
attempted to contact: [rabbit@localhost]
rabbit@localhost:
* connected to epmd (port 4369) on localhost
* epmd reports: node 'rabbit' not running at all
no other nodes on localhost
* suggestion: start the node
Current node details:
* node name: 'rabbitmqcli-25456-rabbit@localhost'
* effective user's home directory: /Users/olof.grund
* Erlang cookie hash: d1oONiVA/qogGxkf6vs9Rw==
When I run it inside the container (docker-compose exec -T rabbitmq3 rabbitmqctl status) it works.
Do I need to expose something from Docker somehow? Some RabbitMQ client or node maybe?
I tried all the tips I found in other sources (adding the IP to /etc/hosts, restarting containers and services). It took me a day to finally get this to work, and it boils down to this:
<wait for 60 seconds after the rabbit container has started>
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl force_boot
rabbitmqctl start_app
RabbitMQ uses Erlang's distribution protocol, which requires port 4369 to be open for EPMD (the Erlang Port Mapper Daemon). Expose it in the docker-compose file and stop any EPMD instance running on your host.
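For example, the compose file from the question could be extended roughly like this (a sketch, not a verified setup: 25672 is the default Erlang distribution port, and the host-side rabbitmqctl also needs the same Erlang cookie as the container):
services:
  rabbitmq3:
    image: "rabbitmq:3-management"
    hostname: "localhost"
    command: rabbitmq-server
    ports:
      - 5672:5672     # AMQP
      - 15672:15672   # management UI
      - 4369:4369     # EPMD, so CLI tools on the host can discover the node
      - 25672:25672   # Erlang distribution port used by rabbitmqctl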
I want to use the Schema Registry Docker image (owned by Confluent) with the open-source Kafka I installed locally on my PC.
I am using the following command to run the image:
docker run -p 8081:8081 \
-e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=PLAINTEXT://127.0.0.1:9092 \
-e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
-e SCHEMA_REGISTRY_DEBUG=true confluentinc/cp-schema-registry:latest
but I am getting the following connection errors:
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
[main] ERROR io.confluent.admin.utils.ClusterStatus - Error while getting broker list.
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:149)
at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:150)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
[main] INFO io.confluent.admin.utils.ClusterStatus - Expected 1 brokers but found only 0. Trying to query Kafka for metadata again ...
I have Kafka installed on my localhost.
Any ideas on how to solve this, please?
I have Kafka installed on my localhost
As commented, localhost is ambiguous when you're actually dealing with multiple machines (one physical and at least one virtual).
You need to use host.docker.internal:9092 on Docker Desktop (Windows/Mac):
https://docs.docker.com/docker-for-windows/networking/
On a Linux host, you need to use host networking mode instead:
https://docs.docker.com/network/host/
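On Linux, a rough sketch of the original docker run command with host networking (assuming the broker really does listen on 127.0.0.1:9092 on the host; note that -p is ignored when --network host is used):
docker run --network host \
  -e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=PLAINTEXT://127.0.0.1:9092 \
  -e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
  -e SCHEMA_REGISTRY_DEBUG=true confluentinc/cp-schema-registry:latest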
Although, realistically, running Kafka in a container as well would make connecting the two simpler.
I have newly installed Jenkins on my Amazon AWS server. I changed the Jenkins port to 8081 and started it, and I verified that Jenkins is running on port 8081.
[ec2-user@ip-172-31-28-247 ~]$ ps -eaf | grep 8081
jenkins 1370 1 0 03:42 ? 00:00:11 /etc/alternatives/java -Dcom.sun.akuma.Daemon=daemonized -Djava.awt.headless=true -DJENKINS_HOME=/var/lib/jenkins -jar /usr/lib/jenkins/jenkins.war --logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war --daemon --httpPort=8081 --debug=5 --handlerCountMax=100 --handlerCountMaxIdle=20
ec2-user 1611 1585 0 04:34 pts/1 00:00:00 grep --color=auto 8081
[ec2-user@ip-172-31-28-247 ~]$
However, when I try to access Jenkins using this URL:
http://ec2-54-214-126-0.us-west-2.compute.amazonaws.com:8081
it says that
This site can’t be reached ec2-54-214-126-0.us-west-2.compute.amazonaws.com took too long to respond.
On the same server I am running Tomcat on port 8080, which I can access and which shows the Tomcat homepage:
http://ec2-54-214-126-0.us-west-2.compute.amazonaws.com:8080/
I am new to Jenkins.
Check the guide "Set Up a Jenkins Build Server -- Quickly create a build server for continuous integration (CI) on AWS".
It has a section about a security group for the Amazon EC2 instance, which acts as a firewall that controls the traffic allowed to reach one or more EC2 instances.
Maybe port 8081 is not part of that security group.
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
In the left-hand navigation bar, choose Security Groups.
Select the security group attached to your instance and edit its inbound rules.
Click Add Rule, and then choose Custom TCP Rule from the Type list.
Under Port Range enter 8081.
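If you prefer the command line, the same rule can be added with the AWS CLI, roughly like this (a sketch; the security group ID is a placeholder):
# sg-0123456789abcdef0 is a placeholder; substitute the security group attached to your instance
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 8081 --cidr 0.0.0.0/0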
I'm using the Docker version of Neo4j (v3.1.0) and I'm having difficulties connecting to the Neo4j server using neo4j-shell.
After running an instance of the neo4j:3.1.0 Docker image, I run a bash shell inside the container:
$ docker exec -it neo4j /bin/bash
And from there I try to run the neo4j-shell like this:
/var/lib/neo4j/bin/neo4j-shell
But it errors:
$ /var/lib/neo4j/bin/neo4j-shell
ERROR (-v for expanded information):
Connection refused
-host Domain name or IP of host to connect to (default: localhost)
-port Port of host to connect to (default: 1337)
-name RMI name, i.e. rmi://<host>:<port>/<name> (default: shell)
-pid Process ID to connect to
-c Command line to execute. After executing it the shell exits
-file File containing commands to execute, or '-' to read from stdin. After executing it the shell exits
-readonly Connect in readonly mode (only for connecting with -path)
-path Points to a neo4j db path so that a local server can be started there
-config Points to a config file when starting a local server
Example arguments for remote:
-port 1337
-host 192.168.1.234 -port 1337 -name shell
-host localhost -readonly
...or no arguments for default values
Example arguments for local:
-path /path/to/db
-path /path/to/db -config /path/to/neo4j.config
-path /path/to/db -readonly
I also tried other hosts: localhost, 127.0.0.1 and 172.17.0.6 (the container IP). Since that didn't work, I listed the open ports in my container:
$ netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 :::7687 :::* LISTEN
tcp 0 0 :::7473 :::* LISTEN
tcp 0 0 :::7474 :::* LISTEN
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node Path
As you can see, port 1337 is not open. I've looked into the config file and the line specifying the port is commented out, which means it should be set to its default value (1337).
Can anyone help me connect to neo4j using neo4j-shell?
BTW, the Neo4j server is up and running and I can use its web interface on port 7474.
In 3.1 it seems the shell is not enabled by default.
You will need to pass your own configuration file with the shell enabled. Uncomment:
# Enable a remote shell server which Neo4j Shell clients can log in to.
dbms.shell.enabled=true
(I find the amount of work needed to change one value in Docker quite heavy, but there it is.)
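If mounting a full config file into the container feels heavy, recent official images also let you override individual settings through environment variables (dots in the setting name become underscores), so a rough, untested sketch would be:
docker run -d --name neo4j \
  -p 7474:7474 -p 7687:7687 \
  -e NEO4J_dbms_shell_enabled=true \
  neo4j:3.1.0
# then run the shell inside the container:
docker exec -it neo4j /var/lib/neo4j/bin/neo4j-shell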
Or use the new cypher-shell:
ikwattro@graphaware-team ~> docker ps -a | grep 'neo4j'
34b3c6718504 neo4j:3.1.0 "/docker-entrypoint.s" 2 minutes ago Up 2 minutes 7473-7474/tcp, 7687/tcp compassionate_easley
2395bd0b1fe9 neo4j:3.1.0 "/docker-entrypoint.s" 5 minutes ago Exited (143) 3 minutes ago cranky_goldstine
949feacbc0f9 neo4j:3.1.0 "/docker-entrypoint.s" 5 minutes ago Exited (130) 5 minutes ago modest_boyd
c38572b078de neo4j:3.0.6-enterprise "/docker-entrypoint.s" 6 weeks ago Exited (0) 6 weeks ago fastfishpim_neo4j_1
ikwattro@graphaware-team ~> docker exec --interactive --tty compassionate_easley bin/cypher-shell
username: neo4j
password: *****
Connected to Neo4j 3.1.0 at bolt://localhost:7687 as user neo4j.
Type :help for a list of available commands or :exit to exit the shell.
Note that Cypher queries must end with a semicolon.
neo4j>
NB: cypher-shell supports transactions with :begin and :commit:
neo4j> :begin
neo4j# create (n:Node);
Added 1 nodes, Added 1 labels
neo4j# :commit;
neo4j>
-
neo4j> :begin
neo4j# create (n:Person {name:"John"});
Added 1 nodes, Set 1 properties, Added 1 labels
neo4j# :rollback
neo4j> :commit
There is no open transaction to commit
neo4j>
http://neo4j.com/docs/operations-manual/current/tools/cypher-shell/
I have configured and installed CouchDB 1.6.1 on Ubuntu 10.04. Configuration and installation reported success, but when I start CouchDB it gives me an error.
When starting it the first time:
Apache CouchDB 1.6.1 (LogLevel=info) is starting.
Apache CouchDB has started. Time to relax.
**[info] [<0.32.0>] Apache CouchDB has started on http://127.0.0.1:5984/**
When starting it again:
Apache CouchDB 1.6.1 (LogLevel=info) is starting.
**Failure to start Mochiweb: eaddrinuse**
[error] [<0.127.0>] {error_report,<0.31.0>,
{<0.127.0>,crash_report,[[{initial_call,
{mochiweb_socket_server,init,['Argument__1']}},
{pid,<0.127.0>},
{registered_name,[]},
{error_info,
{exit,eaddrinuse,
[{gen_server,init_it,6},
{proc_lib,init_p_do_apply,3}]}},
{ancestors,
[couch_secondary_services,couch_server_sup,
<0.32.0>]},
{messages,[]},
{links,[<0.95.0>]},
{dictionary,[]},
{trap_exit,true},
{status,running},
{heap_size,987},
{stack_size,24},
{reductions,467}],
[]]}}
{"init terminating in do_boot",{{badmatch,{error,{bad_return,{{couch_app,start,[normal,["/usr/local/etc/couchdb/default.ini","/usr/local/etc/couchdb/local.ini"]]},{'EXIT',{{badmatch,{error,shutdown}},[{couch_server_sup,start_server,1},{application_master,start_it_old,4}]}}}}}},[{couch,start,0},{init,start_it,1},{init,start_em,1}]}}
Crash dump was written to: erl_crash.dump
init terminating in do_boot ()
In local.ini I changed the port to 5983 and also the bind address from 127.0.0.1 to 0.0.0.0. Nothing helped.
When I run sudo netstat -tulpn I get the following output:
tcp 0 0 127.0.0.1:5984 0.0.0.0:* LISTEN 21688/beam.smp
couchdb -s says Apache CouchDB is not running.
Thanks in advance
If you came here because you recently upgraded to CouchDB 2.0, make sure you updated your local.ini config file. The section name for the web server configuration has changed (from httpd to chttpd). Because of that, the port directive wasn't being picked up and it was defaulting to 80, which clashed with the web server.
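For reference, in 2.0 the web server section of local.ini looks roughly like this (the values shown are just the defaults, for illustration):
[chttpd]
port = 5984
bind_address = 127.0.0.1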
Based on the error messages provided, CouchDB is already up and running on your machine/server on port 5984:
**Failure to start Mochiweb: eaddrinuse**
That's why, when you run the sudo netstat -tulpn command on a Linux-based machine, you get the following output:
tcp 0 0 127.0.0.1:5984 0.0.0.0:* LISTEN 21688/beam.smp
Solution:
couchdb is not a service script in the way you're trying to use it. It is just trying to start up again and failing because an instance is already running (the Erlang beam.smp process is still alive).
Check and kill all CouchDB processes using the following command:
/bin/ps aux | grep couchdb | grep -v grep | awk '{print $2}' | xargs kill
Start CouchDB again using the following command as the root Linux user:
service couchdb start
Note:
When you enter the couchdb -s command, you get a message like "Apache CouchDB is not running."
The same happens with the couchdb -d command.
The reason for this output is that you are executing these commands as a non-privileged Linux user account.
Try executing them as root, or as the Linux user account that CouchDB runs under.
I faced the same problem. I just did:
sudo apt-get update
sudo apt-get install couchdb
Note: I just re-installed CouchDB and now it's working fine. Before the re-install I did the configuration below.
Install ICU and use locate to find the icu-config command:
locate icu-config
sudo apt-get install libicu-dev
For more details, refer to the link below:
https://wiki.apache.org/couchdb/Error_messages
I faced the same problem...
Luckily there's a fix in the CouchDB Error messages documentation:
https://wiki.apache.org/couchdb/Error_messages
Problem
$ couchdb
Apache CouchDB 0.9.0a747640 (LogLevel=info) is starting.
Failure to start Mochiweb: eaddrinuse
{"init terminating in do_boot",{{badmatch,{error,shutdown}},[{couch_server_sup,start_server,1},{erl_eval,do_apply,5},{erl_eval,exprs,5},{init,start_it,1},{init,start_em,1}]}}
Solution
Edit your /etc/couchdb/couch.ini file and change the Port setting to an available port.
But mine didn't have couch.ini; it had default.ini instead.
I changed the port there to another one that I guessed should be free, and it worked.
I encountered this error when starting CouchDB on Ubuntu 16.04.
The reason is that an Erlang process is already running on port 5984; you would see something like the line below when running "netstat -tulpn":
tcp 0 0 127.0.0.1:5984 0.0.0.0:* LISTEN 13967/beam.smp
Open the file /etc/couchdb/default.ini in superuser mode and change the httpd port to some other port which is available, and you will be able to start CouchDB without it failing.
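For illustration, the relevant part of default.ini looks roughly like this (5985 is just an example of an alternative free port):
[httpd]
port = 5985
bind_address = 127.0.0.1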