How to connect to Neo4j inside Docker with Spring?

I have Neo4j running in a Docker container:
docker run --publish=7474:7474 --publish=7687:7687 --name=neo4j -e NEO4J_AUTH=neo4j/psswd neo4j:latest
I can access Neo4j with this URL: http://localhost:7474/browser/.
I can also connect to Neo4j from Spring running outside Docker with the URI bolt://localhost:7687/mydb.
But when I try to connect from Spring running inside Docker, using the URI bolt://neo4j:7687/mydb:
docker run -p 8080:8080 -t myapp --link neo4j:neo4j
I get the exception:
java.net.UnknownHostException: neo4j
And when I try the same with localhost or 127.0.0.1, I get the exception:
java.net.ConnectException: Connection refused
What URI should I use? And what am I doing wrong?
Neo4j logs look like this:
======== Neo4j 3.3.4 ========
Starting...
Bolt enabled on 0.0.0.0:7687.
Started.
Remote interface available at http://localhost:7474/

You can use a user-defined bridge network, so that you have DNS resolution between containers.
From the docs:
User-defined bridges provide automatic DNS resolution between containers.
Containers on the default bridge network can only access each other by IP addresses, unless you use the --link option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.
And when I try the same with localhost or 127.0.0.1, I get the exception...
When you are inside a container and try to access "localhost" or "127.0.0.1" you are referring to the container itself, not the host.
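Putting that into commands, a minimal sketch of the user-defined-network approach (the network name neo4j-net is illustrative):

```shell
# create a user-defined bridge network; containers on it resolve each other by name
docker network create neo4j-net

# run Neo4j on that network; its container name "neo4j" becomes its DNS name
docker run -d --network neo4j-net --name neo4j \
  -p 7474:7474 -p 7687:7687 \
  -e NEO4J_AUTH=neo4j/psswd neo4j:latest

# run the Spring app on the same network and point it at bolt://neo4j:7687
docker run -d --network neo4j-net -p 8080:8080 myapp
```

With both containers on neo4j-net, bolt://neo4j:7687 resolves from inside the app container, and --link is no longer needed.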

Related

How to connect to dockerized Redis Server?

I am unable to connect (timeout) to a dockerized Redis server [1], which was started with the command:
docker run -it --rm -p 6379:6379 redis:alpine
I've tried starting the server with these config options:
bind 0.0.0.0
protected-mode no
I am however able to connect to my redis server with another docker container with the following command:
docker run -it --rm redis:alpine redis-cli -h host.docker.internal -p 6379
and I also tried setting the same parameters through the CLI.
When I try to connect from the host, the connection times out. I tried both the internal IP
172.17.0.x
and with the internal domain name:
host.docker.internal
to no avail. Further note that I was able to connect to the Redis server when it was installed with
brew install redis
on the host.
What am I missing? How can I connect to a redis-server running inside a Docker container from the container's host?
Environment details
OS: macOS Monterey, version 12.6 (21G115)
Docker version 20.10.17, build 100c701
[1] More specifically, I've tried with both rdcli -h host.docker.internal in the Mac terminal and on the application side with StackExchange.Redis.
More specifically I've tried with both rdcli -h host.docker.internal in the mac terminal
The host.docker.internal is a DNS name to access the docker host from inside a container. It's a bad practice to use this inside one container to talk to another container. Instead you'd create a network, create both containers in that network, and use the container name to connect. When that's done, the port doesn't even need to be published.
From the host, that's when you connect to the published port. Since the container is deployed with -p 6379:6379, you should be able to access that port from the hostname of your host, or localhost on that host:
rdcli -h localhost
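The container-to-container setup described above might look like this (the network name redis-net is illustrative):

```shell
# create a shared network and run Redis on it
docker network create redis-net
docker run -d --network redis-net --name redis redis:alpine

# a second container on the same network reaches Redis by container name;
# no published port is needed for container-to-container traffic
docker run -it --rm --network redis-net redis:alpine redis-cli -h redis -p 6379 ping
```

From the host itself, connect to the published port instead, e.g. redis-cli -h localhost -p 6379 after starting Redis with -p 6379:6379.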

Docker Gremlin client cannot connect to gremlin server?

I'm running a Gremlin server container and a Gremlin console container locally. I start them like so:
docker network create -o com.docker.network.bridge.enable_icc=true hacker
docker run --network hacker -p 8182:8182 tinkerpop/gremlin-server:3.4
docker run --network hacker -it tinkerpop/gremlin-console
When I try and connect to the remote server from the client like so:
:remote connect tinkerpop.server conf/remote.yaml
I get the following error:
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:8182
Why is this? I tried sharing the network, but it still doesn't work. Any ideas? The port is forwarded and matches what is in the remote.yaml file.
Edit
I got it working by modifying the host in the conf file on the client to read as host.docker.internal
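host.docker.internal works, but since both containers already share the hacker network, pointing the console at the server container's name is the more portable fix. A sketch (the container name gremlin-server is illustrative):

```shell
# give the server a name so the console can resolve it on the shared network
docker network create hacker
docker run -d --network hacker --name gremlin-server -p 8182:8182 tinkerpop/gremlin-server:3.4
docker run -it --network hacker tinkerpop/gremlin-console

# inside the console, edit conf/remote.yaml so it reads:
#   hosts: [gremlin-server]
# then connect as before:
#   :remote connect tinkerpop.server conf/remote.yaml
```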

App running in container cannot connect to couchbase on VM localhost

I have a simple webapp running in a Docker container which makes a DB connection to Couchbase.
Couchbase is currently running on the VM's localhost (not in another container).
I ran the following command:
docker run --net=host -p 8081:8081 {**image-name-one**} // This connects without issue
Now I need another instance of the same app on a different port. For that, I created a bridge network with gateway IP 192.168.0.1 and modified the connection string to use the network IP:
docker network create -d bridge --subnet 192.168.0.0/24 --gateway 192.168.0.1 test
Then I ran the second container with the ports below:
docker run --net=test -p 8083:8081 {**2nd-image-name**} // This will never connect to the database
Any insight would be greatly helpful.
I'm using Ubuntu 16.04.
I found a workaround: adding the subnet to my firewall rules to allow connections to any port.
Now my services can connect to Couchbase.
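On Ubuntu 16.04 that workaround might look like this with ufw (assuming ufw is the firewall in use; the subnet matches the bridge network created above):

```shell
# allow traffic from the Docker bridge subnet to the host on any port
sudo ufw allow from 192.168.0.0/24
sudo ufw reload
```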

Stuck exposing a port of Docker

I'm stuck on port mapping in Docker.
I want to map port 8090 on the outside of a container to port 80 on the inside of the container.
Here is the container running:
ea41c430105d tag-xx "/usr/local/openrest…" 4 minutes ago Up 4 minutes 8090/tcp, 0.0.0.0:8090->80/tcp web
Notice that it says that port 8090 is mapped to port 80.
Now inside another container I do
curl web
I get a 401 response. Which means that the container responds. So far so good.
But when I do curl web:8090 I get:
curl: (7) Failed to connect to web port 8090: Connection refused
Why is port mapping not working for me?
Thanks
P.S. I know that my container specifically responds to curl web with a 401 because when I stop the container with docker stop web and do curl web again, I get could not resolve host: web.
You cannot connect to a published port from inside another container because those are only available on the host. In your case:
From host:
curl localhost:8090 will connect to your container
curl localhost:80 won't connect to your container because the port isn't published
From another container in the same network
curl web will work
curl web:8090 won't work because the only port the web service exposes and listens on is 80.
Docker containers, unless specified otherwise, connect to the default bridge network, and the default bridge network does not support automatic DNS resolution between containers. It looks like you are most likely on the default bridge network. However, even on the default bridge network you can connect using the container's IP address, which you can find with the following command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container name>
So, curl <IP Address of web container>:8090 should work.
It is always better to create a user-defined bridge network and attach the containers to it. On a user-defined bridge network, connected containers have their ports exposed to each other but not to the outside world. A user-defined bridge network also supports automatic DNS resolution, so you can refer to a container by name instead of IP address. Try the following commands to create a user-defined bridge network and attach your containers to it:
docker network create --driver bridge my-net
docker network connect my-net web
docker network connect my-net <other container name>
Now, from the other container you should be able to run curl on the 'web' container.
You can create a network to connect the containers.
Or you can use --link (legacy):
docker run --name container1 -p 80:???? -d image (expose on port 80)
docker run --name container2 --link container1:lcontainer1 image
and inside container2 you can use:
curl lcontainer1
Hope it helps

cqlsh to Cassandra single node running in docker

I am learning Cassandra 2.1.1:
I have a Docker container running a Cassandra node. I am able to cqlsh into the Cassandra node from within the container itself; it says "127.0.0.1:4096". I am aware this is localhost and has something to do with the listen_address setting in cassandra.yaml.
When using the boot2docker IP address it does not work. I started the container with -p for ports 4096 and 9160, but with no luck. I also tried changing listen_address to the boot2docker IP address, but got the same error from cqlsh.
Info:
1. the cqlsh client and the cassandra node is running cassandra 2.1.1
2. I have started cassandra on the node by running ./cassandra
Any suggestions?
Thanks
Happened to me: I was able to cqlsh within the container, but unable to connect from outside the Docker host, until I realized that v2.1 DOES NOT use port 9160. From the cqlsh documentation:
In Cassandra 2.1, the cqlsh utility uses the native protocol and the DataStax python driver, and the default cqlsh listen port is 9042.
In Cassandra 2.0, the cqlsh utility uses the Thrift transport, and the default cqlsh listen port is 9160.
So you should get the cqlsh client to use the IP of the host containing the Docker container, at port 9042 and not 9160. Use docker ps, netstat -atun | grep LIST, and telnet to confirm the right port is in LISTEN status.
docker run -d -p 9042:9042 cassandra:2.2
docker run -d -p 9042:9042 poklet/cassandra
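A minimal sketch of connecting from outside the container, assuming cqlsh is installed on the machine running boot2docker:

```shell
# publish the native-protocol port and connect to it from the host
docker run -d --name cassandra -p 9042:9042 cassandra:2.2
cqlsh "$(boot2docker ip)" 9042
```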
See also: Cassandra cqlsh - connection refused
You should set the listen_address to the IP address assigned to the container. You can find this using 'boot2docker inspect' against the container.
Normally the container's IP address will be on the private network established by boot2docker, and so it will not be accessible outside the boot2docker VM unless you route traffic through the VM, something like this: https://gist.github.com/bhyde/be920f1a390db5f4148e
