contact points for a local cassandra instance - docker

I have created 2 Cassandra instances by deploying them on Docker, one on port 9042 and the other on 9043.
I have 2 applications: one is to be connected to 9042, the other to 9043.
The 1st application is connected to 9042 and is running successfully.
The properties I've given for the db are:
contactpoints = localhost,
port = 9042
The 2nd application, which is to be brought up against the second db instance, i.e. 9043, is throwing this error:
com.datastax.driver.core.Cluster - You listed localhost/0:0:0:0:0:0:0:1:9042 in your contact points, but it wasn't found in the control host's system.peers at startup
The properties I am giving for the db are:
contactpoints = localhost,
port = 9043
How can I connect to the Cassandra instance on 9043 while the first application is running?

You're specifying localhost, but inside Docker, localhost refers to the running container itself, not to the host machine. Since you have the ports bound to the host network, you need to specify the IP address of your machine instead of localhost.
P.S. Also, why are you packaging the application together with Cassandra? That's not how Docker works - every process should run in a separate container.
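As a sketch of what that looks like, assuming the two instances were started with port mappings like the ones below (the container names and the host IP 192.168.1.50 are hypothetical placeholders for your own values):

```shell
# Two Cassandra containers, each mapping its CQL port (9042) to a
# different port on the host:
docker run -d --name cassandra-1 -p 9042:9042 cassandra
docker run -d --name cassandra-2 -p 9043:9042 cassandra

# In the 2nd application's properties, use the host machine's IP
# instead of localhost, e.g.:
#   contactpoints = 192.168.1.50
#   port          = 9043
```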

Every node in Cassandra should bind to a separate IP address, even on physical servers or on a Docker host where 2 instances/nodes are running.

Related

How do I access an API on my host machine from a Docker container?

I have a docker-compose that spins up 2 Docker containers. One is an application (on port:8090) and the other is a database (on port:5432). I also have a Windows application that has an API accessible through localhost:8002. I want to use my container that has the application to read data from the localhost:8002 API, then move that data to my database in my other Docker container.
For docker-compose, I mapped port 5432:5432 and port 8090:8090 for the database and application containers, respectively. I have tested my application non-dockerized where I call the Windows API and then write it to port:5432 and it works properly.
However, after Dockerizing my application, localhost:8002 is now no longer localhost for my Dockerized application and is now unreachable. I am wondering how I can reach my host localhost:8002, hit that API, then move that data to my other container.
After 16 hours of blood, sweat, and tears (mostly tears), here's the answer for whoever may be using this in the future.
TL;DR: expose your local nat network to your Docker containers in docker-compose by:
networks:
  default:
    external: true
    name: nat
Note, on Windows, running docker network ls should give something like:
NETWORK ID     NAME             DRIVER   SCOPE
6b30a7dcf6e0   Default Switch   ics      local
26305680ad62   WSL              ics      local
b52f5e497eba   nat              nat      local
4a4fd550398f   none             null     local
The docker-compose is simply connecting the docker network to your host network (required for auto-restart of containers after computer restarts as well, but off-topic).
Afterwards, I had to run ipconfig to find the IPv4 address that corresponds to my nat network. There can be many IPv4 addresses here; find the one listed under Ethernet adapter vEthernet (nat).
Then, use the IPv4 Address corresponding to the nat network for any applications running on your local machine.
For example, my application ran on localhost:8002. Here, I changed my host to http://172.31.160.1:8002 and it worked.
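Before changing your application config, you can sanity-check that the host API is reachable from inside Docker. A minimal sketch (the IP 172.31.160.1 comes from the ipconfig step above and will differ per machine, and the /health path is a hypothetical endpoint):

```shell
# One-off container attached to the nat network, curling the host API:
docker run --rm --network nat curlimages/curl \
  curl -s http://172.31.160.1:8002/health
```

If this prints a response, the same URL should work from your application container.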

Can we use a DNS name for a service running on a Docker Swarm on my local system?

Say I have a Swarm of 3 nodes on my local system, and I create a service, say Drupal, with a replication of 3 in this swarm. Now say each of the nodes has one container running Drupal. When I have to access this in my browser, I will have to use the IP address of one of the nodes, <IP Address>:8080, to access Drupal.
Is there a way I can set a DNS name for this service and access it using DNS name instead of having to use IP Address and port number?
You need to configure the DNS server that you use on the host making the query. So if your laptop queries the public DNS, you need to create a public DNS entry that would resolve from the internet (on a domain you own). This should resolve to the docker host IPs running the containers, or an LB in front of those hosts. And then you publish the port on the host to the container you want to access.
You should not be trying to talk directly to the container IP, these are not routeable from outside of the docker host. And the docker DNS used for service discovery is for container to container communication. This is separate from communication outside of docker that goes through a published port.
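Putting that together, a minimal sketch (the service name, image, domain, and IP addresses below are hypothetical examples, not values from the question):

```shell
# Publish port 8080 on every swarm node via the ingress routing mesh,
# so any node's IP answers on 8080:
docker service create --name drupal --replicas 3 -p 8080:80 drupal

# Then, in the public DNS zone for a domain you own, point a record at
# the node IPs (or at a load balancer in front of them), e.g.:
#   drupal.example.com.  A  203.0.113.10
#   drupal.example.com.  A  203.0.113.11
#   drupal.example.com.  A  203.0.113.12
# and browse to http://drupal.example.com:8080
```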

How to connect to localhost inside the docker container?

I have MySQL and an application running on Docker. I want the application to connect to MySQL via localhost inside Docker.
Every container in Docker is a different host with its own IP and hostname, that's why you can't connect to your DB from your app using 127.0.0.1, they are not running on the same host.
You can see the IP assigned to a container with docker inspect <container-id>, but more easily you can refer to a service running in a container by its hostname, which by default is the name of the container (db in your case). You can also customize the hostname using hostname as you did.
Set db (or hybris_dev depending on how you prefer to configure your container) as the hostname to establish the connection to your DB from your app and it should work.
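A sketch of that setup with the plain docker CLI (the network name, credentials, app image, and connection string are assumptions for illustration; only the container name db comes from the question):

```shell
# Put both containers on the same user-defined network so the app can
# resolve the database container by name:
docker network create appnet
docker run -d --name db --network appnet \
  -e MYSQL_ROOT_PASSWORD=secret mysql:8
docker run -d --name app --network appnet my-app-image

# Inside the app container, connect with host "db", not 127.0.0.1, e.g.:
#   jdbc:mysql://db:3306/mydb
```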

Adding NS record to docker net's DNS server

When running a Docker container inside a Docker network (i.e. docker network create $DOCKERNETNAME and then using --net=$DOCKERNETNAME when running the container), the network provides an embedded DNS server at 127.0.0.11.
I want to create an NS record inside this DNS server (the one running at 127.0.0.11), so I can have a separate DNS server inside the Docker network for some fake domain. How can I do that?
Please note that all this is being done for educational purposes and has no other goal.

Can we have two or more container running on docker at the same time

I have not done any practical work with Docker and containers, but as per my knowledge and the documents available online, I did not find details about running two or more containers at the same time.
Docker allows a container to map its port to a port on the host machine.
Now, the question is: can we run multiple containers at the same time on Docker? If yes, and two containers are mapped to the same port number, how is the port handled in this case?
Also, out of curiosity, can two containers on Docker communicate with each other?
Yes you can run multiple containers on a single host; docker is designed for exactly that.
You cannot map two containers of different images to the same port number; you get an error response if you try. However, if your containers run the same image (e.g. 2 instances of a webapp) you could run them as a service, and have them exposed on the same port. Docker will load-balance the requests. You can read more about services here or follow the Get Started (Part 3, services) here.
Yes, the containers on a single host can communicate with each other, by container name. For example if you have one container running MongoDB called mongo, and another one running Node.js called webserver, the webserver container can connect to the database by using the name mongo e.g. db.Connect("mongodb://mongo:27017/testdb").
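A sketch of the mongo/webserver example above. Note that name-based resolution works on a user-defined bridge network; the image name my-node-image is a placeholder:

```shell
docker network create mynet
docker run -d --name mongo --network mynet mongo
docker run -d --name webserver --network mynet my-node-image

# Quick check that name resolution works from the webserver container:
docker exec webserver getent hosts mongo
# The Node.js code can then connect with:
#   db.Connect("mongodb://mongo:27017/testdb")
```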
We can run more than one container at a time on a host, but yes, we will hit the limitation of binding the same host port twice; to resolve this we need to bind a different host port for each container. For example, if you are running mongo-db, its default port is 27017, so we can run three instances as -p 27017:27017 for container D1, -p 27018:27017 for container D2, and -p 5000:27017 for container D3. Like this you can map different host ports to mongo-db's port 27017. Now if your question is how to manage these ports from the host, then I would recommend you use nginx for managing the ports on the host machine.
Coming to your next question: all containers are connected to the default docker0 bridge network, so we can connect to any of the containers attached to the default 'docker0' bridge. If I am right, they will come up with IP addresses in the 172.x.x.x range. Get inside a container and run ip addr to see the IP address assigned to it, and you can test the connection by running ping.
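The container IPs mentioned above can also be inspected from the host without entering the container (the container name D1 is a placeholder):

```shell
# Show the IP Docker assigned to one container on its networks:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' D1

# Or dump the whole default bridge network, including every attached
# container and its address:
docker network inspect bridge
```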
Yes, two containers can run at the same time, and they can also communicate with each other; you can define your own network and they can communicate over it. If two containers have their own private ports, those are internal ports, so one container's port does not collide with another container's port. If you want to expose a port to the host, then you have to publish the port(s).
