I just started yesterday and am following tutorials for using GCP.
I have a Cassandra Docker container running on a Google Compute Engine instance. I would like to connect to the Cassandra container from my local machine and load data into it.
I tried using the IP address of the compute instance and the Cassandra port, but the Java program that loads data into Cassandra throws a NoHostAvailableException.
I appreciate your time.
From my understanding, unless you publish the Docker container's port and expose it publicly, you cannot reach the container from outside the instance. This is where the concept of services comes in in cloud architectures: they publicly expose one or more containers. Detailed instructions are given in the "Configuring endpoints" section and the sections that follow it in this article: https://cloud.google.com/endpoints/docs/openapi/get-started-compute-engine-docker .
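For a concrete picture, here is a minimal sketch assuming the stock cassandra image and the default CQL port 9042 (the container and firewall-rule names are hypothetical): publish the port when starting the container, open it in the GCP firewall, then point the Java driver at the instance's external IP.

# on the Compute Engine instance: publish the CQL port to the VM
$ docker run -d --name cassandra -p 9042:9042 cassandra:3.11

# from your workstation: open port 9042 in the GCP firewall (hypothetical rule name)
$ gcloud compute firewall-rules create allow-cassandra --allow tcp:9042 --source-ranges <your-ip>/32

Restricting --source-ranges to your own IP is advisable, since an unauthenticated Cassandra node should not be open to the whole internet.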
Related
I have been able to set up a containerised RabbitMQ server, reach it with basic .NET Core clients, and verify that message send and receive work using the management portal at http://localhost:15672/.
But I am having real frustration establishing a connection when I also containerise my Sender/Receiver .NET Core clients. I have set up an explicit "shipnetwork", so all containers in the following docker-compose deployment should be able to see each other.
This is the error I get in the sender when it attempts the connection:
My SendRabbit .NET Core app is as follows. This code was working on my local Windows 10 development machine, with a host of 'localhost', against the RabbitMQ server running as a container. But when I change it to a [Linux] Docker project and set the host to "rabbitmq", to correspond to the service name in the docker-compose file, I just get endpoint connection exceptions within my Sender container.
I have also tried the same RabbitMQ server and Sender image with the same docker-compose file on a Google Cloud Linux virtual machine, and I get the same errors. So I do not think it is a problem with the Windows 10 Docker hosting VM environment.
I thought Docker was going to make development and deployment of microservices easier, but setting up a basic RabbitMQ connection is proving to be a real pain.
I have considered that maybe the RabbitMQ server is not yet up and running, so perhaps it is ambitious to put it in the same docker-compose file. But I have checked by running my SendRabbit container
$docker run --network shipnetwork sendrabbit
some minutes later, but I still get the same connection error.
It turned out to be a Docker networks issue!
When I checked the actual Docker networks, I had:
bridge
host
shipnetwork
rabbitship_shipnetwork
docker-compose was actually creating a 'new' network, rabbitship_shipnetwork, every time it was spun up, and placing the RabbitMQ server on that network. The network is named by prepending the project directory name to the network name in the compose YAML. So I was using the wrong network in my senders, and should have been using
$docker run --network rabbitship_shipnetwork sendrabbit
This works fine and delivers messages to the RabbitMQ server.
So I don't feel that docker-compose is actually very helpful in creating networks, since it is sensitive to the directory name it is run in! It's unlikely that I can build all my apps' Dockerfiles and deploy them from a single directory, especially when RabbitMQ has to be started separately, before the senders and receivers can use it.
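For what it's worth, a minimal sketch of two ways around the directory-name prefix (assuming a compose file format of 3.5 or newer for the name: key):

# docker-compose.yml fragment: pin the network name so the project directory is not prepended
networks:
  shipnetwork:
    name: shipnetwork

# or keep default naming but fix the project name regardless of the directory
$ docker-compose -p rabbitship up -d

With either of these, other containers can join the network with a predictable --network value.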
Is there any way to do this:
run one service (container) with the main application, a server (a Flask application);
the server allows running other services, which are also Flask applications;
but I want to run each new service in a separate container?
For example, the server has an endpoint /services/{id}/run, where each id identifies a service. The Docker image is the same for all services, and each service runs on a separate port.
I would like something like this:
request to the server, <host>/services/<id>/run -> the application on the server runs some magic command / sends a message somewhere -> the service with that id starts in a new container.
I know that, at least locally, I can use Docker-in-Docker or simply mount the Docker socket into the container and work with Docker from inside it. But I would like to find a way that works across multiple machines (each service can run on another machine).
For Kubernetes: I know how to create and run pods and deployments, but I can't find how to start a new container on command from another container. Can I somehow communicate with k8s from a container to start a new container?
Generally:
can I start a new container from another container without Docker-in-Docker and without mounting the Docker socket;
can I do it with or without Kubernetes?
Thanks in advance.
I've compiled all of the links that were in the comments under the question. I would advise taking a look at them:
Docker:
Stack Overflow: Control Docker from another container.
The link explaining the security considerations is not working, but I've managed to retrieve it from the Web Archive: Don't expose the Docker socket (not even to a container)
Exposing dockerd API
Docker Engine Security
Kubernetes:
Access Clusters Using the Kubernetes API
Kubeflow in the context of machine learning deployments
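To illustrate the Kubernetes route from the links above, here is a minimal sketch (not a production recipe) of starting a pod from inside another container by calling the API server with the mounted service account credentials; it assumes the service account has been granted RBAC permission to create pods, and pod.json is a hypothetical pod manifest.

# inside the pod: the token and CA certificate are mounted automatically by Kubernetes
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# POST a pod manifest to the API server (pod.json is hypothetical)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -X POST -d @pod.json \
  https://kubernetes.default.svc/api/v1/namespaces/default/pods

The same call can be made with an official Kubernetes client library instead of curl.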
I'm building an application with a microservices architecture.
So basically, my app looks like this:
API GATEWAY (port 3000) => USERS-SERVICE (port 9090), AUTH-SERVICE (port 8080), SEND-SMS-SERVICE (port 7070).
Everything has worked fine until now.
Now I am trying to introduce Docker into my project. I build an image for each service and run a container instance for each on my local machine.
Now I want to develop a new service, Customer-Service, and this service runs on http://localhost:3030.
Question:
1) How can I request http://localhost:3030 from the API gateway, if in development I run the api-gateway from a container?
You must understand the networking concept: when you start independent Docker containers and don't define a network, they will be unreachable to each other.
There is another thing: you CAN'T reach a microservice hosted in one Docker container from microservices hosted in other containers using localhost; localhost is 127.0.0.1, a call to the local machine. Conceptually, Docker is like "different machines running on the same machine", similar to virtual machines, except that Docker shares the host machine's kernel.
You can reach another Docker container in 2 ways:
Configure it on the host network, which I do not recommend.
Create a network, add every container instance to this network, and call other microservices using the container name, e.g. http://my-service-1:3400/api/v1/post
I recommend you use docker-compose.
This is one of my repositories; I created it with the purpose of sharing a Node app using JWT, and the project uses Docker and docker-compose:
https://github.com/camiloperezv/jwt-template
As you can see, I define a networks attribute in the docker-compose.yml and use this network in all of my services.
In the services section you put all your microservices, and in the code you make HTTP requests using the container name instead of localhost or an IP address.
In my services I use build: ., which is for development purposes; in production you should use a pre-built Docker image instead of building it on the production server.
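To illustrate the idea (a sketch only: the ports come from the question, while the network and image names are hypothetical), a docker-compose.yml could look roughly like this, after which the gateway calls http://users-service:9090 instead of localhost:9090:

version: "3"
services:
  api-gateway:
    build: .
    ports:
      - "3000:3000"       # only the gateway is published to the host
    networks:
      - backend
  users-service:
    image: users-service  # hypothetical pre-built image
    networks:
      - backend
networks:
  backend: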
Feel free to use my GitHub code.
Regards
As far as I understand from the question, the new service Customer-Service runs on http://localhost:3030 on the host machine.
If so, the api-gateway Docker container should be started on the host network:
docker run --network host -d <api-gateway_image_name>
After this, Customer-Service will be reachable at localhost:3030 from the api-gateway container.
I have built a Docker image for Kafka (wurstmeister/kafka-docker). Inside the Docker container I am able to create topics, produce messages, and consume messages using the built-in shell scripts. Now I am using the code hosted at https://github.com/mapr-demos/kafka-sample-programs to connect to the Kafka broker from my host machine. After building and running the program, nothing happens and the program gets stuck. I guess producer.send is not able to connect to the Kafka broker. Any help is appreciated.
You can see that both the consumer.properties and the producer.properties files in that project specify bootstrap.servers=localhost:9092.
Since you cannot connect to the dockerized Kafka service using localhost:9092, you might try finding the IP address of the Docker container, for example with docker inspect kafka | grep IPA (assuming that the name of your container is kafka), and then replace localhost with that IP address in those two properties files.
I am using the ches/kafka Docker image. Have a look at the explanation of KAFKA_ADVERTISED_HOST_NAME.
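As a rough sketch of why that setting matters, assuming the wurstmeister/kafka image from the question, a ZooKeeper container reachable as zookeeper, and a host-reachable IP of 192.168.1.10 (all hypothetical values): advertise an address your host can reach, so the metadata the broker returns to clients does not point back at localhost.

$ docker run -d -p 9092:9092 \
    -e KAFKA_ADVERTISED_HOST_NAME=192.168.1.10 \
    -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
    wurstmeister/kafka

Then set bootstrap.servers=192.168.1.10:9092 in the producer and consumer properties files.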
I am having a problem with all my containers running in the IBM Containers service. The application (the container itself, I mean) is started while the service still has not configured the network for that container. After some seconds (maybe 20 or 30) the container has full network connectivity. This is generating a lot of problems, as it takes about that long for both the internal and external IP interfaces to be correctly configured by the system.
Currently I am inserting a sleep in all my container applications so they wait for that time before starting to work, but I wonder if there is a way to instruct the host not to start the container until the network is ready.
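For reference, the workaround looks roughly like this (a sketch; the probe target and start command are hypothetical): an entrypoint script that polls for outbound connectivity instead of sleeping for a fixed time.

#!/bin/sh
# wait until the network is actually up before starting the real application
until ping -c 1 -W 1 8.8.8.8 > /dev/null 2>&1; do
  echo "network not ready yet, retrying..."
  sleep 2
done
exec /app/start   # hypothetical start command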
Thanks
Note: This question is related to the IBM Containers service, not generic Docker. That is why I don't specify a Docker version, as it is a CaaS service. Anyway, to be precise, we run the container service using the Cloud Foundry extensions, not the docker command:
cf ic run --name CONTAINER_NAME -m 512 registry.ng.bluemix.net/MY_ZONE/MY_DOCKER_IMAGE