Publishing a message to an external Kafka broker from a Docker container

In my IDE, I am able to run a Spring Boot application that produces messages (with KafkaProducer) to an external Kafka broker. But once I host the application in a Docker container, it can no longer publish to the broker.
Here is the error message:
o.s.k.support.LoggingProducerListener: Exception thrown when sending a message with key='null' and payload='....' to topic Category:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
org.springframework.kafka.core.KafkaProducerException:
Failed to send; nested exception is org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
The command I used to run Docker was docker run -p 9001:9001 -d image_name, where 9001 is my Spring Boot port. I am able to POST to that port; it's just that once my message is posted, it never reaches the external broker.
I think I have the general concept that Docker containers live in isolation, where you have to publish/map a port in order to access it (like my -p 9001:9001), but does it work the same way for connections going out from the container? If so, can someone please show me how to run the Docker container so that it can reach the external broker (let's say the broker URL is "192.168.1.1:9000")? I don't think I can modify anything on the broker right now, but my assumption is that if I can reach it from my IDE, why not from Docker? Thanks for all the help!
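For reference, one way to point the containerized app at the external broker explicitly is to override Spring Kafka's bootstrap-servers property at run time. This is a sketch, not from the original post; it assumes the app uses Spring Kafka's standard property, which Spring Boot's relaxed binding maps from the SPRING_KAFKA_BOOTSTRAP_SERVERS environment variable:

# same run command as above, plus an explicit broker address
docker run -p 9001:9001 -d -e SPRING_KAFKA_BOOTSTRAP_SERVERS=192.168.1.1:9000 image_name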

It was due to IP forwarding being disabled (ip_forward = 0) on the host; once it was turned on, I was able to make outgoing requests.
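A minimal sketch of how to check and enable IP forwarding on a Linux Docker host (the sysctl commands are standard; persisting the setting via /etc/sysctl.conf is an assumption about the distribution):

# check the current value (0 = disabled, 1 = enabled)
sysctl net.ipv4.ip_forward
# enable it for the running system
sudo sysctl -w net.ipv4.ip_forward=1
# persist it across reboots
echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf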

Related

From inside of a Docker container, how do I connect to the CloudWatch agent endpoint?

I have a CloudWatch agent installed on an EC2 instance, and a Docker container running on the same instance.
From the EC2 instance I can successfully send logs to the endpoint (0.0.0.0:25888) and on to CloudWatch. But when I get into the container using docker exec -it <container id> bash and try to publish the same logs from inside the container, it fails with the following error:
2022-07-21 00:11:18,686 ERROR (10.0.1.124,1385:MainThread) aws_embedded_metrics.sinks.tcp_client: Failed to connect to the socket. [Errno 111] Connection refused
2022-07-21 00:11:18,686 INFO (10.0.1.124,1385:MainThread) aws_embedded_metrics.sinks.agent_sink: Parsed agent endpoint (tcp) 0.0.0.0:25888
Wondering if anyone knows the root cause here or any debugging clue? Thanks in advance!
I ran into this as well. My solution (workaround?) was to:
Make sure the CloudWatch agent is listening on udp://0.0.0.0:25888 and not 127.0.0.1 (the default). The CWAgent docs I have seen don't give any examples of how to achieve this.
Once inside the container, use the Docker host's bridge IP to send messages. For me this was export AWS_EMF_AGENT_ENDPOINT=udp://172.17.0.1:25888, as I was using aws-embedded-metrics-python. YMMV depending on the underlying library that you use.
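If you don't want to hard-code 172.17.0.1, the gateway IP of the default bridge network can be looked up on the host (a sketch; if the container runs on a user-defined network, inspect that network instead):

docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'
# then, inside the container (for aws-embedded-metrics-python):
export AWS_EMF_AGENT_ENDPOINT=udp://172.17.0.1:25888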

Docker context defined with https results in an error reaching out to port 80

I have set up a Docker registry using docker-compose, largely following the recipe published by Docker here: https://docs.docker.com/registry/recipes/nginx/
Nginx and my registry start, and I am able to issue docker login from a different machine:
docker login https://myhost.mydomain.net
Once logged in I can push and pull images as expected.
Now I need a way to manage content in the remote registry. To that end, I defined a context:
docker context create myregistry-prod --docker "host=https://myhost.mydomain.net"
The command results in this message, which appears to arise during basic authentication:
error during connect: Post "http://myhost.mydomain.net/v1.24/auth": dial tcp 192.168.176.71:80: connectex: No connection could be made because the target machine actively refused it.
I assumed that a context using https would operate inside a TLS connection, so I'm surprised to see the client attempting to open port 80. By design, I have no program listening on port 80, hence the connection is refused.
Note that I am able to fetch the catalog using this URL in a browser: https://myhost.mydomain.net/v2/_catalog . The browser prompts for basic credentials, I supply them, and I get back the expected result. It appears that the Docker API is working as expected, passing through the Nginx container and being serviced by the registry container.
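For reference, the equivalent check from the command line, using curl with basic auth (the username is a placeholder):

curl -u <user> https://myhost.mydomain.net/v2/_catalog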
So, the question is, how do I go about diagnosing the issue? Did I make an error defining the context?
I'm quite sure I have a misunderstanding somewhere. This is my first attempt at Docker Compose and my first attempt at using Nginx in front of Docker Registry. I will redact and post nginx.conf and docker-compose.yml if you need them, but I am guessing it's a client-side problem. Any help you can offer will be greatly appreciated.
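As a first diagnostic step (a suggestion, not from the original post), you can list the contexts and inspect the endpoint the CLI has actually stored for myregistry-prod, to confirm which host and scheme the client will dial:

docker context ls
docker context inspect myregistry-prod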

docker: Error response from daemon: Ports are not available: listen tcp 0.0.0.0:50075

When I try to install the Hortonworks sandbox-hdp version 2.6.5 on Docker on my system by running the docker-deploy-hdp256.sh script with sh, I receive the following error after all the image pulling and checksum verification is done.
error:
docker: Error response from daemon: Ports are not available: listen tcp 0.0.0.0:50075: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
But I can see the container named sandbox-hdp, so I opened it and tried to run Hive; however, it gives an error related to connectivity.
I need to work with Hive, and for that I need to get this fixed.
In cases like this it is sometimes hard to free the port, because some ports are in use by the computer itself, or a whole range of ports is reserved. Try removing this port from your sandbox-proxy script, because you don't need to expose all ports. The Hortonworks/Ambari sandbox exposes a lot of ports through Docker, and you almost certainly won't use all of them.
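The wording "forbidden by its access permissions" typically comes from Windows, where Hyper-V/WinNAT can reserve whole port ranges. A sketch of how to check whether 50075 falls in an excluded range or is already in use (assuming a Windows host; run in an elevated prompt):

netsh interface ipv4 show excludedportrange protocol=tcp
netstat -ano | findstr :50075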

SSH tunnel within a Docker container

I have a client running in a docker container that subscribes to a MQTT broker and then writes the data into a database.
To connect to the MQTT broker I have to set up port forwarding.
While developing the client on my local machine the following worked fine:
ssh -L <local-port, e.g. 9000>:localhost:<mqtt-port, e.g. 1883> <user>@<ip-of-server-running-broker>
The client is then configured to subscribe to the MQTT broker via localhost:9000.
This all works fine on my local machine.
Within the container it won't work, unless I run the container with --net=host, but I'd rather not do that due to security concerns.
I tried the following:
Create docker network "testNetwork"
Run a ssh_tunnel container within "testNetwork" and implement port forwarding inside this container.
Run the database_client container within "testNetwork" and subscribe to the MQTT broker via the bridge network, like ("ssh_tunnel.testNetwork:")
(I want 2 separate containers for this because the IP address will have to change quite often and I don't want to re-build the client container all the time.)
But all of my attempts have failed so far. The forwarding itself seems to work (from the ssh container I can access a shell on the server), but I haven't found a way to actually subscribe to the MQTT broker from within the client container.
Maybe this is actually quite simple and I just don't see how it works, but I've been stuck on this problem for hours by now...
Any help or hints are appreciated!
The solution was actually quite simple and works without using --net=host.
I needed to bind to 0.0.0.0 and use the Gateway Forwarding Option to allow remote hosts (the database client) to connect to the forwarded ports.
ssh -g -L *:<hostport>:localhost:<mqtt-port/remote-port> <user>@<remote-ip>
Other containers within the same Docker bridge network can then simply use the connection string <name-of-ssh-container>:<hostport>.
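A minimal sketch of the two-container setup described above (the image names, the 9000/1883 ports, and the MQTT_HOST/MQTT_PORT variables are assumptions, not from the original post):

# user-defined bridge network shared by both containers
docker network create testNetwork
# tunnel container: -N = no remote command, -g = allow other hosts (here: other containers) to use the forwarded port
docker run -d --name ssh_tunnel --network testNetwork my-ssh-image ssh -N -g -L 9000:localhost:1883 <user>@<remote-ip>
# client container reaches the tunnel by container name on the shared network
docker run -d --name database_client --network testNetwork -e MQTT_HOST=ssh_tunnel -e MQTT_PORT=9000 my-client-image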

Connecting to Docker container connection refused - but container is running

I am running 2 Spring Boot applications: a client and a rest-api. The client communicates with the rest-api, which communicates with a MongoDB database. All 3 tiers run inside Docker containers.
I launch the containers normally, specifying the exposed ports in the Dockerfile and mapping them to a port on the host machine, such as -p 7070:7070, where 7070 is a port exposed in the Dockerfile.
When I run the applications through the java -jar [application_name.war] command, they work fine and can all communicate.
However, when I run the applications in Docker containers I get a connection refused error: for example, when the client tries to connect to the rest-api at http://localhost:7070, the connection is refused.
But the command docker ps shows that the containers are all running and listening on the exposed and mapped ports.
I have no clue why the containers aren't recognizing that the other containers are running and listening on their ports.
Does this have anything to do with iptables?
Any help is appreciated.
Thanks
EDIT 1: When run inside containers on my machine, the applications work fine and don't throw any connection refused errors. The error only happens on that particular other machine.
I used container linking to solve this problem. Make sure you add --link <name>:<alias> at run time to the container you want linked. <name> is the name of the container you want to link to, and <alias> becomes the host/domain used in an entry in Spring's application.properties file.
Example: if the alias supplied at run time is 'mongodb':
--link myContainerName:mongodb
then in application.properties:
spring.data.mongodb.host=mongodb
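A sketch of that linking approach applied to the three-tier setup in the question (container and image names are assumptions; note that --link is a legacy feature, and user-defined bridge networks are the current way to get the same name resolution):

docker run -d --name mongodb mongo
docker run -d --name rest-api --link mongodb:mongodb -p 7070:7070 rest-api-image
docker run -d --name client --link rest-api:rest-api -p 8080:8080 client-image

The client then calls the API at http://rest-api:7070 instead of http://localhost:7070.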
