I have a MacBook with locally running ZooKeeper and Kafka instances on ports 2181 and 9092 respectively. I am able to bring up Kowl locally using the following command from the Git repo:
kowl % docker run -p 8081:8081 -e KAFKA_BROKERS=host.docker.internal:19092 quay.io/cloudhut/kowl:master
I can see that the app is running successfully in Docker; the logs are attached below:
{"level":"info","msg":"config filepath is not set, proceeding with
options set from env variables and flags"}
{"level":"info","ts":"2022-03-23T17:48:25.475Z","msg":"started
Kowl","version":"master","git_sha":"5ff6e3c4dea98737b661186519eb310f2a898d06","built":"2022-03-09T15:34:53Z"}
{"level":"info","ts":"2022-03-23T17:48:25.478Z","msg":"connecting to
Kafka seed brokers, trying to fetch cluster metadata"}
{"level":"info","ts":"2022-03-23T17:48:25.489Z","msg":"successfully
connected to kafka
cluster","advertised_broker_count":1,"topic_count":2,"controller_id":0,"kafka_version":"v3.0"}
{"level":"info","ts":"2022-03-23T17:48:25.489Z","msg":"creating Kafka
connect HTTP clients and testing connectivity to all clusters"}
{"level":"info","ts":"2022-03-23T17:48:25.489Z","msg":"tested Kafka
connect cluster
connectivity","successful_clusters":0,"failed_clusters":0}
{"level":"info","ts":"2022-03-23T17:48:25.490Z","msg":"successfully
create Kafka connect service"}
{"level":"info","ts":"2022-03-23T17:48:25.549Z","msg":"Server
listening on address","address":"[::]:8080","port":8080}
I am unable to access the application from the browser at https://localhost:8081. I tried accessing it via the container IP (from docker inspect) as well as the host.docker.internal IP, but to no avail. I recently moved to a Mac; is this a known issue? Any solutions would be greatly appreciated.
The following command worked; I had the ports mixed up. As the logs show, the server listens on port 8080 inside the container, so the host port has to map to 8080:
docker run -p 8081:8080 -e KAFKA_BROKERS=host.docker.internal:19092 quay.io/cloudhut/kowl:master
Now I am able to access the UI at http://localhost:8081/.
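In general, when a published port doesn't respond, it's worth comparing the port the app actually listens on inside the container with the mapping you published. A quick check (the container name kowl is an assumption; add --name kowl to the run command):
docker logs kowl | grep -i listening   # Kowl logs the in-container listening port, 8080 here
docker port kowl                       # prints the published mapping, e.g. 8080/tcp -> 0.0.0.0:8081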
I've got the following two containers:
mysql - docker run -d -e MYSQL_ROOT_PASSWORD=secretpassword -p 3306:3306 --restart=unless-stopped -v /var/lib/mysql:/var/lib/mysql --name mysql mysql:8.0.29
and springapp, with a Spring Boot app that tries to connect to it:
spring.datasource.url=jdbc:mysql://<HOST_IP_ADDRESS>:3306/databaseschema
This fails with the error message:
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
I am able to connect to MySQL using MySQL Workbench from my PC without any issues. I was able to fix this connection issue in two ways:
adding both containers to the same network and using mysql container name
adding spring app to the host network
My question is: why is the connection not possible? I thought that if I can connect to the MySQL instance from the "external" world, then it should also be possible from the container. Does Docker somehow distinguish a port managed by a container, restricting access to it from other containers?
By using -p 3306:3306 with your MySQL container you've published port 3306 to your host machine, which in turn effectively exposes it to other machines on the host's network. It doesn't expose it to other containers running on the same machine, because containers are meant to be isolated from one another.
Of course, you can effectively disable this isolation by running your Spring app container with --network=host.
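A minimal sketch of that approach (the image name springapp-image is an assumption; note that --network=host behaves this way only on a Linux host):
docker run -d --name springapp --network=host springapp-image   # app shares the host's network stack
The app can then keep spring.datasource.url=jdbc:mysql://localhost:3306/databaseschema, since localhost now refers to the host itself.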
You can try using the name of the container, for example:
spring.datasource.url=jdbc:mysql://mysql:3306/databaseschema
If that doesn't work, try putting both containers on the same network, as sketched below.
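A minimal sketch of the shared-network approach (the image name springapp-image is an assumption):
docker network create app-net             # user-defined bridge network
docker network connect app-net mysql      # attach the already-running mysql container
docker run -d --name springapp --network app-net springapp-image
On a user-defined network, Docker's embedded DNS resolves container names, so jdbc:mysql://mysql:3306/databaseschema works from inside springapp.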
I have ZooKeeper running on port 2181 (the default) and a Kafka server listening on port 9092 on my local machine.
When I run the Kafka CLI locally, or consumer/producer apps locally, I have no problem connecting.
I then try to bundle a Kafka consumer into a Docker container, and run that Docker container locally, e.g.:
docker run -p 9092:9092 --rm <DOCKER_IMAGE>
This gives the error:
(Error starting userland proxy: Bind for 0.0.0.0:9092 failed: port is already allocated.)
This makes sense, since the Kafka server is bound to 9092, as shown by nmap -p 9092 localhost:
PORT STATE SERVICE
9092/tcp open XmlIpcRegSvc
I'd have no problem mapping the Docker container to a different port via -p XXX:9092, but how do I get the local Kafka server to listen on that new port without binding to it?
So after some digging I found a few options. (Note: I'm on a Mac, so #2 may not be applicable to everyone.)
Include --network=host in the docker run command (as seen here).
Don't change the docker run command at all, and instead connect to the broker at host.docker.internal:9092 inside the container's consumer/producer code, as seen here.
I wasn't able to get #1 to work for me (on Docker Desktop for Mac, --network=host attaches to the Linux VM that runs the containers rather than to macOS itself, so it often doesn't behave as expected). However, #2 worked perfectly and just required a config change inside the container.
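A minimal sketch of option #2, using the console consumer as a stand-in for the containerized app's client config (the topic name test is a placeholder); note that host.docker.internal resolves only under Docker Desktop for Mac/Windows:
docker run --rm <DOCKER_IMAGE>   # no -p needed; the consumer only makes outbound connections
# inside the container, point the client at the host's broker, e.g.:
kafka-console-consumer.sh --bootstrap-server host.docker.internal:9092 --topic test --from-beginning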
I am trying to run a small test server with MS SQL Server on a Mac, in a Linux Docker container. Maybe I have the terminology wrong, so please correct me if necessary:
host - the macOS desktop with docker installed (ip 10.0.1.73)
container - the Linux instance running in the docker container with SQL Server running in it
remote desktop - another computer on the local area network trying to connect to SQL Server
I followed the MS installation instructions and everything seems to be running fine, except that I can't connect to SQL Server from the remote desktop:
I can connect to the docker host (10.0.1.73) and can ping the IP address.
I can connect to SQL Server from the docker host and see the databases, etc.
I used the following command to create the docker container
sudo docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=<XXXXXX>" -p 1433:1433 --name sqlserver1 -d microsoft/mssql-server-linux:2017-latest
I thought that -p 1433:1433 would map the Linux port to the macOS host port and allow the remote computer to access the Docker container by connecting to that port on the macOS host from the local area network.
This is not working, and I assume it may have to do with the network routing on the macOS host.
Most solutions I have seen seem to indicate that one should use the VirtualBox UI to modify the network settings, but I don't have that installed.
The others seem to require pages and pages of command-line instructions.
Is there an easy solution somewhere I have missed?
EDIT:
Some more research turned up this explanation of how, by default, Docker networking is set up for single-host networking. It's a good explanation for anyone else struggling with the Docker concepts.
It is also worth reading up on the differences between Docker containers and virtual machines:
https://youtu.be/Js_140tDlVI
I'm still trying to find a good explanation of multi-host networking.
Try disabling the firewall on the host you want to connect to.
Port 1433 will be forwarded to the Docker container, but your host (the Mac) must have port 1433 open for the remote machine to be able to connect.
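A quick way to test this from the remote desktop before changing anything (IP taken from the question):
nc -vz 10.0.1.73 1433   # succeeds only if the Mac accepts connections on 1433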
Using NAT:
Assign the target address to your host interface:
sudo ifconfig en1 alias 10.0.1.74/21 up
Create the Docker container and map the port to the second IP address assigned to the host interface:
sudo docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=<XXXXXXXXX>" -p 10.0.1.74:1433:1433 --name sqlserver1 -d microsoft/mssql-server-linux:2017-latest
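If sqlcmd is installed on the remote desktop, a connection test against the aliased address might look like this (password placeholder as in the question):
sqlcmd -S 10.0.1.74,1433 -U sa -P '<XXXXXXXXX>' -Q 'SELECT @@VERSION'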
In short: can I run Elasticsearch and a Dropwizard app in separate Docker containers and allow them to see each other?
I am running Elasticsearch 6.2.2 from Docker (on a Mac), using the command:
docker run -p 9200:9200 -p 9300:9300 -e "network.host=0.0.0.0" \
-e "http.port=9200" -e "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch:6.2.2
I can access Elasticsearch (POST & GET) fine using Postman directly on the Mac, e.g.:
localhost:9200/testindex/_search
However, when running a Dropwizard application from a different Docker image which accesses the dockerized Elasticsearch instance, I get connection refused using the same host and port (localhost:9200).
I have no problems at all when running the Dropwizard app directly from an IDE; it only fails when it's running from a Docker image and accessing ES in a different container.
docker run -p 8080:8080 -p 8081:8081 testapp
Has anyone else had similar issues or solved this in the past?
I'm assuming it's network related, and that connecting to localhost from one Docker container will not map to the other container.
The issue you are facing is in the URL you pass to the Dropwizard container. As a container by default has its own network, a value of localhost means the Dropwizard container itself, not what you see as your local host from outside the container.
Please have a look at Docker networking and how you can link two containers by name. I would suggest checking out docker-compose for multi-container setups on a local machine.
What would also work (but is not good practice) is to pass the Dropwizard container the IP of your machine as the Elasticsearch host, because you created a port mapping from your host into the Elasticsearch container. But better to have a look at compose and do it as it is supposed to be done.
For details on how to use compose, please have a look at this answer with a similar example.
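For illustration, a minimal docker-compose.yml sketch under this question's assumptions (the testapp service/image name is taken from the commands above):
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.2
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
  testapp:
    image: testapp
    ports:
      - "8080:8080"
      - "8081:8081"
    depends_on:
      - elasticsearch
The Dropwizard config would then point at http://elasticsearch:9200 instead of localhost, since compose puts both services on a shared network where the service name resolves.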
I am trying to get JMX monitoring working to monitor a test Kafka instance.
I have Kafka (ches/kafka) running in Docker via boot2docker, but I cannot get JMX monitoring configured properly. I have done a bunch of troubleshooting and I know the Kafka instance is running properly (consumers and producers work). The issue arises when I try simple JMX tools (jconsole and jvisualvm): both cannot connect (insecure connection error, connection failed).
Configuration items of note: I connect to 192.168.59.103 (the VirtualBox IP found by running 'boot2docker ip'), and the ches/kafka Docker instance uses port 7203 as the JMX_PORT (confirmed in the Kafka startup logs). Using jconsole, I connect to 192.168.59.103:7203, and that is when the errors occur.
Any help is appreciated.
For completeness, here is the solution that works:
I ran the ches/kafka Docker image as follows; note that the JMX_PORT (7203) is now published appropriately:
$ docker run --hostname localhost --name kafka --publish 9092:9092 --publish 7203:7203 --env EXPOSED_HOST=192.168.59.103 --env ZOOKEEPER_IP=192.168.59.103 ches/kafka
Also, the following options are set in kafka-run-class.sh (.bat for Windows):
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
But I needed to add one additional option (thanks to one of the commenters for pointing this out):
-Dcom.sun.management.jmxremote.rmi.port=7203
Now, to run the ches/kafka image in boot2docker, you just need to set one of the recognized environment variables (KAFKA_JMX_OPTS or KAFKA_OPTS) to add the additional option, and it works.
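Putting the pieces together, the full command might look like this (a sketch merging the working run command above with the KAFKA_JMX_OPTS addition; exact quoting may vary by shell):
docker run --hostname localhost --name kafka --publish 9092:9092 --publish 7203:7203 \
  --env EXPOSED_HOST=192.168.59.103 --env ZOOKEEPER_IP=192.168.59.103 \
  --env KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.rmi.port=7203" \
  ches/kafka
jconsole can then attach to 192.168.59.103:7203.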
Thanks for the help!
There's no reason the Kafka Docker port would bind to the same port in the boot2docker VM unless you specify it.
Try running it with -p 7203:7203 to force the 1:1 forwarding of the port.