Remote debug docker+wildfly with IntelliJ 2017.2.6 - docker

So there are a lot of posts around this subject, but none of them seems to help.
I have an application running on a WildFly server inside a Docker container.
And for some reason I cannot connect my remote debugger to it.
So, it is a WildFly 11 server that has been started with this command:
/opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0 -c standalone.xml --debug 9999;
And in my standalone.xml I have this:
<socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
The console output seems promising:
Listening for transport dt_socket at address: 9999
I can even access the admin console with the credentials admin:admin on localhost:9990/console
However, IntelliJ refuses to connect... I've created a remote JBoss Server configuration that, in the Server tab, points to localhost with management port 9990.
And in the Startup/Connection tab I've entered 9999 as the remote socket port.
The docker image has exposed the ports 9999 and 9990, and the docker-compose file binds those ports as is.
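The bindings described above amount to something like this in docker-compose terms (a sketch, not the poster's actual file):
ports:
  - "9990:9990"
  - "9999:9999"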
Even with all of this, IntelliJ throws this message when trying to connect:
Error running 'remote':
Unable to open debugger port (localhost:9999): java.io.IOException "handshake failed - connection prematurely closed"
followed by
Error running 'remote':
Unable to connect to the localhost:9990, reason:
com.intellij.javaee.process.common.WrappedException: java.io.IOException: java.net.ConnectException: WFLYPRT0053: Could not connect to remote+http://localhost:9990. The connection failed
I'm completely lost as to what the issue might be...
An interesting addition is that after IntelliJ fails, if I invalidate caches and restart, then WildFly reprints the message saying that it is listening on port 9999.

In case someone else in the future comes to this thread with the same issue, I found this solution here:
https://github.com/jboss-dockerfiles/wildfly/issues/91#issuecomment-450192272
Basically, apart from the --debug parameter, you also need to pass *:8787 as its value:
Dockerfile:
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0", "--debug", "*:8787"]
docker-compose:
ports:
- "8080:8080"
- "8787:8787"
- "9990:9990"
command: /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0 --debug *:8787
I have not tested the docker-compose solution, as my fix was in the Dockerfile.
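For context, standalone.sh turns the --debug argument into the JDWP agent options. On newer JDKs a bare port number binds the debug socket to localhost only, so nothing outside the container can reach it; passing *:8787 binds it to all interfaces. The effective JVM option is roughly:
-agentlib:jdwp=transport=dt_socket,address=*:8787,server=y,suspend=n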

Not sure if this can be seen as an answer, since it goes around the problem.
But the way I solved this was by adding a "pure" Remote configuration in IntelliJ instead of a JBoss remote configuration. This means that it won't automagically deploy, but I'm fine with that.
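For anyone reproducing that, the plain Remote (Remote JVM Debug) configuration only needs the host and the published debug port; a sketch of the relevant fields, assuming the *:8787 setup from the accepted answer above (use 9999 if you kept the original arguments):
Host: localhost
Port: 8787
Command line arguments for remote JVM (as shown by the dialog): -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8787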

Related

Port 80 refused - DigitalOcean droplet web console w/ CapRover instance

I have a CapRover instance on a DigitalOcean droplet that I created. I want to use the CapRover instance to run the CapRover sample apps.
I opened the DigitalOcean droplet web console in order to run a CapRover instance.
I ran the following lines of code:
ufw allow 80,443,3000,996,7946,4789,2377/tcp; ufw allow 7946,4789,2377/udp;
and got this:
Skipping adding existing rule
Skipping adding existing rule (v6)
Skipping adding existing rule
Skipping adding existing rule (v6)
I then ran this:
docker run -p 80:80 -p 443:443 -p 3000:3000 -v /var/run/docker.sock:/var/run/docker.sock -v /captain:/captain caprover/caprover
I got this:
Unable to find image 'caprover/caprover:latest' locally
latest: Pulling from caprover/caprover
Digest: sha256:39c3f188a8f425775cfbcdc4125706cdf614cd38415244ccf967cd1a4e692b4f
Status: Downloaded newer image for caprover/caprover:latest
docker: Error response from daemon: driver failed programming external connectivity on endpoint priceless_sammet (9da9028cfc4873818f113458237ebd00f9c64fa648b853730a60b10bea39c720): Bind for 0.0.0.0:3000 failed: port is already allocated.
I tried changing the ports to:
docker run -p 81:81 -p 444:444 -p 3321:3321 -v /var/run/docker.sock:/var/run/docker.sock -v /captain:/captain caprover/caprover
and got this:
Captain Starting ...
Installing Captain Service ...
Installation of CapRover is starting...
For troubleshooting, please see: https://caprover.com/docs/troubleshooting.html
>>> Checking System Compatibility <<<
Docker Version passed.
Ubuntu detected.
X86 CPU detected.
Total RAM 1033 MB
Are you trying to run CapRover on a local machine or a machine without public IP?
In that case, you need to add this to your installation command:
-e MAIN_NODE_IP_ADDRESS='127.0.0.1'
Otherwise, if you are running CapRover on a VPS with public IP:
Your firewall may have been blocking an in-use port: 80
A simple solution on Ubuntu systems is to run "ufw disable" (security risk)
Or [recommended] just allowing necessary ports:
ufw allow 80,443,3000,996,7946,4789,2377/tcp; ufw allow 7946,4789,2377/udp;
See docs for more details on how to fix firewall issues
Finally, if you are an advanced user, and you want to bypass this check (NOT RECOMMENDED),
you can append the docker command with an addition flag: -e BY_PASS_PROXY_CHECK='TRUE'
Installation failed.
Error: Port seems to be closed: 80
at Request._callback (/usr/src/app/built/utils/CaptainInstaller.js:149:24)
at Request.self.callback (/usr/src/app/node_modules/request/request.js:185:22)
at Request.emit (events.js:400:28)
at Request.<anonymous> (/usr/src/app/node_modules/request/request.js:1154:10)
at Request.emit (events.js:400:28)
at IncomingMessage.<anonymous> (/usr/src/app/node_modules/request/request.js:1076:12)
at Object.onceWrapper (events.js:519:28)
at IncomingMessage.emit (events.js:412:35)
at endReadableNT (internal/streams/readable.js:1334:12)
at processTicksAndRejections (internal/process/task_queues.js:82:21)
How can I open ports 80, 443, and 3000 so that I can run the CapRover instance?
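Since the error says port 3000 is already allocated, one way to find and clear whatever is still publishing it before retrying the original mapping (a rough sketch; the container id is a placeholder):
docker ps                        # look for a container already publishing 0.0.0.0:3000 (often an earlier CapRover attempt)
ss -tlnp | grep ':3000'          # or check which host process holds the port
docker rm -f <container-id>      # remove the stale container, then re-run with -p 80:80 -p 443:443 -p 3000:3000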

Unable to bind port 80

I'm using docker compose to run a simple web server project I created. This configuration has been working fine for months but suddenly stopped working after I haven't been to the office for two weeks.
It works when I map my ports like this: 8080:80, but I don't want to have to type out port 8080 every time. I used netstat -a -n -o | findstr /c:80 to find the process ID of the process listening on port 80, and tasklist /fi "pid eq 4" to find out the name of that process.
Turns out it's some system process, so I'm not sure what to do about that. I've uninstalled Skype and checked that the World Wide Web Publishing Service isn't turned on. Does anybody have an explanation or ideas as to how to fix this?
Thanks in advance.
update
When I run net stop http and stop all dependent services with it, port 80 is free. The services being stopped are: Windows Remote Management (WS-Management), SSDP Discovery, Print Spooler, BranchCache and, of course, HTTP. Which of these could be the culprit?
update 2
I have now stopped those services one by one, and it seems BranchCache is the one responsible for this. More testing ensues.
docker-compose.yml
version: "3"
services:
vote-client:
build:
context: .
dockerfile: Dockerfile
ports:
- "80:80"
Dockerfile
FROM nginx
COPY ./html /usr/share/nginx/html
When I run docker-compose up, this is my output:
docker-compose up --build
Removing vote-client_vote-client_1
Building vote-client
Step 1/2 : FROM nginx
---> 42b4762643dc
Step 2/2 : COPY ./html /usr/share/nginx/html
---> Using cache
---> a1aade2a299e
Successfully built a1aade2a299e
Successfully tagged vote-client_vote-client:latest
Recreating c2654f31dcff_vote-client_vote-client_1 ... error
ERROR: for c2654f31dcff_vote-client_vote-client_1 Cannot start service vote-client: driver failed programming external connectivity on endpoint vote-client_vote-client_1 (2188c8607a04ba2388a661504601431d6b30825d595dafae0c318f2d2b5685b0): Error starting userland proxy: Bind for 0.0.0.0:80: unexpected error Permission denied
ERROR: for vote-client Cannot start service vote-client: driver failed programming external connectivity on endpoint vote-client_vote-client_1 (2188c8607a04ba2388a661504601431d6b30825d595dafae0c318f2d2b5685b0): Error starting userland proxy: Bind for 0.0.0.0:80: unexpected error Permission denied
ERROR: Encountered errors while bringing up the project.
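Given that update 2 above points at BranchCache, a rough way to confirm that and keep it from grabbing port 80 again is to check and disable its service (the service name is assumed to be PeerDistSvc, the usual name for BranchCache; verify with sc query):
sc query PeerDistSvc
sc stop PeerDistSvc
sc config PeerDistSvc start= disabled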

When attempting to Verify my CouchDB installation I get the error "Error: could not resolve http://any:5984/verifytestdb/"

When I install CouchDB, use the GUI, and run Verify,
I get the error
Error: could not resolve http://any:5984/verifytestdb/
And the Replication status gets an X saying I can't replicate. Any suggestions on how to fix this problem?
It's running in a Docker container and the ports show
4369/tcp, 9100/tcp, 0.0.0.0:5984->5984/tcp
The GUI should say it works and not show an error.
I feel like port 5986, which is required for replication, might be blocked.
Use the Config section in the CouchDB GUI:
Go to httpd
Then select bind_address
And change the value from "any" to an explicit bind address.
Run the test again and it should work.
For me, what worked was adding this to the CouchDB config (or changing it in the UI):
[httpd]
bind_address = 0.0.0.0
Tested with Verify and with:
curl -vX POST http://127.0.0.1:5984/_replicate -d '{"source":"albums","target":"albums-replica","create_target":true}' -H "Content-Type: application/json"
{"ok":true,"session_id":"9ab3e4f1a9cae16df05b32866088510c","source_last_seq":"6-g1AAAAILeJyNkU0OgjAQRqto1IVn0CMA_YGu5CZKOzVIsF2o......
with Docker exposing only this port:
services:
  couchdb:
    ports:
      - "5984:5984"

ActiveMQ within Wildfly on a Docker container gives: Invalid "host" value "0.0.0.0" detected

I have Wildfly running in a Docker container.
Within Wildfly the messaging-activemq subsystem is active.
The subsystem and extension defaults are taken from the standalone-full.xml file.
After starting WildFly, the following output is displayed:
[org.apache.activemq.artemis.jms.server] (ServerService Thread Pool -- 64)
AMQ121005: Invalid "host" value "0.0.0.0" detected for "http-connector" connector.
Switching to "eeb79399d447".
If this new address is incorrect please manually configure the connector to use the proper one.
The eeb79399d447 is the docker container id.
It's also impossible to connect to JMS from my Java client. While connecting, it gives the following error:
AMQ214016: Failed to create netty connection: java.net.UnknownHostException: eeb79399d447
When I start WildFly on my local workstation (outside Docker), the problem does not occur and I can connect to JMS and send my messages.
Here are a few options. Options 1 & 2 may be what you asked for, but in the end they didn't work for me. Option 3, however, I think will better address your intent.
Option 1) You can do this by adding some scripting to your Docker image (and not touching your standalone-full.xml). The basic idea (credit goes to GitHub user kwart) is to make a Docker entry point that can determine the IPv4 address of the Docker container before calling standalone.sh.
See: https://github.com/kwart/dockerfiles/tree/master/wildfly-ext and check out the usage of WILDFLY_BIND_ADDR. I forked it.
Notes:
GetIp.java will print out the IPv4 address (and is copied into the container)
dockerentry-point.sh calls GetIp.java as needed
WILDFLY_BIND_ADDR=${WILDFLY_BIND_ADDR:-0.0.0.0}
if [ "${WILDFLY_BIND_ADDR}" = "auto" ]; then
WILDFLY_BIND_ADDR=`java -cp /opt/jboss GetIp`
fi
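The entry point then starts WildFly with that address, along the lines of (a sketch of the linked repo's approach, not a verbatim copy):
exec /opt/jboss/wildfly/bin/standalone.sh -c standalone-full.xml -b ${WILDFLY_BIND_ADDR} -bmanagement ${WILDFLY_BIND_ADDR}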
Option 2) Alternatively, using some script-fu, you may be able to do everything you need in a Dockerfile:
#CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-c", "standalone-full.xml", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]
CMD ["sh", "-c", "DOCKER_IPADDR=$(hostname --ip-address) && echo IP Address was $DOCKER_IPADDR && /opt/jboss/wildfly/bin/standalone.sh -c standalone-full.xml -b=$DOCKER_IPADDR -bmanagement=$DOCKER_IPADDR"]
Your mileage may vary.
I was working with the helloworld-jms quickstart from the WildFly docs, and had to jump through some extra hoops to get the JMS queue created. Even at that point, the sample Java code wasn't able to connect with either option 1 or option 2.
Option 3) (This worked for me, btw.) Start your container with binding to 0.0.0.0, expose your 8080 port for your JMS client running on the host, and add an entry in your host's /etc/hosts file:
Dockerfile:
FROM jboss/wildfly
# CP foo.war /opt/jboss/wildfly/standalone/deployments/
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin --silent
RUN /opt/jboss/wildfly/bin/add-user.sh -a quickstartUser quickstartPwd1! --silent
RUN echo "quickstartUser=guest" >> /opt/jboss/wildfly/standalone/configuration/application-roles.properties
# use standalone-full.xml to enable the JMS feature
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-c", "standalone-full.xml", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]
Build & run (expose 8080 if your client is on your host machine):
docker build -t mywildfly .
docker run -it --rm --name jboss -p127.0.0.1:8080:8080 -p127.0.0.1:9990:9990 mywildfly
Then on the host machine (I'm running OSX; my jboss container's id was 46d04508b92b) add an entry in your /etc/hosts for the docker host name that points to 127.0.0.1:
127.0.0.1 46d04508b92b # <-- replace with your container's id
Once the WildFly container is running, you create/configure the testQueue via scripts or in the management console (a sketch of the CLI script follows the log output below). My config came from https://github.com/wildfly/quickstart.git under the helloworld-jms folder:
docker cp configure-jms.cli jboss:/tmp/
docker exec jboss /opt/jboss/wildfly/bin/jboss-cli.sh --connect --file=/tmp/configure-jms.cli
and SUCCESS from mvn clean compile exec:java on the host machine (from within the helloworld-jms folder):
Mar 28, 2018 9:03:15 PM org.jboss.as.quickstarts.jms.HelloWorldJMSClient main
INFO: Found destination "jms/queue/test" in JNDI
Mar 28, 2018 9:03:16 PM org.jboss.as.quickstarts.jms.HelloWorldJMSClient main
INFO: Sending 1 messages with content: Hello, World!
Mar 28, 2018 9:03:16 PM org.jboss.as.quickstarts.jms.HelloWorldJMSClient main
INFO: Received message with content Hello, World!
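For reference, the queue setup done by configure-jms.cli boils down to a jboss-cli command like the following (a sketch with the queue name and JNDI entries assumed from the log output above; the exact script is in the quickstart repo):
jms-queue add --queue-address=testQueue --entries=queue/test,java:jboss/exported/jms/queue/test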
You need to edit standalone-full.xml to cope with JMS behind NAT, and when you run the Docker container, pass in the IP and port that your JMS client can use to connect, which in Docker's default config is the IP of the machine running Docker.
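A sketch of what that edit could look like, using the standard WildFly outbound-socket-binding mechanism (the binding name and the system properties here are placeholders I'm assuming, not taken from the answer above): add an outbound binding in the socket-binding-group, point the messaging http-connector at it, and pass the externally reachable host/port when starting the container.
<outbound-socket-binding name="messaging-nat">
    <remote-destination host="${jms.external.host:localhost}" port="${jms.external.port:8080}"/>
</outbound-socket-binding>
<http-connector name="http-connector" socket-binding="messaging-nat" endpoint="http-acceptor"/>
Then pass -Djms.external.host=<reachable-host> -Djms.external.port=8080 on the standalone.sh command line when starting the container.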

Docker-compose - Redis at 0.0.0.0 instead of 127.0.0.1

I have migrated my Rails app (local dev machine) to Docker Compose. All is working except that the worker Rails instance (batch) cannot connect to Redis.
Completed 500 Internal Server Error in 40ms (ActiveRecord: 2.3ms)
Redis::CannotConnectError (Error connecting to Redis on 127.0.0.1:6379 (Errno::ECONNREFUSED)):
In my docker-compose.yml
redis:
  image: redis
  ports:
    - "6379:6379"
batch:
  build: .
  command: bundle exec rake environment resque:work QUEUE=*
  volumes:
    - .:/app
  links:
    - db
    - redis
  environment:
    - REDIS_URL=redis://redis:6379
I think the Redis instance is available via the IP of the Docker host.
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default * virtualbox Running tcp://192.168.99.100:2376 v1.10.0
Accessing via 0.0.0.0 doesn't work
$ curl 0.0.0.0:6379
curl: (7) Failed to connect to 0.0.0.0 port 6379: Connection refused
Accessing via the docker-machine IP I think works:
$ curl http://192.168.99.100:6379
-ERR wrong number of arguments for 'get' command
-ERR unknown command 'Host:'
EDIT
After installing redis-cli in the batch instance, I was able to hit the redis server using the 'redis' hostname. I think the problem is possibly in the Rails configuration itself.
Facepalm!!!
The docker containers were communicating just fine; the problem was that I hadn't told Resque (the app using Redis) where to find it. Thank you to "The Real Bill" for pointing out I should be using redis-cli.
For anyone else using Docker and Resque, you need this in your config/initializers/resque.rb file:
Resque.redis = Redis.new(host: 'redis', port: 6379)
Resque.after_fork = Proc.new { ActiveRecord::Base.establish_connection }
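Since the compose file above already sets REDIS_URL for the batch container, an equivalent initializer can read it from the environment instead of hard-coding the host (a sketch, assuming the standard redis gem):
# config/initializers/resque.rb
Resque.redis = Redis.new(url: ENV.fetch("REDIS_URL", "redis://redis:6379"))
Resque.after_fork = Proc.new { ActiveRecord::Base.establish_connection }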
If you run
docker-compose run --rm batch env | grep REDIS
you will get the env variables that your container has (the link line in the compose will auto-generate some).
Then all you need to do is look for one along the lines of _REDIS_1_PORT... and use the correct one. I have never had luck connecting my Rails app to another service in any other way. But luckily these env variables are always generated on start, so they will be up to date even if the container IP happens to change between startups.
You should use the hostname redis to connect to the service, although you may need to wait for redis to start.
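One way to handle the "wait for redis to start" part, if the file is moved to a newer Compose format, is a healthcheck plus a conditional depends_on (a sketch, not from the original setup):
services:
  redis:
    image: redis
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
  batch:
    build: .
    depends_on:
      redis:
        condition: service_healthy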
