Credential validation was not successful: Unable to connect to localhost:8080 from hawkular connection - monitoring

My connection from the ManageIQ UI to hawkular-services-dist-0.30.0.Final fails, even though the ManageIQ configuration itself seems to be successful. I can reach the ManageIQ UI at https://localhost:8443/ (I followed the configuration guide at http://www.manageiq.org/docs/get-started/docker). However, in the Middleware tab of the ManageIQ UI, adding a new middleware provider always throws this message:
Credential validation was not successful: Unable to connect to localhost:8080
Please see the attached screenshot.
What's more, I have no idea which log files in the ManageIQ container contain these error statements.
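(Note: since ManageIQ itself runs in a container, localhost inside that container is the container, not the Docker host, so a Hawkular server listening on the host's port 8080 will not be reachable as localhost:8080 from the provider form. A quick reachability check from inside the appliance, assuming the container is named manageiq and the host bridge address is 172.17.0.1; adjust both to your setup:)
$ docker exec -it manageiq curl -v http://localhost:8080/
$ docker exec -it manageiq curl -v http://172.17.0.1:8080/
If the second curl succeeds, entering the host's IP (or the Hawkular container's IP) instead of localhost in the provider dialog is likely the way forward.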
Update
The SSL error messages are shown below:
[Tue Feb 07 11:15:25.015949 2017] [ssl:warn] [pid 23] AH01909: RSA certificate configured for localhost:443 does NOT include an ID which matches the server name
[Tue Feb 07 11:15:25.041695 2017] [ssl:warn] [pid 23] AH01909: RSA certificate configured for localhost:443 does NOT include an ID which matches the server name
[Tue Feb 07 11:15:30.667617 2017] [proxy:error] [pid 134] (111)Connection refused: AH00957: HTTP: attempt to connect to 0.0.0.0:3000 (0.0.0.0) failed
[Tue Feb 07 11:15:30.667658 2017] [proxy:error] [pid 134] AH00959: ap_proxy_connect_backend disabling worker for (0.0.0.0) for 60s
[Tue Feb 07 11:15:30.667693 2017] [proxy_http:error] [pid 134] [client 172.17.0.1:42666] AH01114: HTTP: failed to make connection to backend: 0.0.0.0
[Tue Feb 07 11:15:30.727713 2017] [proxy:error] [pid 137] (111)Connection refused: AH00957: HTTP: attempt to connect to 0.0.0.0:3000 (0.0.0.0) failed
[Tue Feb 07 11:15:30.727756 2017] [proxy:error] [pid 137] AH00959: ap_proxy_connect_backend disabling worker for (0.0.0.0) for 60s
[Tue Feb 07 11:15:30.727766 2017] [proxy_http:error] [pid 137] [client 172.17.0.1:42672] AH01114: HTTP: failed to make connection to backend: 0.0.0.0
[Tue Feb 07 11:15:30.740819 2017] [proxy:error] [pid 135] (111)Connection refused: AH00957: HTTP: attempt to connect to 0.0.0.0:3000 (0.0.0.0) failed
[Tue Feb 07 11:15:30.740878 2017] [proxy:error] [pid 135] AH00959: ap_proxy_connect_backend disabling worker for (0.0.0.0) for 60s
[Tue Feb 07 11:15:30.740893 2017] [proxy_http:error] [pid 135] [client 172.17.0.1:42678] AH01114: HTTP: failed to make connection to backend: 0.0.0.0
[Tue Feb 07 11:15:36.283437 2017] [proxy:error] [pid 139] (111)Connection refused: AH00957: HTTP: attempt to connect to 0.0.0.0:3000 (0.0.0.0) failed
[Tue Feb 07 11:15:36.283474 2017] [proxy:error] [pid 139] AH00959: ap_proxy_connect_backend disabling worker for (0.0.0.0) for 60s
[Tue Feb 07 11:15:36.283483 2017] [proxy_http:error] [pid 139] [client 172.17.0.1:42684] AH01114: HTTP: failed to make connection to backend: 0.0.0.0
[Tue Feb 07 11:15:44.561119 2017] [proxy:error] [pid 140] (111)Connection refused: AH00957: HTTP: attempt to connect to 0.0.0.0:3000 (0.0.0.0) failed
[Tue Feb 07 11:15:44.561154 2017] [proxy:error] [pid 140] AH00959: ap_proxy_connect_backend disabling worker for (0.0.0.0) for 60s
[Tue Feb 07 11:15:44.561162 2017] [proxy_http:error] [pid 140] [client 172.17.0.1:42692] AH01114: HTTP: failed to make connection to backend: 0.0.0.0
[Tue Feb 07 11:16:07.212882 2017] [ssl:warn] [pid 557] AH01909: RSA certificate configured for localhost:443 does NOT include an ID which matches the server name
[Tue Feb 07 11:16:07.254083 2017] [ssl:warn] [pid 557] AH01909: RSA certificate configured for localhost:443 does NOT include an ID which matches the server name
Any idea?

Can you show the logs of hawkular-services?
Did you create the "jhwang" user without errors?
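(For reference, one way to capture those logs, assuming Hawkular runs in a container named hawkular-services; adjust to whatever docker ps shows:)
$ docker ps --filter "name=hawkular"
$ docker logs --tail 200 hawkular-services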

Related

[Docker x ColdFusion][Apache2] - (95)Operation not supported: mod_jk

Apache2 in my Docker container keeps failing to start. I already checked the config using apachectl configtest, and it returns OK. The errors below are what I found in /var/log/apache2/error.log:
[Wed Aug 10 15:17:30.643137 2022] [mpm_event:notice] [pid 465:tid 139744629492672] AH00489: Apache/2.4.52 (Ubuntu) mod_jk/1.2.46 configured -- resuming normal operations
[Wed Aug 10 15:17:30.643188 2022] [core:notice] [pid 465:tid 139744629492672] AH00094: Command line: '/usr/sbin/apache2'
[Mon Oct 31 22:14:51.535467 2022] [jk:crit] [pid 63:tid 274907793600] (95)Operation not supported: mod_jk: could not create jk_log_lock
When I tried uninstalling and reinstalling apache2, I could access localhost:80, but the ColdFusion behind it was not working; it just showed me a directory listing of the working directory.
Docker Desktop: v4.13.1
Docker: version 20.10.20, build 9fdeb9c
ColdFusion: 2018
This happens only on my MacBook 13 (M2). I tried running it on a Windows laptop, and it works fine.
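(One factor worth ruling out on Apple Silicon is amd64 emulation: locking primitives such as mod_jk's jk_log_lock can fail with "(95)Operation not supported" when an image not built for arm64 runs under emulation, which would also explain why the same setup works on a Windows/amd64 laptop. A quick check of what is actually running, where coldfusion-apache is an assumed image name:)
$ docker image inspect --format '{{.Os}}/{{.Architecture}}' coldfusion-apache
$ docker info --format '{{.Architecture}} {{.OSType}}'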

Docker (Snap) Containers Getting Stopped

I've installed Docker using Snap. Recently, running containers have been stopping on their own, roughly 2-3 times in the space of ~8-10 hours. I've been trying to find the root cause without much success. Relevant information is below; let me know if I can provide more information to help.
$ docker --version
Docker version 19.03.13, build cd8016b6bc
$ snap --version
snap 2.51.4
snapd 2.51.4
series 16
ubuntu 18.04
kernel 5.4.0-81-generic
Docker daemon.json
$ cat /var/snap/docker/current/config/daemon.json
{
"log-level": "error",
"storage-driver": "aufs",
"bip": "172.28.0.1/24"
}
$ dmesg -T
[Tue Sep 14 20:31:37 2021] aufs aufs_fill_super:918:mount[18200]: no arg
[Tue Sep 14 20:31:37 2021] overlayfs: missing 'lowerdir'
[Tue Sep 14 20:31:43 2021] br-6c6facc1a891: port 5(veth4c212a4) entered disabled state
[Tue Sep 14 20:31:43 2021] device veth4c212a4 left promiscuous mode
[Tue Sep 14 20:31:43 2021] br-6c6facc1a891: port 5(veth4c212a4) entered disabled state
[Tue Sep 14 20:31:45 2021] br-6c6facc1a891: port 1(veth1c95aae) entered disabled state
[Tue Sep 14 20:31:45 2021] device veth1c95aae left promiscuous mode
[Tue Sep 14 20:31:45 2021] br-6c6facc1a891: port 1(veth1c95aae) entered disabled state
[Tue Sep 14 20:31:45 2021] br-6c6facc1a891: port 4(veth1dfd80e) entered disabled state
[Tue Sep 14 20:31:45 2021] device veth1dfd80e left promiscuous mode
[Tue Sep 14 20:31:45 2021] br-6c6facc1a891: port 4(veth1dfd80e) entered disabled state
[Tue Sep 14 20:31:46 2021] br-6c6facc1a891: port 2(veth8e48cf4) entered disabled state
[Tue Sep 14 20:31:46 2021] device veth8e48cf4 left promiscuous mode
[Tue Sep 14 20:31:46 2021] br-6c6facc1a891: port 2(veth8e48cf4) entered disabled state
[Tue Sep 14 20:31:46 2021] br-6c6facc1a891: port 3(veth534c1d3) entered disabled state
[Tue Sep 14 20:31:46 2021] device veth534c1d3 left promiscuous mode
[Tue Sep 14 20:31:46 2021] br-6c6facc1a891: port 3(veth534c1d3) entered disabled state
[Tue Sep 14 20:31:47 2021] br-6c6facc1a891: port 6(veth316fdd7) entered disabled state
[Tue Sep 14 20:31:47 2021] device veth316fdd7 left promiscuous mode
Note the difference in timestamps between the Docker logs below and dmesg above.
The Docker logs appear to be from the previous time I restarted the containers using docker-compose.
$ sudo snap logs docker
2021-09-14T15:01:19Z docker.dockerd[27385]: time="2021-09-14T20:31:19.783211664+05:30" level=error msg="connecting to shim" error="dial unix \x00/containerd-shim/af7c138e4399d3bb8a5615ec05fd1ba90bc7e98391b468067374a020d792906d.sock: connect: connection refused" id=2b9e8a563dad5f61e2ad525c5d590804c33c6cd323d580fe365c170fd5a68a8a namespace=moby
2021-09-14T15:01:19Z docker.dockerd[27385]: time="2021-09-14T20:31:19.860328985+05:30" level=error msg="connecting to shim" error="dial unix \x00/containerd-shim/281fedfbf5b11053d28853b6ad6175009903b338995d5faa0862e8f1ab0e3b10.sock: connect: connection refused" id=43449775462debc8336ab1bc63e2020e8a554ee25db31befa561dc790c76e1ac namespace=moby
2021-09-14T15:01:19Z docker.dockerd[27385]: time="2021-09-14T20:31:19.878788076+05:30" level=error msg="connecting to shim" error="dial unix \x00/containerd-shim/ff2c9cacd1ef1ac083f93e4823f5d0fa4146593f2b6508a098b22270b48507b4.sock: connect: connection refused" id=4d91c4451a011d87b2d21fe7d74e3c4e7ffa20f2df69076f36567b5389597637 namespace=moby
2021-09-14T15:01:19Z docker.dockerd[27385]: time="2021-09-14T20:31:19.906212149+05:30" level=error msg="connecting to shim" error="dial unix \x00/containerd-shim/017a3907df26803a221be66a2a0ac25e43a994d26432cba30f6c81c078ad62fa.sock: connect: connection refused" id=79e0d419a1d82f83dd81898a02fa1161b909ae88c1e46575a1bec894df31a482 namespace=moby
2021-09-14T15:01:19Z docker.dockerd[27385]: time="2021-09-14T20:31:19.919895281+05:30" level=error msg="connecting to shim" error="dial unix \x00/containerd-shim/47e9b56ce80402793038edf72fe64b44a05f659371c212361e47d1463ad269ae.sock: connect: connection refused" id=99aba37c4f1521854130601f19afeb196231a924effba1cfcfb7da90b5703a86 namespace=moby
2021-09-14T15:01:19Z docker.dockerd[27385]: time="2021-09-14T20:31:19.931562562+05:30" level=error msg="connecting to shim" error="dial unix \x00/containerd-shim/58d5711ddbcc9faf6a4d8d7d0433d4254d5069c9e559d61eb1551f80d193a3eb.sock: connect: connection refused" id=a09358b02332b18dfa99b4dc99edf4b1ebac80671c29b91946875a53e1b8bd7e namespace=moby
2021-09-14T15:01:19Z docker.dockerd[27385]: time="2021-09-14T20:31:19.949511272+05:30" level=error msg="connecting to shim" error="dial unix \x00/containerd-shim/67de51fdf40350feb583255a5e703c719745ef9123a8a47dad72df075c12f953.sock: connect: connection refused" id=ee145dfe0eb44fde323a431b191a62aa47ad265c438239f7243c684e10713042 namespace=moby
2021-09-14T15:01:24Z docker.dockerd[27385]: time="2021-09-14T20:31:24.671615174+05:30" level=error msg="Force shutdown daemon"
2021-09-14T15:01:25Z systemd[1]: Stopped Service for snap application docker.dockerd.
2021-09-14T15:01:37Z systemd[1]: Started Service for snap application docker.dockerd.
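(One pattern that fits these symptoms is snapd auto-refreshing the docker snap: a refresh restarts dockerd, and containers without a restart policy stay stopped. A couple of commands to check whether a refresh happened around those timestamps, assuming only that the snap is named docker:)
$ snap changes docker
$ snap list --all docker
$ journalctl -u snapd --since "2021-09-14" --until "2021-09-15" | grep -i docker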

How can I find out why a docker container is no longer running? [closed]

On one of my AWS servers I manually started a detached Docker container running willnorris/imageproxy. With no warning, it seems to go down after a few days, for no apparent (external) reason. I checked the container logs and the syslog and found nothing.
How can I find out what goes wrong (this happens every time)?
This is how I start it:
ubuntu@local:~$ ssh ubuntu@my_aws_box
ubuntu@aws_box:~$ docker run -dp 8081:8080 willnorris/imageproxy -addr 0.0.0.0:8080
Typically, this is what I do when it seems to have crashed:
ubuntu@aws_box:~$ docker ps -a
CONTAINER ID   IMAGE                   COMMAND                  CREATED       STATUS                  PORTS     NAMES
de63701bbc82   willnorris/imageproxy   "/app/imageproxy -ad…"   10 days ago   Exited (2) 7 days ago             frosty_shockley
ubuntu@aws_box:~$ docker logs de63701bbc82
imageproxy listening on 0.0.0.0:8080
2021/08/04 00:46:42 error copying response: write tcp 172.17.0.2:8080->172.17.0.1:38568: write: broken pipe
2021/08/04 00:46:42 error copying response: write tcp 172.17.0.2:8080->172.17.0.1:38572: write: broken pipe
2021/08/04 01:29:18 invalid request URL: malformed URL "/jars": too few path segments
2021/08/04 01:29:18 invalid request URL: malformed URL "/service/extdirect": must provide absolute remote URL
2021/08/04 11:09:49 invalid request URL: malformed URL "/jars": too few path segments
2021/08/04 11:09:49 invalid request URL: malformed URL "/service/extdirect": must provide absolute remote URL
2021/08/04 13:04:33 error copying response: write tcp 172.17.0.2:8080->172.17.0.1:41036: write: broken pipe
As you can see, the logs tell me nothing about the crash, and the only real thing I have to go by is the exit status: Exited (2) 7 days ago.
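For a little more detail straight from Docker before digging into system logs, the container's recorded exit state can be inspected (these are standard docker inspect fields):
$ docker inspect -f '{{.State.ExitCode}} {{.State.OOMKilled}} {{.State.FinishedAt}}' de63701bbc82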
As this exit seemed to originate outside the container/Docker, I needed to find the right logs. A linked-to question (which essentially makes this a dupe) hinted at checking the systemd journal on Linux systems. Running journalctl -u docker (essentially filtering the journal for the Docker service) showed that the Docker container was killed on August 6:
Aug 06 06:06:49 ip-192-168-3-117 dockerd[1045]: time="2021-08-06T06:06:49.544825959Z" level=info msg="Processing signal 'terminated'"
Aug 06 06:06:49 ip-192-168-3-117 dockerd[1045]: time="2021-08-06T06:06:49.836744355Z" level=info msg="ignoring event" container=de63701bbc828ca8bfcb895eeccae62bbda602d3be0508ceaf20fe76d7d018d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 06 06:06:49 ip-192-168-3-117 containerd[885]: time="2021-08-06T06:06:49.837480333Z" level=info msg="shim disconnected" id=de63701bbc828ca8bfcb895eeccae62bbda602d3be0508ceaf20fe76d7d018d5
Aug 06 06:06:49 ip-192-168-3-117 containerd[885]: time="2021-08-06T06:06:49.840764380Z" level=warning msg="cleaning up after shim disconnected" id=de63701bbc828ca8bfcb895eeccae62bbda602d3be0508ceaf20fe76d7d018d5 namespace=moby
Aug 06 06:06:49 ip-192-168-3-117 containerd[885]: time="2021-08-06T06:06:49.840787254Z" level=info msg="cleaning up dead shim"
Aug 06 06:06:49 ip-192-168-3-117 dockerd[1045]: time="2021-08-06T06:06:49.868008333Z" level=info msg="ignoring event" container=709e057de026ff11f783121c839c56938ea79dcd5965be1546cd6931beb5a903 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 06 06:06:49 ip-192-168-3-117 dockerd[1045]: time="2021-08-06T06:06:49.868091089Z" level=info msg="ignoring event" container=9219e652436aae8016145bf3e0681ff1bb7046f230338d8ab79f9ced9532e342 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 06 06:06:49 ip-192-168-3-117 containerd[885]: time="2021-08-06T06:06:49.868916377Z" level=info msg="shim disconnected" id=9219e652436aae8016145bf3e0681ff1bb7046f230338d8ab79f9ced9532e342
Aug 06 06:06:51 ip-192-168-3-117 dockerd[1045]: time="2021-08-06T06:06:51.068939160Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Aug 06 06:06:51 ip-192-168-3-117 dockerd[1045]: time="2021-08-06T06:06:51.069763813Z" level=info msg="Daemon shutdown complete"
Aug 06 06:06:51 ip-192-168-3-117 dockerd[1045]: time="2021-08-06T06:06:51.070022944Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd names
pace=plugins.moby
Aug 06 06:06:51 ip-192-168-3-117 systemd[1]: Stopped Docker Application Container Engine.
Aug 06 06:06:51 ip-192-168-3-117 systemd[1]: Starting Docker Application Container Engine...
Now, what killed it? To find that out, I needed to not filter out the preceding events, so I just ran journalctl | grep 'Aug 06' and found these lines preceding the previous ones:
Aug 06 05:56:01 ip-192-168-3-117 systemd[1]: Starting Daily apt download activities...
Aug 06 05:56:11 ip-192-168-3-117 systemd[1]: Started Daily apt download activities.
Aug 06 06:06:39 ip-192-168-3-117 systemd[1]: Starting Daily apt upgrade and clean activities...
Aug 06 06:06:48 ip-192-168-3-117 systemd[1]: Reloading.
Aug 06 06:06:48 ip-192-168-3-117 systemd[1]: Starting Message of the Day...
Aug 06 06:06:48 ip-192-168-3-117 systemd[1]: Reloading.
Aug 06 06:06:49 ip-192-168-3-117 systemd[1]: Reloading.
Aug 06 06:06:49 ip-192-168-3-117 systemd[1]: Stopping Docker Application Container Engine...
So this was basically caused by the scheduled apt upgrade job, which upgraded the Docker daemon and killed the old one! Since I did not pass --restart=always, the container was not restarted after the daemon had respawned.
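If the container should come back automatically after daemon restarts like this one, adding a restart policy to the original command is the straightforward fix (same image and port mapping as before):
$ docker run -d --restart=always -p 8081:8080 willnorris/imageproxy -addr 0.0.0.0:8080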

Failed to start LSB: Start Jenkins at boot time

I've been trying for a while to add and modify things within Jenkins. Jenkins was running on port 8080, and I redirected traffic from port 80 to it with this command:
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 54.185.x.x:8080
I made some modifications, and now I cannot start Jenkins:
Jun 08 13:20:17 ip-10-173-x-x jenkins[32108]: Correct java version found
Jun 08 13:20:17 ip-10-173-x-x jenkins[32108]: * Starting Jenkins Automation Server jenkins
Jun 08 13:20:17 ip-10-173-x-x su[32157]: Successful su for jenkins by root
Jun 08 13:20:17 ip-10-173-x-x su[32157]: + ??? root:jenkins
Jun 08 13:20:17 ip-10-173-x-x su[32157]: pam_unix(su:session): session opened for user jenkins by (uid=0)
Jun 08 13:20:17 ip-10-173-x-x su[32157]: pam_unix(su:session): session closed for user jenkins
Jun 08 13:20:18 ip-10-173-x-x jenkins[32108]: ...fail!
Jun 08 13:20:18 ip-10-173-x-x systemd[1]: jenkins.service: Control process exited, code=exited status=7
Jun 08 13:20:18 ip-10-173-x-x systemd[1]: jenkins.service: Failed with result 'exit-code'.
Jun 08 13:20:18 ip-10-173-x-x systemd[1]: Failed to start LSB: Start Jenkins at boot time.
I changed a few lines in the Jenkins configuration file, and here is how it looks now:
JENKINS_ARGS="--javahome=$JAVA_HOME --httpListenAddress=$HTTP_HOST --httpPort=$HTTP_PORT --webroot=~/.jenkins/war"
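(The actual failure reason behind status=7 is usually visible in the service journal or the Jenkins log rather than in the systemd summary; the paths below assume the Debian/Ubuntu package layout:)
$ sudo journalctl -u jenkins.service -b --no-pager | tail -n 50
$ sudo tail -n 100 /var/log/jenkins/jenkins.log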

Problems running two versions of ThinkingSphinx on the same server

I have a dedicated server set up with two instances of my app: one in production, one in staging.
I can't seem to run Thinking Sphinx on both apps at the same time.
When I try to start it, I get this error:
[Tue Jul 30 10:02:31.618 2013] [20464] Child process 20465 has been forked
[Tue Jul 30 10:02:31.669 2013] [20465] listening on 127.0.0.1:9306
[Tue Jul 30 10:02:31.669 2013] [20465] bind() failed on 127.0.0.1, retrying...
[Tue Jul 30 10:02:34.672 2013] [20465] bind() failed on 127.0.0.1, retrying...
[Tue Jul 30 10:02:37.676 2013] [20465] bind() failed on 127.0.0.1, retrying...
[Tue Jul 30 10:02:40.679 2013] [20465] bind() failed on 127.0.0.1, retrying...
[Tue Jul 30 10:02:43.682 2013] [20465] bind() failed on 127.0.0.1, retrying...
[Tue Jul 30 10:02:46.685 2013] [20465] bind() failed on 127.0.0.1, retrying...
[Tue Jul 30 10:02:49.688 2013] [20465] bind() failed on 127.0.0.1, retrying...
[Tue Jul 30 10:02:52.691 2013] [20465] bind() failed on 127.0.0.1, retrying...
[Tue Jul 30 10:02:55.694 2013] [20465] bind() failed on 127.0.0.1, retrying...
[Tue Jul 30 10:02:58.697 2013] [20465] bind() failed on 127.0.0.1, retrying...
[Tue Jul 30 10:03:01.700 2013] [20465] bind() failed on 127.0.0.1, retrying...
[Tue Jul 30 10:03:04.703 2013] [20465] bind() failed on 127.0.0.1, retrying...
[Tue Jul 30 10:03:07.706 2013] [20465] FATAL: bind() failed on 127.0.0.1: Address already in use
[Tue Jul 30 10:03:07.707 2013] [20464] Child process 20465 has been finished, exit code 1. Watchdog finishes also. Good bye!
Can anyone advise how I can run TS on two versions of the same app on the same server?
You should specify different ports for different environments:
http://pat.github.io/thinking-sphinx/advanced_config.html
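For example, with Thinking Sphinx v3 the daemon port can be set per environment in config/thinking_sphinx.yml (the port numbers here are just an example; older Thinking Sphinx versions use config/sphinx.yml for the same purpose):
$ cat config/thinking_sphinx.yml
production:
  mysql41: 9306
staging:
  mysql41: 9307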
