I'm running RabbitMQ locally using:
docker run -it --rm --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3-management
Here is some of the log output:
narley@brittes ~ $ docker run -it --rm --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3-management
2020-01-08 22:31:52.079 [info] <0.8.0> Feature flags: list of feature flags found:
2020-01-08 22:31:52.079 [info] <0.8.0> Feature flags: [ ] drop_unroutable_metric
2020-01-08 22:31:52.079 [info] <0.8.0> Feature flags: [ ] empty_basic_get_metric
2020-01-08 22:31:52.079 [info] <0.8.0> Feature flags: [ ] implicit_default_bindings
2020-01-08 22:31:52.080 [info] <0.8.0> Feature flags: [ ] quorum_queue
2020-01-08 22:31:52.080 [info] <0.8.0> Feature flags: [ ] virtual_host_metadata
2020-01-08 22:31:52.080 [info] <0.8.0> Feature flags: feature flag states written to disk: yes
2020-01-08 22:31:52.160 [info] <0.268.0> ra: meta data store initialised. 0 record(s) recovered
2020-01-08 22:31:52.162 [info] <0.273.0> WAL: recovering []
2020-01-08 22:31:52.164 [info] <0.277.0>
Starting RabbitMQ 3.8.2 on Erlang 22.2.1
Copyright (c) 2007-2019 Pivotal Software, Inc.
Licensed under the MPL 1.1. Website: https://rabbitmq.com
  ##  ##      RabbitMQ 3.8.2
  ##  ##
  ##########  Copyright (c) 2007-2019 Pivotal Software, Inc.
  ######  ##
  ##########  Licensed under the MPL 1.1. Website: https://rabbitmq.com
Doc guides: https://rabbitmq.com/documentation.html
Support: https://rabbitmq.com/contact.html
Tutorials: https://rabbitmq.com/getstarted.html
Monitoring: https://rabbitmq.com/monitoring.html
Logs: <stdout>
Config file(s): /etc/rabbitmq/rabbitmq.conf
Starting broker...2020-01-08 22:31:52.166 [info] <0.277.0>
node : rabbit@1586b4698736
home dir : /var/lib/rabbitmq
config file(s) : /etc/rabbitmq/rabbitmq.conf
cookie hash : bwlnCFiUchzEkgAOsZwQ1w==
log(s) : <stdout>
database dir : /var/lib/rabbitmq/mnesia/rabbit@1586b4698736
2020-01-08 22:31:52.210 [info] <0.277.0> Running boot step pre_boot defined by app rabbit
...
...
...
2020-01-08 22:31:53.817 [info] <0.277.0> Setting up a table for connection tracking on this node: tracked_connection_on_node_rabbit@1586b4698736
2020-01-08 22:31:53.827 [info] <0.277.0> Setting up a table for per-vhost connection counting on this node: tracked_connection_per_vhost_on_node_rabbit@1586b4698736
2020-01-08 22:31:53.828 [info] <0.277.0> Running boot step routing_ready defined by app rabbit
2020-01-08 22:31:53.828 [info] <0.277.0> Running boot step pre_flight defined by app rabbit
2020-01-08 22:31:53.828 [info] <0.277.0> Running boot step notify_cluster defined by app rabbit
2020-01-08 22:31:53.829 [info] <0.277.0> Running boot step networking defined by app rabbit
2020-01-08 22:31:53.833 [info] <0.624.0> started TCP listener on [::]:5672
2020-01-08 22:31:53.833 [info] <0.277.0> Running boot step cluster_name defined by app rabbit
2020-01-08 22:31:53.833 [info] <0.277.0> Running boot step direct_client defined by app rabbit
2020-01-08 22:31:53.922 [info] <0.674.0> Management plugin: HTTP (non-TLS) listener started on port 15672
2020-01-08 22:31:53.922 [info] <0.780.0> Statistics database started.
2020-01-08 22:31:53.923 [info] <0.779.0> Starting worker pool 'management_worker_pool' with 3 processes in it
completed with 3 plugins.
2020-01-08 22:31:54.316 [info] <0.8.0> Server startup complete; 3 plugins started.
* rabbitmq_management
* rabbitmq_management_agent
* rabbitmq_web_dispatch
Then I go to http://localhost:15672 and the page doesn't load. No error is displayed.
The interesting thing is that it worked the last time I used it (about 3 weeks ago).
Can anyone give me some help?
Cheers!
Give this a try:
Step 1: go into the docker container
docker exec -it rabbitmq bash
Step 2: run this inside the container
rabbitmq-plugins enable rabbitmq_management
It works for me.
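If you would rather not open an interactive shell, a one-line equivalent should also work (assuming the container is still named rabbitmq, as in the question):
docker exec rabbitmq rabbitmq-plugins enable rabbitmq_management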
I got it working by simply upgrading Docker.
I was running Docker 18.09.7 and upgraded to 19.03.5.
In my case, clearing the browser cookies fixed this issue instantly.
I am trying to run the Consul docker container in host network mode, as suggested on Docker Hub, but I am unable to access the UI at port 8500.
My docker host IP address: 192.168.30.12
network interface which is used by host: ens192
Here is my docker run command:
docker run -d --net=host -v /home/docker/conf.json:/consul/config/config.json -v /home/docker/data/:/consul/data/ -e CONSUL_BIND_INTERFACE=ens192 -e CONSUL_CLIENT_INTERFACE=ens192 --name=consulserver1 -d consul agent -server -bootstrap-expect=1 -client 0.0.0.0 -bind=192.168.30.12
I also see the following errors in the docker logs:
==> Found address '192.168.30.12' for interface 'ens192', setting bind option...
==> Found address '192.168.30.12' for interface 'ens192', setting client option...
==> Starting Consul agent...
Version: '1.14.4'
Build Date: '2023-01-26 15:47:10 +0000 UTC'
Node ID: 'd8e91718-dcf3-70be-dd29-c558158959f0'
Node name: 'docker-try1'
Datacenter: 'dc1' (Segment: '<all>')
Server: true (Bootstrap: true)
Client Addr: [0.0.0.0] (HTTP: 8500, HTTPS: -1, gRPC: -1, gRPC-TLS: 8503, DNS: 8600)
Cluster Addr: 192.168.30.12 (LAN: 8301, WAN: 8302)
Gossip Encryption: false
Auto-Encrypt-TLS: false
HTTPS TLS: Verify Incoming: false, Verify Outgoing: false, Min Version: TLSv1_2
gRPC TLS: Verify Incoming: false, Min Version: TLSv1_2
Internal RPC TLS: Verify Incoming: false, Verify Outgoing: false (Verify Hostname: false), Min Version: TLSv1_2
==> Log data will now stream in as it occurs:
2023-02-17T15:18:30.052Z [WARN] agent: BootstrapExpect is set to 1; this is the same as Bootstrap mode.
2023-02-17T15:18:30.052Z [WARN] agent: Node name "docker-try1" will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.
2023-02-17T15:18:30.052Z [WARN] agent: bootstrap = true: do not enable unless necessary
2023-02-17T15:18:30.057Z [WARN] agent.auto_config: BootstrapExpect is set to 1; this is the same as Bootstrap mode.
2023-02-17T15:18:30.057Z [WARN] agent.auto_config: Node name "docker-try1" will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.
2023-02-17T15:18:30.057Z [WARN] agent.auto_config: bootstrap = true: do not enable unless necessary
2023-02-17T15:18:30.061Z [INFO] agent.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:d8e91718-dcf3-70be-dd29-c558158959f0 Address:192.168.30.12:8300}]"
2023-02-17T15:18:30.061Z [INFO] agent.server.raft: entering follower state: follower="Node at 192.168.30.12:8300 [Follower]" leader-address= leader-id=
2023-02-17T15:18:30.062Z [INFO] agent.server.serf.wan: serf: EventMemberJoin: docker-try1.dc1 192.168.30.12
2023-02-17T15:18:30.062Z [WARN] agent.server.serf.wan: serf: Failed to re-join any previously known node
2023-02-17T15:18:30.062Z [INFO] agent.server.serf.lan: serf: EventMemberJoin: docker-try1 192.168.30.12
2023-02-17T15:18:30.063Z [INFO] agent.router: Initializing LAN area manager
2023-02-17T15:18:30.063Z [WARN] agent.server.serf.lan: serf: Failed to re-join any previously known node
2023-02-17T15:18:30.063Z [INFO] agent.server: Adding LAN server: server="docker-try1 (Addr: tcp/192.168.30.12:8300) (DC: dc1)"
2023-02-17T15:18:30.063Z [INFO] agent.server.autopilot: reconciliation now disabled
2023-02-17T15:18:30.064Z [INFO] agent.server: Handled event for server in area: event=member-join server=docker-try1.dc1 area=wan
2023-02-17T15:18:30.064Z [INFO] agent.server.cert-manager: initialized server certificate management
2023-02-17T15:18:30.064Z [INFO] agent: Started DNS server: address=0.0.0.0:8600 network=udp
2023-02-17T15:18:30.065Z [INFO] agent: Started DNS server: address=0.0.0.0:8600 network=tcp
2023-02-17T15:18:30.065Z [INFO] agent: Starting server: address=[::]:8500 network=tcp protocol=http
2023-02-17T15:18:30.065Z [INFO] agent: Started gRPC listeners: port_name=grpc_tls address=[::]:8503 network=tcp
2023-02-17T15:18:30.065Z [INFO] agent: started state syncer
2023-02-17T15:18:30.065Z [INFO] agent: Consul agent running!
2023-02-17T15:18:37.152Z [WARN] agent.cache: handling error in Cache.Notify: cache-type=connect-ca-leaf error="No cluster leader" index=0
2023-02-17T15:18:37.152Z [ERROR] agent.server.cert-manager: failed to handle cache update event: error="leaf cert watch returned an error: No cluster leader"
2023-02-17T15:18:37.248Z [ERROR] agent.anti_entropy: failed to sync remote state: error="No cluster leader"
2023-02-17T15:18:39.483Z [WARN] agent.server.raft: heartbeat timeout reached, starting election: last-leader-addr= last-leader-id=
2023-02-17T15:18:39.483Z [INFO] agent.server.raft: entering candidate state: node="Node at 192.168.30.12:8300 [Candidate]" term=7
2023-02-17T15:18:39.486Z [INFO] agent.server.raft: election won: term=7 tally=1
2023-02-17T15:18:39.486Z [INFO] agent.server.raft: entering leader state: leader="Node at 192.168.30.12:8300 [Leader]"
2023-02-17T15:18:39.486Z [INFO] agent.server: cluster leadership acquired
2023-02-17T15:18:39.487Z [INFO] agent.server: New leader elected: payload=docker-try1
2023-02-17T15:18:39.493Z [INFO] agent.server.autopilot: reconciliation now enabled
2023-02-17T15:18:39.493Z [INFO] agent.leader: started routine: routine="federation state anti-entropy"
2023-02-17T15:18:39.493Z [INFO] agent.leader: started routine: routine="federation state pruning"
2023-02-17T15:18:39.493Z [INFO] agent.leader: started routine: routine="streaming peering resources"
2023-02-17T15:18:39.493Z [INFO] agent.leader: started routine: routine="metrics for streaming peering resources"
2023-02-17T15:18:39.493Z [INFO] agent.leader: started routine: routine="peering deferred deletion"
2023-02-17T15:18:39.493Z [INFO] connect.ca: initialized primary datacenter CA from existing CARoot with provider: provider=consul
2023-02-17T15:18:39.493Z [INFO] agent.leader: started routine: routine="intermediate cert renew watch"
2023-02-17T15:18:39.493Z [INFO] agent.leader: started routine: routine="CA root pruning"
2023-02-17T15:18:39.493Z [INFO] agent.leader: started routine: routine="CA root expiration metric"
2023-02-17T15:18:39.493Z [INFO] agent.leader: started routine: routine="CA signing expiration metric"
2023-02-17T15:18:39.493Z [INFO] agent.leader: started routine: routine="virtual IP version check"
2023-02-17T15:18:39.493Z [INFO] agent.leader: stopping routine: routine="virtual IP version check"
2023-02-17T15:18:39.493Z [INFO] agent.leader: stopped routine: routine="virtual IP version check"
2023-02-17T15:18:40.065Z [ERROR] agent.server.autopilot: Failed to reconcile current state with the desired state
2023-02-17T15:18:41.061Z [INFO] agent: Synced node info
I think I figured it out.
There was a firewall blocking TCP ports. As soon as I opened all the ports recommended in the Consul documentation (Consul Ports), it started working.
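For reference, a minimal sketch of opening those ports, assuming firewalld on the docker host (the port list follows the "Required Ports" table in the Consul documentation; adjust for your own firewall):
sudo firewall-cmd --permanent --add-port=8500/tcp                      # HTTP API / UI
sudo firewall-cmd --permanent --add-port=8300/tcp                      # server RPC
sudo firewall-cmd --permanent --add-port=8301/tcp --add-port=8301/udp  # LAN gossip
sudo firewall-cmd --permanent --add-port=8302/tcp --add-port=8302/udp  # WAN gossip
sudo firewall-cmd --permanent --add-port=8600/tcp --add-port=8600/udp  # DNS
sudo firewall-cmd --permanent --add-port=8503/tcp                      # gRPC TLS
sudo firewall-cmd --reload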
I can build the Dockerfile. When I do docker run -it path-to-image/tomcat9:latest and check the logs, there is no catalina.out, and the run fails with /bin/sh: ["catalina.sh",: command not found.
Here is my Dockerfile
FROM gitlab-registry.gs.mil/gets-development/docker/openjdk11
USER root
# Copy Tomcat and start
ADD imageFiles/apache-tomcat-9.0.65.tar.gz /usr/local/
RUN mv /usr/local/apache-tomcat-9.0.65/ /usr/local/tomcat
ENV WORKPATH /usr/local
WORKDIR $WORKPATH
ENV CATALINA_HOME /usr/local/tomcat
ENV CATALINA_BASE /usr/local/tomcat
ENV PATH $PATH:$CATALINA_HOME/bin:$CATALINA_HOME/lib
EXPOSE 8080
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]
Build command:
docker build -t gitlab-registry.gs.mil/gets-development/docker/tomcat9-test .
Start command:
docker run --name tomcatTest -it gitlab-registry.gs.mil/gets-development/docker/tomcat9-test:latest /bin/bash
Trying to connect to localhost from inside the docker container fails:
curl: (7) Failed to connect to localhost port 8080: Connection refused
There are no log files:
[root@b058163e9605 local]# cd tomcat/logs/
[root@b058163e9605 logs]# ls -als
total 0
0 drwxr-x--- 2 root root 6 Jul 14 12:28 .
0 drwxr-xr-x 9 root root 220 Aug 5 16:17 ..
[root@b058163e9605 logs]#
This tells me that Tomcat did not start. When I start Tomcat manually inside the container, it launches successfully:
[root@b058163e9605 bin]# ./catalina.sh run
.....
08-Aug-2022 13:12:02.934 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8080"]
08-Aug-2022 13:12:03.038 INFO [main] org.apache.catalina.startup.Catalina.load Server initialization in [1590] milliseconds
08-Aug-2022 13:12:03.204 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service [Catalina]
08-Aug-2022 13:12:03.205 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet engine: [Apache Tomcat/9.0.65]
08-Aug-2022 13:12:03.224 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/ROOT]
08-Aug-2022 13:12:03.877 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/ROOT] has finished in [652] ms
08-Aug-2022 13:12:03.879 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/docs]
08-Aug-2022 13:12:03.945 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/docs] has finished in [66] ms
08-Aug-2022 13:12:03.947 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/examples]
08-Aug-2022 13:12:04.559 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/examples] has finished in [613] ms
08-Aug-2022 13:12:04.562 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/host-manager]
08-Aug-2022 13:12:04.626 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/host-manager] has finished in [63] ms
08-Aug-2022 13:12:04.626 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/manager]
08-Aug-2022 13:12:04.717 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/manager] has finished in [90] ms
08-Aug-2022 13:12:04.733 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
08-Aug-2022 13:12:04.767 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [1728] milliseconds
Lastly, checking the Docker logs only shows me what I did inside the container, but no other information.
Please assist.
Your docker run command does not launch Tomcat but simply bash. Notice the last argument:
docker run --name tomcatTest -it gitlab-registry.gs.mil/gets-development/docker/tomcat9-test:latest /bin/bash
Change it to:
docker run --name tomcatTest gitlab-registry.gs.mil/gets-development/docker/tomcat9-test:latest
If you want a shell to investigate what is going on inside a running container, use
docker exec -it tomcatTest /bin/bash
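If the goal is to reach Tomcat from the host rather than from inside the container, the port also has to be published. A minimal sketch (the host-side port 8080 is an assumption):
docker run -d --name tomcatTest -p 8080:8080 gitlab-registry.gs.mil/gets-development/docker/tomcat9-test:latest
curl http://localhost:8080/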
I'm trying to deploy the ejabberd docker image in Kubernetes with the following folders mounted from a persistent volume:
/home/ejabberd/logs
/home/ejabberd/conf
/home/ejabberd/database
We populated the conf and database directories with our configuration files and with the database folder from the docker image, using an init container (roughly sketched below). After setting the permissions we were able to start the ejabberd service, and the logs say that the listeners (on ports 5222, 5269 and 5280) are ready.
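Roughly, the init container runs something like the following (the /pv mount path and the ejabberd user are placeholders for illustration, not the exact manifest):
# copy the defaults shipped in the image onto the persistent volume,
# then hand ownership to the ejabberd runtime user
cp -r /home/ejabberd/conf/. /pv/conf/
cp -r /home/ejabberd/database/. /pv/database/
chown -R ejabberd:ejabberd /pv/conf /pv/database /pv/logs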
However, when I check the XMPP server status inside the container using "ejabberdctl status", the output says the node is down.
===========ejabberd.log===================================================
2020-12-16 09:18:58.477630+00:00 [info] <0.3406.0>@mod_mqtt:init_topic_cache/2:611 Building MQTT cache for mydomain this may take a while
2020-12-16 09:18:59.087380+00:00 [info] <0.483.0>@ejabberd_mnesia:create/2:267 Creating Mnesia ram table 'bytestream'
2020-12-16 09:19:01.193203+00:00 [info] <0.126.0>@ejabberd_cluster_mnesia:wait_for_sync/1:123 Waiting for Mnesia synchronization to complete
2020-12-16 09:19:02.401537+00:00 [info] <0.126.0>@ejabberd_app:start/2:62 ejabberd 20.4.0 is started in the node 'ejabberd@mydomain' in 49.77s
2020-12-16 09:19:02.403414+00:00 [info] <0.601.0>@ejabberd_listener:init/4:159 Start accepting TCP connections at [::]:5222 for ejabberd_c2s
2020-12-16 09:19:02.403479+00:00 [info] <0.602.0>@ejabberd_listener:init/4:159 Start accepting TCP connections at [::]:5269 for ejabberd_s2s_in
2020-12-16 09:19:02.403956+00:00 [info] <0.603.0>@ejabberd_listener:init/4:159 Start accepting TLS connections at [::]:5443 for ejabberd_http
2020-12-16 09:19:02.403999+00:00 [info] <0.604.0>@ejabberd_listener:init/4:159 Start accepting TCP connections at [::]:5280 for ejabberd_http
2020-12-16 09:19:02.404098+00:00 [info] <0.605.0>@ejabberd_listener:init/4:159 Start accepting TCP connections at [::]:1883 for mod_mqtt
2020-12-16 09:19:02.404345+00:00 [info] <0.3418.0>@ejabberd_listener:init/4:159 Start accepting TCP connections at 10.42.8.15:7777 for mod_proxy65_stream
========================================ejabberdctl status===========================
~ $ ./bin/ejabberdctl status
Failed RPC connection to the node 'ejabberd@mydomain': nodedown
Commands to start an ejabberd node:
start - Start an ejabberd node in server mode
debug - Attach an interactive Erlang shell to a running ejabberd node
iexdebug - Attach an interactive Elixir shell to a running ejabberd node
live - Start an ejabberd node in live (interactive) mode
iexlive - Start an ejabberd node in live (interactive) mode, within an Elixir shell
foreground - Start an ejabberd node in server mode (attached)
Optional parameters when starting an ejabberd node:
--config-dir dir Config ejabberd: /home/ejabberd/conf
--config file Config ejabberd: /home/ejabberd/conf/ejabberd.yml
--ctl-config file Config ejabberdctl: /home/ejabberd/conf/ejabberdctl.cfg
--logs dir Directory for logs: /home/ejabberd/logs
--spool dir Database spool dir: /home/ejabberd/database/ejabberd@mydomain
--node nodename ejabberd node name: ejabberd@mydomain
If anyone has tried ejabberd on Kubernetes, please share your thoughts on this issue.
Thanks in advance.
I have created an ActiveMQ Dockerfile, and when I start the image I cannot reach the login screen. The URL is http://127.0.0.1:8161
Here is my Dockerfile; you can also see the URL in the log.
# Using jdk as base image
FROM openjdk:8-jdk-alpine
# Copy the whole directory of activemq into the image
COPY activemq /opt/activemq
# Set the working directory to the bin folder
WORKDIR /opt/activemq/bin
# Start up the activemq server
ENTRYPOINT ["./activemq","console"]
And here is the log from the console:
INFO: Using java '/usr/lib/jvm/java-1.8-openjdk/bin/java'
INFO: Starting in foreground, this is just for debugging purposes (stop process by pressing CTRL+C)
INFO: Creating pidfile /opt/activemq//data/activemq.pid
Java Runtime: IcedTea 1.8.0_212 /usr/lib/jvm/java-1.8-openjdk/jre
Heap sizes: current=390656k free=386580k max=5779968k
JVM args: -Djava.util.logging.config.file=logging.properties -Djava.security.auth.login.config=/opt/activemq//conf/login.config -Djava.awt.headless=true -Djava.io.tmpdir=/opt/activemq//tmp -Dactivemq.classpath=/opt/activemq//conf:/opt/activemq//../lib/: -Dactivemq.home=/opt/activemq/ -Dactivemq.base=/opt/activemq/ -Dactivemq.conf=/opt/activemq//conf -Dactivemq.data=/opt/activemq//data
Extensions classpath:
[/opt/activemq/lib,/opt/activemq/lib/camel,/opt/activemq/lib/optional,/opt/activemq/lib/web,/opt/activemq/lib/extra]
ACTIVEMQ_HOME: /opt/activemq
ACTIVEMQ_BASE: /opt/activemq
ACTIVEMQ_CONF: /opt/activemq/conf
ACTIVEMQ_DATA: /opt/activemq/data
Loading message broker from: xbean:activemq.xml
INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1@6be46e8f: startup date [Mon Nov 23 15:32:26 GMT 2020]; root of context hierarchy
INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[/opt/activemq/data/kahadb]
INFO | KahaDB is version 7
INFO | PListStore:[/opt/activemq/data/localhost/tmp_storage] started
INFO | Apache ActiveMQ 5.16.0 (localhost, ID:afee6bfb43ba-45805-1606145547047-0:1) is starting
INFO | Listening for connections at: tcp://afee6bfb43ba:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector openwire started
INFO | Listening for connections at: amqp://afee6bfb43ba:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector amqp started
INFO | Listening for connections at: stomp://afee6bfb43ba:61613?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector stomp started
INFO | Listening for connections at: mqtt://afee6bfb43ba:1883?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector mqtt started
INFO | Starting Jetty server
INFO | Creating Jetty connector
WARN | ServletContext@o.e.j.s.ServletContextHandler@ab7395e{/,null,STARTING} has uncovered http methods for path: /
INFO | Listening for connections at ws://afee6bfb43ba:61614?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector ws started
INFO | Apache ActiveMQ 5.16.0 (localhost, ID:afee6bfb43ba-45805-1606145547047-0:1) started
INFO | For help or more information please see: http://activemq.apache.org
INFO | ActiveMQ WebConsole available at http://127.0.0.1:8161/
INFO | ActiveMQ Jolokia REST API available at http://127.0.0.1:8161/api/jolokia/
What have I done wrong? Thanks.
As of ActiveMQ 5.16.0, the Jetty endpoint host value was changed from 0.0.0.0 to 127.0.0.1; see AMQ-7007.
To overcome this in my Dockerfile I use CMD ["/bin/sh", "-c", "bin/activemq console -Djetty.host=0.0.0.0"]
ActiveMQ startup is done by ENTRYPOINT in your Dockerfile, so CMD ["/bin/sh", "-c", "bin/activemq console -Djetty.host=0.0.0.0"] won't work.
The correct usage with ENTRYPOINT is:
ENTRYPOINT ["./activemq","console","-Djetty.host=0.0.0.0"]
Background: I have a system behind a proxy/firewall. I can access docker to pull images, but do not have a username/password to access any other sites. Therefore my docker container of sonarqube is essentially offline.
Question: The docker container starts fine the first time but fails to restart. This happens in two ways: either a manually installed plugin reports an error that it failed to download from the update-center URL, or the server simply starts shutting down immediately after it starts. Either way the application fails, which closes the container. I do not seem to be able to (or do not understand how to) modify sonar.properties to disable the update center, and I need guidance.
I have asked on the GitHub repository for the container without much help: https://github.com/SonarSource/docker-sonarqube/issues/76#issuecomment-364563967 The '-Dsonar.updatecenter.activate=false' option does not work when I try it.
Simply shutting down
2018.02.09 21:45:38 INFO ce[][o.s.p.ProcessEntryPoint] Starting ce
2018.02.09 21:45:38 INFO ce[][o.s.ce.app.CeServer] Compute Engine starting up...
2018.02.09 21:45:39 INFO ce[][o.e.p.PluginsService] no modules loaded
2018.02.09 21:45:39 INFO ce[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.index.reindex.ReindexPlugin]
2018.02.09 21:45:39 INFO ce[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.join.ParentJoinPlugin]
2018.02.09 21:45:39 INFO ce[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.percolator.PercolatorPlugin]
2018.02.09 21:45:39 INFO ce[][o.e.p.PluginsService] loaded plugin org.elasticsearch.transport.Netty4Plugin]
2018.02.09 21:45:41 INFO ce[][o.s.s.e.EsClientProvider] Connected to local Elasticsearch: [127.0.0.1:9001]
2018.02.09 21:45:41 INFO ce[][o.sonar.db.Database] Create JDBC data source for jdbc:postgresql://pgsonar:5432/sonar
2018.02.09 21:45:43 INFO ce[][o.s.s.p.ServerFileSystemImpl] SonarQube home: /opt/sonarqube
2018.02.09 21:45:43 INFO ce[][o.s.c.c.CePluginRepository] Load plugins
2018.02.09 21:45:45 INFO ce[][o.s.c.q.PurgeCeActivities] Delete the Compute Engine tasks created before Sun Aug 13 21:45:45 UTC 2017
2018.02.09 21:45:45 INFO ce[][o.s.ce.app.CeServer] Compute Engine is operational
2018.02.09 21:45:45 INFO app[][o.s.a.SchedulerImpl] Process[ce] is up
2018.02.09 21:45:45 INFO app[][o.s.a.SchedulerImpl] SonarQube is up
2018.02.09 21:47:12 INFO app[][o.s.a.SchedulerImpl] Stopping SonarQube
2018.02.09 21:47:13 INFO ce[][o.s.p.StopWatcher] Stopping process
2018.02.09 21:47:13 INFO ce[][o.s.ce.app.CeServer] Compute Engine is stopping...
2018.02.09 21:47:13 INFO ce[][o.s.c.t.CeProcessingSchedulerImpl] Waiting for workers to finish in-progress tasks
2018.02.09 21:47:14 INFO ce[][o.s.ce.app.CeServer] Compute Engine is stopped
2018.02.09 21:47:15 INFO app[][o.s.a.SchedulerImpl] Process [ce] is stopped
2018.02.09 21:47:15 INFO web[][o.s.p.StopWatcher] Stopping process
2018.02.09 21:47:18 INFO app[][o.s.a.SchedulerImpl] Process [web] is stopped
2018.02.09 21:47:18 INFO app[][o.s.a.SchedulerImpl] Process [es] is stopped
2018.02.09 21:47:18 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
2018.02.09 21:47:18 WARN app[][o.s.a.p.AbstractProcessMonitor] Process exited with exit value [es]: 143
chown: cannot access '/opt/sonarqube/temp/README.txt': No such file or directory
I will update later with the failed-download logs (no access to the logs at this exact moment).
Regarding the README.txt issue, you have to create a volume and mount the temp folder (note that I use the postgres setup from anorak:girl). You can then start and stop with no problems.
sudo docker volume create sonarqube-temp
sudo docker run -d --name sonarqube --link sonar-postgres:pgsonar -p 9000:9000 -e SONARQUBE_JDBC_USERNAME=sonar -e SONARQUBE_JDBC_PASSWORD='secure' -e SONARQUBE_JDBC_URL=jdbc:postgresql://pgsonar:5432/sonar -v sonarqube-temp:/opt/sonarqube/temp sonarqube:lts
Regarding the UpdateCenter issue, the workaround is to specify the configuration with the run command (this is specific to Godin's docker container for SonarQube, through his run.sh script):
sudo docker run -d --name sonarqube --link sonar-postgres:pgsonar -p 9000:9000 -e SONARQUBE_JDBC_USERNAME=sonar -e SONARQUBE_JDBC_PASSWORD='secure' -e SONARQUBE_JDBC_URL=jdbc:postgresql://pgsonar:5432/sonar -v sonarqube-temp:/opt/sonarqube/temp sonarqube:lts -Dsonar.updatecenter.activate=false
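If passing the flag on the run command is not an option, another possible sketch is to keep a sonar.properties file on the host containing sonar.updatecenter.activate=false and mount it over the container's copy (the conf path below assumes the standard image layout, so treat it as an assumption for this setup):
# sonar.properties on the host contains the single line: sonar.updatecenter.activate=false
sudo docker run -d --name sonarqube --link sonar-postgres:pgsonar -p 9000:9000 -e SONARQUBE_JDBC_USERNAME=sonar -e SONARQUBE_JDBC_PASSWORD='secure' -e SONARQUBE_JDBC_URL=jdbc:postgresql://pgsonar:5432/sonar -v $(pwd)/sonar.properties:/opt/sonarqube/conf/sonar.properties -v sonarqube-temp:/opt/sonarqube/temp sonarqube:lts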