I am new to WildFly and Docker.
I am trying to build a test WildFly cluster.
I am using Docker Compose for orchestration.
The following is my docker-compose.yml file:
node:
  image: wildfly-mgmt
  links:
    - lb:lb
lb:
  image: wildfly-cluster-httpd
  ports:
    - "9090:80"
After running docker-compose up,
I cannot see the nodes on the mod_cluster manager page at
http://localhost:9090/mod_cluster_manager
The page is blank; somehow the mod_cluster manager is not able to see the nodes.
Dockerfile for the mod_cluster load balancer:
FROM fedora:latest
RUN yum -y update
RUN yum -y install httpd mod_cluster
RUN yum clean all
RUN sed -i 's|LoadModule proxy_balancer_module|# LoadModule proxy_balancer_module|' /etc/httpd/conf.modules.d/00-proxy.conf
ADD mod_cluster.conf /etc/httpd/conf.d/mod_cluster.conf
EXPOSE 80
CMD ["/sbin/httpd", "-DFOREGROUND"]
mod_cluster.conf:
LoadModule slotmem_module modules/mod_slotmem.so
LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
LoadModule advertise_module modules/mod_advertise.so
LoadModule manager_module modules/mod_manager.so

<IfModule manager_module>
  Maxhost 100
  ServerName localhost
  <VirtualHost *:80>
    <Directory />
      Require all granted
    </Directory>
    <Location /mod_cluster_manager>
      SetHandler mod_cluster-manager
      Require all granted
    </Location>
    KeepAliveTimeout 60
    ManagerBalancerName mycluster
    EnableMCPMReceive On
    ServerAdvertise On
  </VirtualHost>
</IfModule>
I can see the servers running.
The docker ps command shows the two containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b613166f4236 wildfly-mgmt "/opt/jboss/wildfly/b" 18 hours ago Up 18 hours 8080/tcp dockercomposecluster_node_1
963a728bae70 wildfly-cluster-httpd "/sbin/httpd -DFOREGR" 18 hours ago Up 18 hours 0.0.0.0:9090->80/tcp dockercomposecluster_lb_1
The console log also shows the node starting:
node_1 | 19:43:23,828 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://0.0.0.0:9990/management
node_1 | 19:43:23,828 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://0.0.0.0:9990
node_1 | 19:43:23,829 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full 10.1.0.Final (WildFly Core 2.2.0.Final) started in 75208ms - Started 331 of 577 services (393 services are lazy, passive or on-demand)
But the mod_cluster manager is not able to see the nodes. Can anyone please point out what is wrong here? I am really new to this.
For debugging, you can run docker exec -it <containername> bash for an interactive terminal. This should put you inside the container. From there, can you do telnet <containername> <port>? (You probably have to install telnet first.) Or docker inspect <containername> the container you want to reach and use its IP.
If you can't telnet, have you tried starting them on the same Docker network?
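For instance, here is a minimal sketch of the same compose file using a user-defined network; the version-2 syntax and the network name cluster-net are my assumptions, not a confirmed fix. Both containers can then resolve each other by service name.
version: "2"
services:
  node:
    image: wildfly-mgmt
    networks:
      - cluster-net
  lb:
    image: wildfly-cluster-httpd
    ports:
      - "9090:80"
    networks:
      - cluster-net
# user-defined network shared by both services,
# so "node" and "lb" resolve as hostnames
networks:
  cluster-net: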
Related
I have a CapRover instance on a DigitalOcean droplet that I created. I want to use the CapRover instance to run the CapRover sample apps.
I opened the DigitalOcean droplet web console in order to run a CapRover instance.
I ran the following lines of code:
ufw allow 80,443,3000,996,7946,4789,2377/tcp; ufw allow 7946,4789,2377/udp;
and got this:
Skipping adding existing rule
Skipping adding existing rule (v6)
Skipping adding existing rule
Skipping adding existing rule (v6)
I then ran this:
docker run -p 80:80 -p 443:443 -p 3000:3000 -v /var/run/docker.sock:/var/run/docker.sock -v /captain:/captain caprover/caprover
I got this:
Unable to find image 'caprover/caprover:latest' locally
latest: Pulling from caprover/caprover
Digest: sha256:39c3f188a8f425775cfbcdc4125706cdf614cd38415244ccf967cd1a4e692b4f
Status: Downloaded newer image for caprover/caprover:latest
docker: Error response from daemon: driver failed programming external connectivity on endpoint priceless_sammet (9da9028cfc4873818f113458237ebd00f9c64fa648b853730a60b10bea39c720): Bind for 0.0.0.0:3000 failed: port is already allocated.
I tried changing the ports to:
docker run -p 81:81 -p 444:444 -p 3321:3321 -v /var/run/docker.sock:/var/run/docker.sock -v /captain:/captain caprover/caprover
and got this:
Captain Starting ...
Installing Captain Service ...
Installation of CapRover is starting...
For troubleshooting, please see: https://caprover.com/docs/troubleshooting.html
>>> Checking System Compatibility <<<
Docker Version passed.
Ubuntu detected.
X86 CPU detected.
Total RAM 1033 MB
Are your trying to run CapRover on a local machine or a machine without public IP?
In that case, you need to add this to your installation command:
-e MAIN_NODE_IP_ADDRESS='127.0.0.1'
Otherwise, if you are running CapRover on a VPS with public IP:
Your firewall may have been blocking an in-use port: 80
A simple solution on Ubuntu systems is to run "ufw disable" (security risk)
Or [recommended] just allowing necessary ports:
ufw allow 80,443,3000,996,7946,4789,2377/tcp; ufw allow 7946,4789,2377/udp;
See docs for more details on how to fix firewall issues
Finally, if you are an advanced user, and you want to bypass this check (NOT RECOMMENDED),
you can append the docker command with an addition flag: -e BY_PASS_PROXY_CHECK='TRUE'
Installation failed.
Error: Port seems to be closed: 80
at Request._callback (/usr/src/app/built/utils/CaptainInstaller.js:149:24)
at Request.self.callback (/usr/src/app/node_modules/request/request.js:185:22)
at Request.emit (events.js:400:28)
at Request.<anonymous> (/usr/src/app/node_modules/request/request.js:1154:10)
at Request.emit (events.js:400:28)
at IncomingMessage.<anonymous> (/usr/src/app/node_modules/request/request.js:1076:12)
at Object.onceWrapper (events.js:519:28)
at IncomingMessage.emit (events.js:412:35)
at endReadableNT (internal/streams/readable.js:1334:12)
at processTicksAndRejections (internal/process/task_queues.js:82:21)
How can I open ports 80, 443, and 3000 so that I can run the CapRover instance?
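For reference, a rough sketch of how one might check what is already bound to port 3000 (I have not run these on the droplet yet; they are suggestions only):
# see which host process is listening on port 3000
sudo lsof -i :3000
# or check whether another container already publishes it
docker ps --filter "publish=3000"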
Before upgrading my system, I was able to successfully connect to Mongo running in a Docker container using published ports. After upgrading, as shown in Case #1, connecting via published ports no longer works for me.
Case #1
~ docker run --rm -d -p 27017:27017 mongo:3.6
2594b7e5cbf481526589d221361c853338ff55ecb32d9e02eae17383960e971a
~ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2594b7e5cbf4 mongo:3.6 "docker-entrypoint.s…" 4 seconds ago Up 3 seconds 0.0.0.0:27017->27017/tcp dazzling_fermat
Robo3T Logs
Cannot connect to the MongoDB at localhost:27017.
Error:
Network is unreachable. Reason: network error while attempting to run command 'isMaster' on host 'localhost:27017'
~ sudo lsof -i -P -n | grep LISTEN
...
docker-pr 263637 root 4u IPv4 3723123 0t0 TCP *:27017 (LISTEN)
✘ ~ sudo ufw status
Status: inactive
Now I can only connect using the host networking stack.
Case #2
~ docker run --rm -d --network=host mongo:3.6
39929a8d50cc8554d256f7516d039621cd22ed8be86680ac0e1400809464b619
~ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
39929a8d50cc mongo:3.6 "docker-entrypoint.s…" 5 seconds ago Up 4 seconds admiring_grothendieck
Robo3T Logs
4:13:20 PM Info: Connecting to localhost:27017...
4:13:20 PM Info: Establish connection successful. Connection: localhost
Pre-upgrade:
Linux Mint 19 (Tricia)
Docker version 19.xx, I believe.
Post-upgrade:
~ lsb_release -a
No LSB modules are available.
Distributor ID: Linuxmint
Description: Linux Mint 20
Release: 20
Codename: ulyana
~ docker --version
Docker version 20.10.7, build 20.10.7-0ubuntu1~20.04.1
I verified there are no running firewalls (UFW, etc.), and I can connect from container to container when specifying a private Docker network for both the server and the client. What am I missing? How can I connect using published ports again? Thanks in advance.
Docker on Linux generally uses the host's DNS and modifies your iptables to provide the connectivity between the host and the container. If there's a problem with connectivity, in your case the most likely culprits are (in order of likelihood):
DNS entry missing for localhost, or the wrong IP version target. Try using 127.0.0.1 or ::1 as the hostname instead.
iptables rules are missing. Check the earlier link in my response for remediations and flags that can affect this.
The container might actually have issues starting up. Check the output of docker logs <container_id> for errors after you start it. I would say this option is unlikely, as things work under host networking, but don't discount this possibility too quickly.
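A rough sketch of those three checks on the host (assuming the mongo shell is installed; the container id is the one from your docker ps output):
# 1. bypass the "localhost" name and force IPv4
mongo --host 127.0.0.1 --port 27017
# 2. verify Docker's DNAT rule for the published port exists
sudo iptables -t nat -L DOCKER -n | grep 27017
# 3. make sure the container came up cleanly
docker logs 2594b7e5cbf4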
I have a docker-compose file that creates 3 different containers, but I want to combine those 3 containers into a single container/image instead of deploying them as multiple containers.
My current list of containers is as follows:
my main container, containing my code, which I built using a Dockerfile
the remaining 2 are Redis and Postgres containers, which I want to fold into the first.
Is there any way to do so?
First of all, running Redis, Postgres and your "main container" in one container is NOT best practice.
Typically you should have 3 separate containers (a single app per container) communicating over the network, as in the sketch below. Sometimes we want to run two or more lightweight services inside the same container, but Redis and Postgres aren't such services.
I recommend reading: best practices for building containers.
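For contrast, here is the recommended three-container layout as a docker-compose sketch (the service names, images and placeholder credential are assumptions about your setup):
version: "3"
services:
  main:
    build: .                # your main container, built from your Dockerfile
    depends_on:
      - redis
      - postgres
  redis:
    image: redis:6
  postgres:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: example   # placeholder credential
# all three services share the default compose network,
# so "main" reaches the others at hostnames redis and postgres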
However, it is possible to have multiple services in the same Docker container using the supervisord process management system.
Below I run both the redis and postgres services in one Docker container (it's similar to your issue) to illustrate how it works. It's for demonstration purposes only.
This is the directory structure; we only need a Dockerfile and supervisor.conf (the supervisord config file):
$ tree example_container/
example_container/
├── Dockerfile
└── supervisor.conf
First, I created a supervisord configuration file with redis and postgres services defined:
$ cat example_container/supervisor.conf
[supervisord]
nodaemon=true

[program:redis]
; command to run the redis service
command=redis-server
autorestart=true
stderr_logfile=/dev/stdout
stderr_logfile_maxbytes = 0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes = 0

[program:postgres]
; command to run the postgres service
command=/usr/lib/postgresql/12/bin/postgres -D /var/lib/postgresql/12/main/ -c config_file=/etc/postgresql/12/main/postgresql.conf
autostart=true
autorestart=true
stderr_logfile=/dev/stdout
stderr_logfile_maxbytes = 0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes = 0
user=postgres
environment=HOME="/var/lib/postgresql",USER="postgres"
Next I created a simple Dockerfile:
$ cat example_container/Dockerfile
FROM ubuntu:latest
ARG DEBIAN_FRONTEND=noninteractive
# Install supervisor, redis and postgres
RUN apt-get update && apt-get install -y supervisor redis-server postgresql-12
# Copying supervisor configuration file to container
ADD supervisor.conf /etc/supervisor.conf
# Initializing redis and postgres services using supervisord
CMD ["supervisord","-c","/etc/supervisor.conf"]
And then I built the docker image:
$ docker build -t example_container:v1 .
Finally, I ran and tested a Docker container using the image above:
$ docker run --name multi_services -dit example_container:v1
472c7b2eac7441360126f8fcd0cc80e0e63ac3039f8195715a3a400f6288a236
$ docker exec -it multi_services bash
root@472c7b2eac74:/# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.7 0.1 27828 23372 pts/0 Ss+ 10:04 0:00 /usr/bin/python3 /usr/bin/supervisord -c /etc/supervisor.conf
postgres 8 0.1 0.1 212968 28972 pts/0 S 10:04 0:00 /usr/lib/postgresql/12/bin/postgres -D /var/lib/postgresql/12/main/ -c config_file=/etc/postgresql/12/main/postgresql.conf
root 9 0.1 0.0 47224 6216 pts/0 Sl 10:04 0:00 redis-server *:6379
...
root@472c7b2eac74:/# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 9/redis-server *:6
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 8/postgres
tcp6 0 0 :::6379 :::* LISTEN 9/redis-server *:6
As you can see, it is possible to have multiple services in a single container, but this is NOT a recommended approach and should be used ONLY for testing.
Regarding Kubernetes: you can group your containers in a single Pod as a deployment unit.
A Pod is the smallest deployable unit of computing that you can create and manage in Kubernetes.
It is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers.
A Pod's contents are always co-located and co-scheduled, and run in a shared context.
That would be more helpful than trying to merge the containers together into one.
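A minimal illustrative Pod spec grouping two of your containers (the names and images are placeholders, not a production config):
apiVersion: v1
kind: Pod
metadata:
  name: app-with-redis
spec:
  containers:
  - name: main-app
    image: my-app:latest   # placeholder for your application image
  - name: redis
    image: redis:6
# the containers share the Pod's network namespace,
# so main-app can reach redis at localhost:6379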
I'm setting up a backup/sync within an Ubuntu network using rsync.
Assume a desktop (Ubuntu 18.04), IP: 10.0.0.13,
running Docker with two containers:
Client_A: 2001 -> 22/tcp, 8001 -> 80/tcp
Client_B: 2002 -> 22/tcp, 8002 -> 80/tcp
All 3 images are Ubuntu with apache2 installed and running.
Directory layout (each serves /var/www/html):
DesktopOS  (10.0.0.13:80):   /var/www/html/1.txt
Container1 (10.0.0.13:8001): /var/www/html/2.txt
Container2 (10.0.0.13:8002): /var/www/html/3.txt
All three txt files can be accessed in a browser.
When I try to pull 3.txt to Container1:
rsync -av -e 'ssh -p 2002' --rsh=ssh user@10.0.0.13:/var/www/html/ ~/BACKUP/
I receive 1.txt instead.
How can I access 3.txt from Container1?
Please use the IP address, since I am simulating a real network; in the real world each container might be a Docker host on its own device.
Finally I found that I had installed only the ssh client, not the ssh server.
Apart from that, the firewall was blocking the access.
# check ports 22, 2001, 2002 etc.
# from the netstat result: is it listening?
netstat -tlnp | grep 2002
Install the ssh server:
sudo apt install tasksel
sudo tasksel install openssh-server
For the firewall:
sudo ufw allow 2001,2002/tcp
and that solved it. Thanks for your patience, everyone who tried to answer.
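One more note, as an assumption about the original command: -e and --rsh are the same rsync option, so --rsh=ssh most likely overrode -e 'ssh -p 2002' and the transfer went to port 22 on the host, which would explain receiving 1.txt. With sshd running in Container2, the pull should look something like:
# run inside Container1; host port 2002 maps to Container2's sshd
rsync -av -e 'ssh -p 2002' user@10.0.0.13:/var/www/html/ ~/BACKUP/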
I have a service running in a Docker container (on my local machine). I can see the service URL in the Ambari service config.
Now I want to connect to that service from my local development environment.
I found I can connect to it from within the container, but when I use that URL outside, on my local machine, I get connection refused.
Cause: org.apache.http.conn.HttpHostConnectException: Connect to
xx.xx.xx.com:12008 [xx.xx.xx.com/195.169.98.101] failed: Connection refused
How do I connect to a service running inside a container from the outside?
In my case the code executes on my local machine.
If your container has mapped its port to the VM's port 12008, you need to make sure you have port-forwarded 12008 in your VirtualBox connection settings, as I mention in "How to connect mysql workbench to running mysql inside docker?":
VBoxManage controlvm "boot2docker-vm" natpf1 "tcp-port12008,tcp,,12008,,12008"
VBoxManage controlvm "boot2docker-vm" natpf1 "udp-port12008,udp,,12008,,12008"
The question needs more clarification, but I will answer with some assumptions.
I used an Ambari Docker image (chosen randomly based on popularity).
Then I started a cluster of size 3 as mentioned, and my amb-settings and docker ps looked like this:
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ amb-settings
NODE_PREFIX=amb
CLUSTER_SIZE=3
AMBARI_SERVER_NAME=amb-server
AMBARI_SERVER_IMAGE=hortonworks/ambari-server:latest
AMBARI_AGENT_IMAGE=hortonworks/ambari-agent:latest
DOCKER_OPTS=
AMBARI_SERVER_IP=172.17.0.6
CONSUL=amb-consul
CONSUL_IMAGE=sequenceiq/consul:v0.5.0-v6
EXPOSE_DNS=false
DRY_RUN=false
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d2483a74d919 hortonworks/ambari-agent:latest "/usr/sbin/init syste" 20 minutes ago Up 20 minutes amb2
4acaec766eaa hortonworks/ambari-agent:latest "/usr/sbin/init syste" 21 minutes ago Up 20 minutes amb1
47e9419de59f hortonworks/ambari-server:latest "/usr/sbin/init syste" 21 minutes ago Up 21 minutes 8080/tcp amb-server
548730bb1824 sequenceiq/consul:v0.5.0-v6 "/bin/start -server -" 22 minutes ago Up 22 minutes 53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp, 8500/tcp amb-consul
27c725af6531 sequenceiq/ambari "/usr/sbin/init" 23 minutes ago Up 23 minutes 8080/tcp awesome_tesla
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$
As of now, I can visit the Ambari server at http://172.17.0.6:8080/
This also works from my host computer. However, if you want it to be reachable from another computer on the same network, one option is to run haproxy to redirect:
localhost:8080 -> 172.17.0.6:8080
So, I created a small haproxy.cfg and Dockerfile to achieve this:
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ cat Dockerfile
FROM haproxy:1.6
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ cat haproxy.cfg
frontend localnodes
    bind *:8080
    mode http
    default_backend ambari

backend ambari
    mode http
    server ambari-server 172.17.0.6:8080 check
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ docker build --rm -t ambariproxy .
Sending build context to Docker daemon 9.635 MB
Step 1 : FROM haproxy:1.6
---> af749d0291b2
Step 2 : COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
---> Using cache
---> 60cdd2c7bb05
Successfully built 60cdd2c7bb05
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ docker run -d -p 8080:8080 ambariproxy
63dd026349bbb6752dbd898e1ae70e48a8785e792b35040e0d0473acb00c2834
Now if I browse to localhost:8080 or MY_HOST_IP:8080 I can see the Ambari server, and this should also work from computers on the same network.
Hope I managed to answer your question :)
Thanks,