How to run avahi in docker on a Linux host?

I am trying to set up avahi in a docker container running on a Linux host. The purpose is to let avahi announce a service of my own and for me to find the hostname and IP of the docker host.
So far avahi seems to run nicely in the container, but I cannot find my services when searching from outside the host.
I have googled a lot and there are suggestions on what to do, but they all seem to be contradictory and/or insecure.
This is what I have got so far.
docker-compose.yml
version: "3.7"
services:
avahi:
container_name: avahi
build:
context: ./config/avahi
dockerfile: DockerFile
network: host
DockerFile:
FROM alpine:3.13
RUN apk add --no-cache avahi avahi-tools
ADD avahi-daemon.conf /etc/avahi/avahi-daemon.conf
ADD psmb.service /etc/avahi/services/mpsu.service
ENTRYPOINT avahi-daemon --no-drop-root --no-rlimits
avahi-daemon.conf:
[server]
enable-dbus=no
psmb.service: (my service)
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">PSMB</name>
  <service>
    <type>_mqtt._tcp</type>
    <port>1883</port>
    <txt-record>info=MPS Service Host</txt-record>
  </service>
</service-group>
This is from the terminal when starting avahi:
> docker-compose up
Starting avahi ... done
Attaching to avahi
avahi | avahi-daemon 0.8 starting up.
avahi | WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
avahi | Loading service file /etc/avahi/services/mpsu.service.
avahi | Loading service file /etc/avahi/services/sftp-ssh.service.
avahi | Loading service file /etc/avahi/services/ssh.service.
avahi | Joining mDNS multicast group on interface eth0.IPv4 with address 172.18.0.2.
avahi | New relevant interface eth0.IPv4 for mDNS.
avahi | Joining mDNS multicast group on interface lo.IPv4 with address 127.0.0.1.
avahi | New relevant interface lo.IPv4 for mDNS.
avahi | Network interface enumeration completed.
avahi | Registering new address record for 172.18.0.2 on eth0.IPv4.
avahi | Registering new address record for 127.0.0.1 on lo.IPv4.
avahi | Server startup complete. Host name is 8f220b5ac449.local. Local service cookie is 1841391818.
avahi | Service "8f220b5ac449" (/etc/avahi/services/ssh.service) successfully established.
avahi | Service "8f220b5ac449" (/etc/avahi/services/sftp-ssh.service) successfully established.
avahi | Service "PSMB" (/etc/avahi/services/mpsu.service) successfully established.
So, how do I configure this so that my service can be found from outside the host?
I would like to get the host information for the host running Docker.
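For reference, this is how I search for it from another Linux machine on the LAN (assuming the avahi-utils/avahi-tools package is installed there):
# Browse and resolve all _mqtt._tcp announcements on the LAN, then exit
avahi-browse --resolve --terminate _mqtt._tcp
# Or resolve the docker host directly by name
avahi-resolve-host-name <hostname-of-docker-host>.local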

So, I ran across this project https://gitlab.com/ydkn/docker-avahi
Dockerfile:
# base image
ARG ARCH=amd64
FROM $ARCH/alpine:3
# args
ARG VCS_REF
ARG BUILD_DATE
# labels
LABEL maintainer="Florian Schwab <me@ydkn.io>" \
org.label-schema.schema-version="1.0" \
org.label-schema.name="ydkn/avahi" \
org.label-schema.description="Simple Avahi docker image" \
org.label-schema.version="0.1" \
org.label-schema.url="https://hub.docker.com/r/ydkn/avahi" \
org.label-schema.vcs-url="https://gitlab.com/ydkn/docker-avahi" \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.build-date=$BUILD_DATE
# install packages
RUN apk --no-cache --no-progress add avahi avahi-tools
# remove default services
RUN rm /etc/avahi/services/*
# disable d-bus
RUN sed -i 's/.*enable-dbus=.*/enable-dbus=no/' /etc/avahi/avahi-daemon.conf
# entrypoint
ADD docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT [ "docker-entrypoint.sh" ]
# default command
CMD ["avahi-daemon"]
# volumes
VOLUME ["/etc/avahi/services"]
you'll have to change/uncomment allow-interfaces and deny-interfaces in /etc/avahi/avahi-daemon.conf to "allow" whatever ethernet interface you are using, i.e.
avahi-daemon.conf:
...
use-ipv4=yes
use-ipv6=yes
allow-interfaces=eth1 # was commented out with eth0
deny-interfaces=eth0 # was commented out with eth1
...
and then you can simply run the container, with the ALLOW_INTERFACES=??? matching what was set in avahi-daemon.conf
run command:
docker run -d --restart always \
--net=host \
-e ALLOW_INTERFACES=eth1 \
-v $(pwd)/services:/etc/avahi/services \
ydkn/avahi:latest
Seems to work; I was able to ping computername.local from another computer on/connected to the router, where computername is whatever is on the terminal prompt, i.e. username@computername:~$
Looks like there is also a way to add a service file, mounted to /etc/avahi/services, so I believe you can customize the service name to something more useful. I need to figure out how to do that and will edit when I find out.
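A sketch of what that might look like (untested on my side; this is just the service file from the question with a %h wildcard added, written into the directory that the run command above mounts):
mkdir -p services
cat > services/psmb.service <<'EOF'
<?xml version="1.0" standalone='no'?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">PSMB on %h</name>
  <service>
    <type>_mqtt._tcp</type>
    <port>1883</port>
    <txt-record>info=MPS Service Host</txt-record>
  </service>
</service-group>
EOF
With replace-wildcards="yes", avahi should expand %h to the announcing host's name, so the service name follows the docker host.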

Related

Connecting with Portainer: "resource is online but isn't responding to connection attempts"

I installed Ubuntu on an older laptop. Now there is Docker with Portainer running, and I want to access Portainer from my main PC in the same network. When I try to connect to Portainer on the laptop where it is running (not via the localhost address), it works fine. But when I try to connect from my PC, I get a timeout. Windows diagnostics says: "resource is online but isn't responding to connection attempts". How can I open Portainer to my local network? Or is this a problem with Ubuntu?
So first check that you have an OpenSSH server running for SSH. Disable the firewall from the terminal: sudo ufw disable. Check whether your network card is running under the name eth0 with ifconfig; if not, change it by following the steps below.
Use netplan, which is the default these days; the file is /etc/netplan/00-installer-config.yaml. But before editing it you need to get the serial/MAC address.
Find the target device's MAC/hardware address using the lshw command:
lshw -C network
You'll see some output which looks like:
root@ys:/etc# lshw -C network
  *-network
       description: Ethernet interface
       physical id: 2
       logical name: eth0
       serial: dc:a6:32:e8:23:19
       size: 1Gbit/s
       capacity: 1Gbit/s
       capabilities: ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
       configuration: autonegotiation=on broadcast=yes driver=bcmgenet driverversion=5.8.0-1015-raspi duplex=full ip=192.168.0.112 link=yes multicast=yes port=MII speed=1Gbit/s
So then you take the serial
dc:a6:32:e8:23:19
Note the set-name option.
This works for the wifi section as well.
If you are using cable, you can delete everything and add the example below, only changing the serial ("MAC") to yours: sudo nano /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      match:
        macaddress: <YOUR MAC ID HERE>
      set-name: eth0
Then, to test this config, run:
netplan try
When you're happy with it:
netplan apply
Reboot your Ubuntu machine.
After the restart, stop the Portainer container:
sudo docker stop portainer
Remove the Portainer container:
sudo docker rm portainer
Now run it again with the latest version:
docker run -d -p 8000:8000 -p 9000:9000 \
--name=portainer --restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer-ce:2.13.1
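To sanity-check the result (not part of the original steps; 192.168.0.112 is just the address shown in the lshw output above), one might run:
# On the laptop: confirm Portainer listens on all interfaces, not only 127.0.0.1
sudo ss -tlnp | grep 9000
# On the main PC: check that the port is reachable over the LAN
curl -I http://192.168.0.112:9000
# If ufw is re-enabled later, open only the Portainer ports instead of disabling the firewall
sudo ufw allow 8000/tcp
sudo ufw allow 9000/tcp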

Docker-in-Docker issues with connecting to internal container network (Anchore Engine)

I am having issues when trying to connect to a docker-compose network from inside of a container. These are the files I am working with. The whole thing runs when I ./run.sh.
Dockerfile:
FROM docker/compose:latest
WORKDIR .
# EXPOSE 8228
RUN apk update
RUN apk add py-pip
RUN apk add jq
RUN pip install anchorecli
COPY dockertest.sh ./dockertest.sh
COPY docker-compose.yaml docker-compose.yaml
CMD ["./dockertest.sh"]
docker-compose.yaml
services:
  # The primary API endpoint service
  engine-api:
    image: anchore/anchore-engine:v0.6.0
    depends_on:
      - anchore-db
      - engine-catalog
    #volumes:
    #- ./config-engine.yaml:/config/config.yaml:z
    ports:
      - "8228:8228"
  ..................
  ## A NUMBER OF OTHER CONTAINERS THAT ANCHORE-ENGINE USES ##
  ..................
networks:
  default:
    external:
      name: anchore-net
dockertest.sh
echo "------------- INSTALL ANCHORE CLI ---------------------"
engineid=`docker ps | grep engine-api | cut -f 1 -d ' '`
engine_ip=`docker inspect $engineid | jq -r '.[0].NetworkSettings.Networks."cws-anchore-net".IPAddress'`
export ANCHORE_CLI_URL=http://$engine_ip:8228/v1
export ANCHORE_CLI_USER='user'
export ANCHORE_CLI_PASS='pass'
echo "System status"
anchore-cli --debug system status #This line throws error (see below)
run.sh:
#!/bin/bash
docker build . -t anchore-runner
docker network create anchore-net
docker-compose up -d
docker run --network="anchore-net" -v //var/run/docker.sock:/var/run/docker.sock anchore-runner
#docker network rm anchore-net
Error Message:
System status
INFO:anchorecli.clients.apiexternal:As Account = None
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): 172.19.0.6:8228
Error: could not access anchore service (user=user url=http://172.19.0.6:8228/v1): HTTPConnectionPool(host='172.19.0.6', port=8228): Max retries exceeded with url: /v1
(Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))
Steps:
run.sh builds container image and creates network anchore-net
the container has an entrypoint script, which does multiple things
firstly, it brings up the docker-compose network as detached FROM inside the container
secondly, it installs anchore-cli so I can run commands against the container network
lastly, it attempts to get a system status of the anchore-engine (the docker-compose network), but that's where I am running into HTTP request connection issues.
I am dynamically getting the IP of the API endpoint container of anchore-engine and setting the URL of the request to it. I have also tried passing those variables on the command line, such as:
anchore-cli --u user --p pass --url http://$engine_ip:8228/v1 system status
but that throws the same error.
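For debugging, a wait loop before the CLI call might help rule out startup timing (anchore-engine takes a while to come up; this is just a sketch and assumes nc is available in the docker/compose image):
# Wait until the engine API is actually accepting connections before querying it
until nc -z "$engine_ip" 8228; do
  echo "waiting for engine-api at $engine_ip:8228 ..."
  sleep 5
done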
For those of you who took the time to read through this, I highly appreciate any input you can give me as to where the issue may be lying. Thank you very much.

Run X application in a Docker container reliably on a server connected via SSH without "--net host"

Without a Docker container, it is straightforward to run an X11 program on a remote server using SSH X11 forwarding (ssh -X). I have tried to get the same thing working when the application runs inside a Docker container on a server. When SSH-ing into a server with the -X option, an X11 tunnel is set up and the environment variable "$DISPLAY" is automatically set to typically "localhost:10.0" or similar. If I simply try to run an X application in a Docker container, I get this error:
Error: GDK_BACKEND does not match available displays
My first idea was to actually pass the $DISPLAY into the container with the "-e" option like this:
docker run -ti -e DISPLAY=$DISPLAY name_of_docker_image
This helps, but it does not solve the issue. The error message changes to:
Unable to init server: Broadway display type not supported: localhost:10.0
Error: cannot open display: localhost:10.0
After searching the web, I figured out that I could do some xauth magic to fix the authentication. I added the following:
XSOCK=/tmp/.X11-unix
XAUTH=/tmp/.docker.xauth
xauth nlist $DISPLAY | sed -e 's/^..../ffff/' | xauth -f $XAUTH nmerge -
chmod 777 $XAUTH
docker run -ti -e DISPLAY=$DISPLAY -v $XSOCK:$XSOCK -v $XAUTH:$XAUTH \
-e XAUTHORITY=$XAUTH name_of_docker_image
However, this only works if I also add "--net host" to the docker command:
docker run -ti -e DISPLAY=$DISPLAY -v $XSOCK:$XSOCK -v $XAUTH:$XAUTH \
-e XAUTHORITY=$XAUTH --net host name_of_docker_image
This is not desirable since it makes the whole host network visible for the container.
What is now missing in order to get it fully to run on a remote server in a docker without "--net host"?
I figured it out. When you are connecting to a computer with SSH and using X11 forwarding, /tmp/.X11-unix is not used for the X communication and the part related to $XSOCK is unnecessary.
An X application rather uses the hostname in $DISPLAY, typically "localhost", and connects using TCP. This is then tunneled back to the SSH client. When using "--net host" for the Docker container, "localhost" is the same for the container as for the Docker host, and therefore it works fine.
When not specifying "--net host", Docker uses the default bridge network mode. This means that "localhost" means something else inside the container than it does on the host, and X applications inside the container will not be able to see the X server by referring to "localhost". So in order to solve this, one has to replace "localhost" with the actual IP address of the host. This is usually "172.17.0.1" or similar. Check "ip addr" for the "docker0" interface.
This can be done with a sed replacement:
DISPLAY=`echo $DISPLAY | sed 's/^[^:]*\(.*\)/172.17.0.1\1/'`
Additionally, the SSH server is commonly not configured to accept remote connections to this X11 tunnel. This must then be changed by editing /etc/ssh/sshd_config (at least in Debian) and setting:
X11UseLocalhost no
and then restart the SSH server, and re-login to the server with "ssh -X".
This is almost it, but there is one complication left. If any firewall is running on the Docker host, the TCP port associated with the X11-tunnel must be opened. The port number is the number between the : and the . in $DISPLAY added to 6000.
To get the TCP port number, you can run:
X11PORT=`echo $DISPLAY | sed 's/^[^:]*:\([^\.]\+\).*/\1/'`
TCPPORT=`expr 6000 + $X11PORT`
Then (if using ufw as firewall), open up this port for the Docker containers in the 172.17.0.0 subnet:
ufw allow from 172.17.0.0/16 to any port $TCPPORT proto tcp
All the commands together can be put into a script:
XSOCK=/tmp/.X11-unix
XAUTH=/tmp/.docker.xauth
xauth nlist $DISPLAY | sed -e 's/^..../ffff/' | sudo xauth -f $XAUTH nmerge -
sudo chmod 777 $XAUTH
X11PORT=`echo $DISPLAY | sed 's/^[^:]*:\([^\.]\+\).*/\1/'`
TCPPORT=`expr 6000 + $X11PORT`
sudo ufw allow from 172.17.0.0/16 to any port $TCPPORT proto tcp
DISPLAY=`echo $DISPLAY | sed 's/^[^:]*\(.*\)/172.17.0.1\1/'`
sudo docker run -ti --rm -e DISPLAY=$DISPLAY -v $XAUTH:$XAUTH \
-e XAUTHORITY=$XAUTH name_of_docker_image
Assuming you are not root and therefore need to use sudo.
Instead of sudo chmod 777 $XAUTH, you could run:
sudo chown my_docker_container_user $XAUTH
sudo chmod 600 $XAUTH
to prevent other users on the server from also being able to access the X server if they know what the /tmp/.docker.xauth file you created is for.
I hope this should make it properly work for most scenarios.
If you set X11UseLocalhost no, you're allowing even external traffic to reach the X11 socket. That is, traffic directed to an external IP of the machine can reach the SSHD X11 forwarding. There are still two security mechanisms which might apply (firewall, X11 auth). Still, I'd prefer leaving a system-global setting alone if you're fiddling with a user- or even application-specific issue like in this case.
Here's an alternative how to get X11 graphics out of a container and via X11 forwarding from the server to the client, without changing X11UseLocalhost in the sshd config.
+ docker container net ns +
| |
172.17.0.1 | 172.17.0.2 |
+- docker0 --------- veth123@if5 --|-- eth0@if6 |
| (bridge) (veth pair) | (veth pair) |
| | |
| 127.0.0.1 +-------------------------+
routing +- lo
| (loopback)
|
| 192.168.1.2
+- ens33
(physical host interface)
With the default X11UseLocalhost yes, sshd listens only on 127.0.0.1 on the root network namespace. We need to get the X11 traffic from inside the docker network namespace to the loopback interface in the root net ns. The veth pair is connected to the docker0 bridge and both ends can therefore talk to 172.17.0.1 without any routing. The three interfaces in the root net ns (docker0, lo and ens33) can communicate via routing.
We want to achieve the following:
+ docker container net ns +
| |
172.17.0.1 | 172.17.0.2 |
+- docker0 --------< veth123@if5 --|-< eth0@if6 -----< xeyes |
| (bridge) (veth pair) | (veth pair) |
v | |
| 127.0.0.1 +-------------------------+
routing +- lo >--ssh x11 fwd-+
(loopback) |
v
192.168.1.2 |
<-- ssh -- ens33 ------<-----+
(physical host interface)
We can let the X11 application talk directly to 172.17.0.1 to "escape" the docker net ns. This is achieved by setting the DISPLAY appropriately: export DISPLAY=172.17.0.1:10:
+ docker container net ns+
| |
172.17.0.1 | 172.17.0.2 |
docker0 --------- veth123@if5 --|-- eth0@if6 -----< xeyes |
(bridge) (veth pair) | (veth pair) |
| |
127.0.0.1 +-------------------------+
lo
(loopback)
192.168.1.2
ens33
(physical host interface)
Now, we add an iptables rule on the host to route from 172.17.0.1 to 127.0.0.1 in the root net ns:
iptables \
--table nat \
--insert PREROUTING \
--proto tcp \
--destination 172.17.0.1 \
--dport 6010 \
--jump DNAT \
--to-destination 127.0.0.1:6010
sysctl net.ipv4.conf.docker0.route_localnet=1
Note that we're using port 6010; that's the default port on which SSHD performs X11 forwarding here: it uses display number 10, which is added to the port "base" 6000. You can check which display number to use after you've established the SSH connection by checking the DISPLAY environment variable in a shell started by SSH.
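As a sketch (the variable names are mine, not from sshd), the port can be derived in the SSH session the same way as in the earlier answer:
# e.g. DISPLAY=localhost:10.0 -> display number 10 -> TCP port 6010
X11DISPLAYNR=$(echo "$DISPLAY" | sed 's/^[^:]*:\([^.]*\).*/\1/')
X11TCPPORT=$((6000 + X11DISPLAYNR))
echo "$X11TCPPORT"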
Maybe you can improve on the forwarding rule by only routing traffic from this container (veth end). Also, I'm not quite sure why the route_localnet is needed, to be honest. It appears that 127/8 is a strange source / destination for packets and therefore disabled for routing by default. You can probably also reroute traffic from the loopback interface inside the docker net ns to the veth pair, and from there to the loopback interface in the root net ns.
With the commands given above, we end up with:
+ docker container net ns +
| |
172.17.0.1 | 172.17.0.2 |
+- docker0 --------< veth123@if5 --|-< eth0@if6 -----< xeyes |
| (bridge) (veth pair) | (veth pair) |
v | |
| 127.0.0.1 +-------------------------+
routing +- lo
(loopback)
192.168.1.2
ens33
(physical host interface)
The remaining connection is established by SSHD when you establish a connection with X11 forwarding. Please note that you have to establish the connection before attempting to start an X11 application inside the container, since the application will immediately try to reach the X11 server.
There is one piece missing: authentication. We're now trying to access the X11 server as 172.17.0.1:10 inside the container. The container however doesn't have any X11 authentication, or not a correct one if you're bind-mounting the home directory (outside the container it's usually something like <hostname>:10). Use Ruben's suggestion to add a new entry visible inside the docker container:
# inside container
xauth add 172.17.0.1:10 . <cookie>
where <cookie> is the cookie set up by the SSH X11 forwarding, e.g. via xauth list.
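One way to read that value in the SSH session on the docker host (this exact filter is my addition; adjust :10 to your display number):
# The third column of the matching line is the <cookie> value to use inside the container
xauth list | grep ':10 ' | awk '{ print $3 }'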
You might also have to allow traffic ingress to 172.17.0.1:6010 in your firewall.
You can also start an application from the host inside the docker container network namespace:
sudo nsenter --target=<pid of process in container> --net su - $USER <app>
Without the su, you'll be running as root. Of course, you can also use another container and share the network namespace:
sudo docker run --network=container:<other container name/id> ...
The X11 forwarding mechanism shown above applies to the entire network namespace (actually, to everything connected to the docker0 bridge). Therefore, it will work for any applications inside the container network namespace.
In my case, I sit at "remote" and connect to a "docker_container" on "docker_host":
remote --> docker_host --> docker_container
To make debugging scripts easier with VScode, I installed SSHD into the "docker_container", listening on port 22, mapped to another port (say 1234) on the "docker_host".
So I can connect directly with the running container via ssh (from "remote"):
ssh -Y -p 1234 appuser@docker_host.local
(where appuser is the username within the "docker_container". I am working on my local subnet now, so I can reference my server via the .local mapping. For external IPs, just make sure your router maps this port to this machine.)
This creates a connection directly from my "remote" to "docker_container" via ssh.
remote --> (ssh) --> docker_container
Inside the "docker_container", I installed sshd with
sudo apt-get install openssh-server (you can add this to your Dockerfile to install at build time).
To allow X11 forwarding to work, edit the /etc/ssh/sshd_config file as such:
X11Forwarding yes
X11UseLocalhost no
Then restart ssh within the container. You should do this from a shell exec'd into the container from the "docker_host" (docker exec -ti docker_container bash), not while you are connected to the "docker_container" via ssh.
Restart sshd:
sudo service ssh restart
When you connect via ssh to the "docker_container", check the $DISPLAY environment variable. It should say something like
appuser@3f75a98d67e6:~/data$ echo $DISPLAY
3f75a98d67e6:10.0
Test by executing your favorite X11 graphics program from within "docker_container" via ssh (like cv2.imshow())
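A minimal check, assuming a Debian/Ubuntu based image (installing x11-apps is not part of the original setup):
# Inside the ssh session to the "docker_container"
sudo apt-get install -y x11-apps
xeyes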
I use an automated approach which can be executed entirely from within the docker container.
All that is needed is to pass the DISPLAY variable to the container, and mounting .Xauthority.
Moreover, it only uses the port from the DISPLAY variable, so it will also work in cases where DISPLAY=localhost:XY.Z.
Create a file, source-me.sh, with the following content:
# Find the containers address in /etc/hosts
CONTAINER_IP=$(grep $(hostname) /etc/hosts | awk '{ print $1 }')
# Assume the docker-host IP only differs in the last byte
SUBNET=$(echo $CONTAINER_IP | sed 's/\.[^\.]*$//')
DOCKER_HOST_IP=${SUBNET}.1
# Get the port from the DISPLAY variable
DISPLAY_PORT=$(echo $DISPLAY | sed 's/.*://' | sed 's/\..*//')
# Create the correct display-name
export DISPLAY=$DOCKER_HOST_IP:$DISPLAY_PORT
# Find an existing xauth entry for the same port (DISPLAY_PORT),
# and copy everything except the display-name
# filtering out entries containing /unix: which correspond to "same-machine" connections
ENTRY=$(xauth -n list | grep -v '/unix\:' | grep "\:${DISPLAY_PORT}" | head -n 1 | sed 's/^[^ ]* *//')
# Prepend our display-name
ENTRY="$DOCKER_HOST_IP:$DISPLAY_PORT $ENTRY"
# Add the new xauth entry.
# Because our .Xauthority file is mounted, a new file
# named ${HOME}/.Xauthority-n will be created, and a warning
# is printed on std-err
xauth add $ENTRY 2> /dev/null
# replace the content of ${HOME}/.Xauthority with that of ${HOME}/.Xauthority-n
# without creating a new i-node.
cat ${HOME}/.Xauthority-n > ${HOME}/.Xauthority
Create the following Dockerfile for testing:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y xauth
COPY source-me.sh /root/
RUN cat /root/source-me.sh >> /root/.bashrc
# xeyes for testing:
RUN apt-get install -y x11-apps
Build and run:
docker build -t test-x .
docker run -ti \
-v $HOME/.Xauthority:/root/.Xauthority:rw \
-e DISPLAY=$DISPLAY \
test-x \
bash
Inside the container, run:
xeyes
To run non-interactively, you must ensure source-me.sh is sourced:
docker run \
-v $HOME/.Xauthority:/root/.Xauthority:rw \
-e DISPLAY=$DISPLAY \
test-x \
bash -c "source source-me.sh ; xeyes"

ActiveMQ within Wildfly on a Docker container gives: Invalid "host" value "0.0.0.0" detected

I have Wildfly running in a Docker container.
Within Wildfly the messaging-activemq subsystem is active.
The subsystem and extension defaults are taken from the standalone-full.xml file.
After starting Wildfly, the following output is displayed:
[org.apache.activemq.artemis.jms.server] (ServerService Thread Pool -- 64)
AMQ121005: Invalid "host" value "0.0.0.0" detected for "http-connector" connector.
Switching to "eeb79399d447".
If this new address is incorrect please manually configure the connector to use the proper one.
The eeb79399d447 is the docker container id.
It's also impossible to connect to jms from my java client. While connecting it gives the following error.
AMQ214016: Failed to create netty connection: java.net.UnknownHostException: eeb79399d447
When I start wildfly on my local workstation (outside docker) the problem does not occur and I can connect to jms and send my messages.
Here are a few options. Options 1 & 2 may be what you asked for, but in the end they didn't work for me. Option 3, however, I think will better address your intent.
Option 1) You can do this by adding some scripting to your docker image (and not touching your standalone-full.xml). The basic idea (credit goes to GitHub user kwart) is to make a docker entry point that can determine the IPv4 address of the docker container before calling standalone.sh.
see : https://github.com/kwart/dockerfiles/tree/master/wildfly-ext and check out the usage of WILDFLY_BIND_ADDR. I forked it.
Notes:
GetIp.java will print out the IPv4 address ( and is copied into the container )
dockerentry-point.sh calls GetIp.java as needed
WILDFLY_BIND_ADDR=${WILDFLY_BIND_ADDR:-0.0.0.0}
if [ "${WILDFLY_BIND_ADDR}" = "auto" ]; then
WILDFLY_BIND_ADDR=`java -cp /opt/jboss GetIp`
fi
Option 2) Alternatively, using some script-fu, you may be able to do everything you need in a Dockerfile:
#CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-c", "standalone-full.xml", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]
CMD ["sh", "-c", "DOCKER_IPADDR=$(hostname --ip-address) && echo IP Address was $DOCKER_IPADDR && /opt/jboss/wildfly/bin/standalone.sh -c standalone-full.xml -b=$DOCKER_IPADDR -bmanagement=$DOCKER_IPADDR"]
Your mileage may vary.
I was working with the helloworld-jms quickstart from the WildFly docs, and had to jump through some extra hoops to get the JMS queue created. Even at that point, the sample java code wasn't able to connect with either option 1 or option 2.
Option 3) ( This worked for me btw ) Start your container with binding to 0.0.0.0, expose your 8080 port for your JMS client running on the host, and add an entry in your hosts' /etc/hosts file:
Dockerfile:
FROM jboss/wildfly
# CP foo.war /opt/jboss/wildfly/standalone/deployments/
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin --silent
RUN /opt/jboss/wildfly/bin/add-user.sh -a quickstartUser quickstartPwd1! --silent
RUN echo "quickstartUser=guest" >> /opt/jboss/wildfly/standalone/configuration/application-roles.properties
# use standalone-full.xml to enable the JMS feature
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-c", "standalone-full.xml", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]
Build & run ( expose 8080 if your client is on your host machine )
docker build -t mywildfly .
docker run -it --rm --name jboss -p 127.0.0.1:8080:8080 -p 127.0.0.1:9990:9990 mywildfly
Then on the host machine ( I'm running OSX; my jboss container's id was 46d04508b92b ) add an entry in your /etc/hosts for the docker-host-name that points to 127.0.0.1:
127.0.0.1 46d04508b92b # <-- replace with your container's id
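A small sketch of automating that entry (illustrative only; docker ps -q prints the short container id, which is what the container's hostname defaults to):
# Append a loopback alias for the container's hostname (its short id)
echo "127.0.0.1 $(docker ps -q --filter name=jboss)" | sudo tee -a /etc/hosts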
Once the wildfly container is running, you create/configure the testQueue via scripts or in the management console. My config came from https://github.com/wildfly/quickstart.git under the helloworld-jms folder:
docker cp configure-jms.cli jboss:/tmp/
docker exec jboss /opt/jboss/wildfly/bin/jboss-cli.sh --connect --file=/tmp/configure-jms.cli
and SUCCESS from mvn clean compile exec:java on the host machine (from within the helloworld-jms folder):
Mar 28, 2018 9:03:15 PM org.jboss.as.quickstarts.jms.HelloWorldJMSClient main
INFO: Found destination "jms/queue/test" in JNDI
Mar 28, 2018 9:03:16 PM org.jboss.as.quickstarts.jms.HelloWorldJMSClient main
INFO: Sending 1 messages with content: Hello, World!
Mar 28, 2018 9:03:16 PM org.jboss.as.quickstarts.jms.HelloWorldJMSClient main
INFO: Received message with content Hello, World!
You need to edit standalone-full.xml to cope with JMS behind NAT, and when you run the docker container, pass through the IP and port that your JMS client can use to connect, which is the IP of the machine running Docker in Docker's default config.

Reach host with Docker Compose

I have a Docker Compose v2 file which starts a container. I locally run a service on port 3001. I want to reach this service from the Docker container.
The Docker Compose file looks like this:
version: '2'
services:
  my-thingy:
    image: my-image:latest
    #network_mode: host #DOES not help
    environment:
      - THE_HOST_I_WANT_TO_CONNECT_TO=http://127.0.0.1:3001
    ports:
      - "3010:3010"
Now, how can I reach THE_HOST_I_WANT_TO_CONNECT_TO?
What I tried is:
Setting network_mode to host. This did not work. 127.0.0.1 could not be reached.
I can also see that I can reach the host from the container if I use the local IP of the host. A quick hack would be to use something like ifconfig | grep broadcast | awk '{print $2}' to obtain the IP and substitute that in Docker Compose. Since this IP can change on reconnect and different setups can have different ifconfig results, I am looking for a better solution.
I've used another hack/workaround from the comments in docker issue #1143. Seems to Work For Me™ for the time being... Specifically, I've added the following lines to my Dockerfile:
# - net-tools contains netstat, used to discover IP of Docker host server.
# NOTE: the netstat trick is to make Docker host server accessible
# from inside Docker container under name 'dockerhost'. Unfortunately,
# as of 2016.10, there's no official/robust way to do this when Docker host
# has no public IP/DNS entry. What is used here is built based on:
# - https://github.com/docker/docker/issues/1143#issuecomment-39364200
# - https://github.com/docker/docker/issues/1143#issuecomment-46105218
# See also:
# - http://stackoverflow.com/q/38936738/98528
# - https://github.com/docker/docker/issues/8395#issuecomment-200808798
# - https://github.com/docker/docker/issues/23177
RUN apt-get update && apt-get install -y net-tools
CMD (netstat -nr | grep '^0\.0\.0\.0' | awk '{print $2" dockerhost"}' >> /etc/hosts) && \
...old CMD...
With this, I can use dockerhost as the name of the host where Docker is installed. As mentioned above, this is based on:
https://github.com/docker/docker/issues/1143#issuecomment-39364200
(...) One way is to rely on the fact that the Docker host is reachable through the address of the Docker bridge, which happens to be the default gateway for the container. In other words, a clever parsing of ip route ls | grep ^default might be all you need in that case. Of course, it relies on an implementation detail (the default gateway happens to be an IP address of the Docker host) which might change in the future. (...)
https://github.com/docker/docker/issues/1143#issuecomment-46105218
(...) A lot of people like us are looking for a little tidbit like this
netstat -nr | grep '^0\.0\.0\.0' | awk '{print $2}'
where netstat -nr means:
Netstat prints information about the Linux networking subsystem.
(...)
--route , -r
Display the kernel routing tables.
(...)
--numeric , -n
Show numerical addresses instead of trying to determine symbolic host, port or user names.
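A sketch of the ip route variant quoted above, run from inside the container (the dockerhost alias and the curl check are illustrative, not from the original comments):
# Derive the docker host (the container's default gateway) and give it a name
DOCKERHOST=$(ip route ls | grep ^default | awk '{ print $3 }')
echo "$DOCKERHOST dockerhost" >> /etc/hosts
# The service on the host's port 3001 is then reachable as:
curl -sI "http://dockerhost:3001"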
This is a known issue with Docker Compose: see Document how to connect to Docker host from container #1143. The suggested solution of a dockerhost entry in /etc/hosts is not implemented.
I went for the solution with a shell variable as also suggested in a comment by amcdl on the issue:
Create a LOCAL_XX_HOST variable: export LOCAL_XX_HOST="http://$(ifconfig en0 inet | grep "inet " | awk -F'[: ]+' '{ print $2 }'):3001".
Then, for example, refer to this variable in docker-compose like this:
my-thingy:
  image: my-image:latest
  environment:
    - THE_HOST_I_WANT_TO_CONNECT_TO=${LOCAL_XX_HOST}
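Note that docker-compose only substitutes ${LOCAL_XX_HOST} if it is set in the shell that launches it, so the sequence would be (a sketch; en0 is the macOS-style interface name from the comment above):
export LOCAL_XX_HOST="http://$(ifconfig en0 inet | grep "inet " | awk -F'[: ]+' '{ print $2 }'):3001"
docker-compose up -d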
