Can't reach ActiveMQ from inside Docker container

I'm trying to access an ActiveMQ instance on my local machine from inside a Docker container that is also running on my machine. AMQ is listening on 0.0.0.0:61616. I tried configuring my program running in the container to use the IP address of docker0 as well as enp6s0; neither worked.
If, however, I use the --net=host option, it suddenly works, no matter which IP address I use. The problem is that I can't use that option in production, as the code that starts the container doesn't support it. So, unless it's possible to change the default network in the Dockerfile, I have to fix this a different way.
EDIT: My Dockerfile
FROM java:8-jre
RUN mkdir -p /JCloudService
COPY ./0.4.6-SNAPSHOT-SHADED/ /JCloudService
RUN apt-get update && apt-get install -y netcat nano
WORKDIR /JCloudService
CMD set -x; /bin/sh -c '/JCloudService/bin/JCloudScaleService'
And the run command: docker run -it jcs:latest. With this command it doesn't work; it only works if I add --net=host.

--net=host works because it tells Docker to put your container in the same networking stack as your host machine.
To connect to a service running on your machine, you need the IP of your host machine on the docker0 network. Run ip addr show docker0 on your host; you should then be able to use that IP with port 61616 to connect to the host from within the container.
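From inside the container, the host's docker0 address is also the container's default gateway, so it can be discovered without hard-coding 172.17.0.1. A minimal sketch (the nc check assumes netcat is installed in the image, as in the Dockerfile above):

```shell
# inside the container: the default gateway is the host's docker0 address
HOST_IP=$(ip route | awk '/^default/ {print $3}')
echo "$HOST_IP"   # typically 172.17.0.1 on the default bridge

# verify the broker port is reachable from the container
nc -zv "$HOST_IP" 61616
```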

Related

Question about needing host network in running pouchdb-server inside docker

I have this docker image that exposes port 5984 from a pouchdb-server running inside a docker container.
Here's what Dockerfile looks like:
FROM node:16-alpine
WORKDIR /pouchdb
RUN apk update
RUN npm install --location=global pouchdb-server
EXPOSE 5984
ENTRYPOINT ["pouchdb-server"]
CMD ["--port", "5984"]
Running the container using the default bridge network doesn't work:
docker run -d -v $(pwd)/pouchdb -p 5984:5984 pouchdb-server:v1
But upon running the container using the host docker network, it works like a charm.
docker run -d -v $(pwd)/pouchdb -p 5984:5984 --network host pouchdb-server:v1.
I understand that it removes the network isolation between the Docker and host networks, but this has the caveat of possible port conflicts.
My question is: is there any way to make this work without using the host network?
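A likely culprit (an assumption here, not confirmed in the thread) is that pouchdb-server binds to 127.0.0.1 inside the container, so the published port forwards traffic to an address nothing is listening on, while --network host hides the problem. Telling the server to listen on all interfaces via its --host flag should make the bridge network work:

```shell
# extra args replace the image's CMD and are appended to the ENTRYPOINT,
# so the server now listens on all interfaces inside the container
docker run -d -v $(pwd)/pouchdb -p 5984:5984 pouchdb-server:v1 --host 0.0.0.0 --port 5984
```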

How to connect with container inside VM

I'm having trouble connecting over SSH from my local machine to a Docker container that has openssh-server installed and exposed on port 22 (the default for an OpenSSH server). The container runs on a virtual machine.
Here is my Dockerfile:
FROM ubuntu:latest
RUN apt-get -y update
RUN apt-get -y install openssh-server
EXPOSE 22
After EXPOSE 22 in the Dockerfile, shouldn't I be able to connect, for example, through ssh://user@vmIP:22?
First of all, it is not ideal to connect via SSH to a running Docker container; read this to understand why: https://www.cloudbees.com/blog/ssh-into-a-docker-container-how-to-execute-your-commands. If you really want to do it anyway: the EXPOSE instruction is a way to document, for the Dockerfile maintainer or another developer, that a service most likely listens on that port. It does not map that port to the host where you are running the container.
In order to map a port from the container to the VM you can do this:
# Build the image in the directory where the Dockerfile is located
docker build . -t mycontainer:latest
# Run it in detached mode, mapping the container's port 22 to the VM's port 2222
docker run -d -p 2222:22 mycontainer:latest
Now port 2222 of the VM is mapped to port 22 of the running container, so ssh://user@vmIP:2222 would work.
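With that mapping in place, the connection from the local machine targets the VM's address and the forwarded port (the user name and VM IP below are placeholders):

```shell
# SSH to the container's sshd through the VM's forwarded port 2222
ssh -p 2222 user@<vm-ip>
```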

Docker port not being exposed

I'm using Windows and have pulled the Jenkins image successfully via
docker pull jenkins
I am running a new container via the following command, and it seems to start fine. But when I try to access the Jenkins page in my browser, I just get the error message below; I was expecting the Jenkins login page. I have the same issue with other images such as Redis, Couchbase, and JBoss/WildFly. What am I doing wrong? I'm new to Docker and following tutorials that describe the following command to expose ports (as do some answers here and the docs). Please advise. Thanks.
docker run -tid -p 127.0.0.1:8097:8097 --name jen1 --rm jenkins
In browser, just getting a normal 'Problem Loading page Error'.
The site could be temporarily unavailable or too busy.
First, it looks a little strange to use -tid. Since you're trying to run it detached, it'd be better to use just -d, and use -ti, for example, to access it via a shell: docker exec -ti jen1 bash.
Second, the container's localhost is not the same as the host's localhost, so I'd run the container without binding to 127.0.0.1. If you do want to use it, you can specify --net=host, which makes 127.0.0.1 the same inside and outside Docker.
Third, try accessing port 8080 first to get the initial admin password.
In summary:
docker run -d -p 8097:8080 --name jen1 --rm jenkins
Then,
http://172.17.0.2:8080/
Finally, unlock Jenkins by entering the initial admin password. You can find it in the startup logs: docker logs jen1
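The initial admin password can also be read straight out of the running container (the path below is where the official Jenkins image stores it):

```shell
# print the initial admin password from the container's Jenkins home
docker exec jen1 cat /var/jenkins_home/secrets/initialAdminPassword
```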
Take a look at Jenkins Dockerfile from here:
FROM openjdk:8-jdk
RUN apt-get update && apt-get install -y git curl && rm -rf /var/lib/apt/lists/*
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
ARG http_port=8080
ARG agent_port=50000
.....
.....
# for main web interface:
EXPOSE ${http_port}
# will be used by attached slave agents:
EXPOSE ${agent_port}
As you can see port 8080 is being exposed and not 8097.
Change your command to
docker run -tid -p 8097:8080 --name jen1 --rm jenkins
What your command does is connect your host's port 8097 to the image's port 8097, but how do you know that the image exposes/uses port 8097? (Spoiler: it doesn't.)
This image uses port 8080, so you want to map your local port 8097 to that one.
Change the command to this:
docker run -tid -p 127.0.0.1:8097:8080 --name jen1 --rm jenkins
Just tested your command with this small fix, and it works locally for me.

How do I advertise AND browse mDNS from within docker container?

I'm trying to create an Ubuntu 17.04-based Docker container that can browse mDNS on my network (outside of the Docker network) AND advertise on mDNS to my network (outside of the Docker network).
I want to be able to run this docker container on a macOS host (during my development) AND a Linux (Debian) host for production.
https://github.com/ianblenke/docker-avahi seems to have solved this for Linux hosts (utilizing the avahi daemon and mapping the /var/run/dbus volume to the host). When I'm developing on my MacBook, I would like to use mDNSResponder.
How do I create a container that can advertise and browse on my local network, that will also run on my macOS laptop and on a Linux server?
Here is what I have so far.
Dockerfile
FROM ubuntu:17.04
WORKDIR /app
RUN apt-get update && apt-get install -yq avahi-daemon avahi-utils libnss-mdns \
&& apt-get -qq -y autoclean \
&& apt-get -qq -y autoremove \
&& apt-get -qq -y clean
RUN update-rc.d avahi-daemon enable
COPY docker/etc/nsswitch.conf /etc/nsswitch.conf
COPY docker/etc/avahi-daemon.conf /etc/avahi/avahi-daemon.conf
COPY docker/start.sh /app
CMD ["/bin/bash","start.sh"]
start.sh
#!/bin/bash
service avahi-daemon restart
service avahi-daemon status
avahi-browse -a
nsswitch.conf
hosts: files mdns_minimal [NOTFOUND=return] dns
avahi-daemon.conf
...
enable-dbus=no
...
Running
docker run --net=host -it mdns1
* Restarting Avahi mDNS/DNS-SD Daemon avahi-daemon [ OK ]
Avahi mDNS/DNS-SD Daemon is running
Failed to create client object: Daemon not running
As you can see avahi-daemon is running, but avahi-browse doesn't think it is. Is this because I disabled dbus?
Running the same commands (except I keep enable-dbus=yes) inside a 17.04 virtualbox image on my mac things work just fine.
Update: it looks like you cannot do bridged networking on a macOS host. So is what I am trying to do impossible?
I'm currently trying to get avahi working inside a docker container and in my research came across this:
you can disable dbus in the Avahi settings configuration so it won't use it. Then, when you run Avahi in Docker, you must pass it the --no-rlimits flag and it'll work without compromising your container's security.
https://www.reddit.com/r/docker/comments/54ufz2/is_there_any_way_to_run_avahi_in_docker_without/
Hopefully this can help with your situation.
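One detail worth noting: avahi-browse talks to the daemon over D-Bus, which matches the "Failed to create client object" error once enable-dbus=no is set. A sketch of how start.sh could keep the bus available while applying the quoted --no-rlimits advice (an assumption, not verified on macOS):

```shell
#!/bin/bash
# keep D-Bus available: avahi-browse reaches avahi-daemon over the system bus
mkdir -p /var/run/dbus
dbus-daemon --system

# --no-rlimits skips the setrlimit calls that fail in unprivileged containers
avahi-daemon --daemonize --no-rlimits

avahi-browse -a
```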
For mDNS advertising/listening, we run dnssd inside Docker containers.
But! In order to be discoverable on a local network, the Docker container should have an IP address from that network, and proper routes from the network to the container must be configured.
If you do not have control over the network's default router, you can try the macvlan/ipvlan network driver. It allows you to assign multiple MAC/IP addresses to the same network interface.
In our case the network is Wi-Fi, so we had to use ipvlan, because macvlan does not work with Wi-Fi. In the wired case you should prefer macvlan.
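Creating such a network with the Docker CLI looks roughly like this (the subnet, gateway, parent interface, addresses, and image name are placeholders for your LAN):

```shell
# create a macvlan network bridged onto the host's eth0
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 lan

# give the container its own address on the LAN, visible to other hosts
docker run -d --network lan --ip 192.168.1.50 my-mdns-image
```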

Running 2 services

I'm building an image for GitHub's Linkurious project, based on an image already in the Hub for the Neo4j database. The Neo4j image automatically runs the server on port 7474, and my image runs on port 8000.
When I run my image, I publish both ports (could I do this with EXPOSE?):
docker run -d --publish=7474:7474 --publish=8000:8000 linkurious
But only my server seems to run; if I hit http://[ip]:7474/ I get nothing. Is there something special I have to do to make sure they both run?
* Edit I *
here's my Dockerfile:
FROM neo4j/neo4j:latest
RUN apt-get -y update
RUN apt-get install -y git
RUN apt-get install -y npm
RUN apt-get install -y nodejs-legacy
RUN git clone git://github.com/Linkurious/linkurious.js.git
RUN cd linkurious.js && npm install && npm run build
CMD cd linkurious.js && npm start
* Edit II *
To perhaps help explain my quandary, I've asked a different question.
EXPOSE is there to allow inter-container communication (within the same Docker daemon), with the docker run --link option.
Port mapping is there to map EXPOSEd ports to the host, to allow client-to-container communication. So you need --publish.
See also "Difference between “expose” and “publish” in docker".
See also an example with "Advanced Usecase with Docker: Connecting Containers"
Make sure, though, that the IP is the right one ($(docker-machine ip default)).
If you are using a VM (meaning, you are not using docker directly on a Linux host, but on a Linux VM with VirtualBox), make sure the mapped ports 7474 and 8000 are port forwarded from the host to the VM.
VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,,7474,,7474"
VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,,8000,,8000"
In the OP's case, this is using Neo4j: see "Neo4j with Docker", based on the neo4j/neo4j image and its Dockerfile:
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["neo4j"]
It is not meant to be used to install another service (like Node.js) on top, where the CMD cd linkurious.js && npm start would completely override the Neo4j base image's CMD (meaning Neo4j would never start).
It is meant to be run on its own:
# interactive with terminal
docker run -i -t --rm --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j
# as daemon running in the background
docker run -d --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j
And then used by another image, with a --link neo4j:neo4j directive.
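Concretely, the OP's setup could be split into two containers instead of one image (image names as in the question; --link is the legacy linking mechanism mentioned above):

```shell
# run neo4j on its own, from its own image
docker run -d --name neo4j -v $HOME/neo4j-data:/data -p 7474:7474 neo4j/neo4j

# run linkurious in a second container that can reach neo4j by hostname
docker run -d --name linkurious --link neo4j:neo4j -p 8000:8000 linkurious
```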
