How do I advertise AND browse mDNS from within a Docker container?

I'm trying to create an Ubuntu 17.04 based Docker container that can browse mDNS on my network (outside of the Docker network) AND advertise over mDNS to my network (outside of the Docker network).
I want to be able to run this container on a macOS host (during development) AND on a Linux (Debian) host in production.
https://github.com/ianblenke/docker-avahi seems to have solved this for Linux hosts (utilizing the Avahi daemon and mapping the /var/run/dbus volume from the host). When I'm developing on my MacBook, I would like to use mDNSResponder.
How do I create a container that can advertise and browse on my local network, and that will run both on my macOS laptop and on a Linux server?
Here is what I have so far.
Dockerfile
FROM ubuntu:17.04
WORKDIR /app
RUN apt-get update && apt-get install -yq avahi-daemon avahi-utils libnss-mdns \
&& apt-get -qq -y autoclean \
&& apt-get -qq -y autoremove \
&& apt-get -qq -y clean
RUN update-rc.d avahi-daemon enable
COPY docker/etc/nsswitch.conf /etc/nsswitch.conf
COPY docker/etc/avahi-daemon.conf /etc/avahi/avahi-daemon.conf
COPY docker/start.sh /app
CMD ["/bin/bash","start.sh"]
start.sh
#!/bin/bash
service avahi-daemon restart
service avahi-daemon status
avahi-browse -a
nsswitch.conf
hosts: files mdns_minimal [NOTFOUND=return] dns
avahi-daemon.conf
...
enable-dbus=no
...
Running
docker run --net=host -it mdns1
* Restarting Avahi mDNS/DNS-SD Daemon avahi-daemon [ OK ]
Avahi mDNS/DNS-SD Daemon is running
Failed to create client object: Daemon not running
As you can see avahi-daemon is running, but avahi-browse doesn't think it is. Is this because I disabled dbus?
Running the same commands (except keeping enable-dbus=yes) inside a 17.04 VirtualBox image on my Mac, everything works just fine.
Update: it looks like you cannot do bridged networking on a macOS host. So is what I am trying to do impossible?

I'm currently trying to get Avahi working inside a Docker container, and in my research I came across this:
you can in the Avahi settings configuration disable dbus so it won't
use it. Then when you run Avahi in Docker you must pass it the
--no-rlimits flag and it'll work without compromising your container's security.
https://www.reddit.com/r/docker/comments/54ufz2/is_there_any_way_to_run_avahi_in_docker_without/
Hopefully this can help with your situation.
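Based on that suggestion, here is a minimal sketch of a start.sh that launches the daemon directly with that flag (a sketch, assuming enable-dbus=no is set in avahi-daemon.conf). Note that avahi-utils such as avahi-browse talk to the daemon over D-Bus, which would explain the "Failed to create client object" error above: with D-Bus disabled, the daemon can still announce services, but avahi-browse cannot connect to it.
#!/bin/bash
# Sketch: run avahi-daemon in the foreground with resource limits disabled,
# as suggested in the Reddit thread above.
# Assumes enable-dbus=no in /etc/avahi/avahi-daemon.conf.
avahi-daemon --no-rlimits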

For mDNS advertising/listening we run
dnssd
inside Docker containers.
But! In order to be discoverable on a local network,
the Docker container must have an IP address from that network, and proper routes from the network to the Docker container must be configured.
If you do not have control over the default router of the network,
you can try to use the macvlan/ipvlan network driver, as sketched below.
It allows you to assign multiple MAC/IP addresses to the same physical network interface.
In our case the network was Wi-Fi, so we had to use ipvlan, because macvlan does not work with Wi-Fi. In the wired case you should prefer macvlan.
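A minimal sketch of the macvlan variant (the parent interface eth0, the subnet, and the addresses are assumptions; substitute the values for your own LAN):
# Create a macvlan network bridged onto the host's eth0
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 lan
# The container now gets its own address on the LAN, so peers can discover it
docker run --rm --network lan --ip 192.168.1.200 -it ubuntu:17.04 bash
For the Wi-Fi case, the same shape works with -d ipvlan (plus -o ipvlan_mode=l2) instead of -d macvlan.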

Related

How to connect with container inside VM

I have an issue connecting via SSH from my local machine to a Docker container on which I have openssh-server installed and port 22 (the default for an OpenSSH server) exposed; this container runs on a virtual machine.
Here is my dockerfile:
FROM ubuntu:latest
RUN apt-get -y update
RUN apt-get -y install openssh-server
EXPOSE 22
After exposing 22 in the Dockerfile, shouldn't I be able to connect, for example, through ssh://user@vmIP:22?
First of all, it is not ideal to connect via SSH to a running Docker container; read this so you can understand why: https://www.cloudbees.com/blog/ssh-into-a-docker-container-how-to-execute-your-commands. Now, if you really want to do that: the EXPOSE instruction is a way to document, for the Dockerfile maintainer or another dev, that you are most likely to have a service running on that port. It will not map that port to the host where you are running the container.
In order to map a port from the container to the VM you can do this:
#Build the container in the same directory where the Dockerfile is located
docker build . -t mycontainer:latest
#Run it mapping port 22 to the VM's port 2222 in detached mode
docker run -d -p 2222:22 mycontainer:latest
Now you have port 2222 of the VM mapped to the running container's port 22, so ssh://user@vmip:2222 would work.
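Note that the Dockerfile above installs openssh-server but never starts it, so nothing is listening on port 22 yet. A minimal sketch of a Dockerfile that actually runs sshd (the user name and password here are placeholders, not from the question):
FROM ubuntu:latest
RUN apt-get -y update && apt-get -y install openssh-server \
 && mkdir -p /var/run/sshd \
 && useradd -m -s /bin/bash user \
 && echo 'user:changeme' | chpasswd
EXPOSE 22
# Run sshd in the foreground so the container stays up
CMD ["/usr/sbin/sshd", "-D"]
After building and running it with the commands above, connect with ssh -p 2222 user@<vm-ip>.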

Upload speed inside Docker Container is limited to 4 Mbit/s

I am new to Docker, and I have a Docker container running for a set of students with some version-specific compilers; it's part of a virtual laboratory setup.
Everything is fine with the setup except for the network. I have a 200 Mbps connection, and a speed test done on my phone on the same network confirms it.
I did a speed test on the host machine where the Docker container is running (Ubuntu 20.04 LTS). It is all good.
From inside the Docker container running on that host, I did a speed test with the same server ID 9898, and with an auto-selected server too.
The upload speed inside the Docker container is limited to 4 Mbit/s somehow. I cannot find a reason for this anywhere.
I have seen recently that many students experienced connection drops while trying to connect to our SSH server. I believe this has something to do with the bandwidth limit.
The docker run command I am using to run this container build is as follows.
$ sudo docker run -p 7766:22 --detach --name lahtp-serv-3 --hostname lahtp-server --mount source=lahtp-3-storage-hdd,target=/var/lahtp-storage main:0.1
I asked a few people, who suggested running the container with --net=host, which uses the host network instead of the Docker network. I would like to know why the Docker container limits the upload bandwidth, and how using the host network instead of the Docker network fixes the issue.
Update #1:
I tried to spawn a new Ubuntu 18.04 container with the following command:
$ sudo docker run --net=host -it ubuntu:18.04 /bin/bash
Once inside the container, I installed the following to run the speedtest.
root@lahtp-server:/# apt-get update && apt-get upgrade -y && apt-get install build-essential openssh-server speedtest-cli
Once the installation was done, here are the results.
But adding --net=host did not change the issue. The upload speed is still 4 Mbit/s.
How to remove this bandwidth throttling?
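For what it's worth, a diagnostic sketch (an assumption about where to look, not a confirmed cause): a hard per-container limit like this often comes from traffic-shaping rules, which can be inspected on the host.
# Look for a shaping qdisc (e.g. tbf or htb) on the Docker bridge
tc qdisc show dev docker0
# And on the per-container veth interfaces
tc -s qdisc show | grep -A2 veth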
Update #2
I spawned a new Ubuntu 14.04 Docker container using the following command:
$ sudo docker run -it ubuntu:14.04 /bin/bash
Once the container was up, I installed the following:
$ apt-get install python3-dev python3-pip
$ pip3 install speedtest-cli
I tested inside this container, and here are the results:
NO THROTTLING.
I did the same with Ubuntu 16.04 LTS. No throttling.
$ sudo docker run -it ubuntu:16.04 /bin/bash
And once inside the container
$ apt-get install python3-dev python3-pip
$ pip3 install speedtest-cli
NO THROTTLING.

How to disable network for a running Docker container?

I would like to start a Docker container normally, run it, install some things into it, and then disable the network so I can run some more commands in it that should not have access to the network. How can I do that for a running container?
I use docker-py, and I know I can use network_disabled to disable networking for the whole container. But I am not sure how I can disable the network after the container has already been created. Ideally, I would run the container with the command sleep infinity, then docker exec some commands in it, then disable networking, then run a few more commands using docker exec.
Maybe an option would be docker network disconnect
Description
Disconnect a container from a network
Usage
docker network disconnect [OPTIONS] NETWORK CONTAINER
Example:
Create a container attached to the default bridge network
docker container run --rm -it alpine ping 8.8.8.8
and after a while disconnect it with:
docker network disconnect bridge <container-name>
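An end-to-end sketch of that flow, matching the sleep infinity approach from the question (the container name nettest is an assumption):
# Start a long-running container on the default bridge network
docker run -d --name nettest alpine sleep infinity
docker exec nettest ping -c 1 8.8.8.8   # works: still connected
# Cut the network; later commands run without network access
docker network disconnect bridge nettest
docker exec nettest ping -c 1 8.8.8.8   # now fails: no route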
The standard pattern you should use here is to write a Dockerfile that does whatever software installation you need, and builds an image out of it. This actually fits your immediate need quite nicely, since once you've built the image you can run it without network.
A typical Dockerfile skeleton might look more or less like
FROM ubuntu:18.04
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --no-install-recommends --assume-yes \
thing-to-run-without-network
CMD ["/usr/bin/thing-to-run-without-network"]
And then you'd build and run it as
docker build -t no-net-test .
docker run --rm --net none no-net-test
Generally you should set your image up so that docker run does everything the container needs to do, without ever needing docker exec (except for hand debugging). You should never install things into a running container: your work will be lost as soon as you docker rm the container, and deleting and restarting containers is extremely routine.

Can't reach ActiveMQ from inside Docker container

I'm trying to access an ActiveMQ instance on my local machine from inside a Docker container, also running on my machine. AMQ is listening on 0.0.0.0:61616. I tried configuring my program running in the container to use the IP address of docker0 as well as enp6s0; neither worked.
If I use the --net=host option, however, it suddenly works, no matter which IP address I use. The problem is that I can't use that option in production, as the code that starts the container doesn't support it. So if it's not possible to change the default network in the Dockerfile, I have to fix this in a different way.
EDIT: My Dockerfile
FROM java:8-jre
RUN mkdir -p /JCloudService
COPY ./0.4.6-SNAPSHOT-SHADED/ /JCloudService
RUN apt-get update && apt-get install -y netcat nano
WORKDIR /JCloudService
CMD set -x; /bin/sh -c '/JCloudService/bin/JCloudScaleService'
And the run command: docker run -it jcs:latest. With this command it doesn't work; it only works if I add --net=host.
--net=host works because it tells Docker to put your container in the same networking stack as your host machine.
To connect to a service running on your host machine, you need the IP of your host on the docker0 network. Run ip addr show docker0 on your host; you should then be able to use that IP and port 61616 to reach the host from within the container.
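A quick sketch of that check (172.17.0.1 is the common default address for docker0, but that is an assumption; use whatever the first command reports):
# On the host: find the host's address on the Docker bridge
ip addr show docker0
# Inside the container: verify ActiveMQ is reachable on that address
nc -vz 172.17.0.1 61616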

Running 2 services

I'm building an image for GitHub's Linkurious project, based on an image already on Docker Hub for the Neo4j database. The Neo4j image automatically runs the server on port 7474, and my image runs on port 8000.
When I run my image I publish both ports (could I do this with EXPOSE?):
docker run -d --publish=7474:7474 --publish=8000:8000 linkurious
but only my server seems to run. If I hit http://[ip]:7474/ I get nothing. Is there something special I have to do to make sure they both run?
* Edit I *
here's my Dockerfile:
FROM neo4j/neo4j:latest
RUN apt-get -y update
RUN apt-get install -y git
RUN apt-get install -y npm
RUN apt-get install -y nodejs-legacy
RUN git clone git://github.com/Linkurious/linkurious.js.git
RUN cd linkurious.js && npm install && npm run build
CMD cd linkurious.js && npm start
* Edit II *
To perhaps help explain my quandary, I've asked a different question.
EXPOSE is there to allow inter-container communication (within the same Docker daemon), with the docker run --link option.
Port mapping is there to map EXPOSEd ports to the host, to allow client-to-container communication. So you need --publish.
See also "Difference between “expose” and “publish” in docker".
See also an example with "Advanced Usecase with Docker: Connecting Containers"
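A compact illustration of the difference (the image and alias names here are placeholders):
# EXPOSEd ports are visible to linked containers; they are not mapped to a host port
docker run -d --name db mydb          # its Dockerfile has EXPOSE 5432
docker run -d --link db:db myapp      # myapp can reach db:5432
# Published ports are reachable from the host (client-to-container)
docker run -d -p 5432:5432 mydb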
Make sure, though, that the IP is the right one ($(docker-machine ip default)).
If you are using a VM (meaning, you are not using docker directly on a Linux host, but on a Linux VM with VirtualBox), make sure the mapped ports 7474 and 8000 are port forwarded from the host to the VM.
VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,,7474,,7474"
VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,,8000,,8000"
In the OP's case, this is using Neo4j: see "Neo4j with Docker", based on the neo4j/neo4j image and its Dockerfile:
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["neo4j"]
It is not meant to be used for installing another service (like Node.js): the CMD cd linkurious.js && npm start would completely override the Neo4j base image's CMD, meaning Neo4j would never start.
It is meant to be run on its own:
# interactive with terminal
docker run -i -t --rm --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j
# as daemon running in the background
docker run -d --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j
And then used by another image, with a --link neo4j:neo4j directive.
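Putting it together, a sketch of the two-container approach (the linkurious image tag comes from the question; the link alias is an assumption):
# Run the database on its own, from the official image
docker run -d --name neo4j -v $HOME/neo4j-data:/data -p 7474:7474 neo4j/neo4j
# Run the Linkurious app in its own container, linked to the database
docker run -d --link neo4j:neo4j -p 8000:8000 linkurious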
