I am new to Docker, and I have a container running for a set of students with some version-specific compilers; it is part of a virtual laboratory setup.
Everything is fine with the setup except the network. I have a 200 Mbps connection, and a speed test on my phone on the same network confirms that speed.
I also ran a speed test on the host machine where the Docker container is running (Ubuntu 20.04 LTS), and the results there are fine too.
From inside the Docker container running on that host, I ran a speed test against the same server (ID 9898) and against an auto-selected server.
The upload speed inside the container is somehow limited to 4 Mbit/s, and I cannot find the reason anywhere.
Recently, many students have experienced connection drops while trying to connect to our SSH server, and I believe this is related to the bandwidth limit.
The docker run command I am using to start this container is as follows.
$ sudo docker run -p 7766:22 --detach --name lahtp-serv-3 --hostname lahtp-server --mount source=lahtp-3-storage-hdd,target=/var/lahtp-storage main:0.1
A few people I asked suggested running the container with --net=host, which uses the host network instead of the Docker bridge network. I would like to know why the Docker container limits the upload bandwidth, and how using the host network instead of the Docker network fixes the issue.
Update #1:
I tried to spawn a new Ubuntu 18.04 container with the following command:
$ sudo docker run --net=host -it ubuntu:18.04 /bin/bash
Once inside the container, I installed the following packages to run the speed test.
root@lahtp-server:/# apt-get update && apt-get upgrade -y && apt-get install -y build-essential openssh-server speedtest-cli
Once the installation was done, here are the results.
Adding --net=host does not change anything: the upload speed is still 4 Mbit/s.
How do I remove this bandwidth throttling?
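Since --net=host did not help, one thing worth ruling out is traffic shaping on the host itself. A fixed per-direction cap like 4 Mbit/s is the kind of limit a tc qdisc (added, for example, by a tool such as wondershaper) would impose on docker0 or a container's veth interface. This is only a diagnostic sketch, assuming the default bridge interface name docker0; run it on the host:

```shell
# Inspect the queueing discipline on Docker's bridge (default name: docker0).
# A plain "noqueue" or "fq_codel" result is normal; a tbf/htb qdisc with a
# low rate would explain a hard bandwidth cap.
tc qdisc show dev docker0

# If a rate-limiting qdisc is present, deleting the root qdisc resets the
# interface to the kernel default:
#   sudo tc qdisc del dev docker0 root
```

The same check can be repeated on the container's veth peer interface (its name varies per container), since shaping can be attached on either side.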
Update #2:
I spawned a new Ubuntu 14.04 docker container using the following command
$ sudo docker run -it ubuntu:14.04 /bin/bash
Once the container is up, I installed the following
$ apt-get install python3-dev python3-pip
$ pip3 install speedtest-cli
I then ran the speed test inside this container, and here are the results.
NO THROTTLING.
I did the same with an Ubuntu 16.04 LTS container:
$ sudo docker run -it ubuntu:16.04 /bin/bash
And once inside the container
$ apt-get install python3-dev python3-pip
$ pip3 install speedtest-cli
NO THROTTLING.
Related
I have this Dockerfile:
FROM ubuntu:20.04
EXPOSE 80
After installing the apache2 package in the container, I can't access Apache's default page from the network. Docker itself is running in a virtual machine with Debian 10. If I use the official Apache image (https://hub.docker.com/_/httpd) everything works fine, but I want to know why installing it manually doesn't work.
To start a container from the image I use this command:
sudo docker run --name ubuntu -p 80:80 -it ubuntu /bin/bash
I ran exactly the same test on my virtual CentOS machine and found it working.
I built the image using your Dockerfile and ran the Apache installation with the commands below.
docker build -t ubuntu .
docker run --name ubuntu -p 80:80 -it ubuntu /bin/bash
In the terminal opened by the above command, I ran the following:
apt-get update
apt-get install apache2
service apache2 start
After that, I opened another SSH terminal, keeping the current one running (since I had not run the Ubuntu container in detached mode), and checked with:
docker ps -a
and found the container running with port 0.0.0.0:80 published, then checked with:
curl localhost
Please make sure you have not stopped the Docker container before running the curl command or hitting it in the browser, as it is not running in detached mode (in the background).
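A likely difference from the official httpd image is that nothing in the manual setup keeps Apache running in the foreground, so as soon as the interactive shell exits, the container stops and the page becomes unreachable. A minimal sketch of a self-contained image, assuming the stock ubuntu:20.04 apache2 package:

```dockerfile
# Sketch, not a tested production image: install apache2 and keep it
# in the foreground so the container stays alive and keeps serving.
FROM ubuntu:20.04
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y apache2
EXPOSE 80
# apachectl -D FOREGROUND runs Apache as PID 1 instead of daemonizing
CMD ["apachectl", "-D", "FOREGROUND"]
```

Built and run with something like `docker build -t my-apache . && docker run -d -p 80:80 my-apache` (my-apache is just an example tag), the default page should stay reachable without an interactive shell holding the container open.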
The following Docker image starts tomcat8 in a fresh Ubuntu 16.04 VirtualBox VM, but not in a Docker container. Is this a problem with Docker, with Tomcat, or am I missing something?
Dockerfile:
FROM ubuntu:16.04
RUN apt update
RUN apt install -y openjdk-8-jdk
RUN apt-get install -y tomcat8
CMD service tomcat8 start
I assume that the image is built correctly (the docker build command ends without errors).
While the container is running, connect to it and check its logs:
docker logs <CONTAINER_ID> -f
You should see what happens there and why Tomcat fails to start. Maybe Java is not mapped correctly, maybe the ports are busy (unlikely, but who knows).
And maybe Tomcat starts correctly but you can't access it from outside because port 8080 is not exposed/mapped (EXPOSE 8080 in the Dockerfile, or the -p 8080:8080 option when running the container).
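There is also a structural issue with the Dockerfile itself: `CMD service tomcat8 start` launches Tomcat as a background daemon, so the CMD process exits immediately and Docker stops the container. A sketch of a foreground variant, assuming the usual Ubuntu tomcat8 package paths (the CATALINA_* locations below are the package defaults, not verified here):

```dockerfile
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openjdk-8-jdk tomcat8
EXPOSE 8080
# Assumed Ubuntu tomcat8 package layout: binaries under /usr/share/tomcat8,
# instance configuration under /var/lib/tomcat8.
ENV CATALINA_HOME=/usr/share/tomcat8 \
    CATALINA_BASE=/var/lib/tomcat8
# Run Tomcat in the foreground so the container does not exit
CMD ["/usr/share/tomcat8/bin/catalina.sh", "run"]
```

With Tomcat in the foreground, `docker logs` shows the startup output directly, which also makes the diagnosis described above much easier.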
Suppose I have a server running on port 8000 on Windows. How can my Docker container access it via 127.0.0.1:8000? I can't change the hostname either, as the app in the container is not under my control.
I've read a previous discussion of the same issue (but inside VirtualBox) about using --network=host so a container can access the host machine's network, or using a different IP for localhost. However, I'm on Windows, and Docker runs inside the Windows Subsystem for Linux (Ubuntu from the Windows Store), so localhost from the Docker container with --network=host does not resolve correctly. From inside the Windows Subsystem for Linux I can access 127.0.0.1:8000 without problems.
For instance, with a bash script that does the following:
#!/bin/bash
set -x
# we get some file from the localhost
curl 127.0.0.1:22334/some_file.txt
# we build docker image
docker build --tag dockername --network=host dockerfiles/imagename
and a Dockerfile like this:
# This is a Dockerfile
FROM ubuntu:16.04 as installer
RUN apt-get update && apt-get install --no-install-recommends curl -y
RUN curl 127.0.0.1:22334/some_file.txt
I get output like this:
+ curl 127.0.0.1:22334/some_file.txt
here_we_get_some_file_content
+ docker build --tag dockername --network=host dockerfiles/imagename
Sending build context to Docker daemon 18.43kB
Step 1/30 : FROM ubuntu:16.04 as installer
---> 4a689991aa24
Step 2/30 : RUN apt-get update && apt-get install --no-install-recommends curl -y
---> Running in 72378a1be045
...
Step 3/30 : RUN curl 127.0.0.1:22334/some_file.txt
---> Running in 41a4c470337b
curl: (3) <url> malformed
The command '/bin/sh -c curl 127.0.0.1:22334/some_file.txt' returned a non-zero code: 3
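Part of the problem is that during `docker build`, 127.0.0.1 inside a RUN step refers to the build container itself, not to the host, so the host's server is never reachable at that address. One hedged workaround sketch is to pass the host address in as a build argument; `host.docker.internal` resolves to the host on Docker Desktop, but on a plain WSL setup the WSL VM's IP may be needed instead, so treat the default below as an assumption:

```dockerfile
# This is a Dockerfile (sketch)
FROM ubuntu:16.04 AS installer
RUN apt-get update && \
    apt-get install --no-install-recommends -y curl ca-certificates
# HOST_ADDR is a hypothetical build argument; host.docker.internal is only
# guaranteed to resolve on Docker Desktop, so override it where it doesn't.
ARG HOST_ADDR=host.docker.internal
RUN curl "http://${HOST_ADDR}:22334/some_file.txt" -o /some_file.txt
```

It would then be built with, e.g., `docker build --build-arg HOST_ADDR=<host-ip> ...`; on recent Linux Docker engines, `--add-host=host.docker.internal:host-gateway` can make the default name resolve as well.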
I'm building an image for GitHub's Linkurious project, based on an image already on Docker Hub for the Neo4j database. The Neo4j image automatically runs the server on port 7474, and my image runs on port 8000.
When I run my image I publish both ports (could I do this with EXPOSE instead?):
docker run -d --publish=7474:7474 --publish=8000:8000 linkurious
but only my server seems to run. If I hit http://[ip]:7474/ I get nothing. Is there something special I have to do to make sure they both run?
* Edit I *
here's my Dockerfile:
FROM neo4j/neo4j:latest
RUN apt-get -y update
RUN apt-get install -y git
RUN apt-get install -y npm
RUN apt-get install -y nodejs-legacy
RUN git clone git://github.com/Linkurious/linkurious.js.git
RUN cd linkurious.js && npm install && npm run build
CMD cd linkurious.js && npm start
* Edit II *
To perhaps help explain my quandary, I've asked a different question.
EXPOSE is there to allow inter-container communication (within the same Docker daemon), with the docker run --link option.
Port mapping is there to map EXPOSEd ports to the host, to allow client-to-container communication. So you need --publish.
See also "Difference between “expose” and “publish” in docker".
See also an example with "Advanced Usecase with Docker: Connecting Containers"
Make sure though that the ip is the right one ($(docker-machine ip default)).
If you are using a VM (meaning, you are not using docker directly on a Linux host, but on a Linux VM with VirtualBox), make sure the mapped ports 7474 and 8000 are port forwarded from the host to the VM.
VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,,7474,,7474"
VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,,8000,,8000"
In the OP's case, this uses Neo4j: see "Neo4j with Docker", based on the neo4j/neo4j image and its Dockerfile:
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["neo4j"]
It is not meant to be used for installing another service (like Node.js), where the CMD cd linkurious.js && npm start would completely override the Neo4j base image's CMD (meaning Neo4j would never start).
It is meant to be run on its own:
# interactive with terminal
docker run -i -t --rm --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j
# as daemon running in the background
docker run -d --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j
And then used by another image, with a --link neo4j:neo4j directive.
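Putting the two pieces together, the two-container setup described above might look like this (the linkurious image name and the link alias are illustrative, taken from the question, not from a published image):

```
# start neo4j on its own, as shown above
docker run -d --name neo4j -v $HOME/neo4j-data:/data -p 7474:7474 neo4j/neo4j

# start the Linkurious image and link it to the running neo4j container;
# inside the linkurious container, the database would then be reachable
# under the alias "neo4j" (e.g. http://neo4j:7474)
docker run -d --name linkurious --link neo4j:neo4j -p 8000:8000 linkurious
```

Each image keeps its own CMD, so Neo4j and the Node.js server each run in their own container instead of one CMD overriding the other.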
I have a very basic question regarding Docker.
I have a Docker host installed on machine UbuntuA.
To test this from the client (UbuntuB), does Docker also need to be installed on the UbuntuB machine?
More precisely, only the Docker client needs to be installed on UbuntuB.
On UbuntuB, install the Docker client only; it is around 17 MB:
# apt-get update && apt-get install -y curl
# curl https://get.docker.io/builds/Linux/x86_64/docker-latest -o /usr/local/bin/docker
# chmod +x /usr/local/bin/docker
In order to run docker commands, you need to talk to the daemon on UbuntuA (port 2375 is used since Docker 1.0):
$ docker -H tcp://ubuntuA:2375 images
or
$ export DOCKER_HOST=tcp://ubuntuA:2375
$ docker images
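The export form above can be sketched as a small shell snippet (tcp://ubuntuA:2375 is just the example address from this answer, not a real host):

```shell
#!/bin/sh
# Point the docker client at a remote daemon via an environment variable.
# Every subsequent `docker` command in this shell then targets ubuntuA,
# without needing -H on each invocation.
export DOCKER_HOST=tcp://ubuntuA:2375
echo "$DOCKER_HOST"
```

Unsetting the variable (`unset DOCKER_HOST`) switches the client back to the local daemon socket.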
See more details at http://docs.docker.com/articles/basics/
Yes, you have to install Docker on both the client and the server.