Docker client execution

I have a very basic question regarding Docker.
I have a Docker host installed on ubuntuA.
To test this from a client (UbuntuB), does Docker also need to be installed on the UbuntuB machine?

The more precise answer is that only the Docker client needs to be installed on UbuntuB.
On UbuntuB, install the Docker client only; it is around 17 MB:
# apt-get update && apt-get install -y curl
# curl https://get.docker.io/builds/Linux/x86_64/docker-latest -o /usr/local/bin/docker
# chmod +x /usr/local/bin/docker
In order to run docker commands, you need to talk to the daemon on ubuntuA (port 2375 is used since Docker 1.0):
$ docker -H tcp://ubuntuA:2375 images
or
$ export DOCKER_HOST=tcp://ubuntuA:2375
$ docker images
See more details at http://docs.docker.com/articles/basics/
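For this to work, the Docker daemon on ubuntuA must be listening on TCP in addition to (or instead of) its local Unix socket. A minimal sketch, assuming a Docker 1.x-era daemon started by hand (an unencrypted TCP socket should only be exposed on a trusted network):
# on ubuntuA: listen on TCP 2375 and keep the local Unix socket
docker -d -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock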

Yes,
you have to install Docker on both the client and the server machine.


Upload speed inside Docker Container is limited to 4 Mbit/s

I am new to Docker and I have a container running for a set of students with some version-specific compilers; it is part of a virtual laboratory setup.
Everything is fine with the setup except for the network. I have a 200 Mbps connection, and a speed test done on my phone on the same network confirms that speed.
I did a speed test on the host machine where the container is running (Ubuntu 20.04 LTS); everything is fine there as well.
From inside the container running on that host, I did a speed test against the same server (ID 9898) and against an auto-selected server too.
The upload speed inside the container is somehow limited to 4 Mbit/s, and I cannot find a reason for it anywhere.
I have also seen recently that many students experienced connection drops while trying to connect to our SSH server. I believe this has something to do with the bandwidth limit.
The docker run command I am using to start this container is as follows:
$ sudo docker run -p 7766:22 --detach --name lahtp-serv-3 --hostname lahtp-server --mount source=lahtp-3-storage-hdd,target=/var/lahtp-storage main:0.1
I asked a few people, and they suggested running the container with --net=host, which uses the host network instead of the Docker bridge network. I would like to know why the container limits the upload bandwidth and how using the host network instead of the Docker network fixes the issue.
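For reference, a host-networking variant of the same run command might look like the sketch below; with host networking the -p port mapping is ignored, since the container shares the host's network stack directly:
$ sudo docker run --net=host --detach --name lahtp-serv-3 --hostname lahtp-server --mount source=lahtp-3-storage-hdd,target=/var/lahtp-storage main:0.1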
Update #1:
I tried to spawn a new Ubuntu 18.04 container with the following command:
$ sudo docker run --net=host -it ubuntu:18.04 /bin/bash
Once inside the container, I installed the following to run the speed test:
root@lahtp-server:/# apt-get update && apt-get upgrade -y && apt-get install build-essential openssh-server speedtest-cli
Once the installation was done, I ran the speed test again.
Adding --net=host does not change anything: the upload speed is still 4 Mbit/s.
How to remove this bandwidth throttling?
Update #2
I spawned a new Ubuntu 14.04 container using the following command:
$ sudo docker run -it ubuntu:14.04 /bin/bash
Once the container was up, I installed the following:
$ apt-get install python3-dev python3-pip
$ pip3 install speedtest-cli
I ran the speed test inside this container: NO THROTTLING.
I did the same with Ubuntu 16.04 LTS:
$ sudo docker run -it ubuntu:16.04 /bin/bash
And once inside the container:
$ apt-get install python3-dev python3-pip
$ pip3 install speedtest-cli
NO THROTTLING.

docker inside docker container

I want to install Docker inside a running Docker container.
docker run -it centos:centos7
My base container is using CentOS; I can log in to the running container using docker exec. When I install Docker inside it with yum install -y docker, the installation succeeds.
But somehow I can't start the Docker daemon with docker -d &; it gives me this error:
INFO[0000] Option DefaultNetwork: bridge
WARN[0000] Running modprobe bridge nf_nat br_netfilter failed with message: , error: exit status 1
FATA[0000] Error starting daemon: Error initializing network controller: Error initializing bridge driver: Setup IP forwarding failed: open /proc/sys/net/ipv4/ip_forward: read-only file system
Is there a way I can install Docker inside a Docker container, or build an image that already has Docker running? I have already seen these examples, but none of them works for me.
The output of uname -r on the host machine:
[fedora# ~]$ uname -r
4.2.6-200.fc22.x86_64
Any help would be appreciated.
Thanks in advance
Update
Thanks to https://stackoverflow.com/a/38016704/372019 I want to show another approach.
Instead of mounting the host's docker binary, you should copy or install a container-specific release of the Docker binary. Since you're only using it in client mode, you won't need to install it as a system service. You still need to mount the Docker socket into the container so that you can easily communicate with the host's Docker engine.
Assuming you have a base image with a working Docker binary (e.g. the official docker image), the example now looks like this:
docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
docker:1.12 docker info
Without actually answering your question, I'd suggest you read "Using Docker-in-Docker for your CI or testing environment? Think twice."
It explains why running Docker-in-Docker should be replaced with a setup where Docker containers run as siblings of the "outer" or "base" container. The article also links to the original https://github.com/jpetazzo/dind project, where you can find working examples of how to run Docker in Docker, in case you still want Docker-in-Docker.
An example of how to enable a container to access the host's Docker daemon looks like this:
docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
busybox:latest /usr/bin/docker info
If you are on a Mac with Docker Toolbox, the command below WON'T WORK:
docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
busybox:latest /usr/bin/docker info
That is because /var/run/docker.sock will not be on your OS X filesystem:
the Docker daemon is running inside the boot2docker VM, and that's where the Unix socket is.
So you have to run the container from the boot2docker VM:
$ docker-machine ssh default
$ docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(which docker):/usr/bin/docker \
busybox:latest /usr/bin/docker info
$ exit
This looks like Docker-in-Docker and feels like Docker-in-Docker, but it's not Docker-in-Docker: when this container creates more containers, those containers are created in the top-level Docker.
You need the --privileged parameter.
By default, Docker containers are “unprivileged” and cannot, for example, run a Docker daemon inside a Docker container.
Source
Run your base image with the command docker run --privileged -it centos:centos7 bash. Then you can install Docker and run containers inside that container.
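A rough sketch of what that could look like inside the privileged CentOS 7 container, assuming you install docker-ce from Docker's own CentOS repository (the dockerd invocation is illustrative; the vfs storage driver avoids problems with nesting overlay filesystems):
# inside the privileged container (docker run --privileged -it centos:centos7 bash)
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
dockerd --storage-driver=vfs &    # start the nested daemon in the background
sleep 5 && docker run hello-world # give the daemon a moment, then test it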
I had a similar problem in my VMs.
I solved it by changing the storage driver from the default to vfs (in the daemon.json file).
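A minimal sketch of what that daemon.json (normally at /etc/docker/daemon.json) could contain; the original screenshot is not available, so only the storage-driver setting is shown:
{
  "storage-driver": "vfs"
}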
For this to work from an image, first create a base image; in my case, with CentOS 7:
FROM centos:7
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
With this image built (in my case I called it local/c7-systemd), create a second image, installing Docker and copying daemon.json into it:
FROM local/c7-systemd
RUN yum install -y yum-utils
RUN yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
RUN yum install -y docker-ce docker-ce-cli containerd.io
RUN curl -L "https://github.com/docker/compose/releases/download/1.28.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
RUN ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
COPY daemon.json /etc/docker/daemon.json
RUN yum install -y nano
RUN systemctl enable docker
EXPOSE 80
EXPOSE 8080
EXPOSE 8161
EXPOSE 6379
EXPOSE 8761
CMD ["/usr/sbin/init"]
enjoy!

Running 2 services

I'm building an image for GitHub's Linkurious project, based on an image already in the hub for the Neo4j database. The Neo4j image automatically runs the server on port 7474, and my image runs on port 8000.
When I run my image I publish both ports (could I do this with EXPOSE?):
docker run -d --publish=7474:7474 --publish=8000:8000 linkurious
But only my server seems to run; if I hit http://[ip]:7474/ I get nothing. Is there something special I have to do to make sure they both run?
* Edit I *
here's my Dockerfile:
FROM neo4j/neo4j:latest
RUN apt-get -y update
RUN apt-get install -y git
RUN apt-get install -y npm
RUN apt-get install -y nodejs-legacy
RUN git clone git://github.com/Linkurious/linkurious.js.git
RUN cd linkurious.js && npm install && npm run build
CMD cd linkurious.js && npm start
* Edit II *
To perhaps help explain my quandary, I've asked a different question.
EXPOSE is there to allow inter-container communication (within the same Docker daemon), with the docker run --link option.
Port mapping is there to map EXPOSEd ports to the host, to allow client-to-container communication. So you need --publish.
See also "Difference between “expose” and “publish” in docker".
See also an example with "Advanced Usecase with Docker: Connecting Containers"
Make sure, though, that the IP is the right one ($(docker-machine ip default)).
If you are using a VM (meaning, you are not using docker directly on a Linux host, but on a Linux VM with VirtualBox), make sure the mapped ports 7474 and 8000 are port forwarded from the host to the VM.
VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,,7474,,7474"
VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,,8000,,8000"
In the OP's case, this is using Neo4j: see "Neo4j with Docker", based on the neo4j/neo4j image and its Dockerfile:
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["neo4j"]
It is not meant to be used for installing another service (like Node.js): the CMD cd linkurious.js && npm start would completely override the neo4j base image's CMD, meaning neo4j would never start.
It is meant to be run on its own:
# interactive with terminal
docker run -i -t --rm --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j
# as daemon running in the background
docker run -d --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j
It is then used by another image, with a --link neo4j:neo4j directive.
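For example, once the neo4j container above is running, the Linkurious image from the question could be started as a sibling container that links to it (the image name linkurious and the link alias your application expects are assumptions based on the question):
docker run -d --name linkurious --link neo4j:neo4j -p 8000:8000 linkurious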

Docker toolbox volumes on windows doesn't refresh changes on container

I am starting with Docker on Windows and I am trying to use volumes to manage data in containers.
My host environment is:
Windows 8.1
Docker Toolbox 1.8.
Virtual Box 5.0.6
I've created an nginx image using the following Dockerfile.
Dockerfile
FROM centos:6.6
MAINTAINER afym
ENV WEBPORT 80
RUN yum -y update; yum clean all
RUN yum -y install epel-release; yum clean all
RUN yum -y install nginx; yum clean all
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
VOLUME /usr/share/nginx/html
EXPOSE $WEBPORT
CMD [ "/usr/sbin/nginx" ]
I've created an nginx container using the following command.
docker run -d --name nge -v //c/Users/src:/usr/share/nginx/html -p 8082:80 ng1
b738fef9cc4d135416a8cca4caf869acf944319b7c3c61129b11f37f9d891598
Then I go to my browser and I can see the web page.
However, when I make a change to my index.html file, it doesn't refresh in the browser, even after editing the file and force-reloading the page (Ctrl+F5).
I went to the VirtualBox machine to check whether the shared-directories option is OK.
Then I inspected my nge container with the following command:
docker inspect ng1
What is happening with the volumes? Why can't I see my changes?
After a couple of days I found the solution.
First of all, Docker on Windows (and even on Mac) uses a boot2docker instance running in VirtualBox.
(Diagrams in the original post illustrate this setup on Mac and on Windows.)
Next, the official Docker documentation on volumes says:
Docker Machine tries to auto-share your /Users (OS X) or C:\Users (Windows) directory.
However, after finding a solution, I decided to change the default /c/Users to another path, just to keep things in order. With this in mind, I did the following steps:
Define your own workspace directory. In my case it is /e/arquitectura (optional; if you want, you can use the default path, which is /c/Users).
Verify the shared-folder configuration on the virtual machine (in the default machine go to Configuration > Shared directories); a VBoxManage sketch for this is shown after the mount commands below.
SSH into the default machine and mount the directory using the alias name:
sudo mount -t vboxsf alias-name-virtualbox some-path-in-boot2docker
# In my case (inside the boot2docker instance)
$ sudo mkdir /arquitectura
$ sudo mount -t vboxsf arquitectura /arquitectura
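If the shared folder is not yet registered with VirtualBox, it can be added from the Windows side before mounting; a sketch, assuming the VM is named default and the Windows path is E:\arquitectura (both are assumptions based on the setup above; the VM usually has to be powered off, or you add --transient):
VBoxManage sharedfolder add default --name arquitectura --hostpath "E:\arquitectura" --automount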
Finally, create a new container (or restart an existing one if you haven't changed the /c/Users path):
# In my case (docker client)
$ docker run -d --name nge -v //arquitectura/src:/usr/share/nginx/html -p 8081:80 ng1
Now it works.

Is it possible to access Hbase installed inside docker container to be accessed using java client on mac OSX?

I created a Docker container that has HBase installed in standalone mode. I used --net=host mode to run the container. I can see the UI for the master and the regionserver, but when I try to connect to HBase from my Java program, after the connection with ZooKeeper is established, it says: This server is in the failed servers list: boot2docker:60020. I am using Mac OS X and boot2docker. Please give suggestions on this. Here is my Dockerfile:
FROM centos:6
# Install required libraries.
RUN yum install -y tar
# Install java.
RUN curl -LO \
'http://download.oracle.com/otn-pub/java/jdk/7u71-b14/jdk-7u71-linux-x64.rpm' \
-H 'Cookie: oraclelicense=accept-securebackup-cookie'
RUN rpm -i jdk-7u71-linux-x64.rpm
RUN rm -f jdk-7u71-linux-x64.rpm
# Export JAVA_HOME.
ENV JAVA_HOME /usr
# Copy hbase code to docker container.
COPY hbase-*.tar.gz /
RUN tar -xzvf hbase-*.tar.gz
RUN rm hbase-*.tar.gz
RUN mv hbase-* hbase
# Copy hbase-site.xml.
ADD hbase-config-files/hbase-site.xml /hbase/conf/hbase-site.xml
# Start Hbase.
CMD ["./hbase/bin/hbase", "master", "start"]`
To run this container I used docker run --net=host -t docker_image
