Issues with docker build command and GitHub

I'm trying to run a dev environment as a Docker machine, so I created the following Dockerfile:
FROM rails:4.2
MAINTAINER Chen Kinnrot <kinnrot#gmail.com>
RUN mkdir -p /var/app
COPY Gemfile /var/app/Gemfile
WORKDIR /var/app
RUN bundle install
CMD rails s -b 0.0.0.0
When running docker build -t dev ., I get the following message:
fatal: unable to connect to github.com:
github.com: Name or service not known
Why is that, and how can I solve this annoying issue?

This is a known issue with VirtualBox / boot2docker; when switching networks, boot2docker sometimes loses its DNS information. See these issues: https://github.com/boot2docker/boot2docker/issues/776 and https://github.com/docker/machine/issues/1857
You can either restart the machine:
docker-machine stop default
docker-machine start default
Or set the right nameserver inside the virtual machine:
docker-machine ssh default
echo "nameserver 8.8.8.8" > /etc/resolv.conf

Related

XQuartz: Can't Open Display Mac OS

While trying to follow these SO instructions for getting a simple Xeyes application running from within a Docker container on a Mac (10.15.5) using XQuartz, this is what I get:
$ docker run -it -e DISPLAY="${IP}:0" -v /tmp/.X11-unix:/tmp/.X11-unix so_xeyes
/work # xeyes
Error: Can't open display: 192.168.1.9:0
Here are the steps to reproduce:
$ brew install --cask xquartz
Dockerfile:
# Base Image
FROM alpine:latest
RUN apk update && \
apk add --no-cache xeyes
# Set a working directory
WORKDIR /work
# Start a shell by default
CMD ["ash"]
Build image with:
$ docker build -t so_xeyes .
And run the Docker Container/xeyes with this:
# Set your Mac IP address
IP=$(/usr/sbin/ipconfig getifaddr en0)
echo $IP
192.168.1.9
# Allow connections from Mac to XQuartz
/opt/X11/bin/xhost + "$IP"
192.168.1.9 being added to access control list
# Run container
docker run -it -e DISPLAY="${IP}:0" -v /tmp/.X11-unix:/tmp/.X11-unix so_xeyes
When inside the container, type:
xeyes
BUT, I get the following error: Error: Can't open display: 192.168.1.9:0
Does anyone have an idea how I can resolve this or investigate further?
@MarkSetchell gave me a hint by suggesting I needed to modify XQuartz Preferences > Security...
But, even after selecting "Allow connections from network clients", it still didn't work.
Then I found a Gist that gave me a little more information, because someone commented that after making the change they needed to reboot their Mac: https://gist.github.com/cschiewek/246a244ba23da8b9f0e7b11a68bf3285
So, after I made the change AND rebooted my Mac, it worked!
Thanks for guiding me to the final answer!
ALSO NOTE: You do NOT need to volume mount the .X11 directory for this to work:
docker run -it -e DISPLAY="${IP}:0" so_xeyes
By default, X11 does not listen over TCP/IP. You can enable that in Settings if you want, but I don't think it's necessary here. Docker should be able to route traffic to the unix domain socket set up by launchd for DISPLAY (e.g. /private/tmp/com.apple.launchd.jTIfZplv7A/org.xquartz:0).
If that doesn't work, you should reach out to Docker to add support for it, since unix sockets are much preferred over TCP for X11 traffic.
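If you do want to toggle TCP listening from the command line rather than the Preferences UI, XQuartz exposes a nolisten_tcp preference (a sketch; the defaults domain is org.xquartz.X11 on recent releases and org.macosforge.xquartz.X11 on older ones, so check which one your install uses, and restart XQuartz or log out afterwards):
# allow XQuartz to listen on TCP (same effect as the "Allow connections from network clients" checkbox)
defaults write org.xquartz.X11 nolisten_tcp 0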

How can I run docker commands inside a Dockerfile?

I have this Dockerfile:
# Use the official image as a parent image
FROM mysql/mysql-server:8.0
# Set the working directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Copy the file from your host to your current location
COPY customers.sql .
COPY entrypoint.sh .
# Inform Docker that the container is listening on the specified port at runtime.
EXPOSE 1433:1433
# Run the command inside your image filesystem
RUN chmod +x entrypoint.sh
# Run the specified command within the container.
RUN /bin/bash ./entrypoint.sh
And entrypoint.sh:
mysql --host=localhost --protocol=tcp -u root -pMypassword -e "create database customersDatabase; use customersDatabase; source customers.sql;"
But I get the following error message:
ERROR 2003 (HY000): Can't connect to MySQL server on 'localhost' (99)
when I run docker build.
What is the correct way to write entrypoint.sh so that it can run these commands?
BEFORE OP EDIT:
Problem:
./entrypoint.sh: line 2: docker: command not found
You are trying to run docker inside docker.
Possible solutions:
1) Mount the host's docker sock into the container (see the sketch below), or
2) Install docker inside the image before you run your entrypoint (apt install docker.io), and expect a much larger image.
Difference between 1) and 2): with 1) your container's docker is the host's docker, while with 2) the docker installed inside the container is independent and thus isolated from the host.
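A minimal sketch of option 1), assuming the official docker:cli image (the image name and the docker ps command here are illustrative, not part of the original answer):
# talk to the host's Docker daemon through its mounted socket
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker ps
Because the commands go to the host daemon over the socket, docker ps inside that container lists the host's containers.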
AFTER OP EDIT:
Problem:
ERROR 2003 (HY000): Can't connect to MySQL server on 'localhost' (99)
EDIT: Since you edited your question, which now doesn't correspond with your title, I will address your second problem as well.
You cannot connect to localhost because, inside a container, localhost is the container itself, not your host.
This can be solved by using the host network driver.
Or, preferably, put your DB in Docker too, put both containers in the same Docker network, expose the port, name your DB container mysql_database, and connect to it as mysql_database:port (see the sketch below).
Or don't try to connect, from within your container, to a DB that lives in that same container; I think that's an antipattern. Usually it should be possible to get into the DB's CLI and run your commands there.
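A hedged sketch of the preferred option (the network name app-net is an illustrative assumption, and the password is reused from the question):
# create a user-defined network so containers can reach each other by name
docker network create app-net
# start the MySQL server on that network; MYSQL_ROOT_HOST=% lets root connect from other containers
docker run -d --name mysql_database --network app-net -e MYSQL_ROOT_PASSWORD=Mypassword -e MYSQL_ROOT_HOST=% mysql/mysql-server:8.0
# once the server has finished initializing, connect from another container on the same network,
# using the container name as the hostname
docker run --rm --network app-net mysql/mysql-server:8.0 mysql --host=mysql_database -u root -pMypassword -e "SHOW DATABASES;"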

DNS resolution with the container

I have a Docker image which is built from the following Dockerfile.
FROM java:7
MAINTAINER Tushar Gandhi
ARG version
ENV version=$version
ARG port
ENV port=$port
RUN mkdir -p /cacheDir/services/live/prediction/p$port/$version/logs
RUN ls -tlr /cacheDir/services/live/prediction/p$port/
RUN mkdir -p /cacheDir/services/releases/prediction/p$port/$version/
RUN mkdir -p /cacheDir/services/predictionmodel
ADD target/predictionDependencies/* /cacheDir/services/predictionmodel/
ADD /target/prediction-0.0.13-SNAPSHOT.jar /cacheDir/services/releases/prediction/p$port/$version/prediction-0.0.13-SNAPSHOT.jar
ADD /target/instance.properties /cacheDir/services/releases/prediction/p$port/$version/instance.properties
ADD /target/logback.xml /cacheDir/services/releases/prediction/p$port/$version/logback.xml
RUN ls -ltr /cacheDir/services/live/prediction/p$port/$version/
RUN ls -ltr /cacheDir/services/releases/prediction/p$port/$version/
RUN ls -ltr /cacheDir/services/predictionmodel
ENTRYPOINT ["sh","-c","java -server -Xmx2g -Xloggc:/cacheDir/services/live/prediction/p${port}/${version}/logs/gc.log -verbose:gc -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/cacheDir/services/live/prediction/p${port}/${version}/oom.dump -Dlogback.configurationFile=/cacheDir/services/releases/prediction/p${port}/${version}/logback.xml -Dlog.home=/cacheDir/services/live/prediction/p${port}/${version}/logs -Dlogback.debug=true -Dbroker.l^Ct=sv-kafka6.pv.sv.nextag.com:9092,sv-kafka7.pv.sv.nextag.com:9092,sv-kafka8.pv.sv.nextag.com:9092,sv-kafka9.pv.sv.nextag.com:9092 -jar /cacheDir/services/releases/prediction/p${port}/${version}/prediction-0.0.13-SNAPSHOT.jar $port /cacheDir/services/releases/prediction/p${port}/${version}/instance.properties /com/abc/services/$ZK_PATH"]
I'm using the following build command to build the image.
docker build --build-arg version=test1 --build-arg port=3001 -f Dockerfile -t prediction:test1 .
The image creation is successful and the container comes up successfully. Run command used:
sudo docker run -p 7105:3001 -v ~/PredictionVolume/logs/:/cacheDir/services/live/prediction/p5030/Testing1/logs/ -e ZK_PATH=qa -t prediction:test
Now, the problem is that when my application runs in a Docker container, it tries to access the URL qa-zk1.com:2181. This URL is accessible from my system but not from the Docker container. Can anyone please suggest a way to make the URL accessible from the container?
[Edit] I have been trying different methods and found that I was able to ping google.com. This showed me that the internet connection is working. If the internet is working, then that URL should also be accessible, but it isn't, so it seems to be a DNS resolution problem. I tried with the IP address and was able to hit the service properly; now I need to find out how to reach the service by hostname rather than by IP address.
Since you can reach the site by IP, it means that inside the container you are pointing to a DNS server that does not know the "qa-zk1.com" name.
You have 2 options:
Add the IP to the container's hosts file, /etc/hosts
Update the container's DNS configuration
See Configure container DNS for more details
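Both options can be passed to docker run (a sketch; the addresses 10.1.2.3 and 10.0.0.2 are placeholders for your real ZooKeeper host IP and your internal DNS server):
# option 1: add a static hosts entry inside the container
docker run --add-host qa-zk1.com:10.1.2.3 -e ZK_PATH=qa -t prediction:test1
# option 2: point the container at a DNS server that knows your internal names
docker run --dns 10.0.0.2 -e ZK_PATH=qa -t prediction:test1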

Docker Toolbox volumes on Windows don't refresh changes in the container

I am starting with Docker on Windows and I am trying to use volumes to manage data in containers.
My host environment is:
Windows 8.1
Docker Toolbox 1.8.
Virtual Box 5.0.6
I've created an nginx image using the following Dockerfile.
Dockerfile
FROM centos:6.6
MAINTAINER afym
ENV WEBPORT 80
RUN yum -y update; yum clean all
RUN yum -y install epel-release; yum clean all
RUN yum -y install nginx; yum clean all
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
VOLUME /usr/share/nginx/html
EXPOSE $WEBPORT
CMD [ "/usr/sbin/nginx" ]
I've created an nginx container using the following command.
docker run -d --name nge -v //c/Users/src:/usr/share/nginx/html -p 8082:80 ng1
b738fef9cc4d135416a8cca4caf869acf944319b7c3c61129b11f37f9d891598
Then I go to my browser and I can see the web page:
However, when I make a change to my index.html file, it doesn't refresh in the browser.
Editing my file
On my browser (ctrl + f5)
I went to the VirtualBox machine to check whether my shared directories option is OK.
Then I inspected my nge container with the following command.
docker inspect ng1
Docker inspect
What is happening with the volumes? Why can I not see my changes?
After a couple of days I found the solution.
First of all, Docker on Windows (and even on Mac) uses a boot2docker instance on VirtualBox.
Diagrams: on Mac / on Windows
Next, the official Docker documentation says:
docker volume
Docker Machine tries to auto-share your /Users (OS X) or C:\Users (Windows) directory
However, after finding a solution I decided to change the default /c/Users to another path, just to keep things in order. With this in mind I did the following steps:
Define your own workspace directory. In my case it is /e/arquitectura (optional; if you want, you can use the default path, which is /c/Users).
Verify the configuration on the virtual machine (in the default machine go to Configuration > Shared Folders).
Connect to the default machine and mount the directory using the alias name:
sudo mount -t vboxsf alias-name-virtualbox some-path-in-boot2docker
# In my case (boot2docker instance)
$ cd
$ mkdir arquitectura
$ sudo mount -t vboxsf arquitectura /arquitectura
Finally, create a new container, or restart an existing one if you haven't changed the /c/Users path:
# In my case (docker client)
$ docker run -d --name nge -v //arquitectura/src:/usr/share/nginx/html -p 8081:80 ng1
Now it works.
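As an aside, the shared folder can also be created from the host with VBoxManage instead of through the VirtualBox GUI (a sketch; the machine name default and the host path E:\arquitectura are assumptions, and the VM usually needs to be stopped first, or the --transient flag used):
# register E:\arquitectura as a shared folder named "arquitectura" on the boot2docker VM
VBoxManage sharedfolder add default --name arquitectura --hostpath "E:\arquitectura"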

docker error: /var/run/docker.sock: no such file or directory

I am new to Docker. I have a shell script that loads data into Impala, and I want a Dockerfile that builds an image and runs the container.
I am on a Mac, installed boot2docker, and have the DOCKER_HOST env variable set up.
bash-3.2$ docker info
Containers: 0
Images: 0
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Dirs: 0
Execution Driver: native-0.2
Kernel Version: 3.15.3-tinycore64
Debug mode (server): true
Debug mode (client): false
Fds: 10
Goroutines: 10
EventsListeners: 0
Init Path: /usr/local/bin/docker
Sockets: [unix:///var/run/docker.sock tcp://0.0.0.0:2375]
I am trying to just install a pre-built image using:
sudo docker pull busybox
I get this error:
sudo docker pull busybox
2014/08/18 17:56:19 Post http:///var/run/docker.sock/images/create?fromImage=busybox&tag=: dial unix /var/run/docker.sock: no such file or directory
Is something wrong with my docker setup?
When I do a docker pull busybox without sudo, it pulls the image and the download completes.
bash-3.2$ docker pull busybox
Pulling repository busybox
a9eb17255234: Download complete
fd5373b3d938: Download complete
d200959a3e91: Download complete
37fca75d01ff: Download complete
511136ea3c5a: Download complete
42eed7f1bf2a: Download complete
c120b7cab0b0: Download complete
f06b02872d52: Download complete
120e218dd395: Download complete
1f5049b3536e: Download complete
bash-3.2$ docker run busybox /bin/echo Hello Doctor
Hello Doctor
Am I missing something?
You don't need to run any docker commands with sudo when you're using boot2docker, as every command passed into the boot2docker VM runs as root by default.
You're seeing the error when running with sudo because sudo doesn't have the DOCKER_HOST env variable set; only your user does.
You can confirm this by doing a:
$ env
Then a
$ sudo env
And looking for DOCKER_HOST in each output.
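If you really do need sudo for some reason, one option is to preserve your environment when invoking it (a sketch; whether -E is permitted depends on your sudoers policy):
# -E keeps the caller's environment, including DOCKER_HOST, DOCKER_CERT_PATH and DOCKER_TLS_VERIFY
sudo -E docker pull busybox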
As for having a Dockerfile that runs your script, something like this might work for you:
Dockerfile
FROM busybox
# Copy your script into the docker image
ADD /path/to/your/script.sh /usr/local/bin/script.sh
# Run your script
CMD /usr/local/bin/script.sh
Then you can run:
docker build -t your-image-name:your-tag .
This will build your Docker image, which you can see by running:
docker images
Then, to run your container, you can do:
docker run your-image-name:your-tag
This run command will start a container from the image you created with your Dockerfile and build command, and it will exit once your script.sh has finished executing.
You can quickly set up your environment using shellinit.
At your command prompt execute:
$(boot2docker shellinit)
That will populate and export the environment variables and initialize other features.
docker pull will fail if the docker service is not running. Make sure it is running with:
:~$ ps aux | grep docker
root 18745 1.7 0.9 284104 13976 ? Ssl 21:19 0:01 /usr/bin/docker -d
If it is not running, you can start it with:
sudo service docker start
For Ubuntu 15 and above, use:
sudo systemctl start docker
On my Mac, when I start the boot2docker VM in the terminal using
boot2docker start
I see the following
To connect the Docker client to the Docker daemon, please set:
export DOCKER_CERT_PATH=<my things>
export DOCKER_TLS_VERIFY=1
export DOCKER_HOST=tcp://<ip>:2376
After setting these environment variables I was able to run the build without any problem.
Update [2016-04-28]: if you are using a recent version of Docker you can do
eval $(docker-machine env)
which will set the environment (docker-machine env prints the export statements).
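If your machine is not the default one, pass its name to docker-machine env (a sketch; the machine name dev here is just an example):
eval $(docker-machine env dev)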
I also got this error, though I did not use boot2docker but just installed "plain" Docker on Ubuntu (see https://docs.docker.com/installation/ubuntulinux/).
I got the error ("dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?") because the Docker daemon was not running yet.
On Ubuntu, you need to start the service:
sudo service docker start
See also http://blog.arungupta.me/resolve-dial-unix-docker-sock-error-techtip64
For boot2docker on Windows, after seeing:
FATA[0000] Get http:///var/run/docker.sock/v1.18/version:
dial unix /var/run/docker.sock: no such file or directory.
Are you trying to connect to a TLS-enabled daemon without TLS?
All I did was:
boot2docker start
boot2docker shellinit
That generated:
export DOCKER_CERT_PATH=C:\Users\vonc\.boot2docker\certs\boot2docker-vm
export DOCKER_TLS_VERIFY=1
export DOCKER_HOST=tcp://192.168.59.103:2376
Finally:
boot2docker ssh
And docker works again
In Linux, first of all execute sudo service docker start in terminal.
If you're using CentOS 7, and you've installed Docker via yum, don't forget to run:
$ sudo systemctl start docker
$ sudo systemctl enable docker
This will start the server, as well as restart it automatically on boot.
To set up your environment and keep it for future sessions, you can do:
echo 'export DOCKER_HOST="tcp://$(boot2docker ip 2>/dev/null):2375";' >> ~/.bashrc
Then:
source ~/.bashrc
And your environment will be set up in every session.
The first /var/run/docker.sock in the error refers to that path inside your boot2docker virtual machine, not on Windows itself; you still write it as /var/run/docker.sock on Windows.
You (maybe not the OP, but someone else) may already have a directory called /var/run/docker.sock/ left over from repeated attempts to get things right with Docker (especially as a newcomer). Delete that directory and try again.
This helped me on my way to getting it to work on CentOS 7.
I installed Docker using the offline method, and after a server restart Docker was not running.
So I executed the command below and it worked for me:
/usr/bin/dockerd > /dev/null
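Note that starting the daemon by hand like this ties up the terminal; one way around that (a sketch) is to background it and then confirm it is up:
sudo /usr/bin/dockerd > /dev/null 2>&1 &
# confirm the daemon is reachable (may need sudo or docker group membership)
docker info
That said, where systemd is available, sudo systemctl start docker (as in the other answers here) is the cleaner route.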
Run the following commands (OS = CentOS / RHEL / Amazon Linux, etc.):
sudo systemctl start docker
sudo systemctl enable docker
sudo systemctl status docker
chmod 777 /var/run/docker.sock
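Rather than chmod 777 on the socket (which lets every local user control Docker), a common alternative is to add your user to the docker group (a sketch; the new membership only takes effect after you log in again):
sudo usermod -aG docker $USER
# log out and back in (or run: newgrp docker), then verify
docker ps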
