I have Docker installed and it runs fine, but after I create and run a container I want to know its ID, so I run docker ps. I always get this message:
Get http:///var/run/docker.sock/v1.15/containers/json: dial unix /var/run/docker.sock: no such file or directory
What could be wrong here?
Make sure you export the Docker environment variables. After you run
boot2docker start
it says:
To connect the Docker client to the Docker daemon, please set:
export DOCKER_CERT_PATH=/Users/jbielick/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1
export DOCKER_HOST=tcp://192.168.59.103:2376
You need to export those variables. Check whether they are set with
echo $DOCKER_HOST
If the output is blank, Docker can't talk to your VM.
Make sure boot2docker is running:
$ boot2docker start
Make sure the DOCKER_HOST variable is exported:
# Will print boot2docker VM IP
boot2docker ip
The VM's Host only interface IP address is: 192.168.59.103
# Set docker host variable with value from previous command
export DOCKER_HOST=tcp://192.168.59.103:2375
Check if the docker daemon is running on the boot2docker host
boot2docker ssh
ps aux | grep docker
/usr/local/bin/docker -d ....
If you are running on Linux, make sure you are running the Docker commands as the root user (or with sudo).
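On Linux this error usually means your user cannot reach /var/run/docker.sock. A minimal sketch of the two usual fixes (the group name docker is the distribution default, so treat it as an assumption):
# either run the client with sudo
sudo docker ps
# or add your user to the docker group, then log out and back in
sudo usermod -aG docker $USER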
I have a Docker container and have mapped host port 8888 to container port 22.
When I use another computer to SSH to the host on port 8888, it goes into the container directly, but something is off.
If I enter the container with 'sudo docker exec -u 0 -it xxxx /bin/bash' and run 'pip list', I get the expected output.
But if I SSH into the container as root via the host's port 8888, it says command not found!
The same goes for python: it works from docker exec,
but behaves differently when I SSH directly into the container.
The two environments are totally different. What should I do so that SSHing into the container from the host behaves like docker exec? Much appreciated!
Is there a way to find the IP address of the CentOS host machine from inside a Docker container running on it? Say I have a Linux machine with IP 10.10.10.10 that has container1 and container2 running on it; I want to query the host IP from my Java code, which runs inside the container. I am running these services as a Docker swarm.
Docker is about isolating the container from the host.
So, in theory, the container should not be aware of this IP address, unless the host "gives" it to the container.
Some ideas:
In an environment variable at run time
either
docker run -e "host_IP=10.10.10.10"...
or put it in a file my_env containing
host_IP=10.10.10.10
and use it with
docker run --env-file my_env
see the doc
https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file
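As a quick check of the run-time variant, a sketch (busybox is just a throwaway image used here for illustration):
# my_env must use KEY=VALUE syntax, one entry per line
echo "host_IP=10.10.10.10" > my_env
docker run --rm --env-file my_env busybox env | grep host_IP
# host_IP=10.10.10.10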
In an environment variable at build time
have in your Dockerfile a line
ENV host_IP 10.10.10.10
the doc
https://docs.docker.com/engine/reference/builder/#env
You can share /etc/hosts between the host and the container with
docker run -v /etc/hosts:/etc/hosts ...
You can use the following netstat command to find the host ip from inside a container:
netstat -nr | grep '^0.0.0.0' | awk '{print $2}'
I have an image that I'm using to run my CI/CD builds (using GitLab CE). I'd like to deploy my app doing something like this from within the container:
eval "$(docker-machine env manager)"
sudo docker stack deploy --compose-file docker-stack.yml web
However, I'd like the docker-machine to access machines defined on the host system since the container will be destroyed and I don't want to include access details in the image.
I've tried a few things
Accessing the Remote Host via docker-machine
Create the docker-machine on the host and mount the MACHINE_STORAGE_PATH so that it is available to the container
Connect to the remote docker-machine manually from within the container and setting the MACHINE_STORAGE_PATH equal to a mounted volume
Mounting the docker socket
In both cases, I can see the machine storage is persisted, but whenever I create a new container and run docker-machine ls none of the machines are listed.
Accessing the Remote Host via DOCKER_HOST
Forward the remote machine docker port to the host docker port docker-machine ssh manager-1 -N -L 2376:localhost:2376
export DOCKER_HOST=:2376
Tell docker to use the same certs that are used by docker-machine: export DOCKER_TLS_VERIFY=1 and export DOCKER_CERT_PATH=/Users/me/.docker/machine/machines/manager-1
Test with docker info
This gives me error during connect: Get https://localhost:2376/v1.26/info: x509: certificate signed by unknown authority
Any ideas on how I can perform a remote deployment from within a container?
Thanks
EDIT
Here is a diagram to try and help better communicate the scenario.
Don't use docker-machine for this.
Docker-machine stores files in $HOME/.docker/machine, so when you restart with a fresh copy of this folder, all previously defined machines will be removed. You could store this folder as a volume, but there's a much easier way for your purposes.
The solution is to mount the docker socket, and either as root or from a user with the same gid as the docker socket (note that group names themselves inside and outside the container may not match, so gid is important), run your docker ... commands as normal. You can skip the docker-machine eval completely since you are running the commands against the local docker socket.
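A minimal sketch of that approach, assuming you start the CI container yourself and its image already contains the docker CLI (the image name is the same placeholder as below; the gid value is just an example):
# find the gid that owns the socket on the host
stat -c '%g' /var/run/docker.sock
# 999   (example value)
# run the CI container with the socket mounted and that gid added
docker run -v /var/run/docker.sock:/var/run/docker.sock --group-add 999 <gitlab-image>
# inside the container, plain docker commands now hit the host daemon
docker stack deploy --compose-file docker-stack.yml web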
If you need to run commands remotely, I find it easier to define the DOCKER_HOST and DOCKER_TLS_VERIFY variables manually rather than using docker-machine.
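For the remote case, a sketch of setting those variables by hand (the host name and cert path are placeholders; you have to make the certs available inside the container yourself, e.g. via a mounted secret):
export DOCKER_HOST=tcp://manager-1.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/certs
docker info   # should report the remote daemon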
In case you want to communicate from your CI container to the Docker host you can simply mount the Docker socket when starting the CI container:
docker run -v /var/run/docker.sock:/var/run/docker.sock <gitlab-image>
Now you can run docker commands on the host from within the CI container.
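For example, a quick check from inside the CI container:
docker ps   # lists the containers running on the host (including this CI container)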
I've been looking on Google but I cannot find an answer.
Is it possible to connect to a Docker container in the VirtualBox VM that I just started up? I have the IP of the virtual machine, but if I try to connect by SSH it of course asks me for a password.
Regards.
see
https://github.com/BITPlan/docker-stackoverflowanswers/tree/master/33232371
to repeat the steps.
On my Mac OS X machine
docker-machine env default
shows
export DOCKER_HOST="tcp://192.168.99.100:2376"
So I added an entry
192.168.99.100 docker
to my /etc/hosts
so that ping docker works.
As a Dockerfile I am using:
# Ubuntu image
FROM ubuntu:14.04
which I am building with
docker build -t bitplan/sshtest:0.0.1 .
and testing with
docker run -it bitplan/sshtest:0.0.1 /bin/bash
Now ssh docker will react with
The authenticity of host 'docker (192.168.99.100)' can't be established.
ECDSA key fingerprint is SHA256:osRuE6B8bCIGiL18uBBrtySH5+iGPkiHHiq5PZNfDmc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'docker,192.168.99.100' (ECDSA) to the list of known hosts.
wf#docker's password:
But here you are connecting to the docker machine, not your container!
SSH listens on port 22 inside the container. You need to publish it on another host port and configure your image to run an SSH daemon that accepts logins for root or a valid user.
See e.g. https://docs.docker.com/examples/running_ssh_service/
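A rough sketch of what that looks like, assuming the image has been extended per that guide to start sshd and has a user with a password or key set up (port 2222 is an arbitrary choice):
docker run -d -p 2222:22 bitplan/sshtest:0.0.1
ssh -p 2222 root@docker   # "docker" is the /etc/hosts alias for the VM added above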
Are you trying to connect to a running container or trying to connect to the virtualbox image running the docker daemon?
If the first, you cannot just SSH into a running container unless that container is running an ssh daemon. The easiest way to get a shell into a running container is with docker exec -ti <container name/id> /bin/sh. Do a docker ps to see running containers.
If the second, if your host was created with docker-machine then you can ssh into it with docker-machine ssh <machine name>. You can see all of your running machines with docker-machine ls.
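To recap, the commands mentioned above (container and machine names are placeholders):
docker ps                                    # list running containers
docker exec -ti <container name/id> /bin/sh  # shell into a running container
docker-machine ls                            # list your docker-machine VMs
docker-machine ssh <machine name>            # ssh into the VM itself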
If this doesn't help, can you clarify your question a little and provide details on how you're creating your image and starting the container?
You can use ssh keys to access passwordless.
Here's some intro
https://wiki.archlinux.org/index.php/SSH_keys
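A minimal sketch of setting that up (the user name and VM IP are placeholders):
ssh-keygen -t rsa                 # generate a key pair, accept the defaults
ssh-copy-id user@192.168.99.100   # copy the public key to the target user/host
ssh user@192.168.99.100           # should now log in without a password prompt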
I'm running under boot2docker 1.3.1.
I have a Docker container running a web server via uwsgi --http :8080.
If I attach to the container I can browse the web site using lynx http://127.0.0.1:8080 so I know the server is working.
I ran my container with:
$ docker run -itP --expose 8080 uwsgi_app:0.2
It has the following details:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5248ad86596d uwsgi_app:0.2 "bash" 11 minutes ago Up 11 minutes 0.0.0.0:49159->8080/tcp cocky_hypatia
$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' 5248ad86596d
172.17.0.107
I thought I could access that web site from my host by going to http://172.17.0.107:49159.
This does not work. I just see 'connecting...' in Chrome, getting nowhere.
What am I doing wrong?
Extending Anentropic's answer: boot2docker is the old app for Mac and Windows, docker-machine is the new one.
Firstly, list your machines:
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
default * virtualbox Running tcp://192.168.99.100:2376
Then select one of the machines (the default one is called default) and:
$ docker-machine ip default
192.168.99.100
Ok, stupid me, I found the answer in the docs for boot2docker
https://docs.docker.com/installation/mac/#container-port-redirection
I needed to use the ip address of the boot2docker vm, rather than the ip of the container, i.e.
$ boot2docker ip
192.168.59.103
and I am able to browse my site from the host at http://192.168.59.103:49159/
I did not need to add any route on the host
To find the IP address of your container, you need NO additional installs:
docker inspect <container>
This provides a wealth of info. grep it for the IPAddress.
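For example (the container name is a placeholder; the address is from the question above):
docker inspect <container> | grep IPAddress
#         "IPAddress": "172.17.0.107",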
You could use boot2docker port mapping option -L, as described here.
So, in your case it would be
boot2docker ssh -L 0.0.0.0:8080:localhost:8080
and then
docker run -it -p 8080:8080 uwsgi_app:0.2
That way, you do not have to use boot2docker's IP address: you can use localhost or your own IP address (and your docker container can be accessed from outside).
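A quick check from the host, assuming the web server inside the container is up:
curl http://localhost:8080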
Boot2docker is outdated, but you may still have this problem on Docker for Windows or Mac, even though the same container works on Linux. One symptom is that trying to access a page on the server inside the container gives the error "didn't send any data" as opposed to "could not connect."
If so, it may be because on Win/Mac the container host has its own IP; it's not localhost as it is on Linux. Try running Django on IP 0.0.0.0, meaning accept connections from all IPs, like this:
python manage.py runserver 0.0.0.0:8000
Alternatively, if you need to make sure the server only responds to local requests (such as from your local proxy like nginx, apache, or gunicorn) you can use the host IP returned by hostname -i.
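A sketch of that variant (note that hostname -i can return more than one address on some setups):
python manage.py runserver "$(hostname -i)":8000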
And make sure you are using the -p port forwarding option correctly in the docker run command.
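For example (the image name is a placeholder):
docker run -p 8000:8000 my-django-image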
Assuming all is well, you should be able to access your server at http://localhost in a browser running on the host machine.
docker build -t {imagename} .
docker build -t api-rest-test .
docker run -dp {localport}:{exposeport} image:name
docker run -dp 8080:8080 api-rest-test:latest
Make sure you are using the same port for your localport and exposeport.
Then you can access your REST service on your local machine at http://localhost:8080
[EDIT: original version was ignoring the -P in question]
If you want to get to the containers without having to 'publish' the port (which changes its number)
there is a good run-through here.
The key is this line:
sudo route -n add 172.17.0.0/16 172.16.0.11
which tells the Mac how to route to the private network inside the VirtualBox VM that the Docker containers are on.
I had the same issue; in my case I was using an AWS EC2 instance. I was trying with the container IP, which did not work. Then I used the actual public IP of the AWS host, which worked.
How to troubleshoot the issue of hosting an application and reaching it in the local host browser:
For this, launch the container with the command below; in my case it was:
[root@centoslab3 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1b81d8a0e3e1 centos:baseweb "/bin/bash" 8 minutes ago Exited (0) 24 seconds ago webtest
[root@centoslab3 ~]# docker run --name=atul -v /root/dockertest:/var/www/html -i -t -p 5000:8000 centos:baseweb /bin/bash
In the httpd configuration:
[root@adb28b08c9ed /]# cd /etc/httpd/conf
[root@adb28b08c9ed conf]# ll
total 52
-rw-r--r--. 1 root root 34419 Sep 19 15:16 httpd.conf
Edit the file, set the port to 8000 in the Listen directive, and update the container IP and port under ServerName.
Restart the httpd service and you are done.
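A sketch of the final steps, given the 5000:8000 port mapping used above:
# inside the container
service httpd restart
# from the host: host port 5000 was published to container port 8000
curl http://localhost:5000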
Hope this helps