No such image or container error - docker

I want to set up a Rancher server and a Rancher agent on the same server.
Here is what I have done to create the server:
docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:stable
Then I opened my web browser on port 8080.
I chose a login/password and enabled access control.
Then I wanted to create a host (agent). The Rancher web interface tells me to run this command:
docker run -e CATTLE_AGENT_IP=x.x.x.x --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.10 http://nsxxx.ovh.net:8080/v1/scripts/yyyy:1514678400000:zzzz
There is no error message, but I do not see any entry in the Hosts section of the Rancher web interface.
So I tried to execute a shell in the agent Docker container:
docker exec -ti xxxxx /bin/bash
I tried to manually run the run.sh script, and here is what I see:
Error: No such image or container: nsxxx
I suppose this is because the Docker containers cannot communicate with each other, but I have done exactly what is in the documentation...
Thanks for your help

For docker exec you need to replace the xxxxx string with the container ID or the name of the container. You can get both from the docker ps command.
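For example (a sketch; the container ID/name is a placeholder you substitute from your own docker ps output):

```shell
# List running containers; the first column is the container ID,
# the last column is the container name.
docker ps

# Open a shell using either the ID or the name shown by docker ps.
docker exec -ti <container-id-or-name> /bin/bash
```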

Related

Jenkins docker running in the container but not launching in web browser

On my Red Hat 7 Linux Docker host, I have created a Jenkins container by pulling the official Jenkins image from Docker Hub, and I was able to bring the Jenkins container up and running by executing the command:
docker run -d -p 50000:8080 -v $PWD/jenkins:/var/lib/jenkins -t jenkins_master
I could see that Jenkins is up when I checked the logs using docker logs {containerID}, but when I try to open it in a web browser at {hostip}:50000, I cannot access it; it throws "The site can't be reached". Since my container is running inside a company network, should I open/enable port 50000, or do I need to set a proxy on the Docker host?
Am I missing something here?
The official image provides the following command:
docker run -p 8080:8080 -p 50000:50000 -v /your/home:/var/jenkins_home jenkins
It seems that both ports 8080 and 50000 have to be published.
Execute the docker run command to start the container, then check the status of your container:
docker container run -p [YOUR PORT]:8080 -v [YOUR VOLUME]:/var/jenkins_home --name jenkins-local jenkins/jenkins:lts
You can then access it at localhost:[YOUR PORT].
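The same setup can be written as a Compose file (a sketch; the volume name is illustrative, the image and ports are the ones discussed above):

```yaml
# docker-compose.yml (sketch): publish both the web UI port (8080)
# and the inbound agent port (50000) of the official Jenkins image.
services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"    # web UI
      - "50000:50000"  # inbound agent (JNLP) port
    volumes:
      - jenkins_home:/var/jenkins_home
volumes:
  jenkins_home:
```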

DNS not working between two linked docker containers - getaddrinfo EAI_AGAIN error

I am attempting to set up a temporary environment where I can execute acceptance tests against a web application. To achieve this I have 3 Docker containers:
Container 1: Database (mongo_local)
docker build -t mongo_local ./test/AT/mongo
docker run --name mongo_local -d -p 27017:27017 mongo_local
Container 2 (Web application):
docker run --name mywebapp_local -d -p 4431:4431 --link mongo_local -v /applicationdata:/applicationdata mywebapp
Container 3 (Newman test runner):
docker build -t newman_runner ./test/AT/newman
docker run --name newman_runner --link mywebapp_local newman_runner
The web application can access the database successfully using the connection string mongodb://mongo_local:27017/mydb. Note that I am able to reference mongo_local; I don't have to specify an IP address for the mongo_local container.
The Newman test runner runs Postman tests against the web application, and all tests execute successfully when I specify the IP address of the mywebapp_local container (i.e. 10.0.0.4) in the URL. However, if I specify the name mywebapp_local in the URL, it does not work.
Hence https://mywebapp_local/api/v1/method1 does not work, but https://10.0.0.4/api/v1/method1 does work.
The error I'm getting is:
getaddrinfo EAI_AGAIN mywebapp_local mywebapp_local:443 at request ...
I've tried using --add-host in the docker run command, and it makes no difference. Is there anything obvious that I'm doing wrong?
As you have it set up, the newman_runner container doesn't --link mongo_local, and that's why it can't see it.
Docker has been discouraging explicit inter-container links for a while. If you create a Docker-internal network and attach each container to it:
docker network create testnet
docker run --net testnet ...
each container will be able to see all of the other containers on the same network by their --name, without an explicit --link.
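Applied to the three containers above, that might look like the following (a sketch; the image and container names are taken from the question, and the mongo_local and newman_runner images must already be built):

```shell
# Create a user-defined bridge network; containers attached to it
# can resolve each other by container name via Docker's built-in DNS.
docker network create testnet

# Run all three containers on the same network instead of using --link.
docker run --name mongo_local    --net testnet -d -p 27017:27017 mongo_local
docker run --name mywebapp_local --net testnet -d -p 4431:4431 -v /applicationdata:/applicationdata mywebapp
docker run --name newman_runner  --net testnet newman_runner
```

With this, https://mywebapp_local/... should resolve from inside newman_runner just as mongodb://mongo_local:27017 already does from the web application.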

Can I config the hosts file for spring boot docker container?

I am developing a Spring Boot application and building a Docker image. To get the app running, I have to add a new line "127.0.0.1 www.hostname.com" to /etc/hosts. I want this configured automatically when I run the Docker image. I tried this:
docker run --add-host www.hostname.com:127.0.0.1 xxx/name:1.0
What I expect is:
www.hostname.com:8080 = 127.0.0.1:8080
but it does not work.
Please give some advice. Thanks.
If you run the container with --hostname, e.g.
docker run -d -ti --hostname myhost --name mytest ubuntu /bin/bash
you add a line like 172.17.0.3 myhost to the container's /etc/hosts, where 172.17.0.3 is the private address of the container itself, so the name resolves internally from the container itself and from the Docker server too.
If you run the container with --add-host, e.g.
docker run -d -ti --add-host www.hostname.com:127.0.0.1 --name mytest ubuntu /bin/bash
you add a line 127.0.0.1 www.hostname.com to the container's /etc/hosts, so www.hostname.com resolves to 127.0.0.1 internally from the container itself.
It works, at least in Docker version 1.13.0 on Linux.
Otherwise you can change the container's /etc/hosts at container start, using sed or another substitution command in CMD or by overriding the command at docker run.
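For completeness, the same --add-host mapping can be declared in a Compose file via extra_hosts (a sketch; the image name is the placeholder from the question):

```yaml
# docker-compose.yml (sketch): equivalent of
#   docker run --add-host www.hostname.com:127.0.0.1 xxx/name:1.0
services:
  app:
    image: xxx/name:1.0
    extra_hosts:
      - "www.hostname.com:127.0.0.1"
```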

How to link to additional container without restarting?

X-post from https://groups.google.com/forum/#!topic/docker-user/A180aHSlQRE
Let's say I run the following command to link a web container to a db container:
docker run -d -P --name web --link db training/webapp python app.py
Now I want my web container to be linked to an additional container WITHOUT restarting the web container. Is that possible?
No, once started, you can't link the container to another container. But you can link a new container to the web container:
docker run -d -P --name myapp --link web <image> <command>
or you can link another web container to the db container:
docker run -d -P --name web2 --link db training/webapp python app.py
That said, since your first web container is running, you can also run:
docker inspect web
to find out the details of that container and see if you'd like to use them in the new container that you create. Another thing you can try is to make your web container interactive, so that once you have started it you can modify it at runtime.
Actually, there is a way to work around this limitation.
What you can do in this situation is to add LINKED_CONTAINER_IP and LINKED_CONTAINER_NAME directly to the /etc/hosts file of the "host container", using the following procedure:
1. Get the IP address of the running container that you additionally want to link into the "host container", directly from the host:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' CONTAINERID
2. Get interactive access to the shell of the running host container:
docker exec -it CONTAINERID sh
3. Once you are at the shell prompt, add the line to the /etc/hosts file:
echo "LINKED_CONTAINER_IP LINKED_CONTAINER_NAME" >> /etc/hosts
4. Verify using ping LINKED_CONTAINER_NAME.
Please note this is a temporary solution that works only until either container is restarted, in which case the IP address may change and resolving LINKED_CONTAINER_NAME will no longer work!
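A less fragile alternative on newer Docker versions is a user-defined network: docker network connect attaches a running container to a network without a restart, and name resolution then survives restarts because it is handled by Docker's DNS rather than a hard-coded IP. A sketch (the network and container names are illustrative):

```shell
# Create a user-defined network.
docker network create appnet

# "docker network connect" works on RUNNING containers,
# so neither container needs to be restarted.
docker network connect appnet web
docker network connect appnet newdb

# The web container can now resolve "newdb" by name via Docker's DNS.
```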

Docker in Docker: Port Mapping

I have found a similar thread, but failed to get it to work. So, the use case is:
I start a container on my Linux host:
docker run -i -t --privileged -p 8080:2375 mattgruter/doubledocker
Inside that container, I want to start another one with the GAE SDK devserver running.
I also need to access the running app from the host system's browser.
When I start a container inside the container with
docker run -i -t -p 2375:8080 image/name
I get an error saying that port 2375 is in use. I start the app and can curl 0.0.0.0:8080 from inside both containers (when using another mapping, 8080:8080 for example), but I cannot preview the app from the host system, since localhost:8080 on the host maps to port 2375 in the first container, and that port cannot be used when launching the second container.
I'm able to do that using the image jpetazzo/dind. Here is a test I have done that worked (as an example):
From my host machine I run the container with Docker installed:
docker run --privileged -t -i --rm -e LOG=file -p 18080:8080 jpetazzo/dind
Then inside the container I pulled the nginx image and ran it with:
docker run -d -p 8080:80 nginx
From the host environment I can then browse the nginx welcome page at http://localhost:18080.
With the image you were using (mattgruter/doubledocker) I had some problems running it (something related to attaching logs).
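To make the port chain explicit (a sketch; 18080, 8080 and 80 are the ports used above):

```shell
# host:18080 -> dind container:8080 -> inner nginx container:80
# Each -p maps one hop, so from the host the inner nginx is
# reachable through the outer container's published port:
curl http://localhost:18080
```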
