I am trying to create a shared folder from an Ubuntu container by installing Samba.
It is a test and I want to do it without creating volumes.
So, how could I see the IP of the container to create the folder in Windows?
I've been doing it with docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' containerId, but the IPs it returns belong only to Docker's internal networks.
Run the container with a host port mapping and you should be able to access the container instance at HostIP:HostPort.
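For the Samba case that could look like the sketch below (the image name is a placeholder; 445 is the standard SMB port, and the share itself still has to be configured inside the image):
docker run -d --name samba-test -p 445:445 <your-ubuntu-samba-image>
Windows would then reach the share through the host, e.g. \\HostIP\sharename, instead of the container's internal IP.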
Run an FTP server in the container and expose the necessary ports, including the SSH port.
Try accessing the files over FTP.
Have you tried the following command?
docker container ps
Check out the PORTS column; this should give you the output you need.
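If you only want the port mappings, one option (a sketch; the format string can be adjusted) is:
docker container ps --format 'table {{.Names}}\t{{.Ports}}'
A mapping such as 0.0.0.0:8080->80/tcp means host port 8080 forwards to container port 80.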
Say I have a machine running Docker (the Docker host) and I spin up some containers inside this Docker host.
I need the containers' services to be able to talk to each other: the containers expose ports, and they also need to resolve each other by hostname (e.g. example.com).
Container A needs to talk to container B with the URL example.com:3000.
I've read this article but am not quite sure about "inherit" from the Docker host: will the Docker host's /etc/hosts be appended to the /etc/hosts of a container running inside that Docker host?
https://docs.docker.com/engine/reference/run/#managing-etchosts
How can I achieve this?
Does this "inherit" have any connection to the type of Docker container networking described at https://docs.docker.com/v17.09/engine/userguide/networking/ ?
It does not inherit the host's /etc/hosts file. The file inside your container is managed by Docker and is only updated when you use the --add-host parameter. You can add individual records by using extra_hosts in docker-compose (https://docs.docker.com/compose/compose-file/#extra_hosts).
Although, if you're just trying to get two containers talking to each other, you can alternatively connect them to the same network. In docker-compose you can create what's called an external network and have all your docker-compose files reference it. You will then be able to connect by using the full Docker container name (e.g. http://project_app_1:3000).
See https://docs.docker.com/compose/compose-file/#external
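The same idea without docker-compose, as a rough sketch (the network, container and image names here are hypothetical):
docker network create mynet
docker run -d --network mynet --name app my-app-image
docker run -d --network mynet --name web my-web-image
Containers on a user-defined network resolve each other by container name, so web can reach the other service at http://app:3000.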
How is it possible to resolve names defined in the Docker host's /etc/hosts inside containers?
Containers running on my Docker host can resolve public names (e.g. www.ibm.com), so Docker DNS is working fine.
I would like to resolve names from the Docker host's /etc/hosts (e.g. 127.17.0.1 smtp) from containers.
My final goal is to connect to services running on the Docker host (e.g. an SMTP server) from containers. I know I can use the Docker host IP (127.17.0.1) from containers, but I thought that Docker would have used the Docker host's /etc/hosts to build the containers' resolver files as well.
I am even quite sure I have seen this working a while ago... but I could be wrong.
Any thoughts?
Giovanni
Check out the --add-host flag for the docker command: https://docs.docker.com/engine/reference/run/#managing-etchosts
$ docker run --add-host="smtp:127.17.0.1" container command
In Docker, /etc/hosts cannot be overwritten or modified at runtime (a security feature). You need to use Docker's API, in this case --add-host, to modify the file.
For docker-compose, use the extra_hosts option.
For the whole "connect to services running in host" problem, see the discussion in this GitHub issue: https://github.com/docker/docker/issues/1143.
The common approach to this problem is to use --add-host with Docker's gateway address for the host, e.g. --add-host="dockerhost:172.17.42.1". Check the issue above for some scripts that find the correct IP and start your containers.
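A rough sketch of such a script, assuming the default docker0 bridge exists on the host (the interface name, the awk filter and the image/command names are assumptions):
# look up the host's address on the docker0 bridge
HOST_IP=$(ip -4 addr show docker0 | awk '/inet /{sub(/\/.*/, "", $2); print $2}')
docker run --add-host="dockerhost:$HOST_IP" myimage mycommand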
You can set up a simple DNS server on the host and point the container's /etc/resolv.conf at the Docker host's DNS server.
For example, with dnsmasq you can add addn-hosts=/etc/hosts to the config file. The container, by using the Docker host's DNS server, will then be able to resolve the host's /etc/hosts entries.
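A minimal sketch of that setup, assuming dnsmasq is installed on the host and the default bridge address is 172.17.0.1 (both are assumptions about your environment):
# /etc/dnsmasq.conf on the host
addn-hosts=/etc/hosts
listen-address=172.17.0.1
# then point containers at it
docker run --dns 172.17.0.1 myimage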
I want to run a task in some Docker containers on different hosts, and I have written a manager app to manage the containers (start task, stop task, get status, etc.). Once a container is started, it sends an HTTP request to the manager with its address and port, so the manager knows how to manage the container.
Since there may be more than one container running on the same host, they would be mapped to different ports. To register a container with my manager, I have to know which port each container is mapped to.
How can I get the mapped port inside a docker container?
There's a solution here: How do I know mapped port of host from docker container? But it's not applicable if I run the container with -P. Since that question was asked more than a year ago, I'm wondering whether a new feature has been added to Docker to solve this problem.
You can also use docker port container_id
The docs:
https://docs.docker.com/engine/reference/commandline/port/
Examples from the docs:
$ docker port test
7890/tcp -> 0.0.0.0:4321
9876/tcp -> 0.0.0.0:1234
$ docker port test 7890/tcp
0.0.0.0:4321
$ docker port test 7890/udp
2014/06/24 11:53:36 Error: No public port '7890/udp' published for test
$ docker port test 7890
0.0.0.0:4321
I share /var/run/docker.sock into the container and get the container's own info.
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock alpine:latest sh
In the container shell:
env    # get HOSTNAME
curl --unix-socket /var/run/docker.sock http://localhost/containers/3c6b9e44a622/json
where 3c6b9e44a622 is your HOSTNAME.
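From that JSON you can pull the published host port directly; a sketch assuming jq is available inside the container and that the container publishes port 8080/tcp (both assumptions):
curl -s --unix-socket /var/run/docker.sock \
  "http://localhost/containers/$HOSTNAME/json" \
  | jq -r '.NetworkSettings.Ports["8080/tcp"][0].HostPort'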
Once a container is started, it will send an http request to the manager with its address and port
This isn't going to work. From inside a container you cannot figure out which Docker host port a container port is mapped to.
What I can think of that would work, and would be closest to what you describe, is having the container open a websocket connection to the manager. Such a connection allows two-way communication between your manager and the container while still going over HTTP.
What you are trying to achieve is called service discovery. There are already tools for service discovery that work with Docker. You should pick one of them instead of trying to make your own.
See for instance:
etcd
consul
zookeeper
If you really want to implement your own service discovery system, one way to go is to have your manager use the docker events command (or one of the Docker client libraries). This would enable your manager to get notified of container creations/deletions with nothing to do on the container side.
Then query the docker host to figure out the ports that are mapped to your containers with docker port.
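A rough sketch of that loop on the Docker host (the filter and format shown are just one way to wire it up):
# react to container start events and look up the published ports
docker events --filter 'event=start' --format '{{.ID}}' |
while read -r id; do
  docker port "$id"    # e.g. 8080/tcp -> 0.0.0.0:32768
done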
I have installed Docker on a CentOS machine. Now I am trying to run a MapR sandbox on it. After starting it I get this:
Starting MapR Services.................
To manage this node go to: https://172.17.0.13:8443
But I am not able to access this URL from a Windows machine on the same network as the CentOS machine.
This is an internal Docker network, inaccessible from outside the box. In order to access this container you need:
an EXPOSE instruction in the container image (most likely it is already there)
to run the container with the -p option
If you only specify the container port (-p 8443), the host port will be random; you can find it with the inspect command. Or you can use a fixed port, -p hostIp:externalPort:8443, where hostIp is the address of your Docker host.
After that you can access the container from the network at https://hostIp:externalPort
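Put together, a sketch for the MapR case (the host IP, external port and image name below are placeholders for your own values):
docker run -d -p 192.168.1.50:8443:8443 <mapr-sandbox-image>
and then browse to https://192.168.1.50:8443 from the Windows machine.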
I have an Ubuntu machine, which is a VM, where I have installed Docker. I use this machine from my local Windows machine over SSH, opening a terminal on the Ubuntu machine.
Now I am going to take a Docker image which contains all the necessary software (e.g. Apache) installed in it. Later I am going to deploy a sample application (a web application) onto it and save it.
Now I am confused about how to check whether the deployed application is running properly, i.e. what would be the address of the container which contains the deployed application?
For example, if I type http://127.x.x.x, which is the address of the Ubuntu machine, I just get a timeout.
Can anyone tell me how to verify the deployed application? Printing the output of a program on the console works seamlessly, as the output gets printed; the only thing I have doubts about is the web application.
There are some possibilities to check whether your app is running.
Remote API
As JimiDini said, one possibility is the Docker remote API. You can use it to see all running containers (which would be your use case, right?), inspect a certain container, or start and stop containers. The API is a REST API with several bindings for programming languages (at https://docs.docker.io/reference/api/remote_api_client_libraries/). Some of them are very outdated. To use the Docker remote API from another machine, I needed to open it explicitly:
docker -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock -d &
Note that the API is open to the world now! In a real scenario you would need to secure it in some way (e.g. see the example at http://java.dzone.com/articles/securing-docker%E2%80%99s-remote-api).
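Once the daemon listens on that TCP port, a remote-API call is a plain HTTP request; for example, a sketch listing the running containers (the host IP is a placeholder, 4243 matches the port opened above):
curl http://<docker-host-ip>:4243/containers/json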
Docker PS
To see all running containers, run docker ps on your host. If you do not see your app there, it is not running. The output also shows the ports your app is exposing. You can also do this via the remote API.
Logs
You can also check the logs. You can run docker attach <container id> to attach to a certain container and see its stdout. You can also run docker logs <container id> to receive the Docker logs. What I prefer is to write the logs to a certain directory, e.g. all logs to /var/log, and mount this folder to my host machine. Then all your logs will end up in /home/ubuntu/docker-logs on your host.
docker run -p 80:8080 -v /home/ubuntu/docker-logs:/var/log:rw my/application
A word on ports and IPs
Every container gets its own IP address. You can check this IP address via the remote API or via Docker on the host machine directly. You can also specify a certain hostname for the container (by passing --hostname="test42" to the run command). However, you mostly do not need that.
To access the application in the container, you need to open the port in the container and bind to a port on the host.
In your Dockerfile you need to EXPOSE the port your app runs on:
FROM ubuntu
...
EXPOSE 8080
CMD run-my-app.sh
When you start your container, you need to bind this port to a port of the host:
docker run -p 80:8080 my/application
Now you can access your app on http://localhost:80 or http://127.0.0.1:80.
If your app does not respond, check whether the container is running with docker ps or via the remote API. If it is not running, check the logs for the reason.
(Note: If you run your Ubuntu VM in something like VirtualBox and you try to access it from your Windows machine, make sure you opened the ports in VirtualBox too!).
Docker container has a separate IP address. By default it is private (accessible only from the host-machine).
Docker provides all metadata (including IP address) via its API:
https://docs.docker.io/reference/api/docker_remote_api_v1.10/#inspect-a-container
https://docs.docker.io/reference/api/docker_remote_api_v1.10/#monitor-docker-s-events
You can also take a look at a little tool called docker-gen for inspiration. It monitors Docker events and creates configuration files on the host machine using templates.
To obtain the ip address of a docker container, if you know its id (a long hex string) or if you named it:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container-id-or-name>
Docker runs its own network, and to get information about it you can run the following commands:
docker network ls
docker network inspect <network name>
docker inspect <container id>
In the output, you should be able to find the IP.
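If you only want the IP rather than the full JSON, a sketch using a Go template (the network name "bridge" is an assumption; use the network your container is actually attached to):
docker inspect --format '{{(index .NetworkSettings.Networks "bridge").IPAddress}}' <container id>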
But there are also a couple of things you need to be aware of regarding the Dockerfile and the docker run command:
when you EXPOSE a port in the Dockerfile, the service in the container is not accessible from outside Docker, only from other Docker containers
when you EXPOSE the port and use the docker run -p ... flag, the service in the container is accessible from anywhere, even outside Docker
So, for example, if your Apache is running on port 8080 you should expose it in the Dockerfile and then you can run it as:
docker run -d -p 8080:8080 <image name>, and you should be able to access it from your host at http://localhost:8080.
It is an old question/answer but it might help somebody else ;)
Working as of 2020:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id