How to execute a command or fetch a value from the host inside a container? - docker

Host: Ubuntu, with Docker and Kubernetes installed on it.
After logging into a container, can I run some command to fetch data from the host?
Example: the helm version from the host.

The whole point of containers is isolation. An app in a container should, in almost all cases, not know about the environment it runs in. A microservice should communicate with other services over the network, typically HTTP.

Related

Access Docker daemon on Host without knowing Host OS

I use docker-compose to spin up a few containers as part of an application I'm developing. One of the containers needs to start a docker swarm service on the host machine. On Docker for Windows and Docker for Mac, I can connect to the host docker daemon through the REST API by using the "host.docker.internal" DNS name, and this works great. However, if I run the same compose file on Linux, "host.docker.internal" does not work (yet; it seems it may be coming in the next version of Docker). To make matters worse, on Linux I can use network mode "host" to work around the issue, but that isn't supported on Windows or Mac.
How can I either:
Create a docker-compose file or structure a containerized application to be slightly different based on the host platform (windows|mac|linux) without having to create multiple docker-compose.yml files or different application code?
Access the host docker daemon in a consistent way regardless of the host OS?
If it matters, the container that is accessing the docker daemon of the host is using the Docker Python SDK and making API calls to Docker over TCP without TLS (this is used for development only).
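For context, a minimal sketch of the kind of connection the question describes, using the Docker Python SDK; the port (2375, the conventional unencrypted daemon port) is an assumption, not something stated above:

    import docker

    # "host.docker.internal" resolves to the host on Docker for
    # Windows/Mac; on Linux (at the time of this question) the name
    # does not resolve inside a container, which is the problem here.
    client = docker.DockerClient(base_url="tcp://host.docker.internal:2375")
    print(client.version())  # simple round-trip to the host daemon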
Update w/ Solution Detail
For a little more background, there's a web application (ASP.NET Core/C#) that allows users to upload a zip file. The zip file contains, among other things, an exported docker image file. There's also an nginx container in front of all of this to allow for SSL termination and load balancing. The web application pulls out the docker image, then, using the docker daemon's HTTP API, loads the image, re-tags it, and pushes it to a private docker repository (which is running somewhere on the developer's network, external to docker). After that, it posts a message to a message queue, where a separate Python application uses the Python docker library to deploy the docker image to a docker swarm.
For development purposes, the applications all run as containers and thus need to interact with docker running on the host machine as a standalone swarm node. SoftwareEngineer's answer led me down the right path. I mapped the docker socket from the host into the web application container at first, but ran into a limitation of .NET Core that won't be resolved until .NET 5: there's no clean way of doing HTTP over a unix socket.
I worked around that issue by eventually realizing that nginx can reverse proxy HTTP traffic to a unix socket. I set up all containers (including the dynamically loaded swarm service from the zips) to be part of an overlay network, giving them all access to each other and allowing me to hit an HTTP endpoint to control the host machine's docker/swarm daemon over HTTP.
The last hurdle I ran into was that nginx couldn't write to the mapped-in /var/run/docker.sock file, so I modified nginx.conf to allow it to run as root within the container.
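To illustrate the resulting setup: once nginx is proxying HTTP traffic to the host's docker socket, any container on the overlay network can drive the daemon with plain HTTP calls to the Docker Engine API. A minimal sketch (the service name "docker-proxy" and the API version are assumptions for illustration):

    import requests

    # "docker-proxy" is a hypothetical nginx service name on the shared
    # overlay network; nginx reverse-proxies the request to the host's
    # /var/run/docker.sock.
    resp = requests.get("http://docker-proxy/v1.40/containers/json")
    resp.raise_for_status()
    for container in resp.json():
        print(container["Id"][:12], container["Image"])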
As far as I can tell, the docker socket is available at the path /var/run/docker.sock on all systems. I have personally verified this with a recent Linux distro (Ubuntu), and with Windows 10 Pro running Docker for Windows (2.2.0) under both WSL2 (Ubuntu and Alpine) and the Windows cmd (CLI) and PowerShell. From memory, it works with macOS too, and I used to do the same thing in WSL1.
Mapping this into a container is achieved on any terminal with the -v, --volume, or --mount flags. So,
docker container run -v /var/run/docker.sock:/var/run/docker.sock
mounts the socket into an identical path within the container. This means that you can access the socket using the standard docker client (docker) from within the container with no extra configuration. Using this path inside a Linux container is recommended because it is the standard location and is likely to be less confusing to anyone maintaining your code in the future (including yourself).
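Since the question mentions the Docker Python SDK, here is a minimal sketch of what this looks like from inside such a container (assuming the docker package is installed):

    import docker

    # Inside a container started with
    #   -v /var/run/docker.sock:/var/run/docker.sock
    # the host daemon is reachable over the mounted socket. Since this
    # is the default socket path, docker.from_env() would also work.
    client = docker.DockerClient(base_url="unix://var/run/docker.sock")

    for container in client.containers.list():
        print(container.name, container.status)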

Cloud-init to configure an Ubuntu docker container?

Is it possible to use a cloud-init configuration file to define commands to be executed when a docker container is started?
I'd like to test the provisioning of an Ubuntu virtual machine using a docker container.
My idea is to provide the same cloud-init config file to an Ubuntu docker container.
No. If you want to test a VM setup, you need to use actual virtualization technology. The VM and Docker runtime environments are extremely different and you can't just substitute one technology for the other. A normal Linux VM startup will run a raft of daemons and startup scripts – systemd, crond, sshd, ifconfig, cloud-init, ... – but a Docker container will start none of these and will only run the single process in the container.
If your cloud-init script ultimately runs a docker run command, you can provide an alternate command to that container the same way you would with docker run on your development system. But a Docker container usually won't look to places like the EC2 metadata service to find its own configuration, and it'd be unusual for a container to run cloud-init at all.
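As an illustration of "providing an alternate command", a minimal sketch using the Docker Python SDK (the image tag and command are placeholders):

    import docker

    client = docker.from_env()

    # The command argument replaces the image's default command, which
    # is the usual container substitute for boot-time provisioning.
    output = client.containers.run(
        "ubuntu:22.04",
        command=["sh", "-c", "apt-get update && echo provisioned"],
        remove=True,
    )
    print(output.decode())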

Run new docker container (service) from another container on some command

Is there any way to do this:
run one service (container) with the main application - a server (a Flask application);
the server allows running other services, which are also Flask applications;
but I want to run each new service in a separate container?
For example, I have an endpoint /services/{id}/run on the server, where each id is some service id. The Docker image is the same for all services; each service runs on a separate port.
I would like something like this:
a request to the server - <host>/services/<id>/run -> the application on the server runs some magic command / sends a message somewhere -> the service with that id starts in a new container.
I know that at least locally I can use docker-in-docker, or simply mount the docker socket into a container and work with docker inside that container. But I would like to find a way to work across multiple machines (each service can run on another machine).
For Kubernetes: I know how to create and run pods and deployments, but I can't find how to run a new container on a command from another container. Can I somehow communicate with k8s from a container to run a new container?
Generally:
can I run a new container from another container without docker-in-docker and without mounting the docker socket;
can I do it with/without Kubernetes?
Thanks in advance.
I've compiled all of the links that were in the comments under the question. I would advise taking a look at them:
Docker:
Stack Overflow: control Docker from another container.
The link explaining the security considerations is no longer working, but I've managed to retrieve it from the Web Archive: Don't expose the Docker socket (not even to a container)
Exposing dockerd API
Docker Engine Security
Kubernetes:
Access Clusters Using the Kubernetes API
Kubeflow for machine learning deployments
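To sketch the Kubernetes route concretely: a pod can talk to the API server with its service-account credentials and create new pods, which avoids docker-in-docker and socket mounting entirely. A minimal example with the official kubernetes Python client (the pod name, image, and namespace are placeholders, and the calling pod's service account must have RBAC permission to create pods):

    from kubernetes import client, config

    # Load the in-cluster service-account credentials that Kubernetes
    # mounts into every pod.
    config.load_incluster_config()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="service-42"),  # hypothetical service id
        spec=client.V1PodSpec(
            containers=[
                client.V1Container(
                    name="service",
                    image="registry.example.com/service:latest",  # placeholder image
                    ports=[client.V1ContainerPort(container_port=5000)],
                )
            ]
        ),
    )

    # Create the pod; the Flask server's /services/<id>/run handler
    # could run this on each request.
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)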

Access kubernetes service from a docker container which is run by mesos

I have mesos-master (mesosphere/mesos-master) and mesos-slave (mesosphere/mesos-slave) running inside my Kubernetes cluster.
The mesos slave starts the docker containers (docker is accessed by mounting /usr/bin/docker from the host) with my data processing application (short-lived, 1-5 min), which needs to access other Kubernetes services. So, in short, I need to access Kubernetes DNS from a container.
Is it possible to do that?
Thanks
I found only one way:
I resolve the "kube-dns.kube-system" host into an IP address. Then I inject "metadata.namespace" into the environment variable KUBERNETES_NAMESPACE, and finally I pass --dns RESOLVED_IP and --dns-search ${KUBERNETES_NAMESPACE}.svc.cluster.local, so that a mesos docker container is able to talk to the services.
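A minimal sketch of that approach in Python (assuming KUBERNETES_NAMESPACE has already been injected via the Downward API):

    import os
    import socket

    # Resolve the cluster DNS service to an IP the docker daemon can use.
    dns_ip = socket.gethostbyname("kube-dns.kube-system")
    namespace = os.environ["KUBERNETES_NAMESPACE"]

    # Flags to append to the `docker run` invocation for each task.
    docker_flags = [
        "--dns", dns_ip,
        "--dns-search", f"{namespace}.svc.cluster.local",
    ]
    print(" ".join(docker_flags))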

how to connect to an application from inside a docker container?

I have created a Docker container which is running on a particular VM in Azure (or consider any cloud). That container runs a Java/Node.js/C# application which needs to access a Jenkins server running on a company network.
So will I be able to access Jenkins from that Docker container? If not, please provide a solution on how to access it.
You can use the --network=host option to let your container run in the same network context as the server you're trying to connect to, provided that server is accessible from the container host.
Of course, you should specify a specific network or routes if possible.
https://docs.docker.com/engine/reference/run/#network-settings
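For completeness, the equivalent when starting the container from the Docker Python SDK (the image name is a placeholder):

    import docker

    client = docker.from_env()

    # network_mode="host" is the SDK equivalent of `docker run --network=host`:
    # the container shares the host's network stack, so anything reachable
    # from the host (e.g. a Jenkins server on the company network) is
    # reachable from the container.
    client.containers.run(
        "myorg/app:latest",  # placeholder application image
        network_mode="host",
        detach=True,
    )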
