How to get docker system ID inside container?

As far as I understand each docker installation has some kind of a unique ID. I can see it by executing docker system info:
$ docker system info
// ... a lot of output
ID: UJ6H:T6KC:YRIL:SIDL:5DUW:M66Y:L65K:FI2F:MUE4:75WX:BS3N:ASVK
// ... a lot of output
The question is: is it possible to get this ID from inside the container (by executing code inside the container) without mapping any volumes, etc.?
Edit:
Just to clarify the use case (based on the comments): we're sending telemetry data from docker containers to our backend, and we need to identify which containers are sharing the same host. This ID would help us achieve that goal (it's a kind of machine ID). If there's any other way to identify the host, that would solve the issue as well.

No - unless you explicitly inject that information into the container (volumes, COPY, an environment variable, an ARG passed at build time and persisted in a file, etc.), or you fetch it via a GET request for example, that information is not available inside the containers.
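For instance, a minimal sketch of the environment-variable route (assuming whatever starts your containers can run docker info on the host first; your-image is a placeholder):
# On the host: read the engine ID (docker info supports a Go-template --format flag)
HOST_ENGINE_ID=$(docker info --format '{{.ID}}')
# Pass it into the container at run time
docker run -d -e HOST_ENGINE_ID="$HOST_ENGINE_ID" your-image
# Inside the container, the telemetry code can then read $HOST_ENGINE_ID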
You may open a console inside a container and search all files for that ID (grep -rnw '/' -e 'the-ID'), but nothing will match the search.
On the other hand, any breakout from the container to the host would be a real security concern.
Edit to answer the update on your question:
The docker host has visibility into the containers that are running. A much better approach would be to send the information you need from the host level rather than the container level.
You could still send data directly from the containers and use the container ID, which is known inside the container, and correlate that telemetry with the data sent from the docker host.
Yet another option, which is even better in my opinion, is to send that telemetry data to the stdout of the container. This info can easily be collected and sent to the telemetry backend on the docker host, via the logging driver.
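A rough sketch of that pattern (the container name and the forwarding script are placeholders):
# Inside the container: the app writes one JSON line per measurement to stdout
echo "{\"container\": \"$(hostname)\", \"metric\": \"requests\", \"value\": 42}"
# On the host: collect the log stream and forward it to the telemetry backend
docker logs -f my-telemetry-container | ./forward-to-backend.sh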

Often the hostname of the container is the container ID--not the ID you're talking about, but the ID you would use for e.g. docker container exec, so it's a fine identifier.
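A quick way to check this from inside a container (assuming the hostname wasn't overridden with --hostname):
# The default hostname is the short (12-character) container ID
hostname
cat /etc/hostname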

Access Host Folder from Docker Container without run -v command

I want to share a folder between my host (Ubuntu), or an NFS server, and a container or image (Ubuntu). I can't use the -v option, since the container is started by a program that only accepts the container name and runs it itself. Copying is not possible since the folder is big and its content might change regularly.
An NFS mount inside the container throws the error "Protocol not supported" (done the same way as on the host).
So far I have found that a "hardcoded" mount is not possible for images and that NFS mounts might not work with docker.
I'd be open to some "hacky" solutions as well if docker doesn't support this directly.
Bind mounts (the docker run -v option) are the only way to do this. It's considered a major design goal and security feature of Docker that containers can't generally access the host filesystem, so it'd be a major bug if there was some way to bypass this isolation.
You need to change the calling code to include the -v option, or rebuild your image to embed the data you need (if it's read-only).
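For reference, the two options look roughly like this (image names and paths are placeholders):
# Option 1: change the caller to bind mount the host folder (read-only here)
docker run -v /path/on/host:/data:ro my-image
# Option 2: rebuild the image with the data baked in, via a Dockerfile line such as COPY data/ /data/
docker build -t my-image .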

How do you access/pull data from an outside server into a Docker container?

I have run into more and more data scientists who use Docker containers, in order to allow for reproducible analyses.
Question: How do you download/pull data into a Docker container?
If the data is downloadable via a URL, naturally you could add a line like this in the Dockerfile
RUN wget www.server_to_data.org/path/path/myfile.gz
But I have data sitting on a server, whereby users ssh into the server with a key-pair in ~/.ssh/id_rsa.pub. I'm not sure how this could work security-wise.
How does one normally download or access the data in this case?
One could possibly mount the server, but I'm not sure how one would access it from within the container/VM.
For your current situation, where you've got the data on a server and you're handing out key pairs to people who should have access, you can reuse that existing infrastructure without changing it: set a volume for the ssh keys in the image, and have the people running the image start the container with that volume pointed at their ssh key.
Set a volume in the image with the Dockerfile:
FROM ubuntu
#[RUN your installation process]
VOLUME /home/container_user/.ssh
Run the container, mounting the location of the ssh key into that volume:
docker run -d -v PATH_TO_DIRECTORY_HOLDING_SSH_KEY:/home/container_user/.ssh [OTHER OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
Then you can download the data as part of the script that runs when the container is started.
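That startup script could look roughly like this (the server, user and paths are placeholders, and it assumes an ssh client is installed in the image and the key is mounted as shown above):
#!/bin/sh
# entrypoint.sh - fetch the data using the key mounted at /home/container_user/.ssh
scp -i /home/container_user/.ssh/id_rsa user@data-server.example.org:/path/to/myfile.gz /data/myfile.gz
# then hand off to the container's main process
exec "$@"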
The basic idea is lifted from How can I get my ~/.ssh keys into a docker container running locally?
That said, if we back the question up a little and ask how exactly people are going to use your image, where the image is going to be stored (public or private repo), and how often the data changes, there may be some more user-friendly ways to satisfy the need. Also, if you allow docker-compose to be the means by which the container is run, there are some other options available to you.

Docker backup container with startup parameters

I've been facing this problem for months now and I don't have an adequate solution.
I'm running several containers based on different images. Some of them were started using Portainer with some arguments and volumes. Some of them were started using the CLI and docker start with some arguments and parameters.
Now all these settings are stored somewhere, because if I stop and restart such a container, everything works well again. But if I do a commit, back it up with tar, load it on a different system and do a docker start, it has lost all of its settings.
The procedure as described here: https://linuxconfig.org/docker-container-backup-and-recovery does not work in my case.
Now I'm thinking about writing my own web application which would create docker compose files based on my settings, rather than just doing a docker start with the correct params. This web application would also take care of the volumes (just folders) and do an incremental backup of them with borg to a remote server.
But actually this is only an idea. Is there a way to "extract" a docker compose file from a running container, so that I can redeploy a container 1:1 to another server and just have to run docker run mycontainer and it will have the same settings?
Or do I have to write my web app? Or have I missed some page on Google and there is already such a solution?
Thank you!
To see the current configuration of a container, you can use:
docker container inspect $container_id
You can then use those configurations to run your container on another machine. There is no easy import/export of these settings to start another container that I'm aware of.
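For example, --format lets you pull out just the parts you usually need to reconstruct the run command (these are standard fields of the inspect output):
# Environment variables the container was created with
docker container inspect --format '{{json .Config.Env}}' $container_id
# Volumes and bind mounts
docker container inspect --format '{{json .Mounts}}' $container_id
# Port bindings, restart policy and other run options
docker container inspect --format '{{json .HostConfig}}' $container_id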
Most people use a docker-compose.yml to define how they want a container run. They also build images with a Dockerfile and transfer them with a registry server rather than a save/load.
The docker-compose.yml can be used with docker-compose or docker stack deploy and allows the configuration of the container to be documented as a configuration file that is tracked in version control, rather than error-prone, user-entered settings. Running containers by hand or starting them with a GUI is useful for a quick test or debugging, but not for reproducibility.
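A typical flow along those lines (the registry address and image name are placeholders) looks like this:
# Build the image from a Dockerfile and push it to a registry
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0
# On the target host: start everything from the version-controlled compose file
docker-compose up -d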
You would like to back up the instance, but the commands you're providing back up the image. I'd suggest updating your Dockerfile to solve the issue. In case you really want to go down the path of saving the instance's current state, you should use the docker export and docker import commands.
Reference:
https://docs.docker.com/engine/reference/commandline/import/
https://docs.docker.com/engine/reference/commandline/export/
NOTE: docker export does not export the content of the volumes anyway; I suggest you refer to https://docs.docker.com/engine/admin/volumes/volumes/
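If you do go the export/import route, a minimal sketch looks like this; note that image metadata (CMD, ENV) and the original run options are not carried over, so you have to re-specify them:
# On the source host: export the container's filesystem
docker export my-container | gzip > my-container.tar.gz
# On the target host: import it as a new image and run it again,
# re-specifying ports, volumes, environment variables and the command yourself
gunzip -c my-container.tar.gz | docker import - my-container-image
docker run -d my-container-image [COMMAND]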

What is best practice for sharing database between containers in docker?

Does anyone know what the best practice is for sharing a database between containers in docker?
What I mean is: I want to create multiple containers in docker. Then these containers will execute CRUD on the same database with the same identity.
So far, I have two ideas. One is to create a separate container that merely runs the database. The other is to install the database directly on the host machine where docker is installed.
Which one is better? Or, is there any other best practice for this requirement?
Thanks
It is hard to answer a 'best practice' question, because it's a matter of opinion. And opinions are off topic on Stack Overflow.
So I will give a specific example of what I have done in a serious deployment.
I'm running ELK (Elasticsearch, Logstash, Kibana). It's containerised.
For my data stores, I have storage containers. These storage containers contain a local filesystem pass-through:
docker create -v /elasticsearch_data:/elasticsearch_data --name ${HOST}-es-data base_image /bin/true
I'm also using etcd and confd, to dynamically reconfigure my services that point at the databases. etcd lets me store key-values, so at a simplistic level:
CONTAINER_ID=`docker run -d --volumes-from ${HOST}-es-data elasticsearch-thing`
ES_IP=`docker inspect $CONTAINER_ID | jq -r '.[0].NetworkSettings.Networks.dockernet.IPAddress'`
etcdctl set /mynet/elasticsearch/${HOST}-es-0 $ES_IP
Because we register it in etcd, we can then use confd to watch the key-value store, monitor it for changes, and rewrite and restart our other container services.
I'm using haproxy for this sometimes, and nginx when I need something a bit more complicated. Both these let you specify sets of hosts to 'send' traffic to, and have some basic availability/load balance mechanisms.
That means I can be pretty lazy about restarting/moving/adding elasticsearch nodes, because the registration process updates the whole environment. A mechanism similar to this is what's used for openshift.
So to specifically answer your question:
DB is packaged in a container, for all the same reasons the other elements are.
Volumes for DB storage are storage containers passing through local filesystems.
'finding' the database is done via etcd on the parent host, but otherwise I've minimised my install footprint. (I have a common 'install' template for docker hosts, and try and avoid adding anything extra to it wherever possible)
It is my opinion that the advantages of docker are largely diminished if you're reliant on the local host having a (particular) database instance, because you've no longer got the ability to package-test-deploy, or 'spin up' a new system in minutes.
(The above example - I have literally rebuilt the whole thing in 10 minutes, and most of that was the docker pull transferring the images)
It depends. A useful thing to do is to keep the database URL and password in an environment variable and provide that to Docker when running the containers. That way you will be free to connect to a database wherever it may be located. E.g. running in a container during testing and on a dedicated server in production.
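For example (the variable names are just placeholders your application would read):
# The same application image can point at a different database per environment
docker run -d \
  -e DATABASE_URL="postgres://db.example.internal:5432/app" \
  -e DATABASE_PASSWORD="$DB_PASSWORD" \
  my-app-image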
The best practice is to use Docker Volumes.
Official doc: Manage data in containers. This doc details how to deal with a DB and containers. The usual way of doing so is to put the DB into a container (which is actually not a container but a volume), then the other containers can access this DB-container (the volume) to CRUD (or more) the data.
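With current Docker the same idea is usually expressed with a named volume rather than a dedicated data container; a rough sketch (all names are placeholders):
# A network for the containers and a named volume for the database files
docker network create app-net
docker volume create db-data
# The database runs in its own container with the volume attached
docker run -d --name app-db --network app-net -v db-data:/var/lib/postgresql/data -e POSTGRES_PASSWORD="$DB_PASSWORD" postgres:15
# Application containers on the same network reach it by name (app-db:5432)
docker run -d --network app-net -e DATABASE_HOST=app-db my-app-image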
Random article on "Understanding Docker Volumes"
Edit: I won't go into further detail, as the other answer covers it well.

How to copy and rename a Docker container?

I have a docker container that I want to use to partition client access to a database. I'd like to be able to have one container per client. If I start multiple copies of the container they all have the same name, the only difference being the port the container is assigned to.
How can I copy/rename the containers in such a way that I can differentiate the container without having to consult a lookup table that matches the assigned port to the client?
The docker rename command is part of Docker 1.5. Link to commit:
docker github
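Usage:
# Rename an existing container (works whether it is running or stopped)
docker rename old-container-name new-container-name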
I'm using docker 1.0.1 and the following allows me to rename an image:
docker tag 1cf76 myUserName/imageName:0.1.0
All containers have a unique ID. When you run docker ps, you can see that the first column is the ID. You can then manipulate your containers with this ID.
You actually need this ID in order to perform any operation on the container (stop/start/inspect/etc..)
I am unsure of what you are trying to do, but for each client, you can start a new container and then link the container ID with your user ID.
At the moment, there is no container naming within Docker, so you can't name or rename a container; you can only use its ID.
In future versions, naming for containers will be implemented.
