How to copy and rename a Docker container?

I have a docker container that I want to use to partition client access to a database. I'd like to be able to have one container per client. If I start multiple copies of the container they all have the same name, the only difference being the port the container is assigned to.
How can I copy/rename the containers in such a way that I can differentiate the container without having to consult a lookup table that matches the assigned port to the client?

The docker rename command is part of Docker 1.5. Link to commit:
docker github
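A minimal sketch of the rename, assuming Docker >= 1.5; old_name and client_a are hypothetical names, and errors are swallowed so the sketch is harmless on a host where the container (or daemon) doesn't exist:

```shell
old="old_name"; new="client_a"   # hypothetical container names
# Rename in place; errors are swallowed so the sketch is safe to run anywhere.
docker rename "$old" "$new" 2>/dev/null || true
# With one distinct name per client you can then address each container
# directly: docker stop client_a, docker logs client_b, and so on.
```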

I'm using Docker 1.0.1, and the following lets me give an image a new name (strictly speaking, docker tag adds a name rather than renaming):
docker tag 1cf76 myUserName/imageName:0.1.0

All containers have a unique ID. When you run docker ps, you can see that the first column is that ID. You can then manipulate your containers with this ID.
You actually need this ID in order to perform any operation on a container (stop/start/inspect/etc.).
I am unsure of what you are trying to do, but for each client you can start a new container and then link the container ID to your user ID.
At the moment there is no container naming within Docker, so you can neither name nor rename a container; you can only use its ID.
In future versions, container naming will be implemented.

Related

How to get docker system ID inside container?

As far as I understand each docker installation has some kind of a unique ID. I can see it by executing docker system info:
$ docker system info
// ... a lot of output
ID: UJ6H:T6KC:YRIL:SIDL:5DUW:M66Y:L65K:FI2F:MUE4:75WX:BS3N:ASVK
// ... a lot of output
The question is whether it's possible to get this ID from inside the container (by executing code in the container) without mapping any volumes, etc.?
Edit:
Just to clarify the use case (based on the comments): we're sending telemetry data from docker containers to our backend. We need to identify which containers are sharing the same host. This ID would help us to achieve this goal (it's kind of machine id). If there's any other way to identify the host - it can solve the issue as well.
No - unless you explicitly inject that information into the container (volumes, COPY, an environment variable, an ARG passed at build time and persisted in a file, etc.), or fetch it via a GET request for example, that information is not available inside Docker containers.
You may open a console inside a container and search all files for that ID with grep -rnw / -e 'the-ID', but nothing will match.
On the other hand, any breakout from the container to the host would be a real security concern.
Edit to answer the update on your question:
The Docker host has visibility into the containers that are running. A much better approach would be to send the information you need from the host level rather than the container level.
You could still send data directly from the containers and use the container ID, which is known inside the container, and correlate that telemetry with the data sent from the Docker host.
Yet another option, which in my opinion is even better, is to write the telemetry data to the container's stdout. From there it can easily be collected on the Docker host via the logging driver and forwarded to the telemetry backend.
Often the container's hostname is the container ID (not the ID you're asking about, but the one you would use for, e.g., docker container exec), so it's a fine identifier.
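The "inject it explicitly" route described above can be sketched like this (my-telemetry-image and DOCKER_HOST_ID are hypothetical, and the sketch falls back gracefully when no daemon is reachable):

```shell
# Read the daemon ID on the host; "unknown" when no daemon is reachable.
host_id="$(docker system info --format '{{.ID}}' 2>/dev/null || echo unknown)"
# Hand it to the container as an environment variable (hypothetical image).
docker run -d -e DOCKER_HOST_ID="$host_id" my-telemetry-image 2>/dev/null || true
```

Inside the container, the telemetry code then just reads the DOCKER_HOST_ID environment variable.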

How can I reuse a Docker container as a service?

I already have a running container for both postgres and redis in use for various things. However, I started those from the command line months ago. Now I'm trying to install a new application and the recipe for this involves writing out a docker compose file which includes both postgres and redis as services.
Can the compose file be modified in such a way as to specify the already-running containers? Postgres already does a fine job of siloing any of the data, and I can't imagine that it would be a problem to reuse the running redis.
Should I even reuse them? It occurs to me that I could run multiple containers for both, and I'm not sure there would be any disadvantage to that (other than a cluttered docker ps output).
When I set container_name to the names of the existing containers, I get what I assume is a rather typical error of:
cb7cb3e78dc50b527f71b71b7842e1a1c". You have to remove (or rename) that container to be able to reuse that name.
Followed by a few that complain that the ports are already in use (5432, 6379, etc.).
Other answers here on Stack Overflow suggest that if I had originally invoked these services from another compose file with the exact same details, I could do so here as well and it would reuse them. But the command I used to start them somehow never made it into my bash_history, so I'm not even sure of the details (other than the name, ports, and restart always).
Are you looking for docker-compose's external_links keyword?
external_links allows you reuse already running containers.
According to docker-compose specification:
This keyword links to containers started outside this docker-compose.yml or even outside of Compose, especially for containers that provide shared or common services. external_links follow semantics similar to the legacy option links when specifying both the container name and the link alias (CONTAINER:ALIAS).
And here's the syntax:
external_links:
- redis_1
- project_db_1:mysql
- project_db_1:postgresql
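In context, a minimal compose sketch; everything here is hypothetical except the external_links keyword itself, and it assumes your already-running containers are literally named postgres and redis:

```yaml
version: "2"
services:
  myapp:
    image: myapp:latest     # hypothetical application image
    external_links:
      - postgres:db         # existing container "postgres", reachable as "db"
      - redis:cache         # existing container "redis", reachable as "cache"
```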
You can give your container a name. If no container with that name exists, it is the first run of the image, so start it; if the named container is found, restart it.
This way you can reuse the container. Here is my sample script.
containerName="IamContainer"
# Check whether a container with exactly this name exists (running or stopped)
if docker ps -a --format '{{.Names}}' | grep -Eq "^${containerName}\$"; then
  docker restart ${containerName}                     # reuse it
else
  docker run --name ${containerName} -d hello-world   # first run: create it
fi
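The ^...$ anchors are what make the check exact; that behavior can be demonstrated without a Docker daemon by simulating the docker ps output (names are made up):

```shell
containerName="IamContainer"
# Simulated output of: docker ps -a --format '{{.Names}}'
names='IamContainer2
AnotherOne'
if printf '%s\n' "$names" | grep -Eq "^${containerName}\$"; then
  result=found
else
  result=absent
fi
echo "$result"   # "IamContainer2" is rejected by the anchors, so: absent
```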
You probably don't want to keep using a container that you don't know how to create. However, the good news is that you should be able to figure out how you can create your container again by inspecting it with the command
$ docker container inspect ID
This will display all settings, the docker-compose specific ones will be under Config.Labels. For container reuse across projects, you'd be interested in the values of com.docker.compose.project and com.docker.compose.service, so that you can pass them to docker-compose --project-name and use them as the service's name in your docker-compose.yaml.
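For example (a sketch: the real command is docker container inspect <ID> with a Go-template --format, but here the JSON fragment is simulated with made-up values so the extraction is reproducible anywhere):

```shell
# Simulated fragment of `docker container inspect` output:
labels='"com.docker.compose.project": "myproj",
"com.docker.compose.service": "db",'
project=$(printf '%s\n' "$labels" | sed -n 's/.*"com.docker.compose.project": "\([^"]*\)".*/\1/p')
service=$(printf '%s\n' "$labels" | sed -n 's/.*"com.docker.compose.service": "\([^"]*\)".*/\1/p')
# These are the values you would feed back to docker-compose:
echo "docker-compose --project-name $project ... (service: $service)"
```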

How to introduce individual version of docker container in ansible script?

Suppose four Docker containers are running, each with its own version. Now I want to introduce these individual versions in an Ansible script. Each version needs to be declared in group_vars (with latest by default).
So how can I do that? Any reply to this post is appreciated.
Containers themselves can be referred to by their container names, plain and simple. You can add whatever you want to the name within the limitations of container names, e.g. docker run -d --name="webapp-container-1462574616" milind/webapp:0.0.10 or whatever, and then that is how you would refer to that specific container anywhere else. For example docker stop webapp-container-1462574616. You refer to images via the version in the image tag, e.g. milind/webapp:0.0.10.
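A hedged sketch of how that can look with Ansible's community.docker collection; the variable name webapp_version is hypothetical, and default('latest') provides the fallback the question asks for:

```yaml
# group_vars/all.yml
webapp_version: "0.0.10"

# playbook task
- name: Run webapp at the version pinned in group_vars
  community.docker.docker_container:
    name: webapp-container
    image: "milind/webapp:{{ webapp_version | default('latest') }}"
    state: started
```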

Passing a Docker image's generated name to another container in docker-compose

I like the fact that image names are handled by docker-compose, and I don't want to use container_name to specify a fixed one. But at the same time I need to pass the generated name to a sibling container (based on the official docker image!) so that the sibling can ask the host to create a container based on the recently named image. Does that make any sense?
Clarification
I'm trying to configure a RabbitMQ Docker instance, and it comes in two steps: first installing a plugin, and once the plugin is installed, adding an exchange that depends on it. The important part is that these steps need to run in sequence, not in parallel (the exchange requires the plugin). Through my other question, I'm trying to somehow find the RabbitMQ container's name and send a docker exec -it command to it. And once this command returns, I need to run a new instance of a Python image to run the rabbitmqadmin script to create an exchange within RabbitMQ.
I know it sounds complicated, but this is the only way I could find to configure RabbitMQ without building my own image.
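One hedged way to locate the generated container without hard-coding container_name is the label Compose puts on every container it starts; the service name rabbitmq and the plugin chosen here are assumptions:

```shell
svc="rabbitmq"   # assumed compose service name
# Ask the daemon for the container carrying this compose label; cname stays
# empty (and the exec is skipped) when no daemon or no such container exists.
cname="$(docker ps --filter "label=com.docker.compose.service=$svc" \
                   --format '{{.Names}}' 2>/dev/null | head -n1)"
if [ -n "$cname" ]; then
  docker exec "$cname" rabbitmq-plugins enable rabbitmq_shovel
fi
```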

Communicating between docker containers

I'm still a newbie trying to learn Docker concepts. I want to read a JSON file present in one Ubuntu container from another Ubuntu container. How can I do this in Docker? Note that I have to send the JSON from the first container over HTTP. Any explanation or sample code would be really great.
If your first docker container declares a VOLUME, the other can be run with --volumes-from=<first_container>.
That mounts the declared path of the first container into the second one, effectively sharing a file or folder from the first container with the second.
Note that a container which is just created (not docker run, but docker create) is effectively a data volume container, there only to be mounted (--volumes-from) by other containers.
With HTTP, the second container must know about the first (and its EXPOSEd ports).
You would run the second container with --link=firstContainer:alias (note the container:alias order): that allows you to contact alias:port, which is actually the url+port of the first container.
See "Communication across links"
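A minimal sketch of the volume-sharing route; the image, container, and file names are all hypothetical, and the whole thing is guarded so it is a no-op on machines without a reachable Docker daemon:

```shell
shared_dir="/shared"             # hypothetical volume path
if docker info >/dev/null 2>&1; then
  # First container declares the volume and writes the JSON into it:
  docker run -d --name producer -v "$shared_dir" ubuntu \
    sh -c "echo '{\"ok\": true}' > $shared_dir/data.json && sleep 3600"
  # Second container mounts the same volume and reads the file:
  docker run --rm --volumes-from=producer ubuntu cat "$shared_dir/data.json"
fi
```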
