How to add an env variable to a running Docker container

I'm running a container with default env variables (like PORTS=1234,1235,1236) already defined in the Dockerfile. At runtime, a script reads this variable and starts the naming services on the defined ports.
Once the container is running, I want to start the naming service on ports 1237 and 1238 alongside the existing ports, without stopping the existing container.
Let me know if anybody needs more info.
Please suggest the best approach.

The idea behind containers is a single, self-contained process that runs an application. That model doesn't always fit, though, and sometimes you need to run multiple things in a single container. To kick the services off automatically, create a script, use the ADD (or COPY) instruction in the Dockerfile to get it into the image, and then point ENTRYPOINT at that script (see the sketch after the list below).
If you really want to do it at runtime instead of at container startup, you can do one of the following:
Add SSH capabilities to the container (bad idea).
Start the container with the -i "interactive" switch and a shell as the entrypoint, so you can attach to and detach from the container (not recommended).
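
A minimal sketch of that startup-script approach (the script name, the PORTS variable, and the start_naming_service command are placeholders for your own):

    # Dockerfile
    FROM ubuntu:22.04
    ENV PORTS=1234,1235,1236
    COPY start-services.sh /usr/local/bin/start-services.sh
    RUN chmod +x /usr/local/bin/start-services.sh
    ENTRYPOINT ["/usr/local/bin/start-services.sh"]

    #!/bin/sh
    # start-services.sh: start one naming service per port listed in $PORTS
    for port in $(echo "$PORTS" | tr ',' ' '); do
        start_naming_service --port "$port" &   # placeholder for your real command
    done
    wait   # keep the container in the foreground while the services run

Because PORTS is read from the environment, a newly created container can be given more ports with docker run -e PORTS=..., but (as noted elsewhere on this page) the environment of an already-running container cannot be changed.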

Related

Mandatory command or entrypoint in docker-compose

We just started moving our app to containers, so I am very new to the container world.
In our container image we only have the base Linux image with some RPMs installed and some scripts copied in. We were thinking that we would not have any command/entrypoint in the image itself. When the container comes up, our deployment job then runs a script inside the container to bring up the services (jetty/hbase/..), i.e. container bring-up and services bring-up are two separate steps in the deployment job.
This was working while I was bringing up the container with the docker run/podman run command.
Now we thought of moving to the docker-compose way. However, when I run "docker-compose up" it complains: "Error: No command specified on command line or as CMD or ENTRYPOINT in this image". In other words, when starting a container with the run command it's OK to have no CMD or ENTRYPOINT, but when starting a container via docker-compose it's mandatory to provide one. Why is that so?
To get past that error, we tried putting some simple CMD in the compose file, say /bin/bash. However, with this approach the container exits immediately. I found many Stack Overflow links explaining why this happens, e.g. Why docker container exits immediately. Only if I put tail -f /dev/null as the command in the compose file does the container stay up.
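
For reference, a minimal compose file showing that workaround (service and image names are placeholders):

    # docker-compose.yml
    version: "3"
    services:
      app:
        image: our-base-image:latest
        command: tail -f /dev/null   # keeps the container alive with no real service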
Can you please clarify what the right thing to do is? As mentioned, our requirement is to bring up the container without any services and then bring up the services separately, so we don't have any use case for CMD/ENTRYPOINT.
Containers (images) should be the thing that you deploy, not a thing that you deploy code into; it is considered good practice to have immutable infrastructure (containers, VMs, etc.).
Your build process should probably (!?) generate container images. A container image is (SHA-256) hashed to uniquely identify it.
Whenever your sources change, you should consider generating a new container image. It is a good idea to label container images so that you can tie a specific image (not the image name but a tagged version) to a specific build, and can always determine which e.g. commit resulted in which image version.
Corollary: it is considered bad practice to change container images.
One reason for preferring immutable infrastructure is that you get reproducible deployments: if you have issues with a container version, you know you didn't change it, you know which build produced it, and you know which source was used ...
There are other best practices for containers, including that they should hold no state, etc. It's old but seems comprehensive: 10 things to avoid in containers. There are also many analogs to The Twelve-Factor App.
(Too!?) often, containers use CMD to start their process but, in my experience, it is better to use ENTRYPOINT. Both can be overridden, but CMD is trivially overwritten while ENTRYPOINT requires an explicit --entrypoint flag. In essence, if you use CMD, your users must remember to repeat the name of your process whenever they want to pass command-line args, whereas ENTRYPOINT containers act more like running a regular old binary.
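
A minimal sketch of the difference (the image name, binary, and flags are hypothetical):

    # Dockerfile
    FROM alpine:3.19
    ENTRYPOINT ["/usr/bin/mytool"]
    CMD ["--help"]                # default arguments, replaced by any args to docker run

With this layout, docker run mytool-image --verbose runs /usr/bin/mytool --verbose, while bypassing the binary entirely requires the explicit flag: docker run --entrypoint /bin/sh mytool-image.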

Is it possible to configure a container from docker-compose or some other way?

Is it somehow possible to pass commands for automated container configuration after the container has started? For instance, could I pass multiple commands to the container from docker-compose to restart services with systemctl, change network configs, chmod files, etc.?
Thanks.
Yes, you can change the startup script to do what you want. Alternatively, you could run an init container, or do the configuration in your Dockerfile when you build the image.
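
For instance, a minimal sketch of the startup-script approach (main-service, the config path, and the hosts entry are placeholders):

    #!/bin/sh
    # entrypoint.sh: one-time configuration, then hand off to the real process
    chmod 600 /app/secrets.conf              # example file-permission tweak
    echo "10.0.0.5 backend" >> /etc/hosts    # example network tweak
    exec main-service "$@"                   # exec makes the service PID 1

Using exec at the end replaces the shell with the service itself, so signals from docker stop reach it directly.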

Ansible commands on docker containers?

Up to now I have been running my ansible-playbook commands against AWS EC2 instances.
Can I run regular Ansible modules (lineinfile, apt, pip, etc.) against a container?
Can I add my container IP to the hosts file in a containers group, and will the same code then work? That is, if I change my main.yml, which has
hosts: ec2-group
to
hosts: containers-group
will all the commands work?
I am a bit of a beginner at this, so please confirm. I am actually thinking of writing docker-compose files from scratch and running the docker-compose commands with Ansible.
You can, but it's not really how Docker is designed to be used.
A Docker container is usually a wrapper around a single process. In the standard setup you create an image that has that application built and packaged, and you can just run it without any further setup. It's not usually interesting to run a bare Linux distribution container (which won't have an application installed) or to run an interactive shell as the main container process. Tutorials like Docker's Build and run your image walk through this sequence.
A corollary to this is that containers don't usually have any local state. In the best case any state a container needs is in an external database; if you can't do that then you store local state in a volume that outlives the container.
Finally, it's extremely routine to delete and recreate containers. You need to do this to change some common options; in a cluster environment like Kubernetes this can happen outside your control. When this happens the new container will restart running its default setup, and it won't know about any manual changes the previous container might have had.
So you don't usually want to try to install software directly in a running container, since that will get lost as soon as the container exits. You can, in principle, get a shell in a container (via docker exec) but this is more of a debugging tool than an administration tool. You could make the only process a container runs be an ssh daemon, but anything you start this way will get lost as soon as the container exits (and I've never seen a recipe that correctly and securely sets up credentials to access it).
I'd recommend learning the standard Dockerfile system and running self-contained Docker images over trying to adapt Ansible to this rather different environment.
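
That said, if you do need to run Ansible tasks inside running containers, Ansible ships a Docker connection plugin that uses docker exec instead of SSH. A minimal inventory sketch, assuming the community.docker collection is installed and my-container is the running container's name:

    # inventory.yml
    all:
      children:
        containers-group:
          hosts:
            my-container:
              ansible_connection: community.docker.docker

With hosts: containers-group in the playbook, modules like lineinfile or pip will then run inside the container, but, per the caveats above, anything they change is lost as soon as the container is deleted and recreated.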

Need to pass arguments to docker entrypoint.sh during each docker start (not docker run). Is something like this possible?

I have a Dockerfile, and I can set environment variables and arguments during docker run; they persist across docker start/stop/restart.
But sometimes I need to change them, which requires me to make a new container every time.
Is there a solution to this?
There are many properties of a container that can only be set at creation time, and the environment variables and command line are among those. You must delete and recreate the container to change these. There isn't a workaround.
If you're just concerned about the length of the docker run command, consider either packaging that command in a shell script or looking at an orchestration tool like Docker Compose. If you change a setting in a docker-compose.yml file and re-run docker-compose up -d, it will make the minimal change required for that (which could include deleting and recreating the container, but it won't touch containers whose current settings are fine).
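
For example, a minimal sketch (the service, image, and variable names are placeholders): edit a value in the file and re-run docker-compose up -d, and Compose recreates only the affected container:

    # docker-compose.yml
    version: "3"
    services:
      app:
        image: myimage:latest
        environment:
          - PORTS=1234,1235,1236    # change this line, then: docker-compose up -d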

How to execute docker commands after a process has started

I wrote a Dockerfile for a service (I have a CMD pointing to a script that starts the process), but I cannot run any other commands after the process has started. I tried using '&' to run the process in the background so that the other commands would run afterwards, but it's not working. Any idea how to achieve this?
For example, say I started a database server and wanted to run some scripts only after the database process has started; how do I do that?
Edit 1:
My specific use case is that I am running a RabbitMQ server as a service, and once the service starts in a container I want to create a new user, make them an administrator, and delete the default guest user. I can do this manually by logging into the container, but I wanted to automate it by appending these commands to the shell script that starts the RabbitMQ service, and that's not working.
Any help is appreciated!
Regards
Specifically for your problem with RabbitMQ: you can create a rabbitmq.config file and copy it over when building the Docker image.
In that file you can specify both a default_user and a default_pass that will be created when the database is set up from scratch; see https://www.rabbitmq.com/configure.html
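
A minimal sketch of such a file in the classic Erlang-term format (the user name and password are placeholders); note these defaults only take effect on a node's very first startup, when its database is created:

    %% rabbitmq.config
    [
      {rabbit, [
        {default_user, <<"admin">>},
        {default_pass, <<"s3cret">>}
      ]}
    ].

In the Dockerfile this would be copied in with something like COPY rabbitmq.config /etc/rabbitmq/rabbitmq.config.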
As for the general problem: you can change the entrypoint to a script that runs both the setup you need and the service you want, in place of the service's own run script.
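
A minimal sketch for the RabbitMQ case (the user name, password, and pid-file path are placeholders): run the provisioning commands in a background subshell that waits for the broker, and start the server itself in the foreground:

    #!/bin/sh
    # entrypoint.sh: provision users once the broker is up, then keep serving
    (
      rabbitmqctl wait /var/lib/rabbitmq/mnesia/rabbit@"$HOSTNAME".pid  # block until the node boots
      rabbitmqctl add_user admin s3cret
      rabbitmqctl set_user_tags admin administrator
      rabbitmqctl delete_user guest
    ) &
    exec rabbitmq-server      # the foreground process keeps the container alive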
I partially understood your question. Based on what I gathered, I would recommend adding a COPY instruction to the Dockerfile to copy the script you want to run into the image. Once you build the image and run the container, start the db service, then exec into the container and run the script manually.
If you have a CMD instruction in the Dockerfile, it will be overridden by any command you specify at execution time. So I don't think you have another option for running the script automatically unless you drop the CMD from the Dockerfile.
