Is it possible to postpone the startup of a container based on the availability of a separate HTTP service? For example, only start the container once something is listening on port 8080?
That sort of application-level service check isn't available in docker-compose. You would need to implement the necessary logic in your docker images.
For example, if you have something that depends on a web service, you could have your CMD run a script that does something like:
#!/bin/sh
# Wait until the dependency answers on port 8080, then start the real program
while ! curl -sf http://servicehost:8080/; do
  sleep 1
done
exec myprogram
Another option is to set a restart policy of always on your containers, and have them fail if the target service isn't available. Docker will keep restarting your container until it stays running.
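For illustration, a minimal docker-compose sketch of that restart-policy approach; the service and image names here are assumptions, not from the original setup:
services:
  myapp:
    image: myapp:latest          # hypothetical image whose entrypoint exits if the dependency is unreachable
    restart: always              # Docker keeps restarting it until it stays up
  servicehost:
    image: my-web-service:latest # hypothetical dependency serving port 8080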
I am new to Docker containers, and my question is how to monitor a process that is running inside a container. For example, I have a container running Apache in it. How would I know if the Apache process inside the container got killed while my container is still running?
How can we ensure a specific process inside the container is running, and if that process goes down, how do we get an alert?
The Dockerfile reference has the answer:
https://docs.docker.com/engine/reference/builder/
More specifically, the HEALTHCHECK directive:
https://docs.docker.com/engine/reference/builder/#healthcheck
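As an illustrative sketch, the same idea can also be expressed at the docker-compose level, which supports a healthcheck key mirroring the Dockerfile directive; the image used here and the assumption that curl is available inside it are hypothetical:
services:
  web:
    image: httpd:2.4                # hypothetical Apache image
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost/ || exit 1"]   # assumes curl exists in the image
      interval: 30s
      timeout: 5s
      retries: 3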
Essentially, when your container's entrypoint fails, the container dies:
https://docs.docker.com/engine/reference/builder/#entrypoint
But, in any case, a process running inside a container is also visible from the host's process list, so you can safely use the output of ps aux | grep httpd to monitor your Apache PIDs.
In production, you don't just use docker run; you typically use a container orchestrator like Kubernetes, where you define health checks such as liveness and readiness probes, and the orchestrator takes care of the rest: it will restart the container if Apache fails for some reason.
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
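A minimal liveness-probe sketch along those lines; the pod name, image, and probe path are assumptions:
apiVersion: v1
kind: Pod
metadata:
  name: apache
spec:
  containers:
  - name: apache
    image: httpd:2.4            # hypothetical Apache image
    livenessProbe:
      httpGet:
        path: /                 # probe the web root; the kubelet restarts the container if this keeps failing
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10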
I have a Docker image that contains a Python file which reads its input from stdin (sys.stdin). I can run the image using the following command:
cat file.csv | docker run -i -t my_image
It pipes the contents of file.csv to the container, and I get the output as expected.
Now I want to deploy this image to Kubernetes. I can run the image on the server using Docker without any problems. But if I curl it, it should send a response back, yet I am not getting one because I do not have a web server listening on any port. I went ahead and built a deployment using the following command:
kubectl run -i my_deployment --image=gcr.io/${PROJECT_ID}/my_image:v1 --port 8080
It created the deployment and I can see the pods running. Then I exposed it:
kubectl expose deployment my_deployment --type=LoadBalancer --port 80 --target-port 8080
But if I try to access it with curl, using the assigned IP,
curl http://allocated_ip
I get a "connection refused" response.
How can I deploy this Docker image as a service on Kubernetes and send the contents of a file as input to the service? Do I need a web server for that?
Kubernetes generally assumes the containers it deploys are long-lived and autonomous. If you're deploying something in a Pod, particularly via a Deployment, it should be able to run on its own without any particular inputs. If it immediately exits, Kubernetes will restart it, and you'll quickly wind up in the dreaded CrashLoopBackOff state.
In short: you need to redesign your container so that it does not use stdin and stdout as its primary interface.
Your instinct to add a network endpoint into the service is probably correct, but Kubernetes won't do that on its own. If you rebuild your application to have, say, a Flask server and listen on a port, that's something you can readily deploy to Kubernetes. If the application expects data to come in on stdin and its results to go to stdout, adding the Kubernetes networking metadata won't help anything: in your example if nothing is listening inside the container on port 8080 then a network connection will never go anywhere.
I am assuming Kubernetes is running on premises. I would do the following.
Create an nginx or Apache ingress controller. Using Helm, it is pretty easy with:
helm install stable/nginx-ingress
Create a deployment exposing port 8080, or whatever port you would expose when running it with Docker. The actual deployment would expose an API to which you could send content via a POST.
Create a service with port 8080 and targetPort 8080. It should be type ClusterIP.
Create an ingress with the hostname and a servicePort of 8080 (a sketch of the service and ingress follows below).
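A hedged YAML sketch of steps 3 and 4; the names, labels, and hostname are assumptions, and it uses the networking.k8s.io/v1 Ingress API (older clusters use extensions/v1beta1 with a slightly different backend syntax):
apiVersion: v1
kind: Service
metadata:
  name: my-deployment
spec:
  type: ClusterIP
  selector:
    app: my-deployment          # must match the deployment's pod labels
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-deployment
spec:
  rules:
  - host: my-app.example.com    # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-deployment
            port:
              number: 8080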
Since you are passing the file as input when running the command, I assume that once the content is in the container you do not need to update the CSV.
The simplest approach to reading that file would be to ADD it in your Dockerfile and then read it with Python's open function.
You would have a line like:
ADD file.csv /home/file.csv
And in your Python code, something like:
file_in = open('/home/file.csv', 'r')
Note that if you want to change the file, you would need to update the Dockerfile, rebuild the image, push it to the registry, and redeploy to GKE. If you do not want to follow this process, you can use a ConfigMap.
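A hedged sketch of the ConfigMap route, assuming you first create it from the file with kubectl create configmap csv-data --from-file=file.csv; the container name and image are hypothetical, and note that with subPath later ConfigMap updates are not propagated into a running pod:
spec:
  containers:
  - name: my-app
    image: gcr.io/my-project/my_image:v1   # hypothetical image reference
    volumeMounts:
    - name: csv-data
      mountPath: /home/file.csv            # mounts only the one file, so open('/home/file.csv', 'r') still works
      subPath: file.csv
  volumes:
  - name: csv-data
    configMap:
      name: csv-data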
Also, if this answers your question, make sure to link it back to the same question you posted on Server Fault.
Here's my scenario.
I have 2 Docker containers:
C1: a container with Ruby (but it could be anything else) that prepares data files on which a calculation must be performed in the Julia language
C2: a container with Julia (or R, or Octave...), used to perform the calculation, so as to avoid installing Julia on the same system or container that runs the Ruby code
From the host, obviously, I have no problem doing the processing.
Usually when two containers are linked (or belong to the same network) they communicate with each other over the network by exposing some port. In this case Julia does not expose any port.
Can I run a command on C2 from C1, similar to what is done between the host and C2?
If so, how?
Thanks!
Technically yes, but that's probably not what you want to do.
The Docker CLI is just an interface to the Docker service, which listens at /var/run/docker.sock on the host. Anything that can be done via the CLI can be done by talking to that socket directly. You can mount this socket into a running container (C1) as a volume to allow that container to speak to its host's Docker service.
Docker has a few permissions that need to be set to allow this; older versions allow containers to run in "privileged" mode, in which case they're allowed to (amongst other things) speak to /var/run/docker.sock with the authority of the host. I believe newer versions of Docker split this permission system up a bit more, but you'd have to look into this. Making this work in swarm mode might be a little different as well.
Using this API at a code level, without installing the full Docker CLI within the container, is certainly possible (using a library or coding up your own interaction). A working example of doing this is JupyterHub + DockerSpawner, which has one privileged Hub server that instantiates new Notebook containers for each logged-in user.
I just saw that you explicitly state that the Julia container exposes no port/interface. Could you wrap that code in a larger container that gives it a server interface, while managing the serverless Julia program as a "local" process within the same container?
I needed to solve the same problem. In my case, it all started when I needed to run some scripts located in another container via cron. I tried the following scenarios with no luck:
Forgetting about the two-container scenario and placing all the logic in one container, so inter-container execution is no longer needed: this turned out to be a bad idea, since the whole Docker concept is to execute a single task in each container. In any case, creating a Dockerfile to build an image with both my main service (PHP in my case) and a cron daemon proved to be quite messy.
Communicating between containers via SSH: I then decided to try building an image that would take care of running the cron daemon, which would be the "Docker" approach to my problem, but the bad idea was to execute the commands from each cronjob by opening an SSH connection to the other container (in your case, C1 connecting via SSH to C2). It turns out it's quite clumsy to implement inter-container SSH logins, and I kept running into problems with permissions, passwordless logins and port routing. It worked in the end, but I'm sure it would add potential security issues, and I didn't feel it was a clean solution.
Implementing some sort of API that I could call via HTTP requests from one container to another, using something like curl or wget: this felt like a great solution, but it ultimately meant adding a secondary service to my container (an nginx to handle HTTP connections), and dealing with HTTP requests and timeouts just to execute a shell script felt like too much of a hassle.
Finally, my solution was to run "docker exec" from within the container. The idea, as described by scnerd, is to make sure the Docker client inside your container talks to the Docker service on your host:
To do so, you must install docker into the container you want to execute your commands from (in your case, C1), by adding a line like this to your Dockerfile (for Debian):
RUN apt-get update && apt-get -y install docker.io
To let the Docker client inside your container interact with the Docker service on your host, you need to add /var/run/docker.sock as a volume to your container (C1). With Docker Compose this is done by adding this to your service's "volumes" section:
- /var/run/docker.sock:/var/run/docker.sock
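Put together, a minimal docker-compose.yml sketch of this setup could look like the following; the service and image names are hypothetical:
version: "3"
services:
  cron:                          # C1: the container that issues "docker exec" commands
    image: my-cron-image         # hypothetical image with the docker client installed (see the RUN line above)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  worker:                        # C2: the container whose scripts get executed
    image: my-worker-image       # hypothetical
    container_name: C2           # fixed name so "docker exec C2 ..." resolves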
Now when you build and run your Docker image, you'll be able to execute "docker exec" from within the container, with a command like this, and you'll be talking to the Docker service on the host:
docker exec -u root C2 /path/your_shell_script
This worked well for me. Since, in my case, I wanted the Cron container to launch scripts in other containers, it was as simple as adding "docker exec" commands to the crontab.
This solution, as scnerd also points out, might not be optimal, and I agree with his comments about your structure: considering your specific needs, this might not be what you need, but it should work.
I would love to hear any comments from someone with more experience with Docker than me!
I am new to Docker, and have been playing with the mariadb image from hub.docker.com.
I want to add an SSH service to the container. I could apt-get install the package (after apt-get update), but I don't know how to start it.
Normally (outside this container), we could start the service with a regular service ssh start command. But I don't know how to do this in the container without interfering with the ENTRYPOINT and CMD mechanisms.
The Dockerfile comes with a docker-entrypoint.sh (see the source code here) that pretty much expects the image to have a CMD line of
CMD ["mysqld"]
I have read some related SO articles, such as these:
How to automatically start a service when running a docker container?
Start sshd automatically with docker container
but they are not directly applicable here due to the interplay of ENTRYPOINT and CMD in the Dockerfile.
What you want to do is not best practice; what you should do instead is use a user-defined Docker network (here is the documentation).
In your case, you would, for example:
1. Create a network
docker network create --driver bridge database_network
2. Start the mariadb container
docker run --network=database_network -itd --name=mariadb mariadb
3. Start the ssh container
Here we are using krlmlr/debian-ssh for the example; feel free to use any other image.
docker run --network=database_network -itd --name=ssh -p 22:22 -e SSH_KEY="$(cat ~/.ssh/id_rsa.pub)" krlmlr/debian-ssh:wheezy
Then, when you connect via port 22, you can reach the database using mariadb as the hostname: instead of, say, localhost:3306, you will use mariadb:3306 within the SSH tunnel.
This will even allow you to set up multiple containers with different SSH keys across different ports, if your server can handle the load of that many containers.
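For reference, a hedged docker-compose sketch of the same two-container setup; it assumes you export your public key first, e.g. export SSH_PUBLIC_KEY="$(cat ~/.ssh/id_rsa.pub)", since Compose does not perform command substitution:
version: "3"
services:
  mariadb:
    image: mariadb
    networks:
      - database_network
  ssh:
    image: krlmlr/debian-ssh:wheezy
    ports:
      - "22:22"
    environment:
      SSH_KEY: "${SSH_PUBLIC_KEY}"   # populated from the host environment
    networks:
      - database_network
networks:
  database_network:
    driver: bridge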
To answer the question of which would be more efficient between: A) running one container with both mysqld and sshd in it, and B) running two containers, one for mysqld and one for sshd:
The difference in resource usage would be minimal, because running sshd within the official mariadb image would require using supervisor or s6, which is one more process than running the two containers individually. That means that, depending on the size of the SSH image, memory usage may well be about the same. In terms of CPU usage, I'm of the opinion that it would be the same, and may actually favor scenario B.
I am running a docker container which contains a node server. I want to attach to the container, kill the running server, and restart it (for development). However, when I kill the node server it kills the entire container (presumably because I am killing the process the container was started with).
Is this possible? This answer helped, but it doesn't explain how to kill the container's default process without killing the container (if possible).
If what I am trying to do isn't possible, what is the best way around this problem? Adding command: bash -c "while true; do echo 'Hit CTRL+C'; sleep 1; done" to each image in my docker-compose, as suggested in the comments of the linked answer, doesn't seem like the ideal solution, since it forces me to attach to my containers after they are up and run the command manually.
This is by design in Docker. Each container is supposed to be a stateless instance of a service: if that service is interrupted, the container is destroyed; if that service is requested/started, it is created. At least, that's the model if you're using an orchestration platform like k8s, Swarm, Mesos, Cattle, etc.
There are applications that exist to represent PID 1 rather than the service itself. But this goes against the design philosophy of microservices and containers. Here is an example of an init system that can run as PID 1 instead and allow you to kill and spawn processes within your container at will: https://github.com/Yelp/dumb-init
Why do you want to reboot the node server? To apply changes from a config file or something? If so, you're looking in the wrong direction for a solution. You should instead define a persistent volume, so that when the container respawns the service rereads said config file.
https://docs.docker.com/engine/admin/volumes/volumes/
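As an illustration of that idea, a minimal compose sketch that keeps the config on a host volume; the image tag and paths are assumptions:
services:
  node:
    image: node:18                   # hypothetical image/tag
    volumes:
      - ./config:/app/config         # config lives on the host, so it survives container respawns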
If you need to restart the process that's running the container, then simply run:
docker restart $container_name_or_id
Exec'ing into a container shouldn't be needed for normal operations; consider it a debugging tool.
Rather than changing the script that gets run so that it restarts automatically, I'd move that out to the Docker engine so it's visible if your container is crashing:
docker run --restart=unless-stopped ...
When a container is run with the above option, docker will restart it for you, unless you intentionally run a docker stop on the container.
As for why killing PID 1 in the container shuts it down: it's the same as killing PID 1 on a Linux server. If you kill init/systemd, the box will go down. Inside the container's namespace, similar rules apply and cannot be changed.