I have looked at the Sonata Project demo page: http://demo.sonata-project.org
There is something wonderful on this page: they can start a container from a webpage.
How can we do that?
What I also want to do is wait until the container is ready before redirecting to it.
And how can they automatically delete the container after 10 minutes?
Thanks
You can build a frontend API on top of the Docker commands, customized however you like. In the backend, only Docker commands are run: when you press the start button it runs a docker run command to start the container, and so on. For removing containers, you can easily filter them by timestamp: https://docs.docker.com/engine/reference/commandline/system_prune/#filtering
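As a sketch of what that backend could look like (the demo=true label and the helper names here are my own choices, not from any particular project; docker container prune does genuinely support the label and until filters):

```python
import subprocess

def build_run_command(image: str) -> list:
    # Label demo containers so the cleanup job can find them later.
    return ["docker", "run", "-d", "--label", "demo=true", image]

def build_prune_command(max_age: str = "10m") -> list:
    # `docker container prune` removes stopped containers; the `until`
    # filter restricts it to containers created before the given duration.
    return ["docker", "container", "prune", "--force",
            "--filter", "label=demo=true",
            "--filter", "until=" + max_age]

def start_container(image: str) -> str:
    # Returns the new container's ID, which the frontend can poll
    # (e.g. with `docker inspect`) to know when it is ready,
    # before redirecting the user to it.
    out = subprocess.run(build_run_command(image),
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()
```

Note that prune only removes stopped containers, so a periodic cleanup job would first have to docker stop the labelled containers older than the cutoff, then prune them.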
I'm new to Docker and trying to understand how images work. I ran the following command:
sudo docker search hello-world
and it returned this:
docker.io docker.io/carinamarina/hello-world-app This is a sample Python web application,
I then ran:
sudo docker run docker.io/carinamarina/hello-world-app
...and this was the output from the terminal:
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
I don't understand. How can the IP address be 0.0.0.0? I entered that into a browser and got nothing. I tried localhost:5000 and got nothing.
How does one get to see this webapp run?
tl;dr
You need to publish the container's port to the host network to see the application working.
long version:
Well, good for you for starting to work with Docker!
I will start by explaining a little bit about Docker, then I will explain what is happening here.
First of all, there is a difference between an "image" and a "container".
An image is the blueprint that containers are created from.
You write the definition of the image (install this, copy that from the host, build that, etc.) in the image file, then you tell Docker to build the image, and then you RUN containers from that image.
So if you have one image and run two containers from it, they will both have the same instructions (definition).
What happened in your case
When you invoke the run command, the first thing you will see is:
Unable to find image 'carinamarina/hello-world-app:latest' locally
That means the local Docker daemon cannot find the image (blueprint) named docker.io/carinamarina/hello-world-app locally, so it does the following:
it starts pulling the image from the remote registry,
then it extracts the layers of the image,
then it starts the container and shows the logs from INSIDE the container.
Why it didn't work for you
The application is running inside the container on port 5000.
The container has a completely different network from the host it is running on (a CentOS 7 machine in your case). That is also why you see 0.0.0.0: inside the container it simply means "listen on all network interfaces"; it is not an address you can browse to from the host.
You have to set up port forwarding between the Docker network and the host network so you can USE the application from the HOST.
You can read more about that here: docker networking
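For this particular image, a minimal fix looks like this (host port 5000 is an arbitrary choice; any free host port works on the left side of the colon):

```shell
# -p HOST_PORT:CONTAINER_PORT publishes the container's port on the host.
# The Flask app listens on 5000 inside the container, so after running
# this on the host, the app answers at http://localhost:5000
run_cmd="docker run -p 5000:5000 carinamarina/hello-world-app"
echo "$run_cmd"
```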
I recommend the following places to start with
let's play with docker
docker tutorial for beginners
I'm new to Docker, but I know that a Docker container should have only one process. Is it nevertheless possible to run a script inside a Docker container multiple times, like with a cronjob?
For example, I have a Python script that manipulates my database. This process should run every hour. For that I have created a container based on a Dockerfile like this:
FROM python:slim
COPY ac.py ac.py
RUN pip install pymongo
CMD [ "python", "./ac.py" ]
If I load this container from my repository and start it in any environment, the process runs only once.
Is there any possibility to run it like a cronjob (without using an ubuntu image inside my Docker container)?
By the way, I want to deploy this container on Google Cloud. Is there any cloud provider that offers functionality like that?
You could leverage Docker Swarm and create a service with its restart condition set to any and a delay of 1h between restarts:
docker service create --restart-condition any --restart-delay 1h myPythonImage:latest
See docker service create reference: https://docs.docker.com/engine/reference/commandline/service_create/#options
My Docker container hangs and I have no idea how to get it back to life. I can't stop or restart it; nothing happens. I can't even export it.
You could use service docker restart to restart the Docker daemon (assuming you are using Linux).
You can try these ideas:
Check the problem by looking at the logs: docker logs $container-name
Try to create a new image from your container with docker commit, then create a new container from that image.
Create a new container from your initial image or from your docker-compose file.
How can we view the startup logs of a Docker container? (i.e., the events that occur while the container is coming up, like boot.log in JBoss, for example.)
As of now I can view events in the logs once the container is up, but I cannot find any mechanism to view logs while the container is starting up.
Any idea?
OK, I found a way to do that.
1) First run "docker events &" on the host where you want to run your container.
2) Then run your container:
docker run -d .... (full command)
The events stream will print a hex ID for the container (look at the end):
(container=f1b76ae5a75a1443c01181de46767gbb03621167d019f5d26d3e5131d9158843511a69, name=bridge, type=bridge)
3) Now go to another window and view the logs:
docker logs <container ID from the previous step>
This is especially useful if your container is not coming up properly.
I have two Docker images, let's say:
Image 1: a public API built with Python Flask
Image 2: some functional tests written in Python
I am looking for a way to have the Image 1 container, when its API is called with a specific parameter, trigger a docker run of Image 2.
Is it possible to trigger a docker run from inside a Docker container?
Thanks
You are talking about using Docker in Docker
Check out this blog post for more info about how it works:
https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
In short, you need to mount the Docker socket as a volume (and as of Docker 1.10, its dependencies as well);
then you can run Docker in Docker.
But it seems like what you are trying to do does not necessarily require that. You should instead look into making your 'worker' an actual HTTP API that you can run, and call an endpoint on it to trigger the parameterized work. That way you run one container that waits for work requests and executes them, instead of launching a new container each time you need a task done.
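A minimal sketch of that worker-API idea using only the standard library (the POST route, the port, and the contents of run_job are all placeholders; the real handler would kick off your functional test suite):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_job(params):
    # Placeholder: this is where the functional tests would actually run.
    return {"started": True, "suite": params.get("suite", "default")}

class WorkerHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body and hand its parameters to the job runner.
        length = int(self.headers.get("Content-Length", 0))
        params = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(run_job(params)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve inside the Image 2 container:
#     HTTPServer(("0.0.0.0", 8080), WorkerHandler).serve_forever()
```

Image 1's Flask handler then just POSTs to this container's port instead of shelling out to docker run, and no socket mounting is needed.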