Why does the Docker container stop immediately after starting, and how do I prevent it from stopping (via a POST request)?

I am trying to start a docker container using the following POST request:
Content-Type: application/json
{
    "Hostname": "",
    "Domainname": "",
    "User": "",
    "Memory": 0,
    "MemorySwap": 0,
    "CpuShares": 512,
    "Cpuset": "0,1",
    "AttachStdin": true,
    "AttachStdout": true,
    "AttachStderr": true,
    "PortSpecs": 6002,
    "Tty": false,
    "OpenStdin": false,
    "StdinOnce": false,
    "Env": null,
    "Cmd": [
        "python",
        "app.py"
    ],
    "Image": "jobinar/smile_webapp",
    "Volumes": {
        "/tmp": {}
    },
    "WorkingDir": "",
    "NetworkDisabled": false,
    "ExposedPorts": {
        "5000/tcp": {}
    }
}
However, the container immediately stops after starting. How do I configure my request to prevent it from exiting?
I would appreciate a POST request that does this, rather than the command-line way.
EDIT: I get a 201 CREATED response with the id of the created container, and I can see that the container has been created by running the docker ps -a command.
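For what it's worth, here is a hedged sketch of the general technique for keeping such a container alive through the same API: the container stops as soon as its main process exits, so either make Cmd a long-running foreground process or keep stdin open with a TTY. This is a sketch, not a confirmed fix for this particular image, and it assumes the daemon is reachable on tcp://localhost:2375:

curl -s -X POST http://localhost:2375/containers/create \
     -H "Content-Type: application/json" \
     -d '{
           "Image": "jobinar/smile_webapp",
           "Cmd": ["python", "app.py"],
           "Tty": true,
           "OpenStdin": true,
           "ExposedPorts": { "5000/tcp": {} }
         }'
curl -s -X POST http://localhost:2375/containers/<id>/start

The first call returns the container id; substitute it into the second call. If app.py itself exits or crashes, the container will still stop, so checking docker logs <id> is the other half of the diagnosis.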

If you have upgraded your Docker version, you have to delete /var/lib/docker/network on Ubuntu.
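A hedged sketch of that cleanup (the systemctl commands assume a systemd-based Ubuntu; stop the daemon before removing the directory):

host$ sudo systemctl stop docker
host$ sudo rm -rf /var/lib/docker/network
host$ sudo systemctl start docker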

Related

How to bring up a failed container

I have a container that failed after a long setup, and I want to log in (exec bash) at that point instead of executing the slow setup again. Is there any way?
The container is left over from a docker build process; it is still at the FROM ... AS builder stage.
If I try to start it, it fails right away.
$ docker start -ai 3d35a7f7a7b4
/bin/sh: mvn: command not found
Trying to exec anything right away doesn't work either:
$ docker start 3d35a7f7a7b4 & docker exec 3d35a7f7a7b4 -it /bin/sh
[1] 403273
3d35a7f7a7b4
unable to upgrade to tcp, received 500
[1]+ Done docker start 3d35a7f7a7b4
more info:
$ docker inspect 3d35a7f7a7b4
[
    {
        "Id": "3d35a7f7a7b4018ebbbd9aa59356714d7fed291a43752cbcb86dd852c946cc1e",
        "Created": "2022-07-06T23:56:37.001004587Z",
        "Path": "/bin/sh",
        "Args": [
            "-c",
            "mvn --version"
        ],
        "State": {
            "Status": "exited",
            "Running": false,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 0,
            "ExitCode": 127,
            "Error": "",
            "StartedAt": "2022-07-07T00:02:35.755444447Z",
            "FinishedAt": "2022-07-07T00:02:35.75741167Z"
        },
        "Image": "sha256:4819e2469963fdf531ec5bce5401b7ae7d28cd403528c0109512b5170ef61752",
        ...
This is not an optimal answer; it's here just for documentation (and for people to vote up if it is the best one can do with Docker).
docker run can be used on the image of the stopped container, and you can pass the CMD parameter right away. But any other peculiarity of the stopped container (e.g. networking) will also have to be repeated.
For the example in the question:
host$ docker run -it sha256:4819e2469963fdf531ec5bce5401b7ae7d28cd403528c0109512b5170ef61752 /bin/bash
container# _
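A further sketch, beyond the answer above: if the stopped container has filesystem state you want to keep (the whole point of avoiding the slow setup), docker commit can snapshot it into a new image first; the builder-debug tag is made up for this example:

host$ docker commit 3d35a7f7a7b4 builder-debug
host$ docker run -it --entrypoint /bin/sh builder-debug
container# _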

How to run podman commands on host from within container

In the case of Docker, this can be achieved by mounting docker.sock inside the container.
But since there is no daemon in Podman, what's the replacement for docker.sock?
Typically, I want to check the Podman images present on the host and start a new container.
I'm using Podman with --privileged=true and root.
There is a new API (status: experimental) that was announced in a blog post in January 2020.
[root@fedora31 ~]# podman --version
podman version 1.8.0
[root@fedora31 ~]# podman system service --timeout 500000 unix://root/foobar.sock
This function is EXPERIMENTAL.
As the API is still experimental this might change but right now you could make a query like this:
[root@fedora31 ~]# curl -s --unix-socket /root/foobar.sock http://d/v1.24/images/json | python3 -m json.tool
[
    {
        "Containers": 0,
        "Created": 1572319417,
        "Id": "f0858ad3febdf45bb2e5501cb459affffacef081f79eaa436085c3b6d9bd46ca",
        "Labels": {
            "maintainer": "Clement Verna <cverna@fedoraproject.org>"
        },
        "ParentId": "",
        "RepoDigests": [
            "sha256:8fa60b88e2a7eac8460b9c0104b877f1aa0cea7fbc03c701b7e545dacccfb433"
        ],
        "RepoTags": [
            "docker.io/library/fedora:latest"
        ],
        "SharedSize": 0,
        "Size": 201095865,
        "VirtualSize": 201095865,
        "CreatedTime": "0001-01-01T00:00:00Z"
    },
    null
]
[root@fedora31 ~]#
The command python3 -m json.tool was added to pretty-print the JSON output.
I think the UNIX socket can be accessed from inside a container by using the bind-mounting technique (that was mentioned in the question).
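A hedged sketch of that bind-mount idea, reusing the socket path from the session above (the in-container path /run/api.sock is arbitrary, the --security-opt flag is an assumption to sidestep SELinux labelling, and the fedora image is assumed to ship curl):

[root@fedora31 ~]# podman run --rm -it --security-opt label=disable \
    -v /root/foobar.sock:/run/api.sock \
    docker.io/library/fedora:latest \
    curl -s --unix-socket /run/api.sock http://d/v1.24/images/json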
According to the man page, the command podman system service also accepts the flag --varlink.
Using Varlink instead of the new API might be a better solution right now, as it is more mature, but it will be deprecated in the future.

Running Chronos docker image in BRIDGE mode

I've been putting together a POC mesos/marathon system that I am using to launch and control docker images.
I have a Vagrant virtual machine running in VirtualBox on which I run docker, marathon, zookeeper, mesos-master and mesos-slave processes, with everything working as expected.
I decided to add Chronos into the mix and initially I started with it running as a service on the vagrant VM, but then opted to switch to running it in a docker container using the mesosphere/chronos image.
I have found that I can get the container image to start and run successfully when I specify HOST network mode for the container, but when I change to BRIDGE mode I run into problems.
In BRIDGE mode, the Chronos framework registers successfully with Mesos (I can see the entry on the frameworks page of the Mesos UI), but it looks as though the framework itself doesn't know that the registration was successful. The Mesos master log is full of messages like:
I1009 09:47:35.876454 3131 master.cpp:2094] Received SUBSCRIBE call for framework 'chronos-2.4.0' at scheduler-16d21dac-b6d6-49f9-90a3-bf1ba76b4b0d@172.17.0.59:37318
I1009 09:47:35.876832 3131 master.cpp:2164] Subscribing framework chronos-2.4.0 with checkpointing enabled and capabilities [ ]
I1009 09:47:35.876924 3131 master.cpp:2174] Framework 20151009-094632-16842879-5050-3113-0001 (chronos-2.4.0) at scheduler-16d21dac-b6d6-49f9-90a3-bf1ba76b4b0d@172.17.0.59:37318 already subscribed, resending acknowledgement
This implies some sort of configuration/communication issue but I have not been able to work out exactly what the root of the problem is. I'm not sure if there is any way to confirm if the acknowledgement from mesos is making it back to chronos or to check the status of the communication channels between the components.
I've done a lot of searching and I can find posts by folk who have encountered the same issue, but I haven't found a detailed explanation of what needs to be done to correct it.
For example, I found the following post, which mentions a problem that was resolved and which implies the user successfully ran their Chronos container in bridge mode, but their description of the resolution was vague. There was also this post, but the change suggested there did not resolve the issue that I am seeing.
Finally, there was a post by someone at ILM who had what sounds like exactly my problem, and the resolution appeared to involve a fix to Mesos to introduce two new environment variables, LIBPROCESS_ADVERTISE_IP and LIBPROCESS_ADVERTISE_PORT (on top of LIBPROCESS_IP and LIBPROCESS_PORT), but I can't find a decent explanation of what values should be assigned to any of these variables, so I have yet to work out whether the change will resolve the issue I am having.
It's probably worth mentioning that I've also posted a couple of questions on the chronos-scheduler group, but I haven't had any responses to these.
If it's of any help, the versions of the software I'm running are as follows (the volume mount in the JSON below allows me to provide values of other parameters [e.g. master, zk_hosts] as files, without having to keep changing the JSON):
Vagrant: 1.7.4
VirtualBox: 5.0.2
Docker: 1.8.1
Marathon: 0.10.1
Mesos: 0.24.1
Zookeeper: 3.4.5
The JSON that I am using to launch the chronos container is as follows:
{
    "id": "chronos",
    "cpus": 1,
    "mem": 1024,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "mesosphere/chronos",
            "network": "BRIDGE",
            "portMappings": [
                {
                    "containerPort": 4400,
                    "hostPort": 0,
                    "servicePort": 4400,
                    "protocol": "tcp"
                }
            ]
        },
        "volumes": [
            {
                "containerPath": "/etc/chronos/conf",
                "hostPath": "/vagrant/vagrantShared/chronos",
                "mode": "RO"
            }
        ]
    },
    "cmd": "/usr/bin/chronos --http_port 4400",
    "ports": [
        4400
    ]
}
If anyone has any experience of using chronos in a configuration like this then I'd appreciate any help that you might be able to provide in resolving this issue.
Regards,
Paul Mateer
I managed to work out the answer to my problem (with a little help from the sample framework here), so I thought I should post a solution to help anyone else who runs into the same issue.
The Chronos service (and also the sample framework) were configured to communicate with Zookeeper on the IP associated with the docker0 interface on the host (Vagrant) VM (in this case 172.17.42.1).
Zookeeper would report the master as being available on 127.0.1.1, which was the IP address of the host VM that the mesos-master process started on, but although this IP address could be pinged from the container, any attempt to connect to specific ports was refused.
The solution was to start the mesos-master with the --advertise_ip parameter and specify the IP of the docker0 interface. This meant that although the service started on the host machine, it would appear as though it had been started on the docker0 interface.
Once this was done, communications between Mesos and the Chronos framework started completing, and the tasks scheduled in Chronos ran successfully.
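A hedged sketch of that master invocation (only --advertise_ip comes from the answer; the other flags and values are assumptions for a single-master Vagrant setup, and 172.17.42.1 is the docker0 address mentioned above):

host$ mesos-master --work_dir=/var/lib/mesos \
        --zk=zk://127.0.0.1:2181/mesos \
        --quorum=1 \
        --advertise_ip=172.17.42.1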
Running Mesos 1.1.0 and Chronos 3.0.1, I was able to successfully configure Chronos in BRIDGE mode by explicitly setting LIBPROCESS_ADVERTISE_IP and LIBPROCESS_ADVERTISE_PORT, and by pinning its second port to a hostPort, which isn't ideal but was the only way I could find to make it advertise its port to Mesos properly:
{
    "id": "/core/chronos",
    "cmd": "LIBPROCESS_ADVERTISE_IP=$(getent hosts $HOST | awk '{ print $1 }') LIBPROCESS_ADVERTISE_PORT=$PORT1 /chronos/bin/start.sh --hostname $HOST --zk_hosts master-1:2181,master-2:2181,master-3:2181 --master zk://master-1:2181,master-2:2181,master-3:2181/mesos --http_credentials ${CHRONOS_USER}:${CHRONOS_PASS}",
    "cpus": 0.1,
    "mem": 1024,
    "disk": 100,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "volumes": [],
        "docker": {
            "image": "mesosphere/chronos:v3.0.1",
            "network": "BRIDGE",
            "portMappings": [
                {
                    "containerPort": 9900,
                    "hostPort": 0,
                    "servicePort": 0,
                    "protocol": "tcp",
                    "labels": {}
                },
                {
                    "containerPort": 9901,
                    "hostPort": 9901,
                    "servicePort": 0,
                    "protocol": "tcp",
                    "labels": {}
                }
            ],
            "privileged": true,
            "parameters": [],
            "forcePullImage": true
        }
    },
    "env": {
        "CHRONOS_USER": "admin",
        "CHRONOS_PASS": "XXX",
        "PORT1": "9901",
        "PORT0": "9900"
    }
}

Marathon Docker jobs hang in the deployment state

Hi, I have been successful so far with simple jobs in Marathon, but it got stuck when I tried deploying a Docker job in Mesos through the Marathon framework.
I am using a JSON file as below to deploy a Docker job:
{
    "id": "pga-docker",
    "cpus": 0.2,
    "mem": 1024.0,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "pga",
            "network": "BRIDGE",
            "portMappings": [
                { "containerPort": 80, "hostPort": 6565, "servicePort": 0, "protocol": "tcp" }
            ]
        }
    }
}
My pga Docker image has no problem when run as a container, but through Marathon it's just not working; it stays in the deploying state forever.
I am using the below command line:
curl -X POST http://10.141.141.10:8080/v2/apps -d @basic-3.json -H "Content-type: application/json"
But when I run the same image from the Marathon UI, it works. To run it from Marathon I used "docker run --publish 6060:80 --name test --rm pga" in the cmd field of the UI's new job page.
Does anyone have an idea why this hangs with the command-line approach?
This is what I found during some trial and error with the JSON file.
I found that when we run a Docker image on the local system, an entrypoint or cmd defined in the image executes when the container runs. But this is not the same for Mesos/Marathon: my observation is that if I explicitly mention cmd in the deployment JSON then it works fine, for example:
"cmd":"sh pga-setup.sh"
I would love to know if anyone faced a similar issue and solved it another way.
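For reference, a hedged sketch of how the cmd line above fits into the deployment JSON from the question (at the top level, alongside id and container):

{
    "id": "pga-docker",
    "cpus": 0.2,
    "mem": 1024.0,
    "instances": 1,
    "cmd": "sh pga-setup.sh",
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "pga",
            "network": "BRIDGE",
            "portMappings": [
                { "containerPort": 80, "hostPort": 6565, "servicePort": 0, "protocol": "tcp" }
            ]
        }
    }
}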

Mesos cannot deploy container from private Docker registry

I have a private Docker registry that is accessible at https://docker.somedomain.com (over standard port 443, not 5000). My infrastructure includes a Mesosphere setup with the Docker containerizer enabled. I am trying to deploy a specific container to a Mesos slave via Marathon; however, this always fails, with Mesos failing the task almost immediately and no data in the stderr and stdout of that sandbox.
I tried deploying an image from the standard Docker Registry and it appears to work fine. I'm having trouble figuring out what is wrong. My private Docker registry does not require password authentication (turned off for debugging this), AND if I shell into the Mesos slave instance and sudo su as root, I can run 'docker pull docker.somedomain.com/services/myapp' successfully every time.
Here is my Marathon post data for starting the task:
{
    "id": "myapp",
    "cpus": 0.5,
    "mem": 64.0,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "docker.somedomain.com/services/myapp:2",
            "network": "BRIDGE",
            "portMappings": [
                { "containerPort": 7000, "hostPort": 0, "servicePort": 0, "protocol": "tcp" }
            ]
        },
        "volumes": [
            {
                "containerPath": "application.yml",
                "hostPath": "/var/myapp/application.yml",
                "mode": "RO"
            }
        ]
    },
    "healthChecks": [
        {
            "protocol": "HTTP",
            "portIndex": 0,
            "path": "/",
            "gracePeriodSeconds": 5,
            "intervalSeconds": 20,
            "maxConsecutiveFailures": 3
        }
    ]
}
I've been stuck on this for almost a day now; everything I've tried seems to yield the same result. Any insights on this would be much appreciated.
My versions:
Mesos: 0.22.1
Marathon: 0.8.2
Docker: 1.6.2
So this turned out to be an issue with volumes:
"volumes": [
{
"containerPath": "/application.yml",
"hostPath": "/var/myapp/application.yml",
"mode": "RO"
}
]
Mounting a file at the root path of the container may be legal in Docker, but Mesos appears not to handle this behavior. Modifying the containerPath to a non-root path resolves this, i.e.
"volumes": [
{
"containerPath": "/var",
"hostPath": "/var/myapp",
"mode": "RW"
}
]
If it is a problem between Marathon and the registry, the answer should be in the http logs of your registry. If Marathon connects, there will be an entry. And the Mesos master log should contain a clue as well.
It doesn't really sound like a problem between Marathon and Registry though. Are you sure you have 'docker,mesos' in /etc/mesos-slave/containerizers?
Did you, despite having no authentication, try to follow Using a Private Docker Repository?
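For completeness, a hedged sketch of what that containerizers check looks like on the slave (the file path comes from the comment above; restarting the mesos-slave service after changing it is an assumption about how the box is managed):

host$ cat /etc/mesos-slave/containerizers
docker,mesos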
To supply credentials to pull from a private repository, add a .dockercfg to the uris field of your app. The $HOME environment variable will then be set to the same value as $MESOS_SANDBOX so Docker can automatically pick up the config file.
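A hedged sketch of what that looks like in the app definition: a uris entry at the top level of the JSON, pointing at an archive that contains the .dockercfg at its root (the path here is a placeholder):

"uris": [
    "file:///etc/docker.tar.gz"
]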
