Mesos cannot deploy container from private Docker registry - docker

I have a private Docker registry that is accessible at https://docker.somedomain.com (over standard port 443, not 5000). My infrastructure is a Mesosphere setup with the Docker containerizer enabled. I am trying to deploy a specific container to a Mesos slave via Marathon; however, this always fails, with Mesos failing the task almost immediately and no data in the stderr or stdout of that sandbox.
I tried deploying an image from the standard Docker registry and that appears to work fine, so I'm having trouble figuring out what is wrong. My private Docker registry does not require password authentication (turned off for debugging this), and if I shell into the Mesos slave instance and sudo su to root, I can run a 'docker pull docker.somedomain.com/services/myapp' successfully every time.
Here is my Marathon post data for starting the task:
{
  "id": "myapp",
  "cpus": 0.5,
  "mem": 64.0,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "docker.somedomain.com/services/myapp:2",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 7000, "hostPort": 0, "servicePort": 0, "protocol": "tcp" }
      ]
    },
    "volumes": [
      {
        "containerPath": "application.yml",
        "hostPath": "/var/myapp/application.yml",
        "mode": "RO"
      }
    ]
  },
  "healthChecks": [
    {
      "protocol": "HTTP",
      "portIndex": 0,
      "path": "/",
      "gracePeriodSeconds": 5,
      "intervalSeconds": 20,
      "maxConsecutiveFailures": 3
    }
  ]
}
I've been stuck on this for almost a day now, and everything I've tried seems to yield the same result. Any insights on this would be much appreciated.
My versions:
Mesos: 0.22.1
Marathon: 0.8.2
Docker: 1.6.2

So this turns out to be an issue with volumes
"volumes": [
{
"containerPath": "/application.yml",
"hostPath": "/var/myapp/application.yml",
"mode": "RO"
}
]
Mounting at the root path of the container may be legal in Docker, but Mesos appears not to handle this behavior. Modifying the containerPath to a non-root path resolves this, i.e.
"volumes": [
{
"containerPath": "/var",
"hostPath": "/var/myapp",
"mode": "RW"
}
]

If it is a problem between Marathon and the registry, the answer should be in the http logs of your registry. If Marathon connects, there will be an entry. And the Mesos master log should contain a clue as well.
It doesn't really sound like a problem between Marathon and the registry, though. Are you sure you have 'docker,mesos' in /etc/mesos-slave/containerizers?
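For reference, on a package-based install the containerizer list is read from a file on each slave; a quick check might look like this (the paths and the restart command are assumptions about your setup):
cat /etc/mesos-slave/containerizers        # expect: docker,mesos
echo 'docker,mesos' | sudo tee /etc/mesos-slave/containerizers
sudo service mesos-slave restart           # or: sudo systemctl restart mesos-slave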

Did you, despite having no authentication, try to follow "Using a Private Docker Repository"?
To supply credentials to pull from a private repository, add a .dockercfg to the uris field of your app. The $HOME environment variable will then be set to the same value as $MESOS_SANDBOX so Docker can automatically pick up the config file.
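A rough sketch of that approach (the URL below is a placeholder for wherever you host the file; depending on your Docker version the URI should point at a .dockercfg file or at a docker.tar.gz archive containing .docker/config.json):
{
  "id": "myapp",
  "uris": [
    "https://files.somedomain.com/.dockercfg"
  ]
}
Marathon fetches each URI into the task sandbox before the container starts, which is why setting $HOME to the sandbox lets Docker find the credentials.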

Related

Docker swarm service - running state but no logs

In an existing swarm, I created a service via a docker-compose yaml file using the 'docker stack' command.
When I check the service via the 'docker service ls' command, the new service shows up in the list, but it shows "0/1" in the REPLICAS column.
When I check the service using the command below, it shows 'Running' as the Desired State:
docker service ps --no-trunc (service id)
When I check whether there is already a corresponding container for the service, I can see none.
When I try to access the service via the browser, it seems not to have started.
What makes this difficult is that I cannot see any logs to find out why this is happening:
docker service logs (service id)
I figured it may just be slow to start, but I waited for about half an hour and it was still in that state. I'm not sure how I can find out the cause of this without any logs. Can anyone help me with this?
EDIT: Below is the result when I did a docker inspect of the service task
[
  {
    "ID": "wt2tdoz64j5wmci4gr3q3io2e",
    "Version": {
      "Index": 3407514
    },
    "CreatedAt": "2020-08-25T00:58:13.012900717Z",
    "UpdatedAt": "2020-08-25T00:58:13.012900717Z",
    "Labels": {},
    "Spec": {
      "ContainerSpec": {
        "Image": "my-ui-image:1.8.006",
        "Labels": {
          "com.docker.stack.namespace": "myservice-stack"
        },
        "Env": [
          "BACKEND_HOSTNAME=somewebsite.com",
          "BACKEND_PORT=3421"
        ],
        "Privileges": {
          "CredentialSpec": null,
          "SELinuxContext": null
        },
        "Hosts": [
          "10.152.30.18 somewebsite.com"
        ],
        "Isolation": "default"
      },
      "Resources": {},
      "Placement": {},
      "Networks": [
        {
          "Target": "lt87emwtgbeztof5k2r1z2v27",
          "Aliases": [
            "myui_poc2"
          ]
        }
      ],
      "ForceUpdate": 0
    },
    "ServiceID": "nbskoeofakkgxlgj3utgn45c5",
    "Slot": 1,
    "Status": {
      "Timestamp": "2020-08-25T00:58:13.012883476Z",
      "State": "new",
      "Message": "created",
      "PortStatus": {}
    },
    "DesiredState": "running"
  }
]
If you store your images in a private registry, then you must be logged in via the docker login command and deploy your services with docker stack deploy -c docker-compose.yml your_service --with-registry-auth.
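A minimal sketch of that flow, assuming a private registry at registry.somedomain.com (a placeholder) and the stack name from your inspect output:
docker login registry.somedomain.com
docker stack deploy -c docker-compose.yml myservice-stack --with-registry-auth
The --with-registry-auth flag forwards your registry credentials to the swarm agents so that worker nodes can pull the image as well.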
From the docker service ps ... output, you will see a column with the task id. You can get further details of the state of that task by inspecting the task id:
docker inspect $taskid
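If you only need the headline fields rather than the full JSON, a Go-template filter along these lines can narrow it down (a sketch; the field names match the task object shown in the question):
docker inspect --format '{{.DesiredState}} / {{.Status.State}}: {{.Status.Message}}' <task-id>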
My guess is that your app is not redirecting its output to stdout, and that's why you don't get any output when doing "docker service logs ...".
I would start by looking at this: https://docs.docker.com/config/containers/logging/
How you redirect the app's output to stdout will depend on what language your app is developed in.
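One language-agnostic trick is to symlink the application's log files to the container's stdout/stderr when the image is built; a sketch, assuming the app writes to /var/log/myapp/ (a hypothetical path):
# run during the image build or in an entrypoint script
ln -sf /dev/stdout /var/log/myapp/access.log
ln -sf /dev/stderr /var/log/myapp/error.log
This is the same approach the official nginx image takes, and it lets docker service logs pick up the output without any code changes.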

How to keep logs for ECS container?

I'm running ECS tasks, and recently the service CPU hit 100% and the service went down.
I waited for the instance to settle down and SSHed in.
I was looking for logs, but it seemed the Docker container had restarted and the logs were all gone (the logs from when the CPU was high).
Next time, how do I make sure I can see the logs, at least to diagnose the problem?
I have the following, hoping to see some logs somewhere (mounted on the host machine):
"mountPoints": [
{
"readOnly": null,
"containerPath": "/var/log/uwsgi",
"sourceVolume": "logs"
}
],
But there is no /var/log/uwsgi on the host machine.
And I probably need syslog and such as well.
As far as your current configuration goes, the logs depend entirely on the path that you define in the volumes section.
"mountPoints": [
{
"readOnly": null,
"containerPath": "/var/log/uwsgi",
"sourceVolume": "logs"
}
],
The host source path is defined in the volume named logs, not at /var/log/uwsgi, so you are mounting /var/log/uwsgi (container path) -> the logs volume (host path). You will find these logs at whatever path the logs volume defines on the host. It is better to set something like
{
  "readOnly": null,
  "containerPath": "/var/log/uwsgi",
  "sourceVolume": "logs"
}
and then the volume config
"volumes": [
{
"name": "logs",
"host": {
"sourcePath": "/home/ec2-user/logs"
}
}
]
From the documentation:
In the task definition volumes section, define a bind mount with name
and sourcePath values.
"volumes": [
{
"name": "webdata",
"host": {
"sourcePath": "/ecs/webdata"
}
}
]
In the containerDefinitions section, define a container with
mountPoints values that reference the name of the defined bind mount
and the containerPath value to mount the bind mount at on the
container.
"containerDefinitions": [
{
"name": "web",
"image": "nginx",
"cpu": 99,
"memory": 100,
"portMappings": [
{
"containerPort": 80,
"hostPort": 80
}
],
"essential": true,
"mountPoints": [
{
"sourceVolume": "webdata",
"containerPath": "/usr/share/nginx/html"
}
]
}
]
bind-mounts-ECS
Now, coming to my suggestion: I would go with the AWS log driver.
When working in AWS, the best approach is to push all logs to CloudWatch, but note that the awslogs log driver only pushes the container's stdout and stderr to CloudWatch.
Using the awslogs driver you do not need to worry about the instance or the container; the logs will be in CloudWatch, and you can stream them to ELK as well.
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "awslogs-wordpress",
"awslogs-region": "us-west-2",
"awslogs-stream-prefix": "awslogs-example"
}
}
using_awslogs
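Once the awslogs driver is configured, you can confirm that logs are arriving with the AWS CLI; the group and prefix below are taken from the example above, and the stream naming (prefix/container-name/task-id) is how the awslogs driver names streams for ECS tasks:
aws logs describe-log-streams --log-group-name awslogs-wordpress
aws logs get-log-events --log-group-name awslogs-wordpress --log-stream-name awslogs-example/<container-name>/<task-id>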

Can you tell me the solution to the changing service IP in a Mesos + Marathon combination?

I am currently running a Docker service with the Mesos + Marathon combination.
This means that the IP address of the Docker container is constantly changing.
For example, if you put MongoDB on Marathon, you would use the following code.
The port mapping lets you specify the port exposed on the host. After a day or so, the service automatically shuts down and restarts, and the IP changes.
While looking into Mesos-DNS, and while studying the docker command, I learned how to look up a service's IP by an alias name by specifying a network alias in Docker.
I thought it would be easier to access the service this way, without using Mesos-DNS.
However, in Marathon the Docker service is defined in JSON format, like below.
I am asking because I do not know how to specify the Docker network alias option, or what keyword or method to use.
{
  "id": "mongodbTest",
  "instances": 1,
  "cpus": 2,
  "mem": 2048.0,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "mongo:latest",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 27017,
          "hostPort": 0,
          "servicePort": 0,
          "protocol": "tcp"
        }
      ]
    },
    "volumes": [
      {
        "containerPath": "/etc/mesos-mg",
        "hostPath": "/var/data/mesos-mg",
        "mode": "RW"
      }
    ]
  }
}
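For reference, the plain Docker equivalent of what you describe (resolving a container by an alias) relies on a user-defined network; a quick sketch:
docker network create mongo-net
docker run -d --net mongo-net --network-alias mongodb mongo:latest
docker run --rm --net mongo-net busybox ping -c 1 mongodb
Marathon's docker.parameters field can forward extra docker run flags (each entry becomes --key=value), but aliases only resolve on user-defined networks rather than the default bridge, which is why most Mesos + Marathon setups rely on Mesos-DNS or a similar service-discovery layer instead.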

POD Definition - Deploying to DC/OS

I'm new to DC/OS and I have been really struggling trying to deploy a POD. I have tried the simple examples provided in the documentation
but the deployments remain stuck in the deploying stage. There are plenty of resources available so that is not the issue.
I have 3 containers that I need to exist within a virtual network (queue, PDI, API). I have included my definition file that starts with a single container deployment and once I can successfully deploy I will add 2 additional containers to the definition. I have been looking at this example but have been unsuccessful.
I have successfully deployed the containers one at a time through Jenkins. All 3 images have been published and exist in the Docker registry (JFrog). I have included an example of my marathon.json for one of those successful deployments. I would appreciate any feedback that can help. The service is stuck in the deploying stage, so I'm unable to drill down and see the logs via the command line or the UI.
containers.image = pdi-queue
artifactory server = repos.pdi.com:5010/pdi-queue
1 Container POD Definition - (Error: Stuck in Deployment Stage)
{
  "id": "/pdi-queue",
  "containers": [
    {
      "name": "simple-docker",
      "resources": {
        "cpus": 1,
        "mem": 128,
        "disk": 0,
        "gpus": 0
      },
      "image": {
        "kind": "DOCKER",
        "id": "repos.pdi.com:5010/pdi-queue",
        "portMappings": [
          {
            "hostPort": 0,
            "containerPort": 15672,
            "protocol": "tcp",
            "servicePort": 15672
          }
        ]
      },
      "endpoints": [
        {
          "name": "web",
          "containerPort": 80,
          "protocol": [
            "http"
          ]
        }
      ],
      "healthCheck": {
        "http": {
          "endpoint": "web",
          "path": "/"
        }
      }
    }
  ],
  "networks": [
    {
      "mode": "container",
      "name": "dcos"
    }
  ]
}
Marathon.json - (No Error: Successful deployment)
{
  "id": "/pdi-queue",
  "backoffFactor": 1.15,
  "backoffSeconds": 1,
  "container": {
    "portMappings": [
      {"containerPort": 15672, "hostPort": 0, "protocol": "tcp", "servicePort": 15672, "name": "health"},
      {"containerPort": 5672, "hostPort": 0, "protocol": "tcp", "servicePort": 5672, "name": "queue"}
    ],
    "type": "DOCKER",
    "volumes": [],
    "docker": {
      "image": "repos.pdi.com:5010/pdi-queue",
      "forcePullImage": true,
      "privileged": false,
      "parameters": []
    }
  },
  "cpus": 0.1,
  "disk": 0,
  "healthChecks": [
    {
      "gracePeriodSeconds": 300,
      "intervalSeconds": 60,
      "maxConsecutiveFailures": 3,
      "portIndex": 0,
      "timeoutSeconds": 20,
      "delaySeconds": 15,
      "protocol": "MESOS_HTTP",
      "path": "/"
    }
  ],
  "instances": 1,
  "maxLaunchDelaySeconds": 3600,
  "mem": 512,
  "gpus": 0,
  "networks": [
    {
      "mode": "container/bridge"
    }
  ],
  "requirePorts": false,
  "upgradeStrategy": {
    "maximumOverCapacity": 1,
    "minimumHealthCapacity": 1
  },
  "killSelection": "YOUNGEST_FIRST",
  "unreachableStrategy": {
    "inactiveAfterSeconds": 300,
    "expungeAfterSeconds": 600
  },
  "fetch": [],
  "constraints": [],
  "labels": {
    "traefik.frontend.redirect.entryPoint": "https",
    "traefik.frontend.redirect.permanent": "true",
    "traefik.enable": "true"
  }
}
I may not know the answer to the issues you are running into but I think I may be able to share some pointers to help debug this.
First of all, if you are unable to view logs from the DC/OS UI, you can also go to <cluster_url>/mesos and find the simple-docker task under Completed Tasks. It would show up as TASK_FAILED. Click on the Sandbox link on the right and then check the stderr and stdout files for the task. There might be some clues there as to why it failed.
Another place to look is the agent logs: note the Agent IP from the Mesos UI where the task failed, SSH into that node, and run sudo journalctl -u dcos-mesos-slave to see the agent logs; then try to find the entries corresponding to the failing task.
One difference between running the application as a pod and the app definition you shared is that your app definition uses DOCKER as the containerizer for the task, while pods use the MESOS containerizer.
I noticed that you are using a private Docker registry for your Docker images. One possibility is that your private registry's certificate is not trusted by Mesos even though Docker is already configured to trust it:
<copy the certificate(s) to /var/lib/dcos/pki/tls/certs>
cd /var/lib/dcos/pki/tls/certs
for file in *.crt; do ln -s "$file" "$(openssl x509 -hash -noout -in "$file")".0; done
This would need to be done on each agent node.
If it's not a certificate issue, it could be a Docker registry credential issue. If the Docker registry you are using requires authentication, then you can specify Docker credentials at install time (assuming the advanced install method) using: https://docs.mesosphere.com/1.11/installing/production/advanced-configuration/configuration-reference/#cluster-docker-credentials

Running Chronos docker image in BRIDGE mode

I've been putting together a POC mesos/marathon system that I am using to launch and control docker images.
I have a Vagrant virtual machine running in VirtualBox on which I run docker, marathon, zookeeper, mesos-master and mesos-slave processes, with everything working as expected.
I decided to add Chronos into the mix and initially I started with it running as a service on the vagrant VM, but then opted to switch to running it in a docker container using the mesosphere/chronos image.
I have found that I can get the container image to start and run successfully when I specify HOST network mode for the container, but when I change to BRIDGE mode I run into problems.
In BRIDGE mode, the Chronos framework registers successfully with Mesos (I can see the entry on the frameworks page of the Mesos UI), but it looks as though the framework itself doesn't know that the registration was successful. The Mesos master log is full of messages like:
I1009 09:47:35.876454 3131 master.cpp:2094] Received SUBSCRIBE call for framework 'chronos-2.4.0' at scheduler-16d21dac-b6d6-49f9-90a3-bf1ba76b4b0d#172.17.0.59:37318
I1009 09:47:35.876832 3131 master.cpp:2164] Subscribing framework chronos-2.4.0 with checkpointing enabled and capabilities [ ]
I1009 09:47:35.876924 3131 master.cpp:2174] Framework 20151009-094632-16842879-5050-3113-0001 (chronos-2.4.0) at scheduler-16d21dac-b6d6-49f9-90a3-bf1ba76b4b0d#172.17.0.59:37318 already subscribed, resending acknowledgement
This implies some sort of configuration/communication issue but I have not been able to work out exactly what the root of the problem is. I'm not sure if there is any way to confirm if the acknowledgement from mesos is making it back to chronos or to check the status of the communication channels between the components.
I've done a lot of searching and I can find posts by folks who have encountered the same issue, but I haven't found a detailed explanation of what needs to be done to correct it.
For example, I found the following post which mentions a problem that was resolved and which implies the user successfully ran their Chronos container in bridge mode, but their description of the resolution was vague. There was also this post, but the change suggested there did not resolve the issue that I am seeing.
Finally, there was a post by someone at ILM who had what sounds like exactly my problem, and the resolution appeared to involve a fix to Mesos introducing two new environment variables, LIBPROCESS_ADVERTISE_IP and LIBPROCESS_ADVERTISE_PORT (on top of LIBPROCESS_IP and LIBPROCESS_PORT), but I can't find a decent explanation of what values should be assigned to any of these variables, so I have yet to work out whether the change will resolve the issue I am having.
It's probably worth mentioning that I've also posted a couple of questions on the chronos-scheduler group, but I haven't had any responses to these.
If it's of any help the versions of software I'm running are as follows (the volume mount allows me to provide values of other parameters [e.g. master, zk_hosts] as files, without having to keep changing the JSON):
Vagrant: 1.7.4
VirtualBox: 5.0.2
Docker: 1.8.1
Marathon: 0.10.1
Mesos: 0.24.1
Zookeeper: 3.4.5
The JSON that I am using to launch the chronos container is as follows:
{
  "id": "chronos",
  "cpus": 1,
  "mem": 1024,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "mesosphere/chronos",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 4400,
          "hostPort": 0,
          "servicePort": 4400,
          "protocol": "tcp"
        }
      ]
    },
    "volumes": [
      {
        "containerPath": "/etc/chronos/conf",
        "hostPath": "/vagrant/vagrantShared/chronos",
        "mode": "RO"
      }
    ]
  },
  "cmd": "/usr/bin/chronos --http_port 4400",
  "ports": [
    4400
  ]
}
If anyone has any experience of using chronos in a configuration like this then I'd appreciate any help that you might be able to provide in resolving this issue.
Regards,
Paul Mateer
I managed to work out the answer to my problem (with a little help from the sample framework here), so I thought I should post a solution to help anyone else who runs into the same issue.
The chronos service (and also the sample framework) were configured to communicate with zookeeper on the IP associated with the docker0 interface on the host (vagrant) VM (in this case 172.17.42.1).
Zookeeper would report the master as being available on 127.0.1.1, which was the IP address of the host VM that the mesos-master process started on; although this IP address could be pinged from the container, any attempt to connect to specific ports was refused.
The solution was to start the mesos-master with the --advertise_ip parameter and specify the IP of the docker0 interface. This meant that although the service started on the host machine, it appeared as though it had been started on the docker0 interface.
Once this was done, communications between Mesos and the Chronos framework started completing, and the tasks scheduled in Chronos ran successfully.
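For anyone reproducing this, a sketch of the master invocation (the docker0 address 172.17.42.1 is taken from this setup; the ZooKeeper address, work_dir and quorum are assumptions to adjust for your environment):
mesos-master --zk=zk://127.0.0.1:2181/mesos --quorum=1 --work_dir=/var/lib/mesos --advertise_ip=172.17.42.1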
Running Mesos 1.1.0 and Chronos 3.0.1, I was able to successfully configure Chronos in BRIDGE mode by explicitly setting LIBPROCESS_ADVERTISE_IP and LIBPROCESS_ADVERTISE_PORT and pinning its second port to a hostPort, which isn't ideal but was the only way I could find to make it advertise its port to Mesos properly:
{
  "id": "/core/chronos",
  "cmd": "LIBPROCESS_ADVERTISE_IP=$(getent hosts $HOST | awk '{ print $1 }') LIBPROCESS_ADVERTISE_PORT=$PORT1 /chronos/bin/start.sh --hostname $HOST --zk_hosts master-1:2181,master-2:2181,master-3:2181 --master zk://master-1:2181,master-2:2181,master-3:2181/mesos --http_credentials ${CHRONOS_USER}:${CHRONOS_PASS}",
  "cpus": 0.1,
  "mem": 1024,
  "disk": 100,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "volumes": [],
    "docker": {
      "image": "mesosphere/chronos:v3.0.1",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 9900,
          "hostPort": 0,
          "servicePort": 0,
          "protocol": "tcp",
          "labels": {}
        },
        {
          "containerPort": 9901,
          "hostPort": 9901,
          "servicePort": 0,
          "protocol": "tcp",
          "labels": {}
        }
      ],
      "privileged": true,
      "parameters": [],
      "forcePullImage": true
    }
  },
  "env": {
    "CHRONOS_USER": "admin",
    "CHRONOS_PASS": "XXX",
    "PORT1": "9901",
    "PORT0": "9900"
  }
}
