I'm trying to pass default parameters such as volumes or environment variables to my Docker containers, which I create through Marathon and Apache Mesos. This should be possible through arguments passed to mesos-slave. I've created the file /etc/mesos-slave/default_container_info with the following JSON content (mesos-slave reads this file and passes it as an argument):
{
  "type": "DOCKER",
  "volumes": [
    {
      "host_path": "/var/lib/mesos-test",
      "container_path": "/tmp",
      "mode": "RW"
    }
  ]
}
Then I restarted mesos-slave and created a new container in Marathon, but I cannot see the mounted volume in my container. Where did I make a mistake? Is there another way to pass default values to my containers?
This will not work for you. When you schedule a task on Marathon with Docker, Marathon creates a TaskInfo that already contains a ContainerInfo, which is why Mesos does not fill in your default.
From the documentation
--default_container_info=VALUE JSON-formatted ContainerInfo that will be included into any ExecutorInfo that does not specify a ContainerInfo
You need to add volumes to every Marathon task you have, or create a RunSpecTaskProcessor that augments all tasks with your volumes.
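For example, a minimal sketch of a Marathon app definition carrying the same volume (the app ID and image here are placeholders, not from the question):

{
  "id": "/my-app",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "my-registry/my-image:latest"
    },
    "volumes": [
      {
        "hostPath": "/var/lib/mesos-test",
        "containerPath": "/tmp",
        "mode": "RW"
      }
    ]
  }
}

Every app that needs the mount has to repeat this block; removing that duplication is exactly what a RunSpecTaskProcessor is for.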
I have a very simple docker-compose.yml:
version: '2.4'
services:
  containername:
    image: ${DOCKER_IMAGE}
    volumes:
      - ./config:/root/config
I'm using a remote staging server accessed via ssh:
docker context create staging --docker "host=ssh://ubuntu@staging.example.com"
docker context use staging
However, I see unexpected results with my volume after I docker-compose up:
docker-compose --context staging up -d
docker inspect containername
...
"Mounts": [
{
"Type": "bind",
"Source": "/Users/crummy/code/.../config",
"Destination": "/root/config",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
...
It seems the expansion of ./config to a full path happens on the machine docker-compose is running on, not the machine Docker is running on.
I can fix this problem by hardcoding the entire path: /home/ubuntu/config:/root/config. But this makes my docker-compose file a little less flexible than it could be. Is there a way to get the dot expansion to occur on the remote machine?
No, the docs say that:
You can mount a relative path on the host, which expands relative to the directory of the Compose configuration file being used. Relative paths should always begin with . or ..
I believe that happens for two reasons:
There's no easy and objective way for docker-compose to find out how to expand . in this context, as there's no way to know what . would mean for the ssh client (the home directory? the same folder?).
Even though the docker CLI is using a different context, the expansion is done by the docker-compose tool, which is unaware of the context switch.
Even using environment variables might pose a problem, since the variable expansion would also happen on the machine where you run the docker-compose command.
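That said, compose's variable substitution with defaults can at least make the hardcoding configurable. A minimal sketch, assuming a made-up CONFIG_DIR variable that you set to the absolute path on the remote host:

version: '2.4'
services:
  containername:
    image: ${DOCKER_IMAGE}
    volumes:
      - ${CONFIG_DIR:-./config}:/root/config

CONFIG_DIR=/home/ubuntu/config docker-compose --context staging up -d

The expansion still happens locally; you are just supplying the remote path explicitly, with ./config as the fallback for local development.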
I have a project that runs multiple Go services on ECS. For example, I have 3 containers, A, B, and C, and then container D holds config and other data for them. Is there a way that when container D is updated or restarted I can then restart the other containers within ECS, so they can pick up the new data from container D?
I was thinking of having a pub/sub type thing that tells the other containers a new version has been released, but there must be an easier way that doesn't involve any extra code.
I'd also like to do this for my local docker-compose stack if possible.
In ECS you can define all containers A, B, C, and D within the same task if there are dependencies between them (putting all of them in one task definition).
If one container goes down within the task, all of the task's containers are stopped and a new task is started instead. Deployment happens at the task level, so deploying a new task replaces all the containers within it.
If you want more flexibility (a separate deployment for each of A, B, and C), you can create 3 different tasks, each one running two containers: one of A/B/C plus D (as a sidecar), as sketched after this list:
Task_A - running both containers A & D.
Task_B - running both containers B & D.
Task_C - running both containers C & D.
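A minimal sketch of what Task_A's definition could look like (the family, names, and images are placeholders):

{
  "family": "task-a",
  "containerDefinitions": [
    {
      "name": "service-a",
      "image": "myrepo/service-a:latest",
      "essential": true
    },
    {
      "name": "config-d",
      "image": "myrepo/config-d:latest",
      "essential": true
    }
  ]
}

Because both containers are marked essential, either one stopping takes the whole task down, so redeploying D restarts A with it.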
In docker-compose you can run docker-compose up --abort-on-container-exit, so that when one container goes down everything else goes down as well.
After a bit of searching, I found you can add container dependencies within the task definition JSON.
https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerDependency.html
This allows you to add dependencies between the different containers in the task.
For example:
"containerDefinitions": [
{
"name": "app",
"image": "application_image",
"portMappings": [
{
"containerPort": 9080,
"hostPort": 9080,
"protocol": "tcp"
}
],
"essential": true,
"dependsOn": [
{
"containerName": "container-a",
"condition": "HEALTHY"
}
]
},
Very useful. Note that for the HEALTHY condition to be satisfiable, container-a must itself define a healthCheck in its container definition.
I have a Sumologic log collector which is a generic log collector. I want the log collector to see logs and a config file from a different container. How do I accomplish this?
ECS containers can mount volumes, so you would define:
{
  "containerDefinitions": [
    {
      "mountPoints": [
        {
          "sourceVolume": "logs",
          "containerPath": "/tmp/clogs/"
        }
      ]
    }
  ],
  "volumes": [
    {
      "name": "logs"
    }
  ]
}
ECS also has a nice UI you can click around to set up the volumes at the task definition level, and then the mounts at the container level.
Once that's set up, ECS will mount a volume at the container path, and everything inside that path will be available to all other containers that mount the volume.
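For example, the collector's container definition could mount the same volume read-only; a sketch with a placeholder name and image:

{
  "name": "sumologic-collector",
  "image": "sumologic/collector:latest",
  "mountPoints": [
    {
      "sourceVolume": "logs",
      "containerPath": "/tmp/clogs/",
      "readOnly": true
    }
  ]
}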
Further reading:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_data_volumes.html
I am attempting to run a task with persistent storage. The task executes a docker image which creates a directory and copies a file into it. However, when the task definition mounts a volume to the created directory, the file is lost.
For brevity, the relevant lines of the Dockerfile are:
RUN mkdir /root/createdDir
COPY ./myFile.txt /root/createdDir/myFile.txt
And the relevant fields of the task definition JSON are:
{
  "containerDefinitions": [
    {
      ...,
      "mountPoints": [
        {
          "readOnly": null,
          "containerPath": "/root/createdDir",
          "sourceVolume": "shared"
        }
      ],
      "image": "myImage"
    }
  ],
  "volumes": [
    {
      "name": "shared",
      "host": {
        "sourcePath": null
      }
    }
  ]
}
When the task is run, the file can no longer be found. If I run the task without adding a mount point to the container, the file is still there.
When trying to do this locally using docker-compose, I can use the same Dockerfile and in the docker-compose.yml file add the following specification to the service volumes: shared:/root/createdDir, where shared is a volume also declared in the docker-compose.yml with a local driver.
The behavior of mounting a volume over an existing directory in the container can be confusing. It is consistent with the general behavior of Linux's mount:
The previous contents (if any) and owner and mode of dir become invisible.
Avoid doing this whenever possible, because it can lead to hard-to-find issues when the volume and the container have files with the same names.
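A common workaround, assuming you control the image's entrypoint (the /root/seed path and the script name here are made up for illustration): bake the files into a staging directory in the image and copy them into the mounted volume at startup.

In the Dockerfile:

RUN mkdir /root/seed
COPY ./myFile.txt /root/seed/myFile.txt
COPY ./entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

And in entrypoint.sh:

#!/bin/sh
# Seed the (initially empty) mounted volume from the copy baked into the image,
# then hand off to the container's original command.
cp -R /root/seed/. /root/createdDir/
exec "$@"

This way the volume is populated on start regardless of whether ECS or docker-compose mounted it.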
I have a Spring Boot application with some properties, as below, in my application.properties.
server.ssl.keyStore=/users/admin/certs/appcert.jks
server.ssl.keyStorePassword=certpwd
server.ssl.trustStore=/users/admin/certs/cacerts
server.ssl.trustStorePassword=trustpwd
Here the cert paths are hardcoded. But I don't want to hardcode them, as the paths will not be known in the Mesos or Kubernetes world.
I have a Dockerfile as follows.
FROM docker.com/base/jdk1.8:latest
MAINTAINER Application Engineering [ https://docker.com/ ]
RUN mkdir -p /opt/docker/svc
COPY application/weather-service.war /opt/docker/svc/
CMD java -jar /opt/docker/svc/weather-service.war --spring.config.location=file:/conf/application.properties -Dlogging.config=/conf/logback.xml
Here, I can use the volume mount option in Kubernetes to put application.properties in place.
How can I achieve the same thing for the cert files referenced in application.properties?
The cert properties are optional for some applications and mandatory for others.
I need options both for integrating the cert files into the Docker image and for keeping them outside of it.
Approach 1. Within the docker image
Remove the property "server.ssl.keyStore" from application.properties and pass it on the command line instead, like the one below.
CMD java -jar /opt/docker/svc/weather-service.war
--spring.config.location=file:/conf/application.properties -Dlogging.config=/conf/logback.xml -Dserver.ssl.keyStore=/certs/appcert.jks
Now the cert should be placed in a Secret and mounted using the volume mount options in Kubernetes.
Approach 2. No need to have -Dserver.ssl.keyStore=/certs/appcert.jks in the Dockerfile; still remove the property "server.ssl.keyStore" from application.properties and do as follows.
a. Create a secret:
kubectl create secret generic svc-truststore-cert \
  --from-file=./cacerts
b. Create an env variable as below.
{
  "name": "JAVA_OPTS",
  "value": "-Dserver.ssl.trustStore=/certs/truststore/cacerts"
}
c. Create volume mounts under the pod's container.
"volumeMounts": [
{
"name": "truststore-cert",
"mountPath": "/certs/truststore"
}
]
d. Create a volume under spec.
{
  "name": "truststore-cert",
  "secret": {
    "secretName": "svc-truststore-cert",
    "items": [
      {
        "key": "cacerts",
        "path": "cacerts"
      }
    ]
  }
}
Approach 3.
Using a Kubernetes Persistent Volume.
Create a persistent volume on Kubernetes.
Mount the volume to the Pod of each microservice (changes in the pod spec file). The mounted file system is accessible via the '/shared/folder/certs' path.
CMD java -jar /opt/docker/svc/weather-service.war
--spring.config.location=file:/conf/application.properties -Dlogging.config=/conf/logback.xml -Dserver.ssl.keyStore=/certs/appcert.jks
I have taken the second approach. Is this correct? Is there any better approach?
Thanks
Yes, the second approach is the best one, and it is the only way if you are storing some sensitive data like certificates, keys, etc. That topic is covered in the documentation.
Moreover, you can encrypt your secrets to add another level of protection.
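For reference, a minimal sketch of how the pieces of approach 2 fit together in one pod manifest; the pod name, image, and the JAVA_OPTS wiring are assumptions, not taken from the question:

{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": { "name": "weather-service" },
  "spec": {
    "containers": [
      {
        "name": "weather-service",
        "image": "my-registry/weather-service:latest",
        "env": [
          {
            "name": "JAVA_OPTS",
            "value": "-Dserver.ssl.trustStore=/certs/truststore/cacerts"
          }
        ],
        "volumeMounts": [
          {
            "name": "truststore-cert",
            "mountPath": "/certs/truststore"
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "truststore-cert",
        "secret": {
          "secretName": "svc-truststore-cert",
          "items": [
            { "key": "cacerts", "path": "cacerts" }
          ]
        }
      }
    ]
  }
}

Keep in mind that JAVA_OPTS only takes effect if the start command actually passes it to the JVM, e.g. java $JAVA_OPTS -jar ...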