How do I pass docker parameters such as `--cap-add=XXX` to my docker instances running in BlueData?

I would like to run a container with --cap-add=IPC_LOCK.
According to the BlueData 3.7 release notes, IPC_LOCK is supported:
HAATHI-13547: Docker configuration now includes default IPC_LOCK capability for all deployed containers. IPC_LOCK is the feature otherwise known as memlock, required by certain customer applications. The permitted capabilities of Docker containers as expressed on the docker invocation command line when instantiating a container now includes this value explicitly.
How do I pass docker parameters such as --cap-add=XXX to my docker instances running in BlueData?

You need to modify /opt/bluedata/common-install/bd_mgmt/releases/1/sys.config. Look for allowed_docker_caps, which already holds the list of allowed capabilities. A block of comments above that tuple lists all the capabilities you can choose from. You have to change the file on all hosts and restart each bd_mgmt after updating the file.
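For example, a minimal sketch of the edit and restart on one host (the tuple contents shown and the bd_mgmt restart command are assumptions; check the comment block in your own sys.config):
# In /opt/bluedata/common-install/bd_mgmt/releases/1/sys.config (Erlang terms),
# add the capability to the existing allowed_docker_caps tuple, e.g.:
#   {allowed_docker_caps, ["SYS_RESOURCE", "IPC_LOCK"]},
# then restart the management service on this host:
sudo service bd_mgmt restart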
Note that all new clusters created after the change will inherit these settings.

Related

How to use puppet to configure the docker daemon on service start up

I am using puppet to configure a docker instance. Below is a code snippet that starts docker on the instance.
service { 'docker':
  ensure   => running,
  name     => 'docker',
  provider => 'systemd',
  enable   => true,
  require  => [
    File['/root/.docker/config.json'],
    File['/etc/sysconfig/docker'],
    Package['docker-ce'],
  ],
}
According to the Docker documentation, you can pass arguments to set different configurations when starting the Docker daemon.
For example, dockerd --icc=false will start Docker with inter-container communication (icc) disabled.
I know I can add config changes to a daemon.json file and have docker pick that up, but I want to figure out how to make the config changes live in the puppet code.
So how can I specify config changes like --icc=false when starting Docker the way I do in the Puppet code above?
You can't. The resource declaration you present ensures that the Docker daemon is running, but it does not directly execute dockerd, and therefore provides no mechanism for passing arguments to the daemon binary. It does specifically manage the daemon via systemd, however, so you could do what you describe by having Puppet manage the corresponding systemd unit file, but that's not meaningfully different from managing daemon.json (via Puppet) instead.
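For instance, a minimal sketch of that approach (the path and setting are illustrative; the notify triggers a daemon restart whenever the file changes):
file { '/etc/docker/daemon.json':
  ensure  => file,
  content => '{ "icc": false }',
  notify  => Service['docker'],
}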
It is absolutely normal, by the way, to manage the configuration (file) of a service and the run state of that service via different Puppet resources. Usually one manages the package providing the service as well, wrapping all of that up into a module. In fact, there are several pre-built Docker modules available already, including one built and maintained by Puppet, Inc., itself.

Does the Docker message: "Ignoring unsupported options: restart" mean the restart policy is ignored?

Using docker stack deploy, I can see the following message:
Ignoring unsupported options: restart
Does it mean that restart policies are not in place?
Do they have to be specified outside the compose file?
You can see this message, for example, with the Joomla compose file available at the bottom of that page.
To start the compose file:
sudo docker swarm init
sudo docker stack deploy -c stackjoomla.yml joomla
A Compose YAML file is used both by the docker-compose tool, for local (single-host) dev and test scenarios, and by Swarm Stacks, for production multi-host concerns.
There are many settings in the Compose file which only work in one tool or the other (docker-compose up vs. docker stack deploy) because some settings are specific to dev and others specific to production clusters. It's OK that they are there, and you'll see warnings in either tool when there are settings included that the specific tool will ignore. This is commonly seen for build: settings (which are docker-compose only) and deploy: settings (which are Swarm Stacks only).
The whole goal here is a single file you can use in both tools, and the relevant sections of the compose file are used in that scenario, while the rest are ignored.
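For instance, a single file can legitimately carry both kinds of settings (an illustrative snippet, not the Joomla file itself); docker-compose up honors build: and ignores deploy:, while docker stack deploy does the reverse:
version: "3.4"
services:
  web:
    image: myapp:latest
    build: .       # honored by docker-compose up, ignored by docker stack deploy
    deploy:        # honored by docker stack deploy, ignored by docker-compose up
      replicas: 2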
All of this can be looked up per setting in the compose file documentation. If you're often working in Compose YAML, I recommend always having a tab open on this page; I've referenced it almost daily for years, since the spec keeps changing (we're on 3.4+ now).
docker-compose does not restart containers by default, but it can if you set the restart: setting, as documented here. That setting doesn't work for Swarm Stacks, though; it shows up as a warning during docker stack deploy to remind you that it will not take effect in a Swarm Stack.
Swarm Stacks use restart_policy: under the deploy: key, which gives finer control through multiple sub-settings. As with all Stack settings, the defaults don't have to be specified in the compose file, and you'll find the default values documented on that docs page.
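For example, a minimal sketch of a Swarm-style restart policy (the values shown are illustrative, not the defaults):
deploy:
  restart_policy:
    condition: on-failure
    delay: 5s
    max_attempts: 3
    window: 60s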
There is a list on that page of the settings that won't work in a Swarm Stack, but it looks incomplete as the restart: setting should be there too. I'll submit a PR to fix that.
Also, in the Joomla example you pointed us to, that README seems out of date as well: it includes links: in the compose example, which have been deprecated since Compose file version 2 and are no longer needed (all containers on a custom virtual network can reach each other now).
If you docker-compose up your application on a Docker host in standalone mode, all that Compose will do is start containers. It will not monitor the state of these containers once they are created.
So it is up to you to ensure that your application will still work if a container dies. You can do this by setting a restart-policy.
If you deploy an application into a Docker swarm with docker stack deploy, things are different.
A stack is created that consists of service specifications.
Docker swarm then makes sure that for each service in the stack, at all times the specified number of instances is running. If a container fails, swarm will always spawn a new instance in order to match the service specification again. In this context, a restart-policy does not make any sense and the corresponding setting in the compose file is ignored.
If you want to stop the containers of your application in swarm mode, you either have to undeploy the whole stack with docker stack rm <stack-name> or scale the service to zero with docker service scale <service-name>=0.

Binding ports when running Docker images in Singularity

I am currently working on a distributed graph processing platform which maintains an Akka cluster inside of docker containers and have recently been granted access to a large cluster to test this. Unfortunately, this cluster does not run docker, only singularity.
This did not initially seem an issue, as Singularity supports Docker images; however, due to the nature of the Akka cluster, I have to pass several environment variables and bind several ports. As an example, a 'Partition Manager' within the system would be run with the following command:
docker run -p $PM0Port:2551 --rm -e "HOST_IP=$IP" -e "HOST_PORT=$PM0Port" -v $entityLogs:/logs/entityLogs $Image partitionManager $PM0ID $NumberOfPartitions $ZooKeeper
From looking through the Singularity documentation I can see that I can create a 'Singularity' file and specify the environment variables, but there doesn't seem to be any documentation on binding custom ports. Nor does it explain how I could pass arguments to the default entrypoint (the project is compiled with 'sbt docker:publish', so I am not sure exactly where the entrypoint lives in order to reassign it).
Even if this were the solution, since there are multiple actor types (and several instances of each), specifying the environment variables and ports in a definition file would seem to require templating, creating the files at run time, and building an image for each individual actor.
I am sure I have completely missed a page somewhere which would nicely translate this docker command into the equivalent singularity, but I just can't find it.
There is no network isolation in Singularity, so there is no need to map any port. If the process inside the container binds to an IP:port, it will be immediately reachable on the host.
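As an illustration, the docker run command above translates roughly to the following (a sketch assuming Singularity's SINGULARITYENV_ prefix for injecting environment variables and --bind for volumes; there is no -p equivalent because the container shares the host's network):
SINGULARITYENV_HOST_IP=$IP SINGULARITYENV_HOST_PORT=$PM0Port \
  singularity run --bind $entityLogs:/logs/entityLogs \
  docker://$Image partitionManager $PM0ID $NumberOfPartitions $ZooKeeper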

docker remote api set env

How can I overwrite environment variables inside a container after creation with the remote API? I see no such option in the container update method description. But docker itself does this when linking containers (source), to provide port and host variables:
DB_PORT_5432_TCP_PORT=5432
DB_PORT_5432_TCP_ADDR=172.17.0.5
...
I need to provide the same kind of variables for other infrastructure elements which are not managed by docker, and each time I run a container these variables could be different.
I think it should look like this:
1. Initialize the container's dependencies.
2. Create the container itself.
3. Run the container's dependencies.
4. Get the dependencies' parameters (IP, ports, etc.).
5. Configure the container environment (as I thought, with container update).
6. Run the container.
Steps 3 to 6 could be repeated many times for one instance.
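A note on step 5: the Engine API does not allow changing Env on an existing container (container update only covers resources and the restart policy), so in practice the discovered values have to be supplied at creation time, i.e. step 2 has to move after step 4. A minimal sketch against the raw API (the API version, image, and variable names are illustrative):
curl --unix-socket /var/run/docker.sock \
  -H 'Content-Type: application/json' \
  -d '{"Image": "myapp", "Env": ["DB_ADDR=172.17.0.5", "DB_PORT=5432"]}' \
  'http://localhost/v1.24/containers/create?name=myapp'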

Packaging an app in docker that can be configured at run time

I have packaged a web app I've been working on as a docker image.
I want to be able to start the image with some configuration, such as the URL of the CouchDB server to use, etc.
What is the best way of supplying configuration? My app relies on env variables; can I set these at run time?
In addition to setting environment variables during docker run (using -e/--env and --env-file) as you already discovered, there are other options available:
Using --link to link your container to (for instance) your couchdb server. This works if your server is also a container (or if you use an ambassador container to another server). Linking containers makes some environment variables available, including the server's IP and port, that your script can use. This option fits when you only need to set references to services.
Using volumes. Volumes defined in the Dockerfile can be mapped to host folders, so you can use them to access configuration files, for instance. This is useful for very complex configurations.
Extending the image. You can create a new image based on your original and ADD custom configuration files or ENV entries. This is the least flexible option but is useful for simplifying the launch when the configuration is mostly static (probably a bad idea for services/hostnames, but it can work for frameworks configured differently for dev/production). It can be combined with any of the above; a sketch follows.
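As a sketch of that last option (the image name, variable, and paths are illustrative):
# Dockerfile extending the original image with baked-in configuration
FROM myapp:latest
ENV COUCHDB_URL=http://couchdb:5984
ADD production-config.json /app/config.json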
It seems docker supports setting env variables - should have read the manual!
docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash
http://docs.docker.com/reference/commandline/cli/#run
